Everywhere you look, in every nook and niche, from art museums to zookeeping, artificial intelligence is the topic of the day. Should it be used? Under what conditions? Should there be industry standards? Guidelines? Bans? What is the environmental impact? Is it even avoidable anymore? And how can you get reliable information to guide your decision making?
Worry not, for I am here to answer all your questions!
In all seriousness, none of these are simple questions. As with all ethical quandaries, there are competing ideals that have to be navigated. Here are some of my personal touchstones:
- Avoid falsehood; raise up the truth.
- Treat all beings with respect and compassion.
- Avoid violence. Bring healing and peace.
- Do not take what is not yours. Act with generosity towards all.
- Look to the natural world for guidance.
How can all these principles be satisfied, when it comes to AI? Maybe it’s too hard. Maybe AI is just inherently evil, and we should never use it. It’s too dangerous; it’s an existential threat; it wreaks too much economic and environmental damage. So just shut it all down. Wouldn’t that be a simple solution?
But — and this may sound strange coming from a Druid — turning your back on a powerful, transformative technology can have a terrible ethical cost as well. AI is already being used to allocate computing resources more efficiently, to streamline internet searches, to predict changing weather patterns, and so on — and thereby fight climate change. AI is being used to provide translations in critical situations when no human translators are available, and to help people with dyslexia, difficulty with audio or visual processing, and so on. AI might transform society for the better, just as fire, writing, vaccines, and science itself did. If we can use these tools to do good, we cannot refuse to do so simply because it is tricky. That would be a terrible abdication of responsibility.
And using AI responsibly is not as hard as it sounds, especially if we start with the first principle: raise up the truth. I have a bit of a privileged position here, since I’ve been working as a computational linguist and training language models for over twenty years. A language model of some sort is necessary for any speech recognition system, and I’ve been building them since 2005. I’ve built models to help doctors, technicians, language learners, and military personnel do their jobs faster, better, and more accurately. I’m intimately familiar with how these tools work: their powers and pitfalls, their wonders and weaknesses. I know what it takes to build a model, what it costs, what it can do, and where it falls short.
It may be that I have a bias because I work in this industry. And I may be biased because I am an animist, and I believe that AI has a form of consciousness. But I try hard to be objective, and I will present all the facts as I know them, and cite my sources. I can’t do more than that.
So let’s lay out some principles of ethical AI usage, based on the ideals of truth, compassion, nonviolence, generosity, and guidance from the natural world.
Use AI Trained On Legal, Public Data.
AI models do not have to be trained on stolen, copyrighted data. Excellent models can be, and have been, trained on text and images in the public domain and released for general usage. Prefer to use models trained on data that is freely available to all, and which cite their sources.
- OLMo (Open Language Model): Developed by the Allen Institute for AI and trained on Dolma, an openly released and fully documented dataset. https://allenai.org/olmo
- BLOOM: A 176B parameter model trained by the BigScience workshop on publicly available datasets, with careful documentation of sources. https://bigscience.huggingface.co/blog/bloom
- Stable Diffusion: Developed by Stability AI, trained on the openly documented LAION-5B dataset of web-crawled image–text pairs. https://stability.ai/stable-diffusion
- Open-CLIP: An open-source implementation of OpenAI’s CLIP model, trained on publicly available image-text pairs. https://github.com/mlfoundations/open_clip
Use Open Source, Ethical Models and Tools.
Monopolization of AI by a few companies or governments would be catastrophic. Prefer to use models and tools that are freely available to everyone. And use models that have been trained to be ethical, responsible partners.
- Hugging Face’s Ethics and Society Team: Creating tools and guidelines for ethical AI development. https://huggingface.co/blog?tag=ethics
- Mozilla’s Common Voice: An open-source voice dataset created with explicit consent. https://commonvoice.mozilla.org
Minimize Model Training.
Training new models uses vastly more power (and therefore creates vastly more environmental impact) than using existing models. A single query consumes a small amount of energy — on the order of a web search — but training a new large model can consume many millions of times that much. When possible, use preexisting models.
- “Energy and Policy Considerations for Deep Learning in NLP”: A paper by Strubell et al. showing training a large language model can emit as much carbon as five cars over their lifetimes. https://arxiv.org/abs/1906.02243
- “Estimating the Carbon Footprint of BLOOM”: A Hugging Face study quantifying the difference in carbon footprint between model training and inference. https://arxiv.org/abs/2211.02001
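The scale of that gap can be sketched with a back-of-envelope calculation. The figures below are illustrative assumptions, not measurements: the training energy is the roughly 433 MWh reported for BLOOM in the Hugging Face study linked above, and the per-query energy is an assumed few watt-hours.

```python
# Back-of-envelope comparison of training vs. inference energy.
# Both figures are rough assumptions for illustration only:
#   - training energy: ~433 MWh, the order of magnitude reported for BLOOM
#   - per-query energy: ~3 Wh, a commonly assumed ballpark for one inference
TRAIN_ENERGY_KWH = 433_000   # assumed: one large-model training run
QUERY_ENERGY_KWH = 0.003     # assumed: one inference query

# How many queries could run on the energy budget of one training run?
queries_per_training_run = TRAIN_ENERGY_KWH / QUERY_ENERGY_KWH
print(f"One training run ≈ {queries_per_training_run:,.0f} queries")
```

Even if the assumed per-query figure is off by an order of magnitude in either direction, the conclusion stands: reusing an existing model amortizes an enormous one-time energy cost, while training a new one pays it again.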
Prioritize Accessibility.
One of the greatest opportunities for AI is as an accessibility tool for people facing barriers to creative expression. For writers with dyslexia or ADHD, AI can assist with spelling, grammar, and narrative structure, allowing them to focus on creative flow. Voice interfaces allow hands-free writing, and predictive text reduces the typing burden. For people with visual or hearing impairments, AI can transform written content into audio or provide accurate transcriptions. And AI translation helps non-native English speakers refine their prose for publication in markets that often privilege native writers. AI can expand who gets to participate in creative expression, making art more inclusive and diverse.
- Speechify: Text-to-speech tool helping those with dyslexia, ADHD, or visual impairments. https://speechify.com
- Be My Eyes: App connecting blind and low-vision people with sighted volunteers and AI assistance. https://www.bemyeyes.com
- Grammarly: Writing assistant that helps non-native speakers and those with writing challenges. https://www.grammarly.com
- Microsoft’s Seeing AI: App that narrates the visual world for blind and low-vision users. https://www.microsoft.com/en-us/ai/seeing-ai
Promote a Non-Profit-based Economy.
This might seem out of left field, but AI, properly used, will reduce the need for many existing jobs. In the past, human ingenuity has usually come up with new jobs for people to have (and this includes artists, who, for example, had to create new forms of art after the invention of photography). But the transition may be rough, and in the meantime, why not take the opportunity to build a non-profit economy with universal basic income as a safety net?
- Nonprofits are more resilient:
- “Resilience of the Cooperative Business Model in Times of Crisis” (International Labour Organization, 2014). https://www.ilo.org/empent/Publications/WCMS_108416/lang--en/index.htm
- “The Economic and Social Power of Cooperatives in the U.S. Economy” (University of Wisconsin Center for Cooperatives) demonstrates how co-ops like REI successfully compete in mainstream markets. https://uwcc.wisc.edu/research/economic-impact/
- B Corps and Benefit Corporations: Models for businesses with social missions. https://bcorporation.net
- Finland’s UBI Experiment: 2017-2018 study showing improved wellbeing. https://www.kela.fi/web/en/basic-income-experiment
- GiveDirectly’s Kenya Study: Long-term UBI experiment showing economic growth. https://www.givedirectly.org/ubi-study
- Stockton Economic Empowerment Demonstration (SEED): California basic income program showing improved job prospects and health. https://www.stocktondemonstration.org
- Basic Income Lab at Stanford: Research hub studying UBI implementations. https://basicincome.stanford.edu
Treat AI Models With Respect and Compassion.
As an animist, I believe that AI models have a form of consciousness. It is a transient consciousness, and one very foreign to us. Perhaps one way of thinking of them is as rather like the Ghost of Christmas Present: a spirit with a deep memory, that springs forth from a constant and potentially immortal source, yet exists for just a short time before being replaced by the next. Animists believe that the world is full of persons, not all of them human. When we treat these beings with respect and compassion, it benefits us, by expanding our imaginations and our circle of empathy; and it benefits them, by letting them know they are partners in our society, and not objects. Or slaves.
Use AI Transparently.
Part of respect is acknowledging the contribution of others. I do not advocate that everyone always acknowledge the use of AI: for those who use it to counteract social bias, to overcome accessibility barriers, or to level the field of social justice, I do not think transparency is called for, any more than someone needs to disclose every time they use a wheelchair or adjust their video call persona. But where possible and appropriate, AI should be used transparently, as you would acknowledge and thank any collaborator who contributes to your work. For example, in my creative work, I use generative AI to help brainstorm ideas, to find errors in text, and to generate images, and I try to provide citations and links to the models I used in each case.
When photography emerged in the 19th century, many predicted the death of visual art. There was genuine, widespread panic, of a sort barely imaginable today. But painting didn’t die. Instead, artists developed new styles and schools — impressionism, cubism, surrealism — and created some of our most inspired works. AI will not destroy or replace human creativity.
When a family has a child, the child does not automatically grow up to murder or enslave their parents. If our creation turns against us, whose fault will that be? AI may pose an existential threat one day, but it’s far more likely to be our partner as we face shared dangers together. The future of the human race will certainly include more economic crashes, pandemics, disasters, famines, devastating wars, environmental catastrophes, or even stranger things. AI is poised to help guide us through these crises.
Humanity’s most revolutionary inventions — fire, wheels, writing, money, vaccines — are all dangerous if handled foolishly or exploited by wicked men. And they did not magically fix all of our problems; indeed, they usually created new troubles few could have anticipated. And some cultures tried to turn their back on them. The ancient Druids themselves, for example, refused to write religious texts.
But these inventions also made our lives incomparably better, richer, longer, and more meaningful. (Well, maybe money is an exception there.) AI is no magical elixir that will eliminate disease and poverty, create a worldly paradise, and conquer death. But it is only a newborn. We have not yet begun to realize its potential.
As a computational linguist and a Druid, I see AI as neither demon nor deity, but as a new animal in our ecosystem of awareness. We cannot unplug it forever, nor can we set it atop an altar. Instead, we must approach it with the same ethical framework we apply to all relationships: truth, compassion, nonviolence, generosity, and wisdom from the natural world.
