Given rapid developments in artificial intelligence, questions about the implications of manufactured minds for Jewish life have grown urgent. Back in 2019 I authored a responsum (rabbinic legal opinion) on this subject for the Rabbinical Assembly’s Committee on Jewish Law and Standards, and I have spoken at several academic conferences examining the implications of AI for Jewish law.
Yet even those of us who have been considering the subject for years have been surprised by powerful new large language models that are capable of generating coherent original texts on nearly any subject and in any style.
In my 2019 responsum, I focused first on topics related to AI-governed autonomous machines. Who is responsible for damage caused by a self-driving vehicle? How might we integrate Jewish ethical and legal norms into the operating systems of these machines? What moral rules should govern autonomous weapons systems?
Such questions would be complicated enough if these machines simply applied algorithms created by people. But machine learning has shifted to an observational paradigm in which the AI starts from scratch, studying texts, images, sounds, and other data found in the world, and draws its own conclusions.
In a sense, machine learning is not so different from the way that children first learn about the world. They encounter strange sights, sounds, smells, and sensations, gradually imposing order on the random-seeming chaos around them. Through trial and error, children start to recognize patterns and learn to interact with the world around them. Only later do they receive formal, rule-based education about, say, proper grammar or mathematical formulas. You could say that machines are learning to learn the way that humans do — through imitation, experimentation, error correction, and repeated efforts.
Does this make them human?
I serve as head of school at Golda Och Academy in West Orange. We are rightly proud of the accomplishments of our students in building robots that can function autonomously and solve problems such as avoiding obstacles in their way. This year our students used AI to enable their robots to recognize distinctive patterns (such as identifying their own cone). Large language models are rapidly becoming as ubiquitous as calculators, computers, and smartphones, and they are now integrated into language arts courses.
And yet we insist on the distinction between using a tool to find information and abusing it to evade our obligation to study and produce our own original content.
What does Judaism have to say about this? Does the advent of generative AI signal the end of the privileged moral status of humanity — at least in our own eyes? If machines can generate reasonable speech, can they also make claims to personhood? What are the implications for Torah study and prayer? If ChatGPT generates a d’var Torah, does that fulfill the mitzvah of Torah study for people who hear it?
The Talmud, in Sanhedrin 65b, tells several intriguing stories about rabbis who employed mystical methods to create new forms of life. One begins:
“Rava created a man and sent him [to appear] before Rabbi Ze’era. He [Rabbi Ze’era] spoke to him, but he [the man] did not reply to him. [Rabbi Ze’era] said to him: You came from the fellowship [of magicians], return to your dust!”
This brief passage is the basis for medieval and early modern stories of rabbi-made androids, eventually known as golems. Notice that in this story Rabbi Ze’era identifies the man before him as artificial based on his inability to speak. This limitation became a standard feature of golem legends. Only God can grant the power of speech, and only humans have this power. See, for example, the comment of Rashi to Genesis 2:7: “humans have additional vitality, since they also possess intelligence and speech.”
The belief that humans are the only animals that can reason and speak has grown shakier by the year. Recent books, such as “An Immense World” by Ed Yong, illustrate the remarkable capacities of other species to sense and shape the world around them — capacities that exceed those of humans. Other authors have argued that plants and even fungi are capable of solving complex problems and communicating across their own networks. (See, for example, “The Hidden Life of Trees” by Peter Wohlleben and “Entangled Life” by Merlin Sheldrake.) True, these organisms do not communicate in the same way as humans do, but does this difference indicate the limits of their intelligence, or the limits of our imagination?
Because golems could not speak and were not the product of human reproduction, the rabbis denied them humanity. Thus, Rabbi Ze’era was judged not guilty of murdering the silent man who appeared before him, and later rabbis used this distinction to exclude golems, at least in theory, from a minyan.
However, it now appears that artificial intelligence can generate speech and might even fool experts with its persuasive abilities. The recent screenwriters’ guild strike was motivated in part by fears that generative AI could soon eliminate their jobs, much as automation has rendered other professions obsolete. Such concerns would not be salient if AI’s products were not so impressive.
It is time to draw distinctions before the exuberance occasioned by these developments causes us to forget the essence of our humanity and the basis of our identity as Jews. First, we do not believe that humanity is defined solely by cognitive and vocal abilities. A person who is incapable of speech remains a person, fully vested with the divine image and fully protected by human rights. Likewise with a person of limited cognitive abilities — it may be difficult for neurotypical people to understand and value the experience of differently abled individuals, but that is no excuse for dehumanizing them.
The advent of machines that generate speech should not diminish the value of humanity, but rather cause us to look deeper at the true foundation of our worth. Humans should be valued not only for their ability to generate novel content, but for their relationship to each other, to morality, and ultimately to the divine source of their lives.
Likewise with Torah study. The value of that study is based not on the quantity of text summoned, but on the meaning it has for a person who studies it and lives by its word. For many years I have relied on databases of Jewish texts such as the Bar Ilan Responsa Library and Sefaria to find texts and filter results. A small thumb drive contains many more sacred texts than I will read in my lifetime. Yet that device has no knowledge. It is not a servant of the Holy One. It does not study, and it does not observe. The silicon wafer encoded with texts is neither wise nor foolish, neither virtuous nor guilty.
Generative AI is different from databases, but not entirely. It uses data sets to learn patterns and predict the next letter, word, or phrase. It has no sense of meaning, deserves no credit for innovative thoughts, and bears no moral responsibility for its errors. AI is a tool, not a person.
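The pattern-learning the paragraph above describes can be illustrated with a deliberately tiny sketch. This is not how any modern large language model actually works internally (those use neural networks trained on vast corpora); it is a toy bigram model, with illustrative names and a made-up training sentence, showing how a program can "learn" from text and predict the next word purely from observed frequencies, with no sense of meaning at all:

```python
from collections import Counter, defaultdict

# Toy training text (illustrative only).
corpus = "the Torah is a tree of life to those who hold fast to it".split()

# "Learning": count which word follows which in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed successor of `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("tree"))   # predicts "of", because "of" followed "tree"
print(predict_next("shalom")) # None: the word never appeared in training
```

The program produces plausible continuations of familiar phrases, yet it plainly understands nothing and bears no responsibility for what it emits; scaled up by many orders of magnitude, that is the author's point about generative AI as a tool rather than a person.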
As machine learning is trained to generate ever more realistic texts, pictures, songs, and other approximations of human creativity, it becomes ever more important to mark the difference. Human morality is linked to our mortality, our obligations, our understanding. The sanctity of human life is based not on our utility, but on our very existence as people made from other people in the image of God.
Rabbi Danny Nevins is head of school at Golda Och Academy in West Orange. A scholar of contemporary Jewish law, he previously served as dean of the JTS Rabbinical School, and as senior rabbi of Adat Shalom Synagogue in Farmington Hills, MI.