Reading the Sermon on the Mount with ChatGPT

In the past several months, it has been difficult to avoid discussion of “artificial intelligence,” particularly ChatGPT and a host of other chatbots developed by technology companies. Based on the popularity of the topic (and the often hand-wringing tone of its coverage) within higher education and technology reporting, I decided I needed to see what ChatGPT would say about the Bible. In my conversations with the bot, I was struck by the fact that ChatGPT holds up a mirror to the North American church, as well as to the broader Western scholarly community, sharing three major shortcomings with us as we have been shaped by the spirit of our age.


First, the responses from ChatGPT metaphorize and individualize Scripture without a clear method for when and why, without warrant, and often in direct contradiction to the text itself. Second, the bot’s interpretations are ignorant of the interpretive tradition(s) that produce them. And third, as a disembodied bot, the interpretations themselves are necessarily disembodied and thus unable to recognize the realities of Scripture and interpretation.

Since human interpretation can fall prey to each of the above tendencies, I consider it a great honor to mentor a group of young undergraduate men who have committed to living out Jesus’ Sermon on the Mount in their everyday lives. Each of us is convinced that the Sermon contains teachings of Jesus that are meant to be followed. After all, Jesus himself emphasizes that his disciples should practice what he teaches (5:17–20 and 7:21–29). Thus, the true interpretive experiment is to start by living the teaching.  

For the sake of these men, then, I decided to explore how ChatGPT interprets the Sermon on the Mount, especially since many have said that undergraduate students are those most vulnerable to using this new technology as a shortcut. (As a side note, I have become convinced that undergraduates are not in fact the most likely to use this technology or the most at risk from it, but that is a different essay.)

What Is ChatGPT?

ChatGPT is a responsive large language model chatbot, essentially a souped-up version of a smartphone’s text prediction. ChatGPT is trained to respond to human dialogue partners in ways that make sense. What makes ChatGPT so revolutionary is that its training involved far more content than that of previous language models, yielding a model with about 175 billion parameters, far more than previous models had.1 The same kind of technology lies behind Google’s chatbot LaMDA and the new chatbot being integrated into Microsoft’s Bing search engine.

None of these technologies understands either language or the ways humans use language to point to real-world phenomena. Instead, the words they receive from a human interlocutor are treated as a series of tokens, and the chatbots return a further series of tokens that humans understand as words. This token-based nature of ChatGPT explains how its responses can quickly be shown to be wrong, ridiculous, or both. The words do not signify anything to the program. They are not representative of truth because truth is not something these bots can process. It is for this reason that the beginning of my conversation with ChatGPT turns out the way it does.
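To make the token idea concrete, here is a toy sketch in Python (my own illustration, not how GPT actually works): a “model” that predicts the next word purely from frequency counts in its training text. It has no grasp of what any word means; it only tracks which token tends to follow which.

```python
from collections import Counter, defaultdict

# Toy "training data": the model only ever sees sequences of tokens.
training_text = (
    "blessed are the poor in spirit "
    "blessed are the meek "
    "blessed are the merciful"
).split()

# Count which token follows which in the training text.
follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1

def predict_next(token):
    """Return the statistically most frequent successor of `token`,
    or None if the token never appeared with a successor."""
    counts = follows.get(token)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("blessed"))  # prints: are (it followed "blessed" every time)
print(predict_next("the"))      # a three-way tie, resolved by count order, not meaning
```

Real models like GPT-3 replace these frequency tables with billions of learned parameters and much longer contexts, but the principle stands: the next token is chosen statistically, not because the model understands what the words signify.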

ChatGPT Metaphorizes the Sermon on the Mount

All quotations from ChatGPT are verbatim, including spelling and grammar errors.

I opened by asking ChatGPT about interpreting the Sermon on the Mount:

Me: How should we interpret the sermon on the mount?

ChatGPT: The sermon on the mount is a set of teachings found in the Bible, specifically in the book of Matthew. In this sermon, Jesus teaches his followers about the importance of living a righteous and moral life, and offers guidance on how to do so. Many people interpret the sermon on the mount as a blueprint for how to live a good and meaningful life, and as a guide for how to treat others with kindness and compassion. Some key themes in the sermon on the mount include the importance of humility, forgiveness, and love. Ultimately, how the sermon on the mount is interpreted will depend on an individual’s personal beliefs and values.

Me: How do you interpret it?

ChatGPT: As a large language model trained by OpenAI, I do not have personal beliefs or values, so I cannot offer my own interpretation of the sermon on the mount. I am designed to provide general information and answer questions to the best of my ability, based on the information and knowledge that I have been trained on. I am not capable of personal opinions or interpretations.

So far, this response is okay. ChatGPT accurately states that it cannot offer an interpretation of the Sermon on the Mount. But it quickly became apparent that, when asked the right questions, ChatGPT would indeed interpret the Sermon on the Mount, whether by providing a specific interpretation or by providing several and adjudicating among them, though it could not explain its method of interpretation.

For example, when asked about Jesus’s teachings in Matthew 5:29–30 (“If your right eye causes you to sin . . .”), ChatGPT responded, “It is important to note that this passage should not be taken literally. Jesus is using figurative language to make a point about the seriousness of sin and the importance of avoiding it. He is not literally instructing his followers to mutilate their bodies in order to avoid sin.”

It’s not that I disagree with this claim (although its claim that “most Christians do not interpret Jesus’ statement . . . as a literal command to physically harm oneself” may be subject to some scrutiny), but I was struck by the firmness and certainty it expressed when only two questions earlier it told me it was incapable of interpreting the Sermon.

The more that I explored specific teachings with ChatGPT, the more it seemed to pitch towards a metaphorical interpretation of Jesus’s teachings. So, I finally asked it whether there might be a specific teaching that should be taken literally:

Me: Where is one place in the sermon on the mount where Jesus speaks literally?  

ChatGPT: It is difficult to say with certainty where in the sermon on the mount Jesus speaks literally, as the use of figurative language in the Bible is complex and often debated. The sermon on the mount . . . contains many examples of figurative language. In general, the Bible often uses figurative language to illustrate a concept or to make a point, and it is not always clear whether a specific passage is meant to be understood literally or figuratively.

One possible example . . . where Jesus speaks literally is Matthew 5:38-39, where he says: “You have heard that it was said, ‘An eye for an eye and a tooth for a tooth.’ But I say to you, Do not resist the one who is evil. But if anyone slaps you on the right cheek, turn to him the other also.” In this passage, Jesus is quoting a well-known proverb, “an eye for an eye and a tooth for a tooth,” and he is then offering his own teaching on how to respond to violence and aggression. He is not using figurative language, but rather is offering a clear and straightforward instruction about how his followers should respond to such situations.

However, it is important to note that this is only one possible interpretation of this passage, and there may be other ways to understand it.

Even ChatGPT’s identified teaching where Jesus spoke literally comes with cautions about how this is only one possible interpretation, despite its strong assertion earlier in the answer that Jesus “is not using figurative language, but rather is offering a clear and straightforward instruction.” No such cautions accompanied its interpretations that claimed Jesus was speaking figuratively. Moreover, as I continued to press it on these specific teachings, it ended up claiming that anyone who takes Jesus’s teachings about non-violence literally is “pollyannish.” When I questioned it about that claim, it retreated and apologized. But it continued to maintain its caution around taking this and other teachings in the Sermon on the Mount literally. (Two additional examples of this: that Matt 5:32 does not forbid divorce, but simply prizes marriage and that Matt 5:22 is not actually about someone being in danger of hell, but rather emphasizes the seriousness of anger.) As I questioned its process of arriving at these interpretations, ChatGPT pointed to its training on a large number of texts, including translations of the Bible and literature about the Bible. It also stated that its claims are based on “the broader message of the Bible.”

ChatGPT’s Ignorance of Its Training and Preference for Individual Interpretations

After I queried ChatGPT’s understanding of the broader message of the Bible, it repeated that its training gave it that understanding. Yet it also stated that it is unable to say which translations or writings on the Bible were used in its training. (Although it can and does reference particular translations, it will not confirm whether such translations were used in its training.) Its framework of a broader message of the Bible, though, led it on occasion to turn to Paul’s writings.

Our discussion of Matthew 5:29–30 brought us to 1 Corinthians, because I was trying to suss out what ChatGPT means when it produces lines of text like “the Bible consistently teaches,” a passage is “generally understood to mean,” and “most Christians understand [a text] as.” In the process, ChatGPT revealed a tendency to individualize passages that are actually addressed to a community. This tendency emerged when I posed the following question:

Me: How did you arrive at the conclusion, then, that the “Bible consistently teaches that harming oneself is not an acceptable or moral response to temptation or sin,” when historically Christians have denied themselves food, water, medical care, sleep, etc. in attempts to stave off sin?

ChatGPT: I apologize for any confusion that my earlier response may have caused. While it is true that some Christians have practiced self-denial or self-mortification as a means of resisting temptation or sin, the statement that “the Bible consistently teaches that harming oneself is not an acceptable or moral response to temptation or sin” is not accurate or complete. The issue of self-harm or self-mortification is complex and has been subject to different interpretations throughout Christian history, and the Bible itself contains passages that could be interpreted as supporting or condemning such practices.

ChatGPT goes on to reassert its interpretation that Jesus was speaking figuratively in Matt 5:29–30 and then summons Paul in 1 Cor 6:19–20 for backup, making the striking claim that “our bodies are temples of the Holy Spirit and should be honored and kept holy.” In doing so, ChatGPT implies that this verse is referring to individuals rather than to the church as a whole. (I did explore the use of the first-person plural pronoun here, asking after its own body and whether the Holy Spirit dwells “in” it, but alas that conversation was rather banal.)

This response and our continued chat about 1 Corinthians 6 illustrate the bot’s tendency to assert an interpretation of a biblical text confidently, to admit an error about a detail when questioned, and then to change the interpretation while conceding inaccuracy or exaggeration. Yet the bot usually wraps that admission in a reassertion of the very claim it has just undermined.

Our discussion of Paul’s statements in 1 Corinthians shows this. I pointed out that Paul uses a plural second person pronoun in 1 Cor 6, not a singular pronoun, and then asked about its individualizing interpretation. After conceding that the pronoun is plural, it defended its individualizing reading before concluding with this statement:

Furthermore, throughout the New Testament, the concept of the individual believer’s body as a temple of the Holy Spirit is emphasized. For example, in 1 Corinthians 3:16-17, Paul writes: “Do you not know that you are God’s temple and that God’s Spirit dwells in you? If anyone destroys God’s temple, God will destroy him. For God’s temple is holy, and you are that temple.” Here, the language is even more explicitly directed at the individual believer, as Paul uses the second person singular pronoun “you” to address each individual Corinthian believer.

But Paul does not use a singular second person pronoun in 1 Cor 3. When I asked, ChatGPT admitted the mistake, but asserted that 1 Cor 6, the passage we had just discussed, supported its individualizing claim about chapter 3 because chapter 6 uses the singular pronoun! The conversation continued until ChatGPT finally concluded the opposite of what it had originally claimed: “Based on the evidence from Paul’s theology, the literary context, and the use of language in the Greek text, the most well-supported interpretation of Paul’s temple imagery in 1 Corinthians is the collective interpretation.” All it took for the chatbot to flip its interpretation was a series of questions asked in the Socratic style.

In our discussion, then, of 1 Corinthians and of ChatGPT’s training and understanding of the broader message of the Bible, we see both its preference for individualizing interpretations of the Bible and its ignorance of the interpretive methods and traditions that lead it to the interpretations it provides. ChatGPT can quote various English translations when asked, it can respond in excellent ancient Greek, and it can discuss the finer points of grammar in the Greek text of the NT.

However, it will not (cannot?) give an account of which translations and interpretations of the Bible were used in its training. It therefore lacks the wherewithal to push back on my questions about 1 Corinthians by appealing to the well-reasoned and valid history of interpretation that treats both 1 Cor 3 and 1 Cor 6 as addressed to individuals, a reading grounded not only in the permissibility of the grammar but also in the broader history of interpreting Paul and the Bible. Instead, in the face of my prodding questions, it retreated and flipped its own assertions.

The Challenge of Interpreting without Experiencing

Why does ChatGPT continue to produce figurative and metaphorical interpretations of Jesus’s teachings? Why is it so easy to convince the chatbot to flip its claims on something like Paul’s use of temple imagery? There are at least two reasons: first, ChatGPT has no account of its own training or of the traditions informing its interpretations; and second, ChatGPT has no connection to lived experience or reality. As it confidently asserted when I first asked it, it has no “personal beliefs or values.” Despite this, it vigorously pursues an interpretation when asked, privileging certain perspectives and sometimes ruling out or excluding others. It does so because the words are a statistical game, not Scripture to be lived. It is only parroting what it has been trained on, a body of texts that it cannot share because it seemingly no longer knows (if it ever knew) what they are, if “know” is even the proper term.

This presents a two-fold problem for Christians who might seek out information about the Bible from ChatGPT. First, one cannot be certain of the sources of the perspectives offered by ChatGPT. Jesus asserts several times in Matthew that his true disciples may be known by the fruits evident in their lives (Matt 7:15–20, 12:33–37, 21:33–46). If one cannot access the life of the interpreter and thus the fruits it has produced, how might the Christian know whether the interpretation comes from a true disciple of Jesus?

Second, ChatGPT and other large language models are “black boxes,” meaning we do not know what is happening to generate the responses they provide. Both Christianity and Judaism have historically emphasized engaging with the past and present religious community and that community’s interpretations of sacred texts and traditions. ChatGPT, however, denies that it is interpreting while simultaneously pulling a sleight of hand: it interprets the Sermon on the Mount through a tradition that it has hidden, even from itself. For ChatGPT, there is no community debating, discerning, and living their way into and through interpretations. There is only the statistical game of language tokens.

Finally, as the young men I mentor know well, there is a knowledge that comes only through living the teachings of Jesus. ChatGPT cannot do this and it never will be able to do it. As it told me when I asked about its use of “our” in discussing bodies as temples of the Holy Spirit, the Scriptures are “not applicable” to it. Because it has no body, it has no experience of the topics signified by the words it interprets and uses.

The Lessons We Can Learn from ChatGPT’s Interpretation

In the end, ChatGPT offers us a mirror and a caution. ChatGPT’s interpretations of the biblical texts are a distillation of the spirit of our age. Like ChatGPT, we tend toward metaphorical interpretations of Jesus’ words in the Sermon on the Mount. Like ChatGPT, we tend toward individualizing interpretations of Paul. And like ChatGPT, we tend to forget or be ignorant of whose interpretations have formed our own interpretations of the text. These tendencies are interconnected. After all, it is not the tendency to metaphorize Scripture that is the problem, but rather the default to a metaphorizing interpretation as a kind of short circuit when we are unable or unwilling to examine the interpretive tradition and application of the text. ChatGPT cannot do these latter two things, but we can. Thus, ChatGPT exposes us to ourselves.

This experience with ChatGPT therefore cautions us to weigh the value of the interpretive tendencies it exposes. Will we treat the Bible as a statistical game of tokens? As words and ideas to be bandied about in disembodied dialogue? Or will we take up the challenge of Jesus to be wise, to hear his teachings and to do them (Matt 7:24) in community with one another? In a word, will we outsource our understanding of the Scriptures? Or will we choose to live together, seeking the meaning of these texts through our history and our common commitment to God, and thus teaching them to those who come after us (Matt 28:20)?

End Notes

1. These figures refer to the ChatGPT model built on GPT-3, and the conversations reported here also used the GPT-3 model. Since the writing of this piece, OpenAI has integrated a model based on GPT-4 into a fee-based ChatGPT interface. OpenAI boasts that GPT-4 is safer (more guardrails are in place against harmful or offensive content) and more sophisticated and natural in its responses because it has been trained on more data and has more computational power than GPT-3.

Image created by Rubner Durais
