
Social Justice Webinar: Religion and AI

Thursday, September 14, 2023
Speakers

Josh Franklin
Senior Rabbi, Jewish Center of the Hamptons (East Hampton)

Noreen Herzfeld
Professor of Theology and Computer Science, College of Saint Benedict and Saint John's University

Presider

Johana Bhuiyan
Senior Technology Reporter and Editor, The Guardian

Josh Franklin, senior rabbi at the Jewish Center of the Hamptons, and Noreen Herzfeld, professor of theology and computer science at the College of Saint Benedict and Saint John’s University, discuss how AI is affecting religious communities and the relationship between science, technology, and religion. Johana Bhuiyan, senior tech reporter and editor for the Guardian, moderated. 

Learn more about CFR's Religion and Foreign Policy Program.

FASKIANOS: Welcome to the Council on Foreign Relations Social Justice Webinar Series, hosted by the Religion and Foreign Policy Program. This series explores social justice issues and how they shape policy at home and abroad through discourse with members of the faith community. I’m Irina Faskianos, vice president of the National Program and Outreach here at CFR.

As a reminder, this webinar is on the record and the video and transcript will be available on CFR’s websites, CFR.org, and on the Apple podcast channel Religion and Foreign Policy. As always, CFR takes no institutional positions on matters of policy.

We’re delighted to have Johana Bhuiyan with us to moderate today’s discussion on religion and AI. Johana Bhuiyan is the senior tech reporter and editor at the Guardian, where she focuses on the surveillance of disenfranchised groups. She has been reporting on tech and media since 2013 and previously worked at the L.A. Times, Vox Media, BuzzFeed News, and Politico New York. She attended Lehigh University, where she studied journalism as well as global and religion studies. She’s going to introduce our panelists, lead the discussion, and then we’re going to invite all of you to ask your questions and share your comments. So thank you, Johana. Over to you.

BHUIYAN: Thank you so much, Irina. Thank you, everyone, for joining us. As Irina said, my name is Johana Bhuiyan, and I cover all the ways tech companies infringe on your civil liberties.

And so today we’ll be talking about a topic that’s not completely unrelated to that, but is a bit of a tangent: “Religion and AI.” AI, unfortunately, is a term that suffers from being both loosely defined and often misused, so I want to be a little bit specific before we begin. For the most part, my feeling is this conversation will focus on generative AI tools, the role they play in religious communities and for faith leaders, and some of the issues and concerns with that. That being said, if the conversation goes in other directions, I will take it there. I would also love to touch on religious communities’ roles in thinking about and combating the harms of other forms of AI. But again, we’ll be focusing largely on generative AI.

And today with us we have two really wonderful panelists who bring different perspectives on this. Both are really well-versed in theology, of course, as well as in artificial intelligence and computer science.

First, we have Rabbi Josh Franklin, who wrote a sermon with ChatGPT that you may have read about in news articles, including one of mine. He is the senior rabbi at the Jewish Center of the Hamptons in East Hampton, and he co-writes a bimonthly column in Dan’s Papers called “Hamptons Soul,” which discusses issues of spirituality and justice in the Hamptons. He received his ordination at Hebrew Union College and was the recipient of the Daniel and Bonnie Tisch Fellowship, a rabbinical program exploring congregational studies, personal theology, and contemporary religion in North America.

And we also have Noreen Herzfeld, who most recently published a book titled The Artifice of Intelligence: Divine and Human Relationship in a Robotic World. That was published by Fortress, so go out and get a copy. She is the Nicholas and Bernice Reuter Professor of Science and Religion at St. John’s University and the College of St. Benedict, where she teaches courses on the intersection of religion and technology. Dr. Herzfeld holds degrees in computer science and mathematics from Pennsylvania State University and a PhD in theology from the Graduate Theological Union in Berkeley. Thank you both so much for having this conversation with me.

FRANKLIN: Thank you for having us.

BHUIYAN: I do want to set the stage a little bit. I don’t want to assume anyone has a very thorough knowledge of all the ways AI has seeped into our religious communities. In particular, when people think of ChatGPT and other chatbots like that, they’re not necessarily thinking, OK, well, how is it used in a sermon? How is it used in a mosque? Or how is it used in this temple? We’ve had one-off situations like, Rabbi Franklin, your sermon. But I think it’d be great to get an idea of how else you’ve been seeing ChatGPT and other chatbots used in both of your respective worlds and communities.

One example I can give before I turn it over is that there was a very short-lived chatbot called HadithGPT, which purportedly would answer questions about Islam based on Hadiths, which are the life and sayings of the Prophet, peace be upon him. But immediately the community was like, one, this is really antithetical to the rich scholarly tradition of Islam. Two, the questions that people might be asking can’t only be answered by Hadiths. And, three, chatbots are not very good at being accurate. And so the people behind it immediately shut it down. I want to turn it over to you first, Rabbi Franklin. Is there a version of HadithGPT in the Jewish community? Are you still using ChatGPT to write sermons? Or what other use cases are you seeing?

FRANKLIN: I actually did see some kind of parallel within the Jewish world to HadithGPT. It was RabbiGPT, something along those lines. But actually, Google has already done a great job for years of answering very trivial questions about Judaism. So if you want to know where a particular quote comes from in the Torah, you type it into Google and you get the answer. And if you want to know how many times you shake the lulav, this traditional plant that we shake on Sukkot, you can find that on Google. ChatGPT is the same in terms of purveying information and generating trivial content or answering trivial questions. That far surpasses any rabbi’s ability, really. It’s a dictionary or encyclopedia of information.

But religion goes far beyond answering simple questions. We’re asking major questions, ultimate questions about the nature of life, and I don’t think artificial intelligence is quite there yet. When you get into the philosophical, the ethical, the moral, the emotional, that’s when you start to see the breakdown in terms of how far artificial intelligence can really go in answering these kinds of questions.

BHUIYAN: Right. And I do want to come back to that, but I first want to go to Noreen. I mentioned that the immediate reaction to HadithGPT was, OK, this is antithetical to the scholarly tradition within Islam. But is there a way that religious scholars and researchers, or people who are trying to advance their knowledge about a particular faith, are using ChatGPT and other chatbots to actually do that in a useful and maybe not scary and harmful way? (Laughs.)

HERZFELD: Well, I’m in academia. And so, of course, ChatGPT has been a big issue among professors as we think about, are our students going to be using this to do their assignments? And there’s a lot of disagreement on whether it makes any sense to use it or not. I think right now, there’s some agreement that the programs can be helpful in the initial stages. So if you’re just brainstorming about a topic, whether you’re writing an academic paper, or writing a homily, or even preparing for, let’s say, a church youth group or something, it can be helpful if you say, give me some ideas about this topic, or give me some ideas for this meeting that we’re going to have.

But when it comes to a more finished product, that’s the point where people are saying, wow, now you have to really be careful. Within the Christian tradition there are now generative AI programs that supposedly explicate certain verses or pericopes in the Bible. But they tend to go off on tangents. Because they work stochastically, just deciding what word or phrase should come next, they’ll attribute things to the Bible that aren’t there. And so, right now I think we have to warn people to be extremely careful.

There have been earlier AIs. Germany had a robot called BlessU-2. If someone asked it for a prayer about a particular situation, it would generate a prayer. If someone asked it for a Bible verse that might fit a particular setting, it actually would come out with a real Bible verse. But—and this goes back to something Josh said, or something that you said about the Hadith—the Christian tradition is an extremely embodied tradition. When you go to mass, you eat bread, you drink wine, you smell incense, you bow down and stand up. The whole body is a part of the worship. And that’s an area where AI, as something that is disembodied, that’s only dealing with words, can’t catch the fullness. I think one would find the same thing in the Muslim tradition, where you’re prostrating yourself, you’re looking to the right and the left. It’s all involving the whole person, not just the mental part.

FRANKLIN: Yeah, I’d phrase some of that a little bit differently: the biggest thing lacking in AI is definitely a sense of spirituality. And I think part of the reason is that spirituality has to do with feeling more than it does with data. Whereas AI can think rationally, can think in terms of data, and can actually give you pseudo-conclusions that might sound spiritual, at the end of the day spirituality is really about ineffability. That is, you can’t use words to describe it. So when you have a generative language model trying to describe something that’s really a feeling, that’s really emotional, that’s really a part of the human experience, even the best poets struggle with this. So maybe AI will get better at trying to describe something that, up until now, has very much been about emotion and feeling.

But at the end of the day, I really don’t think that artificial intelligence can understand spirituality nor describe spirituality. And it definitely can’t understand it, because one of the things that AI lacks is the ability to feel. It can recognize emotion. And it can do a better job at recognizing emotion than, I think, humans can, especially in terms of cameras, being able to recognize facial expressions. Humans are notoriously bad at that. Artificial intelligence is very good at that. So it can understand what you might be feeling, but it can’t feel it with you. And that’s what genuine empathy is. That’s what religion is at its best, where it’s able to empathize with people within the community and be in sacred encounter and relationships with them. And although AI can synthesize a lot of these things that are extraordinarily meaningful for human encounter and experience, it’s not really doing the job of capturing the meat of it, of capturing really where religion and spirituality excel.

BHUIYAN: Can I—

HERZFELD: I’m sorry, but to underline the importance of emotion, when people talk about having a relationship with an AI, and especially expecting in the future to have close relationships with an AI, I often ask them: Well, would you like to have a relationship with a sociopath? And they’re like, well, no. And I said, but that’s what you’re going to get. Because the AI might do a good job of—you know, as Josh pointed out, it can recognize an emotion. And it can display an emotion if it’s a robot, or if there’s, let’s say, an avatar on a screen. But it doesn’t ever feel an emotion. And when we have people who don’t feel an emotion but might mentally think, oh, but what is the right thing to do in this situation, we often call those people sociopaths. Because they just don’t have the same empathetic circuit to feel your pain, to know what you’re going through. And coming back to embodiment, so often in that kind of situation what we need is a touch, or a hug, or just someone to sit with us. We don’t need words. And words are all the generative AI has.

FRANKLIN: I would agree with you like 99.9 percent. There’s this great scene in Sherry Turkle’s book, Alone Together. I don’t know if you read it.

HERZFELD: Yes.

FRANKLIN: She talks about this nursing home where they have this experimental robotic pet that would just kind of sit with you and make certain comforting sounds, the sounds a pet would make. And people found it so comforting. They felt like they had someone listening to them, responding to what they were saying, although it really wasn’t. It was synthetic. And for Sherry Turkle, who’s this big person in the tech world, it kind of automatically transformed her whole perspective on what was going on in such an encounter. She transformed her perspective on technology based on this one little scene that she saw in this nursing home. Because it was sociopathic, right? This thing doesn’t have actual emotion. It’s faking it, and you can’t be in legitimate relationship with something that isn’t able to reciprocate emotion. It might seem like it.

And I know, Noreen, I asked you a question a little earlier—before we got started with this—about Martin Buber, who I do want to bring up. Martin Buber wrote this book exactly 100 years ago, I and Thou, which at the time really wasn’t all that influential, but became very influential in the field of philosophy. And Martin Buber talks about encounter that we have with other individuals. He says most of our transactions that we have between two people are just that, transactional. You go to the store, you buy something, you give them cash, they give you money back, and you leave. But that’s an I-it encounter. That person is a means to an end.

But when you’re really engaged with another human being in relationship, there’s something divine, something profound that’s happening. And he says, through that encounter, you experience God; that spark that’s within that encounter, that’s God. And I have changed my tune during the age of COVID and being so much on Zoom, to say that, actually, I do believe you can have an encounter with another individual on Zoom. That was a stretch for me. I used to think no, no, you can’t do that unless you have that touch, that physical presence, that being with another human being. But in terms of having an encounter with artificial intelligence, no matter how much it might be able to synthesize the correct response, it can’t actually be present, because it’s not conscious. And that’s a major limitation in terms of our ability to develop relationships or any kind of encounter with something that’s less than human.

HERZFELD: Yeah. It seems to fake consciousness, but it doesn’t actually have the real thing. The Swiss theologian Karl Barth said that to have a truly authentic relationship you need four things. And those were to look the other in the eye, to speak to and hear the other, to aid the other, and to do it gladly. And the interesting thing about those four, I mean, to look the other in the eye, that doesn’t mean that a blind person cannot have an authentic relationship. But it is to recognize the other is fully other and to recognize them as fully present. To speak to and hear the other, well, you know, AI is actually pretty good at that. And to aid the other—computers aid us all the time. They do a lot of good things.

But then you get to the last one, to do it gladly. And I think there is the real crux of the matter, because to do it gladly you need three things. You need consciousness, you need free will, and you need emotion. And those are the three things that AI really lacks. So far, we do not have a conscious AI. When it comes to free will, well, how free, really, is a computer to do anything but what it’s programmed to do? And then, can it do anything gladly? Well, we’ve already talked about it not having emotion. So it cannot fulfill that last category.

FRANKLIN: Yeah, it does all of this almost so well. And I really stress “almost.” We really do confuse intelligence and consciousness quite often. In fact, AI can accomplish through algorithms a lot of the tasks that we accomplish emotionally. It’s kind of like how a submarine can go underwater without gills, but it’s not a fish. It’s accomplishing the same thing, but it’s not really the same thing. It’s not living. It doesn’t have anything within it that enables us to be in relationship with it. And yeah, I love those four criteria that you mentioned. Those are really great and helpful.

HERZFELD: And you just mentioned that it’s not living. When you were talking about the pet in the nursing home, I was thinking, well, there are degrees of relationality. I can be soothed by a beautiful bouquet that somebody brings if I’m in the hospital, let’s say, just looking at the flowers. And certainly everyone knows now that we lower our blood pressure if we have a pet, a cat or a dog, that we can stroke. And yet, I feel like I have a certain degree of relationship with my dog that I certainly don’t have with the flowers in my garden, because the dog responds. And sometimes the dog doesn’t do what I tell her to. She has free will.

There’s another story in that same book by Sherry Turkle where instead of giving the patient in the nursing home this robotic seal, they give them a very authentic-looking robotic baby. And what was really sad in that story was that one of the women so took to this robotic baby, and to cradling it and taking care of it, that she ignored her own grandchild who had come to visit her. And Sherry Turkle said at that point she felt like we had really failed. We had failed both the grandchild and the grandmother. And that’s where I think we fail.

One of the questions that keeps bedeviling me is, what are we really looking for when we look for AI? Are we looking for a tool or are we looking for a partner? In the Christian tradition, St. Augustine said, “Lord, you have made us for yourself and our hearts are restless until they rest in you.” I think that we are made to want to be in relationship, deep relationship, with someone other than ourselves, someone that is not human. But as we live in a society where we increasingly don’t believe in God, don’t believe in angels, don’t believe in the presence of the saints, we’re looking for a way to fill that gap. And I think many people who are not religious are looking to AI to somehow fill this need to be in an authentic relationship with an other.

BHUIYAN: And we’re talking a lot about that human connection. And, Noreen, you said this in your book, that AI is an incomplete partner and a terrible surrogate for other humans. And it sounds like both of you agree that there is not a world where AI, in whatever form, could sufficiently replace, or even come close to replacing, that human connection. But on a practical note, Rabbi Franklin, you mentioned Rabbi Google. You know, a lot of faith practices are incredibly, to reuse the word, practice-centric, right? That is the building block of the spirituality. Within the Muslim community, of course, right, the five daily prayers. There’s a version of this in many different faith practices.

And so if people are seeking answers about the practical aspect of their spirituality from a tool, even if they’re thinking, yeah, this is a tool, trust but verify; if they’re seeking those answers from a tool that has a tendency to hallucinate or make mistakes, is there a risk that they will over-rely on it, and that it will create a sort of friction between them and the community? Because, I’ll admit it, as someone who practices a faith and also is well-versed in the issues with Google and the misinformation that it can surface, I will still Google a couple—(inaudible). I will turn to Google and be, like: How do I do this particular prayer? I haven’t done it in a very, very long time.

And of course, I’m looking through and trying to make sure that the sources are correct. But not everyone is doing that. Not everyone is going through with a fine-tooth comb. And ChatGPT, given how almost magical it feels to a lot of people, there is even less of a likelihood that they will be questioning it. And it is getting more and more sophisticated. So it’s harder to question. So is there a concern within religious communities that this tool will become something that will create even one more obstacle between a person and their faith leader, or their clergy, or their local scholars?

FRANKLIN: I’m not that worried about it. I think what synagogues and faith-based communities do is something that’s really irreplicable by ChatGPT. We create community. We create shared meaningful experience with other people. And there is a sense that you need physical presence in order to be able to do that. Having said that, yeah, I use ChatGPT as a tool. I think other people will use it too. And it will help a lot with getting the information that you need in a very quick, accessible way. Sometimes it’s wrong. Sometimes it makes mistakes. I’ll give you an example of that.

I was asking ChatGPT, can you give me some texts from Jewish literature on forgiveness? And it gives me this text about the prodigal son. And I typed right back in, and I said: That’s not a Jewish text. That’s from the Gospels. And it says, oh, you’re right. I made a mistake. It is from the Gospels. It’s not a Jewish text. I actually thought the most human thing that it did in that whole encounter was admit that it was wrong. Or maybe that’s not so human, because human beings often have an inability to admit that we were wrong. But I actually loved the fact that it admitted, oh, I made a mistake, and it didn’t double down on its mistake.

It’s learning and it’s going to get better. I think if we measure artificial intelligence by its current form, we’re really selling it short for what it is going to be and how intelligent it actually is. And, by the way, I think it is extraordinarily intelligent, probably more intelligent than any of us. But we have human qualities that artificial intelligence can never really possess. And I think the main one, which we already touched on, is consciousness. The experiences that you get within a faith-based community are experiences that relate specifically to human consciousness, not to intelligence.

People don’t come to synagogue to get information. I hope they go to ChatGPT or Google for that. That’s fine. People come to synagogue to feel something more within life, something beyond the trivial, something that they can’t get by reading the newspaper, that they can’t get by going on Google. It’s a sense of community, a sense of relationship. And so I don’t think that artificial intelligence is going to distract from that. Yeah, I guess it’s possible, but I’m not too worried about it.

BHUIYAN: And—go ahead, Noreen, yeah.

HERZFELD: I was just going to say, I think you need to be a little careful when you say it’s more intelligent than we are. Because there are so many different kinds of intelligence.

FRANKLIN: Yes. IQ intelligence, let me qualify.

HERZFELD: If intelligence is just having immediate access to a lot of facts, great, yeah. It’s got access we don’t have. But there’s also emotional intelligence, which we’ve already discussed, and there’s having models of the world. This is often where these large language models break down: they don’t have an interior model of the world and the way things work in it, whether that’s the physical world or the social world. And so they’re brittle around the edges. If something hasn’t been discussed in the texts it has been trained on, it can’t extrapolate from some kind of basic mental model, which is the way we do things when we encounter something brand new. So, in that sense, it’s also lacking something that we have.

BHUIYAN: There’s a question from the audience that I think is a good one. It sounds to me, and correct me if I’m wrong, that, Noreen, you in particular believe that the doomsday scenario people are always talking about, where AI becomes sentient, takes over, and we become subservient to it, is unlikely. And so the question from the audience is: It seems like most of the arguments are that we can tell the difference, so AI won’t replace human connection. But what happens if and when AI does pass the Turing test? Is that something that you see as a realistic scenario?

HERZFELD: Oh, in a sense we could say AI has already passed the Turing test. If you give a person who isn’t aware that they’re conversing with ChatGPT some time to converse with it, they might be fooled. Eventually ChatGPT will probably give them a wrong answer. But then, like Josh said, it’ll apologize and say, oh yeah, I was wrong. Sorry. So we could say that, in a sense, the Turing test has already been passed.

I am not worried about a superintelligent being that’ll decide it doesn’t need human beings, or whatever. But I’m worried about other things. I think in a way that’s a red herring that distracts us from some of the things we really should be worried about. And that is that AI is a powerful tool that is going to be used by human beings to exert power over other human beings, whether by advertently or inadvertently building our biases into this tool so that it treats people in a different fashion.

I’m also worried about autonomous weapons. They don’t need to be superintelligent to be very destructive. And a third thing that I’m worried about is climate change. And you might say, well, what has that got to do with AI? But these programs, the large language models like ChatGPT, take a great deal of power to train, and a great deal of power to run. If you ask a simple question of ChatGPT instead of asking Google, you’re using five to ten times the electricity, probably generated by fossil fuels, to answer that question. So as we scale these models up, and as more and more people start using them more and more of the time, we are going to be using more and more of our physical resources to power them.

And most of us don’t realize this, because we think, well, it all happens in the cloud. It’s all very clean, you know. This is not heavy industry. But it’s not. It’s happening on huge banks of servers. And just for an example, one of Microsoft’s new server farms in Washington state is using more energy per day than the entire county that it’s located in. So we just are not thinking about the cost that underlies using AI. It’s fine if just a few people are using it, or just using it occasionally. But if we expect to scale this up and use it all the time, we don’t have the resources to do that.

BHUIYAN: Yeah, and you mentioned electricity. A couple of my coworkers have done stories about the general environmental impact. But it’s also water. A lot of these training models use quite a bit of water to power these machines.

HERZFELD: To cool the machines, yeah.

BHUIYAN: And so, yeah, I’m glad that you brought that up, because that is something I think about quite a bit in covering surveillance, right? Religious communities are these incredibly strong communities that can have a really huge social impact. And we’ve had various versions of AI for a very, very long time that have harmed religious communities and other marginalized groups. You mentioned a couple of them.

Surveillance is one of them. There are also things that feel a little more innocuous but have bias and discrimination built into them, like hiring algorithms, mortgage-lending algorithms, and algorithms that decide whether someone should qualify for bail.

And so my general question is, is there a role that religious communities can play in trying to combat those harms? How much education should we be doing within our communities to make sure people are aware that it’s not just a fun, quirky tool that will answer your innocuous questions? AI is also powering far more harmful and very damaging tools as well.

FRANKLIN: I’d love for religious leaders to be a part of the ethics committees that sit at the top of how AI makes certain decisions that are going to be a part of everyday real life. So, for example, when your self-driving car is driving down the road and a child jumps out into the middle of the street, your car has to either swerve into oncoming traffic, killing the driver, or hit the child. Who’s going to decide how the car behaves, how the artificial intelligence behaves?

I think ethics are going to be a huge role that human beings need to take in terms of training AI, and I think religious leaders, as well as ethicists and philosophers, really need to be at the head, not the programmers, who aren’t really trained in ethics and philosophy, or in spirituality and religion, for that matter.

I really think that we need to be taking more of an active role in making sure that the programming of artificial intelligence has some kind of strong ethical basis, because I think the biggest danger is who’s sitting in the driver’s seat. Not in the car scenario but, really, who’s sitting in the driver’s seat of the programming.

BHUIYAN: Noreen, do you have anything to add onto that?

HERZFELD: No, I very much agree with that. I do think that if we leave things up to the corporations that are building these programs, the bottom line is going to be what they ultimately consult. I know that at least one car company—I believe it’s Mercedes-Benz—has publicly said that in the scenario that Josh gave, the car is going to protect the driver. No matter how many children jump in front of the car, the car will protect the driver. And the real reason is that they feel like, well, who’s going to buy a car that wouldn’t protect the driver in every situation? If you had a choice between a car that would always protect the driver and a car that sometimes would say, no, those three kids are more valuable—

FRANKLIN: And that’s a decision made by money, not made by ethics.

HERZFELD: Exactly.

FRANKLIN: Yeah.

BHUIYAN: Right. Rabbi Franklin, I have a question, and there’s a good follow-up from the audience. Are there ethics committees that you know of right now that are dealing with this issue? And then the question from the audience, from Don Frew, is: How do we get those religious leaders onto those committees?

FRANKLIN: We have to be asked, in short, in order to be on those committees. I don’t know if it’s on the radar even of these corporations who are training AI models. But I think there are going to be very practical implications coming up in the very near future where we do need to be involved in ethical discussions.

There are religious leaders who sit on all sorts of different ethics committees, but as far as I know there’s nothing set up specifically related to AI. That doesn’t mean there isn’t; I just don’t know of any.

But, if you were to ask me: right now we’ve seen articles about the decline of the humanities in colleges and universities. If I had to make a prediction, I would actually say that the humanities are probably going to make a comeback, because these ethical, philosophical, spiritual questions are going to be more relevant than ever. And if you’re looking at programming, law, and medicine, those are actually fields where AI is going to be more aggressive, playing a larger role in doing the things that humans are able to do.

BHUIYAN: Right. I do want to bring the question or the conversation back to, you know, religion, literally.

In your book, Noreen, you bring up a question that I thought was just so fascinating: whether we should be deifying AI. And it sounds like the short answer is no. But my fascination is with how realistic a risk that is. One example I knew off the top of my head is the Church of AI, which has since been shut down. It was started by Anthony Levandowski, a former Google self-driving-car engineer who was later pardoned for stealing trade secrets. So, yeah, take what he says with a grain of salt, I guess is what I’m saying.

But the church was created to be dedicated to, quote, “The realization, acceptance, and worship of a godhead based on AI developed through computer hardware and software.”

Is this a fluke? Is this a one-off? Do you think there’s a real risk that, as AI gets more sophisticated, people will treat it as a kind of godlike figure, if that’s the right word, some sort of god?

FRANKLIN: It sounds like a gimmick to me. I mean, look, it’s definitely going to capture the media headlines for sure. You do something new and novel like that, no matter how ridiculous it is, people are going to write about it. And it’s not surprising that it failed, because it didn’t really have a lot of substance.

At least I hope the answer is no, that that’s not going to be a real threat or that’s not going to be a major concern. Who knows? I mean, I really think that human beings are bad at predicting the future. Maybe AI will be better at predicting the future than we are. But my sense, for what it’s worth, is that no, that’s not really a concern.

HERZFELD: Well, I would be a little more hesitant to say it’s not any type of a concern. I do not think there are suddenly going to be a lot of churches like the one you mentioned springing up, in which people deify AI in the same sorts of ways in which we’ve worshipped God.

But we worship a lot of stuff. We worship money all too often. We worship power. And we can easily worship AI if we give it too much credence.

What worries me is if we really believe that everything it says is true, that what it does is the pinnacle of what human beings do, that it’s all about intelligence. I’ve often thought that we’re trying to make something in our own image, and what we’re trying to give it is intelligence. But is that the most important thing that human beings do?

I think in each of our religious traditions we would say the most important thing that human beings do is love, and that is something AI can’t do. So my worry is this: in some ways we’re more flexible than machines are, and as the machines start to surround us more, as we start to interact with them more, we’re going to, in a sense, make ourselves over in their image. And in that way we are sort of deifying the computer. In the Christian tradition we talk about deification as the process of growing in the image and likeness of God; if instead we grow in the image and likeness of the computer, that’s another way of deifying it.

BHUIYAN: I want to turn it over to audience questions; there are some hands raised. So I want to make sure that we get some of them in here as well.

OPERATOR: Thank you. We will take the next question from Rabbi Joe Charnes.

CHARNES: I appreciate that there are potential benefits from AI. That’s simply undeniable. The question I have, and the concern, which I think you certainly both share, and I don’t know the way around it, is that as humans we are meant to relate to other human beings. That’s our goal in life. That’s our purpose.

But human relationships are often messy, and it’s easier to relate to disembodied entities or objects. I see people in the religious world relating now through Zoom; through their Zoom sessions they have church, so they’re relating to church and God through a screen. And when you speak of ethics and spirituality, Rabbi, of somehow placing that into this AI model, I don’t see how you can do that. And if there’s a way out of human connection while still modeling human connection to some extent, I do fear we’re going to really go in that direction, because it’s less painful.

FRANKLIN: So I’ll try to address that. There’s a great book that’s going to sound like it’s completely unrelated to this topic. It’s by Johann Hari, and the book is called Chasing the Scream. What he argues is that the opposite of addiction is not sobriety. Addiction is about being disconnected from other individuals and using a substance or a thing as a proxy for the relationships we have with other people.

I love that idea. And I think there is a huge danger that artificial intelligence can be just that, the proxy for human relationship when we’re lonely, when we’re disconnected from others. It’s going to be the thing that we turn to.

I would even echo Noreen’s fear that we end up turning to AI in very inappropriate ways and making it almost idolatrous. When we say deifying it, what we’re really doing is worshipping AI as an idol, as something that won’t actually give you the connection even though you think that it will. I think that’s a very legitimate fear.

Having said that, I think that AI is going to be a great tool for the future if it’s used as a tool. Yes, there is a tremendous amount of danger that comes with new technology and newness. Every single new innovation, every single revolutionary technological change has come with huge dangers, and AI is no different. I hope we’re going to be able to figure out how to put the correct restrictions on it, and how to make sure that the ethics of AI involves spiritual leaders and ethicists and philosophers.

Am I confident that we’ll be able to do that? I don’t know. I think we’re still at the very beginning stages of things and we’ll see how it develops.

HERZFELD: Two areas that I worry about, because these are areas where people are particularly looking at AI, are the development of sex bots, which is happening, and the use of AI as caregivers, either for children or for the elderly. Particularly for the elderly, this is an area that people are looking at very strongly.

I think for religious leaders the best thing that you can do is to do everything you can to foster the relationships among the people in your congregation, because, as Josh was saying, we’ll use this as a substitute if we don’t have the real thing.

But if we are in good and close and caring relationships with other human beings then the computer will not be enticing as a substitute and we might merely use it as a tool or just not bother with it at all.

So I think what we really need to do is tend to the fostering of those relationships and particularly for those that are marginalized in some ways, whether it’s the elderly, whether it’s parents with children, particularly single parents who might be needing help, and whether it’s those that are infirm in some way.

OPERATOR: We will take our next question from Ani Zonneveld of Muslims for Progressive Values.

ZONNEVELD: Hi. Good morning. Good afternoon.

You had raised that question, Johana, about what the faith communities are doing or can contribute to a better aggregated response on AI, and I just wanted to share that members of our community have been creating images of, for example, women leading prayer in Muslim communities, so that those are some of the aggregated information that could be filtered up into the way AI is being used as a tool.

So I think, at the end of the day, the AI system works by aggregating information that’s already out there, and I think it’s important for us in the faith communities to create the content from which the AI can pull. That also overcomes some of the biases, particularly the patriarchal interpretations of faith traditions, for example, right?

The other thing I wanted to share with everyone is that there’s real interest in this at the United Nations, led by an ethics professor from the university in Zurich, where I taught a master’s ethics class as a person of faith. And so there’s this international database system agency that is being created at the UN level.

Just thought I would share that with everyone. Thanks.

FRANKLIN: Thank you.

HERZFELD: And I would also share that the Vatican is working on this as well. I am part of a committee within the Dicastery for Culture and Education, and we’ve just put together a book on AI. And the Pope is going to be using his January 1 address for the World Day of Peace to address AI as a topic.

FRANKLIN: I’m pretty sure rabbis across the country right now are going to be writing sermons for tomorrow, which begins Rosh Hashanah, our high holiday season, and many rabbis—most rabbis, perhaps—are going to be preaching about AI.

OPERATOR: We will take our next question from Shaik Ubaid from the Muslim Peace Coalition.

UBAID: Thank you for the opportunity. Can you hear me?

BHUIYAN: Yes.

UBAID: Overall, we are sort of putting down AI because it does not have the human quality of empathy. But if instead we focus on using it as a tool, whether in educating congregations or in jurisprudence, then we would be using it well.

When it comes to human qualities, another quality is courage. We may have the empathy, but many times we do not show the courage. For example, we see pogroms going on in India and an impending genocide. But whether it be the—a (inaudible) chief, or the chief rabbi of Israel, or the Vatican, they do not say a word to Modi, at least publicly, to put pressure, and the same with the governments in the West. And sometimes their mouthpieces in the U.S. are even allowed to come and speak at respectable fora, sometimes even including CFR.

So instead of expecting too much from AI, we should use it with its limitations in mind, and set aside the bias and the arrogance we show in thinking that, because we are humans, of course we are superior to any machine. Many times we fail ourselves. So if the machines are failing us, that should not be too much of a factor.

Thank you.

FRANKLIN: Very well said.

HERZFELD: Yeah.

BHUIYAN: There are other audience questions that sort of build on that. We’re talking about humans having bias, and our own thoughts sort of being a limiting factor for us. But, obviously, these machines and tools are being built by humans who have biases and may be putting them into the training models.

And so one of the topics that Frances Flannery brought up is the ways in which AI circumvents our critical thinking. We talked about overreliance on these tools within faith practice, but is there a concern beyond that, right? We talked about AI when it comes to very practical things, like these practices that we do.

I understand it doesn’t replace the community and it doesn’t replace these spaces where we’re seeking community. But people are asking questions that are much more complex and are not trivial and are not just the fundamentals of the religion.

Is there a concern with people using chatbots in place of questioning particular things or trying to get more knowledge about more complex topics?

FRANKLIN: I would respond by saying that I don’t think AI circumvents critical thinking. I actually think it forces us to think more critically. By getting rid of the trivial things, the trivial data points and the rational kind of stuff that AI can actually piece together and solve, even complex IQ-related issues, it frees us to think about more critical issues in terms of philosophy, faith, spirituality, and theology, all things that I think AI might be able to parrot. But it can’t actually think creatively or have original thoughts.

So I actually think that AI gets rid of the dirty work, the summaries of what other people have said, maybe even putting two ideas together. But really true creativity, I think, is in the human domain, and AI is going to force us to think more creatively. Maybe I’m just an optimist on that, but that’s my sense.

HERZFELD: And I’ll give the more pessimistic side, which is not to say—I mean, I believe that everything that Josh just said is correct. My concern is that we might end up using AI as a way to evade responsibility or liability.

In other words, Johana, you were talking earlier about how we use AI to decide who gets bail, who gets certain medical treatments, these things. If we simply say, well, the computer made the decision, and we don’t think critically about whether that was the right decision or whether the computer took all things into account, then we’ve handed off our responsibility. I think we need to think about the same thing when we look at autonomous weapons, which are really coming down the pike; that is, how autonomous do we really want them to be?

We can then, in a way, put some of the responsibility for mistakes that might be made on the battlefield onto the computer. But in what sense can we say a computer is truly responsible? As long as we use it as a component in our decision-making, which I think is what Josh was saying, this can be a powerful tool.

But I fear what happens when we let it simply make the decision. I’ve talked to generals who are worried that if we automate warfare too much, the pace of warfare may get to be so fast that it’s too fast for human decision-makers to actually get in there and make real decisions. That’s a point where we’ve abdicated something that is fully our responsibility and given it to the machine.

FRANKLIN: Let’s not forget, though, how strong human biases are. I mean, read Daniel Kahneman’s book Thinking, Fast and Slow and you’ll see all these different heuristics for human bias that are unbelievable. Going to the realm of bail, there was a study that showed that judges who haven’t had their lunch yet are much more likely to reject bail than those who just came out of their lunch break.

I mean, talk about biases that exist in the ways that we make decisions. I would say that, although there are biases that we implant within these algorithms that will affect the outcomes, artificial intelligence and these algorithms are probably going to do a better job than human beings alone.

Having said that, to echo Noreen, when we use them in tandem with human decision-making I think we get the best of both worlds.

BHUIYAN: Right. I mean, there are so many examples. Forget warfare and other places; in policing it happens all the time, right? There are facial recognition tools that are intended to be used as a lead generator, a tool in an investigation. But we’ve seen time and again that they’re used as the only tool, the only piece of evidence, which then leads to the arrest and wrongful incarceration of many, often Black, people.

And, again, to both of your points, it’s because of the human biases built into these AI tools. Particularly when used alone, they’re just going to do what the human with the bias was going to do. And I have seen in my reporting that there are a lot of situations where police departments or other law enforcement agencies will use that as an excuse, just like you said, Noreen: well, the computer said it and validated our data, so it must be right.

So I do think that there’s a little bit of the escape of liability and responsibility as well.

We don’t have a ton more time and, Noreen, you talked a little bit about some of your major fears.

Rabbi Franklin, you’re a little bit more optimistic about this than maybe Noreen or even I am. I would like to hear what your greatest fears about this tool are.

FRANKLIN: My biggest fear is that it’s going to force me to change and, look, I think that’s a good thing, ultimately, but change is always really scary. I think I’m going to be a different rabbi five years from now, ten years from now than I am right now and I think AI is going to be one of the largest reasons for that.

I think it’s going to force me to hone certain abilities that I have, and to abandon others and rely on artificial intelligence for them. And going back to the original thought experiment that brought me into this conversation to begin with, using ChatGPT to write a sermon at the very beginning of its infancy: really, what a sermon looks like is going to be profoundly different.

And it was part of one of the points that I was making when I actually delivered that original sermon. The only thing that was scripted was the part that was written by AI. Everything else was a conversation, back and forth questioning, engagement with the community who was there.

I think sermons are going to look more like that, more like these kinds of conversations, than like scripted, written words that come from a paper and are just spoken by a human being. Rabbis, preachers, imams, pastors, priests are not going to be able to get away with that kind of homiletical approach. We’re going to have to radically adapt and get better at being rabbis and clergy, with different skill sets than we currently have, and that’s scary. But at the same time it’s exciting.

BHUIYAN: And, Noreen, to end on a positive note: whether it’s ChatGPT, other forms of generative AI, or AI broadly, what are some of the most positive ways that you see these tools being used in the future?

HERZFELD: Well, we haven’t even mentioned tools that work with images, like DALL-E or Midjourney. But I think that those tools have sparked a new type of creativity in people. And I think if there’s a theme that goes through everything that the three of us have said today, it’s great tool, bad surrogate: as long as we use this as a tool, it can be a very good tool.

But it’s when we try to use it as a complete replacement for human decision-making, for human courage, for human critical thinking, for human taking of responsibility, that we realize that, just as we are flawed creatures, we’ve created a flawed creature.

But in each of our religious traditions I think we hold dear that what we need to do is love God and love each other, and that we as religious people must keep raising that up in a society that views things instrumentally.

BHUIYAN: Thank you both. I am just going to turn it over to Irina now.

FASKIANOS: Yes. Thank you all. This was a really provocative and insightful discussion. We really appreciate it.

We encourage you to follow Rabbi Josh Franklin’s work at rabbijoshfranklin.com. Noreen Herzfeld is at @NoreenHerzfeld and Johana is at @JMBooyah—it’s B-O-O-Y-A-H—on X, formerly known as Twitter. And, obviously, you can follow Johana’s work in the Guardian. Please, I commend Noreen’s book to you.

And please do follow us on Twitter at @CFR_religion for announcements and other information. And please feel free to email us at [email protected] with suggestions for future topics and feedback. We always look forward to hearing from you and soliciting your suggestions.

So, again, thank you all for this great conversation. We appreciate your giving us your time today and we wish you a good rest of the day.
