Meeting

Religion and Foreign Policy Webinar: AI’s Religious and Policy Implications

Tuesday, June 17, 2025
Speakers

Ilia Delio
Founder, Center for Christogenesis

Noreen Herzfeld
Nicholas and Bernice Reuter Professor of Science and Religion, College of Saint Benedict and Saint John’s University

Presider

Irina Faskianos
Vice President for National Program and Outreach, Council on Foreign Relations

FASKIANOS: Thank you. Welcome to the Council on Foreign Relations Religion and Foreign Policy Webinar Series. I’m Irina Faskianos, vice president of the National Program and Outreach here at CFR. Thank you for being with us. 

As a reminder, this webinar is on the record and the video and transcript will be available on CFR’s website, CFR.org. As always, CFR takes no institutional positions on matters of policy.  

We’re delighted to have Ilia Delio and Noreen Herzfeld with us to discuss how religious worldviews and spiritual traditions can inform global AI policy and to explore the role of faith leaders in shaping inclusive, ethical, and responsible governance of artificial intelligence. 

Ilia Delio is a Franciscan sister of Washington, D.C., and the Josephine Connelly endowed chair in theology at Villanova University. She founded the Center for Christogenesis, an organization committed to deepening the integration of science and religion. Dr. Delio was previously the director of Catholic studies and a visiting professor at Georgetown University, as well as professor and chair of spirituality studies at Washington Theological Union. She’s the author of twenty books which have received several awards, and a leading voice in the areas of cosmology and theology.  

Noreen Herzfeld is the Nicholas and Bernice Reuter professor of science and religion at the College of St. Benedict and St. John’s University in Minnesota. Dr. Herzfeld is the author of The Limits of Perfection in Technology, Religion, and Science, Technology and Religion: Remaining Human in a Co-Created World, and In Our Image: Artificial Intelligence and the Human Spirit. Holding degrees in computer science and mathematics from Pennsylvania State University and a doctorate in theology from the Graduate Theological Union in Berkeley, she is an expert on AI, computer theory, Christian and Islamic theology, and religion and conflict.

So, Ilia and Noreen, thanks very much for being with us today. Noreen, I thought we could begin with you to give us your thoughts on this matter. 

HERZFELD: OK. Thank you very much, Irina. I’d like to share my screen.  

OK, because we’re talking about foreign relations here and AI, I want to begin by talking a little bit about what happens when foreign relations break down. We all know that right now a big impetus towards developing AI in a number of countries is precisely the need to—or, the desire to develop lethal autonomous weapons. So what can we say as religious leaders or as religions toward this impetus? We are seeing AI being used right now in both Ukraine and Gaza. And one of the things I’d like to point out is that what AI does is it speeds up the pace of warfare. And it also is a tool for the post-hoc rationalization of mass killing and destruction, rather than promoting the precision that people often say it promotes.

Retired Major General Robert Latiff has called AI a “moral Rubicon” for the twenty-first century. Just as the development of air warfare moved the soldier away from an immediate confrontation with his adversary, we now find that we are removed even further from the field of warfare by using artificial intelligence. There are, of course, a lot of benefits to using lethal autonomous weapons. Many of those benefits have to do with the speed at which computers work. But they can also operate in difficult environments. You don’t have boots on the ground with humans. This, of course, is one of the biggest reasons why people want lethal autonomous weapons. But we’ve also said that whoever wins the AI race will win the foreign relations race.

What can we say to this? Christians have long had a theory of just war, of reasons when one should go to war and ways to fight war. So we see here that there must be a just cause, a legitimate authority, a last resort, a right intention in fighting, a reasonable chance of success, and that the fighting is proportional, and that noncombatants and civilians are not the main targets. We see this last part here in how to conduct the war. We see a similar thing in Islam, where warfare is called the “lesser jihad.” The greater jihad is our fighting against our own sinful tendencies. But in fighting the lesser jihad, again, there is noncombatant immunity, there’s proportionality. There are rules, in other words, of how to conduct oneself.

Would AI follow these rules? Some have said yes. Professor Ron Arkin of Georgia Tech believes that lethal autonomous weapons will fight more justly because they will not have emotions on the field of battle. They will not target someone out of anger or retaliation. But Major General Robert Latiff believes that lethal autonomous weapons will lead to less justice when we fight one another. Some other considerations are would fully autonomous weapons make warfare too easy, so it would no longer be considered a last resort? Would facial recognition make genocide too easy? And ultimately, who should be held responsible when war crimes are committed, which they always will be, or when mistakes are made?  

We talk a lot about alignment in AI. And the question here is, alignment with whose values? So on the left, you see a recent statement that came out from the Vatican saying that while AI holds many possibilities for promoting good, it can also hinder or counter human development and the common good. Pope Francis noted that evidence to date suggests that digital technologies have increased inequality in our world, not just in material wealth but also differences in access to political and social influence. So it could perpetuate discrimination, create new forms of poverty, widen the digital divide, worsen existing social inequalities, and worsen inequalities between and among countries.

On the other side, of course, we have some of the leaders in the AI field. Sam Altman saying, “AI will probably most likely lead to the end of the world, but, in the meantime, there’ll be great companies.” Or Mark Zuckerberg: “Move fast and break things. Unless you are breaking stuff, you are not moving fast enough.” Here I guess I’d like to say that I believe it is very important for religious leaders to speak out often and loudly about AI, so that the alignment becomes more with our values than with the values of those who are creating these machines. We need to keep humans in the loop. This is one of the things I think we need to keep stressing, because it’s a worry.

AI can move at speeds that human thought, human planning, human discussion with one another cannot move. And if we allow AI to have its own agency, particularly in warfare, the tempo of war could be so fast that we can no longer keep humans in the decision-making loop. Once you have decision-making capacity located in the machine itself, it becomes much harder to say that it ought to be humans at the top of the decision-making tree who are solely responsible. There’s an accountability gap that could arise that could lend itself to a situation where no one is, effectively, accountable. So it is up to us to keep raising the accountability question. And the fact that machines cannot be held accountable in and of themselves.  

Finally, and this would be a whole other presentation but I want to put this on the table, we have to think about the environmental footprint of AI. Just one Microsoft server farm uses 30 percent more energy than the entire county it’s located in, as much as 40,000 homes. AI right now is, you know, positioning itself to use more energy than many of the countries in the world. And ultimately, it will probably be somewhere between India and Japan in its energy use. Can we afford that kind of energy use? We are an evolved species that evolved to live on this planet with the rest of nature on this planet. AI is not. It is created. And so, as Pope Leo has already said, AI poses new challenges for the defense of human dignity, justice, and labor.

And the church offers to anyone her social teachings in response. And one of those teachings is the reminder that human beings are both body and spirit. And that we need a flourishing environment for our bodies to flourish. All tools amplify something of ourselves. And unfortunately, it’s true what Martin Luther said, that we are simul justus et peccator—we are at the same time justified and we are also sinners. So we will use AI to amplify our intelligence, our creativity, our care, and our presence. But it will also amplify our self-justification, our untruths, our narcissistic impulses, our sloth, and our anger. Technology evolves, but do we? You know, what will we use this technology for? 

We can use it to destroy ourselves and each other. But religion teaches us to fight that greater jihad, to fight against the sinfulness in ourselves and reach out to others. And that is the primary teaching of every religious tradition. And I think Ilia is going to go into that more, so I’m going to hand it over to her now. Thank you. 

FASKIANOS: Wonderful. Ilia, over to you. 

DELIO: Thank you. Thank you, Noreen. I’m going to just share some slides as well. Let’s see here.  

So just so I can tell you from my position, so I’m a theologian. And I have been working in the area of evolution and religion for quite some time. I take the perspective that religion is an evolutionary phenomenon, that it’s arising within the emergence of the human person, or personhood, and the development of a higher level of consciousness. One of my guides to understanding technology and religion is the Jesuit scientist Pierre Teilhard de Chardin, who lived at the dawn of the computer age. In 1950 he wrote that we should consider inter-thinking humanity as a new type of organism whose destiny it is to realize new possibilities for evolving life on this planet. And I think, as we’re talking about global development and foreign policy, we’re really talking about not just so much, you know, how do we work together, but how are we, in a sense, developing together? That, in a sense, is more of the issue right now as we constantly engage through technology with one another.

I think the question we want to begin to ask with religion is not so much can it perfect us or overcome our, you know, inclinations toward fracture or sin, but is what we’re about—both religiously, materially, biologically, internationally—are we becoming something more together, or are we becoming more fractured because of technology? Certainly, Teilhard thought that we can increase wealth for some parts of the world. It’s always at the expense of other parts of the world. We can create better healthcare systems for some parts of the world. And the question is, can that be for all parts of the world? So, you know, materialism, wealth, can bring about, maybe, some wellbeing. But it’s spirituality that many, many religious writers, including Teilhard, would say, increases more being.  

So the question about, I think, foreign relations is a question of—it’s about community and planetarity. And therefore, how can we build, we might say, a new type of person? I don’t take what we are as the final end of evolution. I take us as a species in evolution. And I think our question of technology then is, how are we building tools that can shape us into a more personable person? And we see this today with younger generations, I think, and their greater sense or consciousness of belonging to a whole, the quest for justice, the quest for, you know, ecological sustainability. And I think we see also the movement today towards spirituality on the rise. And that’s not too unlikely, given the phase that human consciousness has shifted with technology.  

So what can religious traditions, or spiritual traditions, offer us? First of all, these are tried and true wisdom traditions. I mean, these religious traditions have been around for a long time. And they have really survived a lot of perilous situations. And so I think the question, in some ways, is, how do we tap into the best of religious traditions in order for our human development as a planetary community of life? And what kind of ethical frameworks around AI policy should we be tapping into? Noreen’s very thoughtful, you know, presentation—and concern, I might add—for this type of techno-warfare that eliminates the human person from the reality of war, and therefore makes war an efficient, you know, function of technology, that is really disturbing and frightening.

And so we want to, in a sense, raise the questions of human dignity and justice, and the type of responsibility we have in using and developing technological tools. And so some basic principles. I think this question of human dignity is an important one. And by dignity, we mean that the human person, as person, has value. We’re not just its and bits. We’re not just disposable pieces of—you know, or objects to power. There’s something about the human person, certainly from a religious perspective, that is sort of a fractal of divinity. There’s a divine beauty of human personhood. And therefore, you know, it’s not just respecting what we are, but also recognizing that we do have agency. We can have a voice in what this world becomes.

So one thing is, I think, in the face of technology and its very rapid increase, we have felt rather voiceless and powerless. It’s—and we have, in a sense, silently acceded to large-scale corporations, in a sense, shaping our future. And that’s a deep concern. And so how do we raise a collective voice, you know, over purely economic gains? And I think one of the things is that even in terms of, say, you know, sort of the United Nations idea, can we come together from various countries, various religions, various ethnicities, and to ask: What do we want together as a planetary way of life?  

I think religions certainly like Buddhism, and Christianity, Hinduism, emphasize our interconnectedness. This is so reflective of what the new science is telling us today. We are, in a sense, deeply connect—fundamentally connected on the level of physics or matter. And so how can we—in some ways, quite honestly, I find the human person today—I find—we, us, human persons, the most artificial of technology. (Laughs.) I don’t think technology is artificial at all. I think it’s quite an extension of biological life. Biological life has always been technological. And we extend that.  

But there’s something about the way we have not only developed technology, but the way we’re using it. It is a mirror of us. And I think there’s something that has become somewhat unraveled or undone in the human person. And that’s maybe a question for another time, but we don’t seem to have a sense of moral obligation toward one another, or towards, you know, the creaturely life of this Earth. And so it’s very difficult to begin to talk about moral obligations, even in terms of technology, beyond sort of legal compliance. But we have to—I think technology needs to be embedded better into not only our public policies, but into our social institutions. We still have a sense that technology is something outside us. We treat it as an object or a tool for our use. But this computer technology, artificial technology, is emerging in and through us.  

And therefore, the question of technology is the question of us, of we humans. And so, you know, developing policies around technology also must be integrated with developing, say, ecological policies, social policies that respect personhood, that respect climate, and the things of the Earth, but also with a view toward inclusivity. And here I do think religious traditions are valuable in, in a sense, raising the bar of inclusivity. What counts as community? Who counts for community? And therefore, I think religious traditions can help in our shaping of stewardship or considering the impact for future generations. It’s not just about surviving the Earth, you know, in a technological age. It is that technology is not going away, but it’s actually becoming more and more not only efficient, but sophisticated in its development.

So it is here to stay. And we’re going to have to begin to rethink what we are in light of now a technological age. And in this way, I wonder if religion can help balance the technological evolution that we’re in. One of the problems, I think, is the sheer rapidity of technology. Today, it’s, you know, 5G and your ChatGPT. Tomorrow it’s going to be something embedded in your brain. And so we’re just not sufficiently able to cope with the rate of technological development that really does need to slow down. And how to do so?  

I think one way is, can we draw from our religious traditions, contemplative practices, even in terms of developing public policy. I realize that religion is not to be, you know, embedded in any kind of public decision-making, but there is a way of becoming mindful, of taking that moment and kind of centering within oneself before any kind of decision is made. So mindful practices, or, rather, practices of mindfulness or contemplative practices can be very helpful in developing policies that can have a wider lens to the human community at large, to who’s being left out, to the voices of marginalization, to the wider scope of ecological life.

So a kind of discernment. You know, the Jesuits, the Catholic Jesuit order, are very good on discernment. How do we tell the difference between something that is good and worthy and something that is harmful? What kind of—what kind of rubrics do we have in place? And who is going to make that decision? And here, I think, again, we need some fundamental Earth values, quite honestly, that can work with religious traditions. What do we hold in common? What do we hold valuable, as a human community around this globe? What do we consider harmful? When is it harmful? And so I’ve always been a proponent that spirituality and activity must work together. So whether it’s inner purification, as we find in Hinduism, or spiritual development, as we find in Christian traditions, there’s something that needs to take place within ourselves and deepen as we continue to build these sophisticated AI systems.

The other thing about religion is it can hold contradictory truths in tension. Religions are—you know, they’re composed of many, many different types of people, all with, you know, various interpretations of the tradition. And yet, when you come together in that tradition there is a type of community. And we need something like that in an AI world as well. How do we, with our various positions on AI, what is the larger—you know, what’s the larger umbrella of values that can hold us together in the face of our differences? And so I think, like many religious traditions, we need frameworks that can embrace progress and preservation.  

And here, we’re going to have to know what is it about us as humans, as Homo sapiens sapiens, that’s worth preserving? What is—in a sense, what’s worth innovating while holding to tradition? How can we advance in a way that is beneficial to what we are and yet hold a collective responsibility? So technology is not just a—it’s not at one end and we’re at the other. We have sort of a dialectical position that needs to be held in place. And, quite honestly, religion is very good at that. It always works in the in-between spaces. And that’s basically there.

So the other thing is, religious traditions do have wide networks, especially among marginalized populations. And here, you know, is just to point out that AI, and technology on the whole, can really favor those of wealth, those of first and second degree—you know, first- and second-world nations. And we can lose sight of the poor, whereas religious traditions, certainly, you know, within Christianity, or Judaism, or Islam, and other traditions, reach out to those on the margins. And we have to make sure that everyone has a voice in the development of AI. That it’s not just, you know, wealthy people making the decisions. And so, you know, faith-based organizations who have worked in these areas can help, in a sense, with AI and the risks involved. They can serve as bridges for ensuring that global policies meet the needs of the world’s most vulnerable populations. Otherwise, AI just continues the same problems we have and, in fact, may exacerbate them.

Religious institutions can provide trusted spaces for AI education and dialogue, as we’re seeing—beginning to see now in communities where there’s skepticism, as there’s still a lot of skepticism around AI. I think also religious leaders need to step up and advocate for responsible development within the communities and in the broader sphere as we navigate these cultural and spiritual values. Again, I think AI policy needs a broad framework of values, cultural, national, religious, ideological. Like, how do we—how do we navigate across all these boundaries? And religion does tap into something that is deep within the human person. And in some ways, I think the world’s religious leaders really need to come together and help create, you might say, a type of forum for thinking through religion in an AI world.  

So we know that religion does—you know, continues to express, as Pope Leo is doing now, you know, questions that affect all humanity, whether it’s environmental stewardship or social justice. But again, I think what I worry about is that these conversations become disparate from corporations, those who are actually building the technology and implementing it. So we need better modes of communication. And we have to realize that what we are as human persons are religious, intelligent, socially bonded human beings in evolution. Religion is a vital portion of what we are, but it can’t dictate how we form. I think religion serves as a guide to religious development.

As Teilhard de Chardin said, not all directions are good as we advance with technology. And that’s the question we need to entertain. What is it that we want to be as we continue to evolve? And toward what? What is our goal, again, as we continue to build, as we continue to change with technology? What is our up ahead? And how do we envision a collective humanity, say, in the next 500 years, or 1,000 years, which, in our space time, is a blink of an eye. So religion can help guide and frame the ethical, moral boundaries of what we are, but we also need a thinking religion to realize that we’re becoming something more together. 

FASKIANOS: Thank you very much, Ilia. Appreciate it.  

So now we’re going to go to all of you for your questions and comments.  

(Gives queuing instructions.) 

So the first question, written question comes from David Barstow, Commissioner of the World Council of Churches in Austin, Texas. Noreen said it is important for religious leaders—oh, he’s raised his hand. Actually, I’m going to let you ask it. That’s more interesting than having me read it. So if you could accept the unmute prompt. David? You have to—there you go. 

Q: OK. Yes. Yeah, I just had to wait for the button to appear.  

FASKIANOS: Yes. Yes. 

Q: Yeah. 

Noreen, you said it’s very important for religious leaders to speak often and loudly. This is reminiscent of a conversation we had, a meeting we had at the WCC just a month ago, about the importance of religious leaders speaking loudly and often. And what I’m wondering is, what are the best ways for us to coordinate across religions? It seems to me that the voices will be strongest if we all talk together and similarly at the same time. 

HERZFELD: Yes. I agree with you. And I think, you know, ultimately we will need to find some way to coordinate. I think the World Council of Churches could play a large role in doing that. I think, for example, if we think back to when Pope Francis invited other religious leaders to meet and to talk about ecology, about the time that Laudato Si’ came out, that it gave this a stronger voice. We will need to do the same thing with AI. But I also feel that it’s important for leaders within every religious tradition to also speak to those who are out in the field, those who are out in the churches, the synagogues, the temples, the parishes, and tell them that this is something they need to speak to their people about.

Because if people are only hearing about AI from those who are building the AI and from the media—and the media is mostly hearing from those who are building the AI—they’re not getting the full picture. And they’re not really aware, I think, often. For example, most people are not aware of the ecological footprint of AI. And this is a place where I think our religious traditions, which have already started speaking up about our ecological situation and climate change, need to educate, to start bringing this into the temple, and the synagogue, and the pulpit, and telling people, you know, be aware. Every time you just play with ChatGPT, you’re using vast resources that are exacerbating climate change. 

FASKIANOS: Ilia, do you have any comment before we move on? 

DELIO: No, I agree with Noreen’s position. And I think she highlighted the point, you know, that technology does consume a lot of energy. So it’s really costing the Earth. We don’t really realize the ecological footprint of our computer technology. So, yeah, I think religious leaders do need to come together more explicitly. 

FASKIANOS: Great. 

I’m going to go—we have lots of raised hands. I’m going to go next to Gabriel Salguero. And tell us who you’re with. 

Q: Hello. Thank you. Gabriel Salguero. Thank you for this opportunity. And president of the National Latino Evangelical Coalition, NLEC. 

Two-part question. You say that religious leaders should raise their voices, but what about the bad actors in religion? Or, from where I sit, poor theological approaches to the advancement, speed, rapidity, all the things you mentioned, how they view transhumanism on AI? That’s one. And then part two, you all said that religion includes marginalized people and it’s broadly inclusive. How do we make sure that those voices are being heard and not just being brokered in these conversations around AI, given both the digital divide and access to platforms to speak on these issues? So one is the kind of bad religion speaking, and the other one is voices are not being brokered but genuinely heard on this issue. Thank you.

DELIO: Thanks, Gabriel. Now when you say “bad religion,” are you thinking about, like, transhumanism as a religion? Can you— 

Q: No, I’m talking—thank you. That’s a great clarifying question. My question is, religion is not always necessarily good. It can be coopted for nefarious things. 

DELIO: Ah, for sure. 

Q: And so there’s bad theology around so many things, ecology, and life, and so many other things. And how do we navigate, when those voices also speak, in a way that helps us lead to the common good and human flourishing? 

DELIO: Yeah. So, OK, I get it. Yeah. And, Irina, but—I think for sure. And in this way, religion is very much like technology. Both of these are affected—these affect us deeply and personally. So it is always a question of discernment and discernment together. You know, is what we’re teaching and preaching, is it deepening our lives together? Is it bringing us closer to the one we name as God or as the source of our lives? Or is it fracturing us? Like, how do we—for what purpose do we use religion and technology? And so I think, you know, that’s a question that we constantly need to ask, and to be vigilant about.

And there’s probably some—you know, some litmus tests. And here I would take religion as, does it create greater unity? Does it create more beauty? Is it—is it promoting the good? You know, I think that’s one question a religion, certainly from a Christian perspective, is it promoting love, love as the deepest good that binds us together? Is it truth? Does it cohere? Does it bring life into a greater sense of beauty and wholeness? Religion that divides, that is warring, that is judgmental, is very fracturing. And the same thing with technology. You know, it can—it can divide and fracture, or can unify.  

And so that we don’t demonize technology, it has done a lot—a lot of good for planetary life. You know, there’s a lot of studies to show that poverty globally is decreasing, education is increasing, the rate of convergence, even in terms of a planetary life, is actually enhancing. So there’s a lot of studies to show that technology is actually a good overall. And I think one thing we want to do is to—how do we lift up that goodness, in the face of the possibility of destruction, which technology can also do? And so we’re always—we need sort of an ongoing discernment process with technology, as we do with religion.  

And it can’t be just a set of points or guidelines. It has to be an ongoing discernment together. And that calls for sort of a forum for ongoing dialogue, for shared values, for shared vision. I think that’s one way forward. And using technology, like Zoom, is indeed very helpful in this regard. 

HERZFELD: I would like to mention two things briefly with regard to that. One is that the AI we currently have is large language models. And the language is English. And so one of the ways we can help other groups not be marginalized is to find ways for AI to be implemented using other languages, using their own languages.  

My other concern with religion is not so much, you know, bad—well, you know, bad religious actors. It’s superficial religious actors. That so many times when you say, oh, I’m giving a talk on religion and AI, it’s simply assumed, oh, that means how can we use AI in our worship service? Or, how can we use AI to maybe reach more people with our scriptures? And I think many of the things that Ilia and I have both been talking about today are to say, you’ve got to think about AI on a much deeper level. We’ve got to get religious leaders thinking about how AI affects the human community, how it affects the way we relate to each other, the way we understand ourselves, and the way that we actually relate to God, and bring God into our lives.  

Because in many ways it’s all too easy, as the transhumanists wish to do, to use AI as a hopeful shortcut to things that religion has generally promised, such as eternal life, such as relation—deep relationships, ameliorating loneliness. We don’t want to use AI as a substitute. And that is something that I see all too often. It’s a very superficial look at AI and saying, OK, you know, this will solve our problems. And it won’t. 

FASKIANOS: Thank you. 

I’m going to go next to Azza Karam. 

Q: Hello. Thank you so very, very much, once again, Irina, for hosting another one of these excellent sessions, and for both the professors, Professor Noreen and Ilia. Thank you so much for excellent presentations.  

My question/comment is really directed more towards Professor Herzfeld. Thank you for an excellent presentation. I learned so much. I especially appreciate the link you made with how AI is environmentally detrimental—its use can be environmentally detrimental. Now, I think the issue here is not so much good religion/bad religion, but we are living in moments where religions are backing certain political regimes which are at war already with one another. So the notion of idealistically how religious leaders can be helpful is not exactly very helpful at the moment. I think what the question here is, how do we—are you aware, Professor Herzfeld, of programs or initiatives that bring together religious institutions and religious leaders to bring them up to speed so that they’re on the same playing field in terms of the knowledge about the positives and not-so-positives of AI?  

Are there such initiatives that can be educational, informing to our religious institutions, diverse, that, as you said, you mentioned different religious traditions and how they see certain things, and where there is a complementarity. Is there or are there initiatives that can help enlighten, educate faith leaders, diverse faith leaders, about AI? Such that they can perhaps be better agents of, themselves, information, as you were pointing out in your answer to the last question. Are there such initiatives that you are aware of? If not, what would you see as important? Because the idea that religious leaders are good and can do good is really not very pertinent at the moment. It’s how do we get them up to speed? Thank you. 

HERZFELD: Thank you for your question. At the moment, I am not aware of any really broad-based initiatives this way. However, the American Association for the Advancement of Science has a program, the Dialogue on Science, Ethics, and Religion. And they have run several initiatives called Science for Seminaries, in which they give grants to seminaries specifically so that they can teach future pastors, future rabbis, you know, in all aspects of the dominant religions here in America, about different aspects of science. The one they are currently running is regarding the environment and climate change, trying to help get information simply to seminarians and prospective rabbis about what climate change actually is, and the science behind it. I am hoping, and have been talking, you know, somewhat with leaders of that, that in the future they will launch a similar sort of AI in the seminaries initiative. But it would be good for there to be a variety of initiatives like this.

FASKIANOS: Ilia. 

DELIO: Well, you know, I think that’s very good. It’s very practical. But, honestly, I think a deep dive into AI would actually challenge religions to update some of their own teachings. A lot of these traditions are based on ancient philosophies and ancient theologies that are really—in a sense, some are just out of sync with what we now know about—from science about what is matter, what constitutes human personhood, or even how personhood itself emerges. And I think there’s a reluctance in some—a kind of inner reluctance to get too close to technology. I mean, we can talk about these policies, but a lot of them are actually—they’re proscriptive on how to keep technology at bay while, you know, we do our religious thing. And I think, you know, I said very quickly before, technology and religion cannot be separated. And that’s the part we really can’t get our heads around. 

And why tech transhumanism is so alluring is, in a sense, that it does have—Noreen pointed this out—it has religious tendencies. It wants—one of the maxims is, technology will fulfill what religion promises. Religion is, in a sense, a failure. And technology now will succeed in bringing humanity into that new level of, you know, salvific life, or a happy life, or whatever we can see that we want in kind of this superabundant religious perspective. So I think that the issues at hand here are more complicated than meets the eye. And we do not have sufficiently created forums for in-depth, critical discussion between science and religion, between technology and religion.

And so I think we can talk about how religious leaders can come together and the kind of moral aims we should have, but I find that kind of restrictive and proscriptive, let’s keep this at bay while we keep ourselves, you know, human, is naïve, quite honestly. We’re already post human, in many ways. And we don’t know what that means for us. I think we’re already living in between evolutionary leaps. I don’t think younger generations—they are slightly different, they are slightly changed because technology is shifting how we’re developing. It’s shifting the brain, for example, how the brain processes information.  

And so we do need these discussions, but we need them on deeper levels of interaction because we are going to find ourselves in a very—can be a very dystopic future, if we are not savvy about, what is technology, what are we as religious persons, what are we in this flow of evolutionary life, and understand these things together. Looking for simple remedies on how we can sustain ourselves in a technological world, in my view, is not going to—is not going to bode well for us in the future. And it’s going to actually create the conditions for the type of technology that is destructive—that can be destructive up ahead.

FASKIANOS: Thank you. 

I’m going to go next to John Pawlikowski. John, you have to hit the unmute. There we go.  

Q: OK. Thank you very much to both speakers for laying good foundations for an important discussion.  

I have two issues. One is, I’d like to really say how important I think the most recent comments by Ilia have been. I think there’s a real question of whether religion is so tied into tradition that it cannot accept, truly accept, evolutionary change. I think that’s a theological issue that has to be worked through the churches, the religious institutions.

My original question was, Pope Leo recently announced that this—the AI issue is going to be an essential part of his papacy. And he’s announced that he wants to establish an international Vatican commission. And I was wondering what you might think of that, to the extent that you’re aware of it, or to the extent that he’s revealed what it might entail. But also, would that kind of commission be able to really integrate younger people, who are, I think, very, very key? I took part in that twenty-four-hour recent university webinar on climate change, that brought together universities from every sector of the world.  

And I was amazed by several things. First of all, how much the program was dominated by women, which I very strongly support. Secondly, it was dominated by younger people. And thirdly, there was almost no mention of religion. The only place where religion seemed to come up a little bit was in the session that was conducted by the indigenous. So one of the problems that I see in the ecological area is that many more scientifically or academically influenced groups regard religion as of little value. And any comments you have on those points, I’d appreciate. Thank you. 

HERZFELD: Well, regarding what Leo has said about a commission, I would just say that this is not, like, something brand new that is coming up here. The Vatican has been deeply concerned about AI for some time. I, myself, am part of an AI research group that is connected with the Dicastery for Culture and Education. And if you are interested, we published a book a year ago called Encountering AI: Ethical and Anthropological Investigations. And you can download this book for free if you go to the website of the Journal of Moral Theology. We are currently working on two more books, one on AI and human agency and one on AI and education. And as you know, the Congregation for the Doctrine of the Faith recently came out with a statement on AI, Antiqua et nova, which I quoted in my presentation. And I would suggest that you look at that statement as well.

But all of this is to say that even under Francis the Vatican was beginning to have very serious conversations about AI, including conversations in which they brought industry leaders, those who were developing the AI themselves, to Rome to speak with the Vatican. I think, as both Ilia and I mentioned in our presentations, ultimately, they are the ones who will be instilling the values and who will be determining the direction of the AI they are creating. So it’s really important that our conversations include the scientists, and the developers, and even the venture capitalists, because they will be determining the direction.  

And, as you so rightly mentioned, this is seen differently by young people. AI is very much a part of their world, and will be as they move forward. So it’s very important that they are included. And the Vatican already knows that. One of the first people who they are looking at for beatification and canonization is a young man who was a computer developer. So these things are not being kept separate. It’s just going to take a lot of coordination to bring it all together. 

FASKIANOS: I’m going to try to squeeze in one last question. If we can get to more, we’ll try. Robin Mohr. 

Q: Thank you very much. My name is Robin Mohr. And I am the clerk of Green Street Monthly Meeting here in Philadelphia, and until recently the executive secretary of the Friends World Committee for Consultation in the Americas.

I wanted to start by just reinforcing that we are using the very convenient shorthand of “AI” to refer to what is a lot of different kinds of technology. My other credential is being the mother of two twenty-something software engineers. So this—I’m hearing about this from multiple perspectives. What I wanted to ask was, do you know of any initiatives to engage lower-income people to discuss the uses, and potential, and concerns around AI? 

DELIO: Yes, actually, I do. So a few weeks ago I had dinner with Dr. Petra Ahrweiler from Germany. And she is using participatory AI, and going to places where—communities that have been marginalized or are poor and using their input to develop AI programs. So what she says is that most of our AI is developed, in a sense, from a first-world perspective. So we’re not thinking about the poor or the ramifications of technology for poor and marginalized communities. But she has a book that you might be interested in. I have to kind of locate the book for you. But it’s something, like—it’s a very odd title, quite honestly, something like, cows and something or other. But her name is Petra—I could put it in here, because you might be interested in looking up her work. She’s written quite a bit on participatory AI and marginalized communities. So I can just start there, and you can kind of check that out.

FASKIANOS: Wonderful. Noreen, do you— 

HERZFELD: No, that’s—can we squeeze in one more question? 

FASKIANOS: Let’s try to squeeze in one more question, yes. I’m going to go to Curtis Baxter of the American Association for the Advancement of Science.

Q: Thank you very much. And thank you, Noreen, for the plug there. As Noreen mentioned, I’m from the AAAS Dialogue on Science, Ethics, and Religion. And this has been a wonderful conversation. 

And I think what everyone is calling for, and what it seems like—and this is just a comment, and maybe I’ll have a question at the end; please forgive me—is that this is a(n) Asilomar moment, if folks know about recombinant DNA and the Asilomar Conference back in the ’70s where scientists, ethicists—it was a very secluded group of folks. But I think this is—it calls for more of that, more interfaith and ecumenical. And it seems like the Catholic Church, the Roman Catholic Church, really seems like they’re trying to spearhead that.

But can you talk more about more of an ecumenical interfaith piece? In regards to—it was not too long ago that two prominent Evangelical conservative Baptist preachers sent a letter to President Trump in regards to AI. And how do we open up this umbrella and conversation to bring in all perspectives in this, because it’s going to hit everyone. And to one quick point, in regards to Ilia, in regards to the connectivity between science—or faith and technology, it is true, especially after COVID, a lot of congregations had to adapt, and overcome, and learn how to use technology, Zoom and all the like. But I think they’re becoming more and more familiar with it and comfortable with this as well. And I’ll just leave it at that. And thank you very much.

FASKIANOS: So if you both want to just address, make closing—brief closing remarks, as we’re at the end of our hour. 

DELIO: Well, I think, you know, as Noreen pointed out, she’s involved already with some of these organizations that are, you know, bringing religion and technology together. I wonder if we need something, another type of forum, you know, that is something like the World Parliament of Religions, something like the World Parliament of Religions with technology as its principal focus. That every, you know, person who’s involved in some type of religious tradition, and yet, you know, very engaged in the world, can begin to come together, dialogue, and think together. We need a type of thinking together for a technological world that—and so not so much in pockets of, like, Catholics think here, and Baptists think here, and Jews think here. You know, we have to think across—I think that religion, in my view, is becoming secondary to what we have in common, and that’s the life of this planet, the ecological, human, and technological life of this planet.  

So I think we’re going to need a new type of forum. Maybe one exists, I don’t know. But we need something that brings these levels into a new depth of discussion. And just by way of mention, I’ve been involved in an endeavor called Human Energy. And Human Energy is seeking to map out the next phase of human evolution on the level of the noosphere, that term that Teilhard de Chardin used, the level of the mind. And it’s across religions. It’s a combination of scientists, religion scholars, ethicists, philosophers. And it’s quite varied in its composition. But the discussions are also quite rich. And I think something like Human Energy—the Human Energy conference, or the Human Energy endeavor can be very helpful in navigating our future.

HERZFELD: I’d just like to thank all of you for coming to this. You know, such a wide variety of people. But to say that you are the leaders in the field. You know, we’re academic thinkers. It’s really up to you to use your organizations and your influence to bring this discussion out into a wider realm, and to be heard. The Catholic Church is doing a lot about this. There are other groups, such as—there’s a group called AI and Faith that is composed of both academic thinkers and industry leaders, that is working on this question of how to align AI with the values that have been propounded in all of our major faiths over the years. And I think every group that can add their voice to this very necessary conversation, the more the merrier. So thank you all for attending this, for being willing to think about and discuss this issue.

Irina, you’re muted. 

FASKIANOS: Oh. Yeah, I am muted. Thank you. So thank you both, Ilia Delio and Noreen Herzfeld, for leading this conversation. We really appreciate it. I regret that we couldn’t get to all of your questions. We will just revisit this topic. We must. The Council is working on and standing up a lot of research on AI. And it’s clear to me from this conversation that our value add is to bring different voices, religious leaders and others, together so that we can tackle this question and really sort of move it forward. So with that, thank you again to both of you.

You can email us at [email protected] with any questions, comments, suggestions. We will share out the video, the transcript, and the resources that were mentioned today. And with that, I hope you enjoy the rest of your day. So thank you both. 
