Technology and Innovation

  • Religion
    Social Justice Webinar: Religion and AI
    Josh Franklin, senior rabbi at the Jewish Center of the Hamptons, and Noreen Herzfeld, professor of theology and computer science at the College of Saint Benedict and Saint John’s University, discuss how AI is affecting religious communities and the relationship between science, technology, and religion. Johana Bhuiyan, senior tech reporter and editor for the Guardian, moderated.
    Learn more about CFR’s Religion and Foreign Policy Program.
    FASKIANOS: Welcome to the Council on Foreign Relations Social Justice Webinar Series, hosted by the Religion and Foreign Policy Program. This series explores social justice issues and how they shape policy at home and abroad through discourse with members of the faith community. I’m Irina Faskianos, vice president for the National Program and Outreach here at CFR. As a reminder, this webinar is on the record, and the video and transcript will be available on CFR’s website, CFR.org, and on the Apple podcast channel Religion and Foreign Policy. As always, CFR takes no institutional positions on matters of policy. We’re delighted to have Johana Bhuiyan with us to moderate today’s discussion on religion and AI. Johana Bhuiyan is the senior tech reporter and editor at the Guardian, where she focuses on the surveillance of disenfranchised groups. She has been reporting on tech and media since 2013 and previously worked at the L.A. Times, Vox Media, BuzzFeed News, and Politico New York. She attended Lehigh University, where she studied journalism as well as global and religion studies. She’s going to introduce our panelists, have the discussion, and then we’re going to invite all of you to ask your questions and share your comments. So thank you, Johana. Over to you.
    BHUIYAN: Thank you so much, Irina. Thank you, everyone, for joining us. As Irina said, my name is Johana Bhuiyan, and I cover all the ways tech companies infringe on your civil liberties. And so today we’ll be talking about a topic that’s not completely unrelated to that but is a little bit of a tangent: “Religion and AI.” AI is unfortunately a term that suffers from both being loosely defined and often misused, so I want to be a little bit specific before we begin. For the most part, my feeling is this conversation will focus on a lot of generative AI tools and the role they play in religious communities and for faith leaders, and some of the issues and concerns with that. That being said, if the conversation goes in that direction, I will take it there. I would love to also touch on religious communities’ roles in thinking about and combating the harms of other forms of AI as well. But again, we’ll be focusing largely on generative AI. And today with us we have two really wonderful panelists who come from various perspectives on this. Both are really well-versed in theology, of course, as well as artificial intelligence and computer science. First, we have Rabbi Josh Franklin, who wrote a sermon with ChatGPT that you may have read about in news articles, including one of mine. He is a senior rabbi at the Jewish Center of the Hamptons in East Hampton, and he co-writes a bimonthly column in Dan’s Papers called “Hamptons Soul,” which discusses issues of spirituality and justice in the Hamptons.
    He received his ordination at Hebrew Union College and was the recipient of the Daniel and Bonnie Tisch Fellowship, a rabbinical program exploring congregational studies, personal theology, and contemporary religion in North America. And we also have Noreen Herzfeld, who most recently published a book titled The Artifice of Intelligence: Divine and Human Relationship in a Robotic World. That was published by Fortress, so go out and get a copy. She is the Nicholas and Bernice Reuter Professor of Science and Religion at St. John’s University and the College of St. Benedict, where she teaches courses on the intersection of religion and technology. Dr. Herzfeld holds degrees in computer science and mathematics from Pennsylvania State University and a PhD in theology from the Graduate Theological Union in Berkeley. Thank you both so much for having this conversation with me.
    FRANKLIN: Thank you for having us.
    BHUIYAN: I do want to set the stage a little bit. I don’t want to assume anyone has a very thorough knowledge of all the ways AI has sort of seeped into our religious communities. In particular, when people think of ChatGPT and other chatbots like that, they’re not necessarily thinking of, OK, well, how is it used in a sermon? How is it used in a mosque? Or how is it used in this temple? We’ve had the one-off situations like, Rabbi Franklin, your sermon. But I think it’d be great to get an idea of how else you’ve been seeing ChatGPT and other chatbots used in both of your respective worlds and communities. One example I can give before I turn it over is that there was a very short-lived chatbot called HadithGPT, which purportedly would answer questions about Islam based on Hadiths, which are the life and sayings of the Prophet, peace be upon him. But immediately the community was like, one, this is really antithetical to the rich, scholarly tradition of Islam. Two, the questions that people might be asking can’t only be answered by Hadiths. And, three, chatbots are not very good at being accurate. And so the people behind it immediately shut it down. I want to turn it over to you first, Rabbi Franklin. Is there a version of HadithGPT in the Jewish community? Are you still using ChatGPT to write sermons? Or what other use cases are you seeing?
    FRANKLIN: I actually did see a version of some kind of parallel within the Jewish world to HadithGPT. It was RabbiGPT, something along those lines. But actually, Google has done a great job already for years answering very trivial questions about Judaism. So if you want to know where a particular quote comes from in the Torah, you type it into Google, and you get the answer. And if you want to know how many times you shake the lulav, this traditional plant that we shake on Sukkot, you can find that on Google. ChatGPT is the same in terms of purveying information and actually generating trivial content or answering trivial questions, yeah. That far surpasses any rabbi’s ability, really. It’s a dictionary or encyclopedia of information. But religion goes far beyond answering simple questions. We’re asking major questions, ultimate questions about the nature of life, and I don’t think artificial intelligence is quite there yet. But when you get into the philosophical, the ethical, the moral, the emotional, that’s when you start to see the breakdown in terms of the capabilities of how far artificial intelligence can really answer these kinds of questions.
    BHUIYAN: Right.
    And I do want to come back to that, but I first want to go to Noreen. I mentioned that the immediate reaction to HadithGPT was, OK, this is antithetical to the scholarly tradition within Islam. But is there actually a way that religious scholars and religious researchers, or people who are actually trying to advance their knowledge about a particular faith, are using ChatGPT and other chatbots to do that in a useful and maybe not scary and harmful way? (Laughs.)
    HERZFELD: Well, I’m in academia. And so, of course, ChatGPT has been a big issue among professors as we think about, are our students going to be using this to do their assignments? And there’s a lot of disagreement on whether it makes any sense to use it or not. I think right now, there’s some agreement that the programs can be helpful in the initial stages. So if you’re just brainstorming about a topic, whether you’re writing an academic paper, or writing a homily, or even preparing for, let’s say, a church youth group or something, it can be helpful if you say, give me some ideas about this topic, or give me some ideas for this meeting that we’re going to have. But when it comes to a more finished product, that’s the point where people are saying, wow, now you have to really be careful. Within the Christian tradition there are now generative AI programs that supposedly explicate certain verses or pericopes in the Bible. But they tend to go off on tangents. Because they work stochastically in just deciding what word or phrase should come next, they’ll attribute things to the Bible that aren’t there. And so, right now I think we have to warn people to be extremely careful. There have been earlier AIs. Germany, for example, had a robot called BlessU-2. If someone asked it for a prayer about a particular situation, it would generate a prayer. If someone asked it for a Bible verse that might fit a particular setting, it actually would come out with a real Bible verse. But I think a lot of people—and this goes back to something Josh said, or something that you said about the Hadith—the Christian tradition is an extremely embodied tradition. When you go to mass, you eat bread, you drink wine, you smell incense, you bow down and stand up. The whole body is a part of the worship. And that’s an area where AI, as something that is disembodied, something that’s only dealing with words, can’t catch the fullness. I think one would find the same thing in the Muslim tradition, where you’re prostrating yourself, you’re looking to the right and the left. It’s all involving the whole person, not just the mental part.
    FRANKLIN: Yeah, I’d phrase some of that a little bit differently: the biggest thing AI lacks is definitely the ability to generate a sense of spirituality. And I think part of the reason is that spirituality has to do with feeling more than it does data. Whereas AI can think rationally, can think in terms of data, and can actually give you pseudo-conclusions that might sound spiritual, at the end of the day spirituality is something that is really about ineffability. That is, you can’t use words to describe it. So when you have a generative language model that’s trying to describe something that’s really a feeling, that’s really emotional, that’s really a part of the human experience, even the best poets struggle with this. So maybe AI will get better at trying to describe something that, up until now, has very much been about emotion and feeling.
    But at the end of the day, I really don’t think that artificial intelligence can understand spirituality or describe it. And it definitely can’t understand it, because one of the things that AI lacks is the ability to feel. It can recognize emotion. And it can do a better job at recognizing emotion than, I think, humans can, especially in terms of cameras being able to recognize facial expressions. Humans are notoriously bad at that. Artificial intelligence is very good at that. So it can understand what you might be feeling, but it can’t feel it with you. And that’s what genuine empathy is. That’s what religion is at its best, where it’s able to empathize with people within the community and be in sacred encounter and relationships with them. And although AI can synthesize a lot of these things that are extraordinarily meaningful for human encounter and experience, it’s not really doing the job of capturing the meat of it, of capturing really where religion and spirituality excel.
    BHUIYAN: Can I—
    HERZFELD: I’m sorry, but to underline the importance of emotion, when people talk about having a relationship with an AI, and especially expecting in the future to have close relationships with an AI, I often ask them: Well, would you like to have a relationship with a sociopath? And they’re like, well, no. And I say, but that’s what you’re going to get. Because the AI might do a good job of—you know, as Josh pointed out, it can recognize an emotion. And it can display an emotion if it’s a robot, or if there’s, let’s say, an avatar on a screen. But it doesn’t ever feel an emotion. And when we have people who don’t feel an emotion but might mentally think, oh, but what is the right thing to do in this situation, we often call those people sociopaths. Because they just don’t have the same empathetic circuit to feel your pain, to know what you’re going through. And coming back to embodiment, so often in that kind of situation what we need is a touch, or a hug, or just someone to sit with us. We don’t need words. And words are all the generative AI has.
    FRANKLIN: I would agree with you like 99.9 percent. There’s this great scene in Sherry Turkle’s book, Alone Together. I don’t know if you’ve read it.
    HERZFELD: Yes.
    FRANKLIN: She talks about this nursing home where they have this experimental—some kind of a pet that would just kind of sit with you. It was a robotic pet that would just make certain sounds that would be comforting, that a pet would make. And people found it so comforting. They felt like they had someone listening to them, responding to what they were saying, although it really wasn’t. It was synthetic. And for Sherry Turkle, who’s this big person in the tech world, it automatically kind of transformed her whole perspective on what was going on in such an encounter. She transformed her perspective on technology based on this one little scene that she saw in this nursing home. Because it was sociopathic, right? This doesn’t have actual emotion. It’s faking it, and you can’t be in legitimate relationship with something that isn’t able to reciprocate emotion. It might seem like it. And I know, Noreen, I asked you a question a little earlier—before we got started with this—about Martin Buber, who I do want to bring up. Martin Buber wrote this book exactly 100 years ago, I and Thou, which at the time really wasn’t all that influential, but became very influential in the field of philosophy. And Martin Buber talks about the encounters that we have with other individuals.
    He says most of the transactions that we have between two people are just that, transactional. You go to the store, you buy something, you give them cash, they give you money back, and you leave. But that’s an I-it encounter. That person is a means to an end. But when you’re really engaged with another human being in relationship, there’s something divine, something profound that’s happening. And he says, through that encounter, you experience God; that spark that’s within that encounter, that’s God. And I have changed my tune during the age of COVID and being so much on Zoom, to say that, actually, I do believe you can have an encounter with another individual on Zoom. That was a stretch for me. I used to think no, no, you can’t do that, unless you have that touch, that presence, that physical presence, maybe even through some kind of being with another human being. But in terms of having an encounter with artificial intelligence, no matter how much it might be able to synthesize the correct response, it can’t actually be present, because it’s not conscious. And that’s a major limitation in terms of our ability to develop relationships or any kind of encounter with something that’s less than human.
    HERZFELD: Yeah. It seems to fake consciousness, but it doesn’t actually have the real thing. The Swiss theologian Karl Barth said that to have a truly authentic relationship you need four things. And those were to look the other in the eye, to speak to and hear the other, to aid the other, and to do it gladly. And the interesting thing about those four—I mean, to look the other in the eye, that doesn’t mean that a blind person cannot have an authentic relationship. But it is to recognize that the other is fully other and to recognize them as fully present. To speak to and hear the other, well, you know, AI is actually pretty good at that. And to aid the other—computers aid us all the time. They do a lot of good things. But then you get to the last one, to do it gladly. And I think there is the real crux of the matter, because to do it gladly you need three things. You need consciousness, you need free will, and you need emotion. And those are the three things that AI really lacks. So far, we do not have a conscious AI. When it comes to free will, well, how free, really, is a computer that does what it’s programmed to do? And then can it do anything gladly? Well, we’ve already talked about it not having emotion. So it cannot fulfill that last category.
    FRANKLIN: Yeah, it does it almost so well. And I really say “almost.” We really do confuse intelligence and consciousness quite often. In fact, AI can accomplish through algorithms a lot of the tasks that we accomplish emotionally. It’s kind of like how a submarine can go underwater without gills, but it’s not a fish. It’s accomplishing the same thing, but it’s not really the same thing. It’s not living. It doesn’t have anything within it that enables us to be in relationship with it. And that is—yeah, I love those four criteria that you mentioned. Those are really great and helpful.
    HERZFELD: And you just mentioned that it’s not living. When you were talking about the pet in the nursing home, I was thinking, well, there are degrees of relationality. I can be soothed by a beautiful bouquet that somebody brings if I’m in the hospital, let’s say, just looking at the flowers. And certainly everyone knows now that we lower our blood pressure if we have a pet, a cat or a dog, that we can stroke.
    And yet, I feel like I have a certain degree of relationship with my dog that I certainly don’t have with the flowers in my garden, because the dog responds. And sometimes the dog doesn’t do what I tell her to. She has free will. There’s another story in that same book by Sherry Turkle where instead of giving the patient in the nursing home this robotic seal, they give them a very authentic-looking robotic baby. And what was really sad in that story was that one of the women so took to this robotic baby, and to cradling it and taking care of it, that she ignored her own grandchild who had come to visit her. And Sherry Turkle said at that point she felt like we had really failed. We had failed both the grandchild and the grandmother. And that’s where I think we fail. One of the questions that keeps bedeviling me is: What are we really looking for when we look for AI? Are we looking for a tool or are we looking for a partner? In the Christian tradition, St. Augustine said, “Lord, you have made us for yourself and our hearts are restless until they rest in you.” I think that we are made to want to be in relationship, deep relationship, with someone other than ourselves, someone that is not human. But as we live in a society where we increasingly don’t believe in God, don’t believe in angels, don’t believe in the presence of the saints, we’re looking for a way to fill that gap. And I think many people who are not religious are looking towards AI to somehow fill this need to be in an authentic relationship with an other.
    BHUIYAN: And we’re talking a lot about sort of that human connection. And, Noreen, you said this in your book, that AI is an incomplete partner and a terrible surrogate for other humans. And it sounds like both of you agree that there is not a world where AI, in whatever form, could sufficiently replace, or even come close to replacing, that human connection. But on a practical note, Rabbi Franklin, you mentioned Rabbi Google. You know, a lot of faith practices are incredibly, to reuse the word, practice-centric, right? That is the building block of the spirituality. Within the Muslim community, of course, right, the five daily prayers. There’s a version of this in many different faith practices. And so people are seeking answers about the practical aspect of their spirituality from a tool, even if they’re thinking, yeah, this is a tool. Trust, but verify. If they’re seeking those answers from this tool that has a tendency to hallucinate or make mistakes, is there a risk that they will over-rely on this particular tool, and that the tool can create sort of a friction between them and the community? Because, I’ll admit it, as someone who practices a faith and also is well-versed in the issues with Google and the misinformation that it can surface, I will still Google a couple—(inaudible). I will turn to Google and be, like: How do I do this particular prayer? I haven’t done it in a very, very long time. And of course, I’m looking through and trying to make sure that the sources are correct. But not everyone is doing that. Not everyone is going through with a fine-tooth comb. And with ChatGPT, given how almost magical it feels to a lot of people, there is even less of a likelihood that they will be questioning it. And it is getting more and more sophisticated. So it’s harder to question.
    So is there a concern within religious communities that this tool will become something that will create even one more obstacle between a person and their faith leader, or their clergy, or their local scholars?
    FRANKLIN: I’m not that worried about it. I think what synagogues and faith-based communities do is something that’s really irreplicable by ChatGPT. We create community. We create shared meaningful experience with other people. And there is a sense that you need physical presence in order to be able to do that. Having said that, yeah, I use ChatGPT as a tool. I think other people will use it too. And it will help a lot with getting the information that you need in a very quick, accessible way. Sometimes it’s wrong. Sometimes it makes mistakes. I’ll give you an example of that. I was asking ChatGPT, can you give me some texts from Jewish literature on forgiveness? And it gives me this text about the prodigal son. And I typed right back in, and I said: That’s not a Jewish text. That’s from the Gospels. And it says, oh, you’re right. I made a mistake. It is from the Gospels. It’s not a Jewish text. I actually thought the most human thing that it did in that whole encounter was admit that it was wrong. Maybe that’s a lack of humanness, because human beings often have an inability to admit that we were wrong, but I actually love the fact that it admitted, oh, I made a mistake, and it didn’t double down on its mistake. It’s learning and it’s going to get better. I think if we measure artificial intelligence by its current form, we’re really selling it short for what it is going to be and how intelligent it actually is. And, by the way, I think it is extraordinarily intelligent, probably more intelligent than any of us. But we have human qualities that artificial intelligence can never really possess. And I think the main one, which we already touched on, is the idea of consciousness. And I think the experiences that you get within a faith-based community are those experiences that specifically relate to human consciousness, not to intelligence. People don’t come to synagogue to get information. I hope they go to ChatGPT or Google for that. That’s fine. People come to synagogue to feel something more within life, something beyond the trivial, something that they can’t get by reading the newspaper, that they can’t get by going on Google. It’s a sense of community, a sense of relationship. And so I don’t think that artificial intelligence is going to distract from that. Yeah, I guess it’s possible, but I’m not too worried about it.
    BHUIYAN: And—go ahead, Noreen, yeah.
    HERZFELD: I was just going to say, I think you need to be a little careful when you say it’s more intelligent than we are. Because there are so many different kinds of intelligence.
    FRANKLIN: Yes. IQ intelligence, let me qualify.
    HERZFELD: If intelligence is just having immediate access to a lot of facts, great, yeah. It’s got access we don’t have. But intelligence is also having, first of all, emotional intelligence, which we’ve already discussed, and also having models of the world. This is often where these large language models break down: they don’t have an interior model of the world and the way things work in the world, whether that’s the physical world or the social world. And so they’re brittle around the edges.
    If something hasn’t been discussed in the texts it has been trained on, it can’t extrapolate from some kind of basic mental model, which is the way we do things when we encounter something brand new. So, in that sense, it’s also lacking something that we have.
    BHUIYAN: There’s a question from the audience that I think is a good one, because it sounds to me, and correct me if I’m wrong, that, Noreen, you in particular believe that the doomsday scenario that people are always talking about, where AI becomes sentient, takes over, and we become subservient to AI, is unlikely. And so the question from the audience is: It seems like most of the arguments are, we can tell the difference, so AI won’t replace human connection. But what happens if and when AI does pass the Turing test? Is that something that you see as a realistic scenario?
    HERZFELD: Oh, in a sense we could say AI has already passed the Turing test. If you give a person who isn’t aware that they’re conversing with ChatGPT some time to converse with it, they might be fooled. Eventually ChatGPT will probably give them a wrong answer. But then, like Josh said, it’ll apologize and say, oh yeah, I was wrong. Sorry. So we could say that, in a sense, the Turing test has already been passed. I am not worried about the superintelligent being that’ll decide that it doesn’t need human beings, or whatever. But I’m worried about other things. I mean, I think in a way that’s a red herring that distracts us from some of the things we really should be worried about. And that is that AI is a powerful tool that is going to be used by human beings to exert power over other human beings, whether it’s by advertently or inadvertently building our biases into this tool so that the tool treats people in a different fashion. I’m also worried about autonomous weapons. They don’t need to be superintelligent to be very destructive. And a third thing that I’m worried about is climate change. And you might say, well, what has that got to do with AI? But these programs, like the large language models behind ChatGPT, take a great deal of power to train. They take a great deal of power to use. If you ask a simple question of ChatGPT instead of asking Google, you’re using five to ten times the electricity, probably generated by fossil fuels, to answer that question. So as we scale these models up, and as more and more people start using them more and more of the time, we are going to be using more and more of our physical resources to power them. And most of us don’t realize this, because we think, well, it all happens in the cloud. It’s all very clean, you know. This is not heavy industry. But it’s not. It’s happening on huge banks of servers. Just for an example, one of Microsoft’s new server farms in Washington state is using more energy per day than the entire county that it’s located in. So we just are not thinking about the cost that underlies using AI. It’s fine if just a few people are using it, or just using it occasionally. But if we expect to scale this up and use it all the time, we don’t have the resources to do that.
    BHUIYAN: Yeah, and you mentioned electricity. A couple of my coworkers have done stories about the general environmental impact. But it’s also water. A lot of these training models use quite a bit of water to power these machines.
    HERZFELD: To cool the machines, yeah.
    BHUIYAN: And so yeah, I’m glad that you brought that up, because that is something that I think about quite a bit, covering surveillance, right? Religious communities are these sort of incredibly strong communities that can have a really huge social impact. And we’ve had various versions of AI for a very, very long time that have harmed some religious communities and other marginalized groups. You mentioned a couple of them. Surveillance is one of them. There are also things that feel a little bit more innocuous but have bias and discrimination built into them, like hiring algorithms, mortgage lending algorithms, and algorithms to decide whether someone should qualify for bail or not. And so my general question is: Is there a role that religious communities can play in trying to combat those harms? How much education should we be doing within our communities to make sure people are aware that it’s not just a fun, quirky tool that will answer your innocuous questions? AI is also powering a lot more harmful and very damaging tools as well.
    FRANKLIN: I’d love for religious leaders to be a part of the ethics committees that sit at the top of how AI makes certain decisions that are going to be a part of everyday real life. So, for example, when your self-driving car is driving down the road and a child jumps out in the middle of the street, your car has to either swerve into oncoming traffic, killing the driver, or hit the child. Who’s going to decide how the car behaves, how the artificial intelligence behaves? I think ethics is going to be a huge role that human beings need to take on in terms of training AI, and I think religious leaders, as well as ethicists and philosophers, really need to be at the head, not the lay programmers. Not that they’re lay, but they’re not really trained in ethics and philosophy and, for that matter, spirituality and religion. I really think that we need to be taking more of an active role in making sure that the ethical discussions around the programming of artificial intelligence have some kind of strong ethical basis, because I think the biggest danger is who’s sitting in the driver’s seat. Not in the car scenario but, really, who’s sitting in the driver’s seat of the programming.
    BHUIYAN: Noreen, do you have anything to add onto that?
    HERZFELD: No, I very much agree with that. I do think that if we leave things up to the corporations that are building these programs, the bottom line is going to be what they ultimately consult. I know that at least one car company—I believe it’s Mercedes-Benz—has publicly said that in the scenario that Josh gave, the car is going to protect the driver. No matter how many children jump in front of the car, the car will protect the driver. And the real reason is that they feel like, well, who’s going to buy a car that wouldn’t protect the driver in every situation? If you had a choice between a car that would always protect the driver and a car that sometimes would say, no, those three kids are more valuable—
    FRANKLIN: And that’s a decision made by money, not made by ethics.
    HERZFELD: Exactly.
    FRANKLIN: Yeah.
    BHUIYAN: Right. Rabbi Franklin, I have a question; there’s a good follow-up from the audience. Are there ethics committees that you know of right now that are dealing with this issue? And the question from the audience, from Don Frew, is: How do we get those religious leaders onto those committees?
    FRANKLIN: We have to be asked, in short, in order to be on those committees.
    I don’t know if it’s on the radar even of these corporations who are training AI models. But I think there are going to be very practical implications coming up in the very near future where we do need to be involved in ethical discussions. There are religious leaders who sit on all sorts of different ethics committees, but as far as I know there’s nothing that’s set up specifically related to AI. That doesn’t mean there isn’t. I just don’t know of any. But, if you were to ask me, right now we’ve seen articles about the decline of the humanities in colleges and universities. I would actually say that the humanities, if I had to make a prediction, are probably going to make a comeback, because these ethical, philosophical, spiritual questions are going to be more relevant than ever. And if you’re looking at programming, law, and medicine, those are actually fields where AI is going to be more aggressive and play a larger role in doing the things that humans are able to do.
    BHUIYAN: Right. I do want to bring the conversation back to, you know, religion, literally. In your book, Noreen, you bring up a question that I thought was just so fascinating: whether we should be deifying AI. And it sounds like the short answer is no. But my fascination with it is: How realistic of a risk is that? One example that I just knew off the top of my head is the Church of AI, which has been shut down and was started by a former Google self-driving engineer who was later pardoned for stealing trade secrets. His name is Anthony Levandowski. So, yeah, take what he says with a grain of salt, I guess is what I’m saying. But the church was created to be dedicated to, quote, “The realization, acceptance, and worship of a godhead based on AI developed through computer hardware and software.” Is this a fluke? Is this a one-off? Do you think there’s, like, a real risk that, as AI gets more sophisticated, people will be sort of treating it as, like, a kind of godlike, I don’t know, figure, if that’s the right word, but some sort of god?
    FRANKLIN: It sounds like a gimmick to me. I mean, look, it’s definitely going to capture the media headlines for sure. You do something new and novel like that, no matter how ridiculous it is, people are going to write about it. And it’s not surprising that it failed, because it didn’t really have a lot of substance. At least I hope the answer is no, that that’s not going to be a real threat or a major concern. Who knows? I mean, I really think that human beings are bad at predicting the future. Maybe AI will be better at predicting the future than we are. But my sense, for what it’s worth, is that no, that’s not really a concern.
    HERZFELD: Well, I would be a little more hesitant to say it’s not any type of a concern. I do not think there are suddenly going to be a lot of churches like the one you mentioned springing up, in which people deify AI in the same sorts of ways in which we’ve worshipped God. But we worship a lot of stuff. We worship money all too often. We worship power. And we can easily worship AI if we give it too much credence, if we really believe that everything it says is true, that what it does is the pinnacle of what human beings do. And this is what worries me: if we say, well, it’s all about intelligence. I’ve often thought, we’re trying to make something in our own image, and what we’re trying to give it is intelligence. But is that the most important thing that human beings do?
    I think in each of our religious traditions we would say the most important thing that human beings do is love, and that this is something that AI can’t do. So my worry is that, because in some ways we’re more flexible than machines are, as the machines start to surround us more and we start to interact with them more, we’re going to, in a sense, make ourselves over in their image. And in that way we are sort of deifying it. In the Christian tradition we talk about deification as the process of growing in the image and likeness of God, and if instead we grow in the image and likeness of the computer, that’s another way of deifying the computer.
    BHUIYAN: I want to turn it over to audience questions; there are some hands raised. So I want to make sure that we get some of them in here as well.
    OPERATOR: Thank you. We will take the next question from Rabbi Joe Charnes.
    CHARNES: I appreciate that there are potential benefits from AI. That’s simply undeniable. The question I have, and the concern that I have, which I think you certainly both share, and I don’t know the way around it, is that as humans we do often relate to human beings. That’s our goal in life. That’s our purpose. But human relationships are often messy, and it’s easier to relate to disembodied entities or objects. I see people in the religious world relating now through Zoom. Through their Zoom sessions they have church, so they’re relating to church and God through a screen. And when you speak of ethics and spirituality, Rabbi, of somehow imposing that or placing that into this AI model, I don’t see how you can do that. And if there’s a way out of human connection while modeling human connection to some extent, I do fear we’re really going to go in that direction, because it’s less painful.
    FRANKLIN: So I’ll try to address that. There’s a great book that’s going to sound like it’s completely unrelated to this topic. It’s by Johann Hari, and the book is called Chasing the Scream. What he argues is that, generally, addiction is not the opposite of sobriety. Addiction is about being disconnected from other individuals and using a substance or a thing as a proxy for the relationships that we have with other people. Love that idea. I think there is a huge danger that artificial intelligence can be just that, the proxy for human relationship when we’re lonely, when we’re disconnected from others; it’s going to be the thing that we turn to. I would even echo Noreen’s fear that we end up turning to AI in very inappropriate ways and making it almost idolatrous. When we say we’re deifying it, what we’re really doing is worshipping AI as an idol, as something that won’t actually give you the connection even though you think that it will. I think that’s a very legitimate fear. Having said that, I think that AI is going to be a great tool for the future if it’s used as a tool. Yes, there are tremendous dangers with new technology and newness. Every single new innovation, every single revolutionary technological change has come with huge dangers, and AI is no different. I hope we’re going to be able to figure out how to really put the correct restrictions on it, how to really make sure that the ethics of AI has involvement from spiritual leaders and ethicists and philosophers. Am I confident that we’ll be able to do that? I don’t know. I think we’re still at the very beginning stages of things, and we’ll see how it develops.
    HERZFELD: Two areas that I worry about, because these are areas where people are particularly looking at AI, are the development of sex bots, which is happening, and the use of AI as caregivers, either for children or for the elderly. Particularly for the elderly, this is an area that people are looking at very strongly. I think for religious leaders the best thing that you can do is to try to make sure that the people in your congregation—to do everything you can to foster the relationships among the people, because, as Josh was saying, we’ll use this as a substitute if we don’t have the real thing. But if we are in good and close and caring relationships with other human beings, then the computer will not be enticing as a substitute, and we might merely use it as a tool or just not bother with it at all. So I think what we really need to do is tend to the fostering of those relationships, particularly for those that are marginalized in some way: whether it’s the elderly, parents with children, particularly single parents who might be needing help, or those that are infirm in some way.
    OPERATOR: We will take our next question from Ani Zonneveld of Muslims for Progressive Values.
    ZONNEVELD: Hi. Good morning. Good afternoon. You had raised that question, Johana, about what the faith communities are doing or can contribute to a better aggregated response on AI. I just wanted to share that members of our community have been creating images of, for example, women leading prayer in Muslim communities, so that those are some of the aggregated information that could be filtered up into the way AI is being used as a tool. I think, at the end of the day, the AI system works as an aggregate, pulling in information that’s already out there, and I think it’s important for us in the faith communities to create the content itself from which the AI can pull. That also overcomes some of the biases, particularly the patriarchal interpretations of faith traditions, for example, right? The other thing I wanted to share with everyone is that there’s a real interest in this at the United Nations, led by an ethics professor from the university in Zurich. I taught a master’s ethics class there as a person of faith, and so there’s this international database system agency that is being created at the UN level. Just thought I would share that with everyone. Thanks.
    FRANKLIN: Thank you.
    HERZFELD: And I would also share that the Vatican is working on this as well. I am part of a committee within the Dicastery for Culture and Education, and we’ve just put together a book on AI, and the Pope is going to be using his address on January 1, the World Day of Peace, to address AI as a topic.
    FRANKLIN: I’m pretty sure rabbis across the country right now are writing sermons for tomorrow, which begins Rosh Hashanah, our high holiday season, and many rabbis—most rabbis, perhaps—are going to be preaching about AI.
    OPERATOR: We will take our next question from Shaik Ubaid from the Muslim Peace Coalition.
    UBAID: Thank you for the opportunity. Can you hear me?
    BHUIYAN: Yes.
    UBAID: Overall, we are sort of putting down AI because it does not have the human quality of empathy. But if instead of that we focus on using it as a tool, whether in educating the congregations or in jurisprudence, then we would be using it well. When it comes to human qualities, another quality is courage. We may have the empathy, but many times we do not show the courage.
    For example, we see pogroms going on in India and an impending genocide. But whether it be the—a (inaudible) chief or the chief rabbi of Israel or the Vatican, they do not say a word to Modi, at least publicly, to put pressure, and the same with the governments in the West. And sometimes their mouthpieces in the U.S. are even allowed to come and speak at respectable fora, sometimes even at CFR. So instead of expecting too much from the AI, we should use it with its limitations. And sometimes there is the bias and the arrogance that we show, thinking that since we are humans, of course we are superior to any machine. But many times we fail ourselves. So if the machines are failing us, that should not be too much of a factor. Thank you.
    FRANKLIN: Very well said.
    HERZFELD: Yeah.
    BHUIYAN: There are other audience questions that sort of build on that. We’re talking about humans having bias and our own thoughts sort of being a limiting factor for us. But, obviously, these machines and tools are being built by humans who have biases that they may be putting into the training models. And so one of the topics that Frances Flannery brought up is the ways in which AI is circumventing our critical thinking. We talked about over-reliance on these tools within the faith practice, but is there a concern beyond that, right? We talked about AI when it comes to very practical things, like these practices that we do. I understand it doesn’t replace the community, and it doesn’t replace these spaces where we’re seeking community. But people are asking questions that are much more complex, that are not trivial and are not just the fundamentals of the religion. Is there a concern with people using chatbots in place of questioning particular things or trying to get more knowledge about more complex topics?
    FRANKLIN: I would actually just respond by saying that I don’t think AI circumvents critical thinking. I actually think it forces us to think more critically. By taking over the trivial things, the trivial data points, and the rational kind of stuff that AI can actually do and piece together and solve, even just complex IQ-related issues, it forces us to think about more critical issues in terms of philosophy, in terms of faith and spirituality and theology, all things that I think AI might be able to parrot. But it can’t actually think creatively or have original thoughts. So I actually think that AI gets rid of the dirty work: the summaries of what other people have said, maybe even generating two ideas together. But really true creativity, I think, is in the human domain, and it’s going to force us to think more creatively. Maybe I’m just an optimist on that, but that’s my sense.
    HERZFELD: And I’ll give the more pessimistic side, which is not to say—I mean, I believe that everything that Josh just said is correct. My concern is that we might end up using AI as a way to evade responsibility or liability. In other words, if decisions are made—Johana, you were talking earlier about how we use AI to decide who gets bail, who gets certain medical treatments, these things. If we simply say, well, the computer made a decision, then we don’t think critically about whether that was the right decision or whether the computer took all things into account. I think we need to think about the same thing when we look at autonomous weapons, which are really coming down the pike, and that is: How autonomous do we really want them to be?
    We can then, in a way, put some of the responsibility for mistakes that might be made on the battlefield onto the computer. But in what sense can we say a computer is truly responsible? As long as we use it as a component in our decision-making, which I think is what Josh was saying, this can be a powerful tool. But I do fear when we let it simply make the decision. And I’ve talked to generals who are worried about the fact that if we automate warfare too much, the pace of warfare may get to be so fast that it’s too fast for human decision-makers to actually get in there and make real decisions. That’s a point where we’ve abdicated something that is fully our responsibility and given it to the machine.
    FRANKLIN: Let’s not forget, though, how strong human biases are. I mean, read Daniel Kahneman’s book Thinking, Fast and Slow and you’ll see all these different heuristics for human bias that are unbelievable. Going to the realm of bail, there was a study that showed that judges who haven’t had their lunch yet are much more likely to reject bail than those who just came out of their lunch break. I mean, talk about biases that exist in terms of the ways that we make decisions. I would say that ultimately, although there are biases that we implant within these algorithms that will affect the outcomes, artificial intelligence and these algorithms are probably going to do a better job than human beings alone. Having said that, to echo Noreen, when we use them in tandem with human decision-making, I think we get the best of both worlds.
    BHUIYAN: Right. I mean, there are so many examples. Forget warfare and other places. In policing it happens all the time, right? There are facial recognition tools that are intended to be used as sort of a lead generator, a tool in an investigation. But we’ve seen time and again that such a tool is being used as the only tool, the only piece of evidence, that then leads to the arrest and false incarceration of many, often Black, people. And, again, to both of your points, it’s because of the human biases that these AI tools, particularly when used alone, are unable to—I mean, they’re just going to do what the human, the human with the bias, was going to do as well. And I have seen in my reporting that there are a lot of situations where police departments or other law enforcement agencies will kind of use that as an excuse, just like you said, Noreen: well, the computer said it, and it validated our data, so it must be right. So I do think that there’s a little bit of the escape of liability and responsibility as well. We don’t have a ton more time. Noreen, you talked a little bit about some of your major fears. Rabbi Franklin, you’re a little bit more optimistic about this than maybe Noreen or even I am. I would like to hear what your greatest fears about this tool are.
    FRANKLIN: My biggest fear is that it’s going to force me to change. And, look, I think that’s a good thing, ultimately, but change is always really scary. I think I’m going to be a different rabbi five years from now, ten years from now, than I am right now, and I think AI is going to be one of the largest reasons for that. I think it’s going to force me to hone certain abilities that I have, and to abandon others and rely on artificial intelligence for them.
    And even going back to the original thought experiment that involved me in this conversation to begin with, which was using ChatGPT to write a sermon at the very beginning of its infancy: really, what a sermon looks like is going to be profoundly different. That was part of one of the points that I was making when I actually delivered that original sermon. The only thing that was scripted was the part that was written by AI. Everything else was a conversation, back-and-forth questioning, engagement with the community who was there. I think sermons are going to look more like that, more like these kinds of conversations, than like scripted, written words that come from a paper and are just spoken by a human being. Rabbis, preachers, imams, pastors, priests are not going to be able to get away with that kind of homiletical approach. We’re going to have to really radically adapt and get better at being rabbis and clergy, with different skill sets than we currently have, and that’s scary. But at the same time it’s exciting.
    BHUIYAN: And, Noreen, to end on a positive note: What are some of the most positive ways that you see ChatGPT, other forms of generative AI, or AI broadly being used in the future?
    HERZFELD: Well, we haven’t even mentioned tools that work with images, like DALL-E or Midjourney. I think those tools have sparked a new type of creativity in people. And I think if there’s a theme that goes through everything that the three of us have said today, it’s: great tool, bad surrogate. As long as we use this as a tool, it can be a very good tool. But it’s when we try to use it as a complete replacement for human decision-making, for human courage, for human critical thinking, for human taking of responsibility, that we realize that just as we are flawed creatures, we’ve created a flawed creature. But in each of our religious traditions I think we hold dear that what we need to do is love God and love each other, and that we as religious people must keep raising that up in a society that views things instrumentally.
    BHUIYAN: Thank you both. I am just going to turn it over to Irina now.
    FASKIANOS: Yes. Thank you all. This was a really provocative and insightful discussion. We really appreciate it. We encourage you to follow Rabbi Josh Franklin’s work at rabbijoshfranklin.com. Noreen Herzfeld is at @NoreenHerzfeld, and Johana is at @JMBooyah—it’s B-O-O-Y-A-H—on X, formerly known as Twitter. And, obviously, you can follow Johana’s work in the Guardian. I commend Noreen’s book to you. And please do follow us on Twitter at @CFR_religion for announcements and other information. And please feel free to email us at [email protected] with suggestions for future topics and feedback. We always look forward to hearing from you and soliciting your suggestions. So, again, thank you all for this great conversation. We appreciate your giving us your time today, and we wish you a good rest of the day.
  • Technology and Innovation
    The TikTok Trap
    TikTok is an easy scapegoat, but the lack of tech regulation and data protection is the underlying cause of our collective anxiety in the digital age.
  • China
    U.S. Strategic Competition With China
    Bipartisan leadership of the House Select Committee on the Strategic Competition Between the United States and the Chinese Communist Party, Chairman Mike Gallagher (R) and Ranking Member Raja Krishnamoorthi (D), discuss the work of the committee to ensure the United States is well positioned to counter growing competition with China, across the trade, technology, development, manufacturing, and military sectors.
  • Argentina
    Latin America This Week: August 31, 2023
    BRICS becomes BRICS+, though not all may join; Latin America looks West, not East, for its technology; more frequent droughts promise more slowdowns in the Panama Canal.
  • Russia
    EU Crackdowns on Big Tech, After Prigozhin, Two Years of Taliban, and More
    Major technology companies rush to comply with the European Union (EU) Digital Services Act, which makes online platforms responsible for moderating harmful content; questions mount about the Russian private military company Wagner Group after its leader Yevgeny Prigozhin is reportedly killed in a plane crash; the Taliban enters its third year in power since the U.S. military withdrawal from Afghanistan; and Iranian Foreign Minister Hossein Amir-Abdollahian visits Saudi Arabia as the former rival countries continue to normalize relations.
  • Latin America
    Latin America This Week: August 9, 2023
    Hot winters in the Andes and Southern Cone threaten Latin America’s advantages; Panama, Costa Rica, Mexico aim to ride U.S. semiconductor industrial policy coattails; Colombia’s new ceasefire agreement remains fragile.
  • Competitiveness
    Building a Competitive U.S. Workforce
    Panelists discuss the increasing demand for technical talent in the current age of automation, how to foster a competitive workforce, and resources available to state and local governments through the CHIPS and Science Act.
    FASKIANOS: Welcome to the Council on Foreign Relations State and Local Officials Webinar. I’m Irina Faskianos, vice president for the National Program and Outreach here at CFR. We’re delighted to have participants from forty-nine states and U.S. territories for today’s conversation, which is on the record. CFR is an independent and nonpartisan membership organization, think tank, publisher, and educational institution focusing on U.S. foreign and domestic policy. CFR is also the publisher of Foreign Affairs magazine. And as always, CFR takes no institutional positions on matters of policy. Through our State and Local Officials Initiative, CFR serves as a resource on international issues affecting the priorities and agendas of state and local governments by providing analysis on a wide range of policy topics. For today’s discussion, we are going to be talking about “Building a Competitive U.S. Workforce,” and we have an amazing panel of speakers. Bo Machayo is the director of U.S. government and public affairs at Micron Technology. He has a decade of experience as a public policy and public engagement advisor at the local, state, and federal levels of U.S. government, and has held a number of positions, including in the office of Virginia Senator Mark Warner, with Loudoun County’s Board of Supervisors, and in the Obama administration. David Shahoulian is the director of workforce policy and government affairs at Intel Corporation. Previously, he worked at the Department of Homeland Security on border and immigration policy. He also served on the House Judiciary Committee for over ten years. Dr. Rebecca Shearman is a program director in the Technology, Innovation and Partnerships directorate at the National Science Foundation. Previously, she was an assistant professor in the biology department at Framingham State University, and she holds a Ph.D. in evolution and developmental biology from the University of Chicago. We also will be joined by Abi Ilumoka, who currently serves as a program director for engineering education in the Division of Undergraduate Education at NSF. Prior to that, she was a professor of electrical and computer engineering at the University of Hartford in Connecticut. And finally, I’m happy to introduce Sherry Van Sloun, who is the national intelligence fellow at CFR. Previously, she served as a deputy assistant director of national intelligence for human capital at the Office of the Director of National Intelligence for nine years. She has also held various positions with the National Security Agency and served in the U.S. Army as a signals analyst for eight years. Sherry is going to be moderating this conversation. She brought this great panel together and can talk a little bit about her research and, basically, the provisions for state and local governments in the CHIPS and Science Act. We will then open it up for questions and turn to all of you. Again, this is a forum where we can share best practices, so we do want to hear from you. You can either write your question or raise your hand when we get there. So, Sherry, over to you to take it away.
    VAN SLOUN: Thanks so much, Irina. And thanks to you and your staff for putting this webinar together. I really feel lucky to be here today.
I want to say thanks to Becky, Bo, David, and Abi for being here as well. I know your schedules are busy, so we really appreciate you taking the time out of your day. And then I want to thank all of you who joined today. I think it’s great to have all of us here to talk about this important topic. So a little context. My last few assignments in the intelligence community revolved around building talent pipelines to meet the emerging demands of intelligence work. So in my time here at CFR, I’ve spent some time looking into the implementation of the CHIPS and Science Act, specifically the human capital aspect of the act. My focus has really been around the need to build semiconductor manufacturing talent but, to be clear, the CHIPS and Science Act covers many other STEM workforce advancements and future technologies, from AI, to biotechnology, to quantum computing. So today, we have Becky and Abi here from NSF to share about the broader reach the CHIPS and Science Act gave the NSF regarding cultivating workforce, and then Bo and David to dive into some of the semiconductor manufacturing perspective around talent. So looking forward to this. And I think we’re going to kick it off with going to Becky and Abi at the NSF. Let me start here, and say the NSF has been involved in promoting science for many decades. It’s been active in supporting workforce development through your directorate of STEM education. And what the CHIPS Act legislation did was create the Directorate for Technology, Innovation, and Partnerships. And one of those new programs under that new directorate is the Experiential Learning for Emerging and Novel Technologies, which is the ExLENT program. Which I think, Becky, you helped create. So we’re glad you’re here. So can one of you share how the ExLENT program works, the timelines you’ve laid out, and the impact you’re hoping to see over time? And then specifically maybe you could focus a little bit for a minute on the semiconductor workforce specifically, and how the ExLENT program will help to build this much-needed body of talent for the U.S. SHEARMAN: Sure, Sherry, happy to jump in. You’re correct, I was involved in the development of the ExLENT program. And we are super excited about it. So TIP—which is the acronym of our new directorate—just celebrated its first birthday at the very end of the spring. And we’re really in just our first funding cycle of ExLENT. So you read out the full acronym, right? So this is really centered around experiential learning. And we’re named for emerging and novel technologies. So emerging technologies really are those technologies that we—you know, we point to the CHIPS and Science Act and say that’s, you know, what we’re interested in funding. But we did keep it kind of open. So, novel technologies, right? We are kind of allowing the community to tell us, look, this may not fall precisely in the line of these emerging technologies, but we need to be building a workforce that can do X, Y, and Z. And we specifically developed this program with a few things in mind. We need to build a workforce that is nimble in its ability to get training as expertise evolves, as our technologies evolve. And we’ve got to engage all Americans in the STEM enterprise, if they’re interested in being in the STEM enterprise. For us to be really competitive, everyone needs to have access to a good STEM education. 
And then we also built it around the fact that we felt like we really need to be bringing organizations across different sectors together to do this correctly, right? We need to have those experts in education, but we also need to have those industry partners who understand the needs of the industry and the needs of a specific company. So the program is really designed to address those things. It’s very broad. So we allow the applicant—who can be from academia, they can be from the private sector, they can be nonprofits, we’re really trying to reach everybody here. They can say: This is the population we’re trying to reach. So maybe it’s, you know, middle school/high school students. Maybe it’s adult learners at any point in their educational career, and trying to get them hands-on experience that’s going to give them some credential, expose them to something so that they, if they choose, can kind of be on that educational path towards a good-paying job in an emerging tech field. And of course, the semiconductor industry is central to that, right? We don’t have a specific call-out to semiconductors, but we highlight it as one of the emerging technologies. VAN SLOUN: And Becky, thank you. So can you share a little bit more with the audience about, like, how they would go about engaging with you on a proposal? What is the process that folks do there? I know you have calls, but can you explain that a little bit about how a call goes out and then what that looks like once it closes? SHEARMAN: Absolutely. So we have a solicitation out. And if I’m allowed to drop something into the chat, I’m happy to share the link and you can go right to it. And there’s—we have deadlines. In fact, our next deadline is September 14. So if anyone’s really interested and has nothing to do in the next month, you can take a look at the solicitation and consider applying for the program. It outlines—the solicitation will outline everything you need to do but, basically, you’re writing up a proposal, submitting it through our standard process at NSF through a site called research.gov. And then your proposal goes through a merit review process, where we bring in experts from the community that will include people with the expertise in education, expertise in industry. You know, we try to have a very broad cross-sector expertise represented on that panel. And they review all the proposals and give us recommendations and feedback around where we should make our funding decisions. The best thing to do if you go to that solicitation, there are links on that first page to an inbox and to program officers that you can reach out to. A good place to start is just reaching out to them and trying to connect, and have an initial conversation. VAN SLOUN: Thank you. And if I recall, your first grant announcement will be announced soon, right? SHEARMAN: Very soon. VAN SLOUN: And then the call in September will be announced later this year or early next year. Super. OK. Thank you very much, Becky. Bo, let’s move to you and, you know, really kind of diving into semiconductors specifically. You know, your role allows you to see kind of across Micron and how it’s working with partners to build the talent pipeline that you all need for your existing locations and where you’re also expanding at new locations across the country. Can you share a little bit about how Micron has responded to the passing of the CHIPS Act legislation, specifically here in New York? 
And how you’re tracking the talent pipeline gaps at all levels of the manufacturing lifecycle? MACHAYO: Yeah. Thanks, Sherry, for that question. And it’s great to be a part of this discussion. Per, you know, your conversation, we’re happy at Micron. Thanks to the CHIPS and Science Act and also thanks to the incentives from the, you know, states and localities, we were able to make investments, you know, in New York, of $100 billion over the course of the next couple of decades. And a big part of that is around how we can address the talent pipeline needs. You know, we’ll have 9,000 direct jobs and over 40,000 indirect jobs due to economic activity that will happen in the central New York region. But we know that all those—you know, that talent won’t be able to come directly from central New York. It will have to be a whole of New York approach, but also a regional approach across the northeast. And so specifically in New York, we’ve, you know, been able to, you know, establish partnerships from what we’re calling the K through gray level, really making sure that from K-12 we’re doing interactive activities and sponsoring what we call chip camps that are unique to Micron, and we’re able to make sure that we are, you know, engaging young K through eight, you know, students to be able to really understand the jobs that are available in the semiconductor industry. Another thing that we’re doing specifically in New York is really working on kind of both curriculum development and how we can partner with schools. As a part of our announcement, we made a commitment of $10 million to the STEAM school, which is a local initiative that will focus on both career—or, both technical kind of education, but also kind of an engineering pathway to assure that, you know, we can get students interested in the semiconductor industry early on. We also know that half of those jobs are going to be technician jobs, and the other half will be engineering jobs. So how we’re partnering with, you know, local building trades unions through our PLA to make sure that we’re educating folks, establishing certificate programs so that we can make sure that folks who are looking to transition to the semiconductor industry, thanks to the investment that we’re making there, how can folks be part of the Micron experience? And then also, how are we doing that with community colleges and also higher ed institutions, as well? And so we partnered with the SUNY system in New York, and also the CUNY system in New York to make sure that we’re building the pipeline from a community college there. Particularly investing in creating clean rooms at Onondaga Community College and then utilizing the existing clean rooms across the state. We also established a couple of regional networks for New York, especially the Northeast University Semiconductor Network, to really make sure that we’re taking, you know, what individual community colleges and higher ed institutions have to be able to make sure that we’re addressing those gaps. You know, that is—these are kind of examples of ways. 
And as a matter of fact, earlier this week when I was in central New York we were also able to partner with the local museum, a science and technology museum in central New York, to create a semiconductor exhibit so that kids from K-12 can actually be able to understand what a semiconductor is, what a memory chip is, and multiple different ways and avenues to be able to attract talent to be able to come and to meet the gaps that we have throughout the semiconductor industry. And so those are just a couple of ways in which we’re looking to build partners and to address some of the needs that we’ll have in New York. VAN SLOUN: Thanks, Bo. That’s fantastic. David, I’m going to turn to you now. I just got back from Portland, Oregon last week, where I was able to get a tour of Intel’s fab and their innovation center. And it was really incredible to see firsthand the different kinds of talent needed to make this industry possible. Can you share a little bit about the makeup of Intel’s workforce? I think many people will be surprised that the bulk of it really isn’t Ph.D.s, but how you’re building efforts for a talent pipeline needed for your major investment in Ohio, specifically. I know it was a huge one for you guys. I know the Ohio State University is kind of the hub of that consortium there, but—which makes me very proud. I’m a Buckeye. But can you talk a little bit about that and what’s happening there? SHAHOULIAN: Sure. Happy to do that. So, first of all, thank you for having me. It is a pleasure to be here. Second, like Bo mentioned, you know, we’re excited about the opportunity the CHIPS and Science Act provides. And, you know, because of that, and the incentives that we’re getting from the federal government and the state governments, you know, we are right now building—expanding all of our sites, and building a new greenfield site in Ohio. So yes—on your first question, yes. People are generally surprised to hear about the makeup of our manufacturing workforce. Let me just—to just give it—summarize it really quickly, right, each of our fabs is generally around 1,500 positions that we create for that fab. About 60 to 70 percent of those jobs are for semiconductor technicians. These are individuals that can have an associate’s degree, but in some cases we don’t even require that. A certificate would do. And in some cases, you know, we hire people with even less than that to be technicians. These are people that oversee and troubleshoot the manufacturing process and then all of the support systems, like the electrical, water, gas, and air filtration systems that, you know, support manufacturing operations. So that’s, like—that’s the bulk of the jobs that we will be creating with our new factories. The other—the remainder is about 20 to 25 percent, you know, individuals with bachelor’s degrees in electrical engineering, computer science. And then it’s about, you know, somewhere between 5 and 10 percent individuals with advanced degrees. I just want to say—just add a little caveat for Oregon, right? Because Oregon is a location where we do manufacture, but we also develop our manufacturing technology, there we do—you know, there is a higher ratio of Ph.D.s. So there, you know, there are more advanced degree folks. Second, with respect to Ohio, we’re very excited about the work that we’re doing there. One of the reasons we chose Ohio as a site was because of the great educational system that already existed there and their history with advanced manufacturing. 
When we announced that we were going to be building there, we immediately committed $50 million into, sort of, you know, expanding that education ecosystem that already exists. And that’s, you know, modernizing the curricula, creating modules that are semiconductor specific, providing semiconductor manufacturing equipment, helping build clean rooms. These are all the things that are necessary to train individuals and give them, you know, hands-on training in our industry. We’ve already awarded $17.7 million of that. That has gone to eight collaborations involving almost 80 schools across the entire state of Ohio. We’re really proud of that effort. One of them—just to give you two examples—one of them is being led by Columbus State Community College. They’re working with every other community college system in the state of Ohio to create semiconductor technician curricula with shared credits, right, that can be shared across all of the different institutions. There’s another one that’s being led by the Ohio State University, I should have said, The Ohio State University. Forgive me for that. Right, they’re partnering with nine other universities to create an education and research center for the semiconductor industry to lead on innovation and education. So, you know, these are the—of course, the things that are necessary, you know, to create the education ecosystem that will help not only us but our suppliers, and then other semiconductor companies across the country. VAN SLOUN: So do you—thanks, David. Do you think that what you’re doing in Ohio, you’ve got quite the consortium, like you’ve just talked about. Is that going to be enough to be able to source the talent pipeline for that fab and the outlying things that are going to happen around that fab in Ohio? Or is there a way that other—that you’re going to reach into other areas, like Bo mentioned a regional approach, to that space in Ohio? SHAHOULIAN: Yeah, so that is—you know, that is a regional approach, in the sense that we’ve reached out to all of Ohio. We are also—we also have interest from other universities in the rest—you know, the remainder of the region. Purdue, Michigan, you know, other universities in the Midwest. You know, what we’ve asked is for them to help partner with the Ohio universities, and, you know, working on trying to build those partnerships and those collaborations. You know, we’ve also, you know, collaborated with NSF, right? So, you know, when NSF got $200 million to build out the education ecosystem, you know, Micron partnered and put some money on the table. We did as well. You know, we matched $50 million in funding to create a $100 million partnership with NSF to sort of also bring those opportunities nationwide to any school, not just ones where we’re operating. So NSF has already rolled out two programs with that funding. And, you know, we anticipate they will be rolling out more this year. And, you know, schools anywhere in the country will be able to apply for that funding. VAN SLOUN: That’s fantastic. Thank you very much, David. That’s very helpful, I think, for the audience today. Becky, if we could come back to you or Abi, it seems to me that the U.S. wants to be a leader in this industry, for semiconductors specifically. It’s going to take a village, right? I mean, how do we best prepare the partnerships between private sector, academia, and community organizations to really find ways to bring exposure to this kind of work? 
I know Micron and Intel are doing their great work, but is there anything that NSF is doing kind of to get this message out and get excitement built around this industry? SHEARMAN: So I’ll start, but then I really do want to invite Abi to join me and add anything she may have. She sits in a different place than I do at NSF. I can at least speak from the TIP directorate. I know we’ve been doing a lot. So TIP stands for Technology, Innovation and Partnerships. And we are very much interested in really trying to move emerging tech innovations into practice kind of at speed, at scale. And a big part of that includes making sure we’re thinking about the workforce needed to do that successfully, right? And so everything that’s coming out of TIP is really emphasizing these partnerships. So even when it comes to workforce development, we feel like we’re not going to be able to do this well unless we’re really engaging all the people who bring some sort of expertise to it. And I think when you listen to David and Bo talk about what they’re doing, right, they’re talking about doing this in partnership, in collaboration. And you know, the ExLENT program in particular is—so, I guess, let me start by saying I just—with TIP being a new directorate and all the attention that has brought, we’re trying to bring these different sectors who maybe aren’t used to talking with each other into the same room. And all of our programs that are coming out are doing that, and ExLENT is no exception there. And we are trying to get the community thinking beyond—although, you know, Intel and Micron are absolutely central to the success, as is, you know, The Ohio State. But we also recognize that if we want to educate the domestic workforce, there’s a lot of other organizations that could bring real value. So we are being very intentional about reaching out to community organizations, to nonprofits that are thinking a lot about reaching specific communities to get folks who would never consider themselves someone who would be in this space, have a job, you know, in a semiconductor manufacturing plant, working for Intel, right? It just—it wouldn’t occur to them that that’s something that they would do. We’re trying to create those pathways to reach out and give them some initial exposure and bring them into the fold so the opportunities are there for them, if they want them. And we’re also including those industry partners and the large universities, but we think that the more different perspectives we can get together in a room the better we’re going to be able to diversify the pathways and reduce the barriers to those jobs. And that’s what ExLENT is really trying to do. And, like I said, I’d love to—I’d love to give Abi an opportunity to share anything from her perspective at NSF, if she wants. ILUMOKA: Thank you, Rebecca. I agree. I agree with everything Rebecca has said. What I would like to add is that in addition to ensuring that the content is being provided, and experiential learning is being provided to students across the spectrum of academic levels, we in the education directorate are focused on ensuring that evidence-based teaching and learning practices are brought into the classrooms. We want to ensure that the right environments are available to students, the right kinds of support for learning, the right kinds of assessment. And so we have partnered with TIP on some innovative opportunities, known as DCLs, dear colleague letters. 
These are opportunities that bring together programs in the education directorate and programs in the TIP directorate to fund investigators that are focused on not just teaching, in the case of semiconductors, how to design chips, but also how to teach the design of chips. I taught the design of chips for twenty years before I joined NSF, so I know exactly how challenging that is. You know, designing structures that you can’t see, essentially, and you’re having to refine and redesign to ensure that they work—to test and ensure that they work. And so in the education directorate, we have held a number of events to get the public excited about chip design, and chip design education. In May, we had a workshop to which we invited folks in academia, all the way from universities to kindergarten. And we had wonderful attendance. Over three hundred people showed up for the workshop. It was a two-day workshop. And folks were invited to brainstorm on how to teach microelectronics at all levels. So a lot of interesting information came out of that. We had participants from industry, Intel, Micron, and so forth. We had participants from government and from academia. So that was a very successful event. We have a second webinar on the eighth of August along the same lines. So we currently have two DCLs. And I’ll put the links in the chat, dear colleague letters. One is called Advancing Microelectronics Education, which looks at ways in which you can actually teach this stuff to folks who don’t have the extensive math, and physics, and chemistry background. The second thing we’re doing is making sure that we integrate these opportunities with existing programs in the education directorate. For example, the IUSE program is Improving Undergraduate STEM Education. It is a well-established program in the directorate, and it looks at innovations for teaching and learning in STEM in general. Now, by bringing this program into play with the ExLENT program, we attract investigators that have an interest not just in the content, the chip design, but also in how to teach the chip design. Now, that confluence brings up very exciting, very interesting proposals on ways in which you can present this material to folks who are not experts at all, or are not in the domain. So I hope that answers your question on how to get folks excited. We have a couple of workshops and webinars scheduled going forward that will draw in participants from all over the country. And we generally keep pretty good notes on what goes on at those workshops, the kinds of questions, the kinds of ideas that are shared, and move forward on those to help the community grow. VAN SLOUN: Abi, that’s fantastic. Thank you very much. It’s really helpful. If you could put those things in—the links in the chat, that would be fantastic for the folks listening in today. Irina, it’s 3:30. Do you want me to turn this over to you for Q&A? FASKIANOS: Yes, I think that would be great. Let’s go to all of you now for questions. You can either write your question in the Q&A box. If you do that, please include your affiliation. Or you can raise your hand, and I’ll recognize you, and then you can ask your question. And don’t be shy. We really want to hear from you. Right now, we have no questions, which I think means people are just collecting their thoughts. So Sherry, if you have one—another question while people are thinking about what they want to ask. VAN SLOUN: I’m actually—oh, go ahead. FASKIANOS: We do have one question. Raised hand from Usha Reddi. 
And if you could identify yourself and unmute yourself. And you’re still muted. There you go. Q: Thank you. So my name is—I’m from Kansas. I’m Senator Usha Reddi, but I’m also a public school teacher, elementary school. And I also am part of several nonprofits which advocate for STEM learning, especially for young women and girls. So I wanted to know, can anybody apply for these NSF grants? And do you have to have a doctorate or be affiliated with a university? Can it be a teacher? Can it be a nonprofit organization? Who is eligible for these types of grants? SHEARMAN: Sure. Can I just jump in? VAN SLOUN: Yeah, please do, Becky. SHEARMAN: OK. So that is a great question. I’m so glad that you asked that. So I guess in reality it depends. NSF historically, you know, makes grants to academic institutions. We are trying to change that quite a bit. So for a lot of our funding opportunities you can be something other than an academic institution to submit. But you would have to look at the eligibility, right? So some types of organizations are not eligible. For example, the federal government can’t apply for an NSF grant, right? But nonprofits, some local government offices, if they’re related to education, can apply for these funding opportunities. So those opportunities definitely exist. And if there’s a program that you’re specifically interested in, I would encourage you to reach out to a program officer associated with that program. And if you can sort of Google the program if you happen to know it—if you’re familiar with the program, it’ll direct you to a contact. FASKIANOS: Fantastic. Let’s go next to the raised hand from Mayor Melissa Blaustein. Q: Hi, everyone. Thanks for a great session. I really appreciate it. And actually, Sherry, I was so happy to see—(inaudible)—intelligence. I’m coming to you from the Naval Postgraduate School. I’m a student at CHDS right now, the master’s program for local governments on homeland security. And in that vein, I’m wondering—I’m from a smaller municipality. Sausalito is quite small, but very well known. And we don’t often think about the issues of how we can attract hiring for these types of industries, but I’d love to hear maybe from Bo and David a little bit about what you’re seeing smaller communities or policies do to attract these types of people, or perhaps if remote working is being qualified or considered for folks who want to pursue a career in chips and semiconductors. And any advice any of you have as well for smaller local governments to attract a conversation around this type of topic. Thanks again for your time. Really appreciate it. VAN SLOUN: Bo, do you want to take that first? And then, David, if you want to chime in, that’d be great. MACHAYO: Yeah, no, I think—so we are investing in, you know, Boise and in central New York—in Onondaga County, in a small town called Clay. But one of the things that we have been—we had found successful, and I’ll focus on the New York model, was working with the state and the locality to come up with something called a Community Investment Framework. So it was a partnership between Micron, the state, and the locality to really look at how are we investing in things that the community needs. Everything from housing, to workforce, to childcare, and really kind of focusing on what those barriers to entry were, to ensure that folks could be able to work in the semiconductor network. 
And then also using that as a model to say, what around—like, what will we be able to do similar to that model in Boise? And how do we make sure it’s a whole-of-state approach and also kind of a regional approach to invest in these barriers to entry to the semiconductor network? And how can Micron do—Micron play their role in that? And so in the—(inaudible)—in particular, we decided to invest $250 million of that $500 million over the—and then committed to raising the other 150 (million dollars). And the state put in 100 (million dollars), and the locality also put in some of those dollars to ensure that we meet those needs and those barriers. And to be able to make sure that over the course of the next couple of decades, as we implement our project, that we are providing and addressing—whether that’s a skills gap, or a barriers to workforce gap, or providing or investing in childcare or whatnot—to make sure that we’re able to attract talent from across the area. And then also making sure to kind of work with our localities and other localities that are surrounding to make sure that we’re also partnering with them to do the exact same thing, and to replicate that model. And that’s something that we’ve found successful, is that just intentional partnership to make sure that we are kind of building up that next generation of workforce to have those skills that are necessary. But I’ll turn it over to David to talk a little bit about what Intel is doing. SHAHOULIAN: Yeah, thanks, Bo. You know, I don’t want to speak for Micron. I assume this is also true. We sort of take a both-and approach to building up the education ecosystem across the country, right? I mean, we have national partnerships. You know, like Micron, Intel partnered with NSF. We put in money, along with government money, to create, you know, grant opportunities for schools across the country to apply for if they, you know, wanted to get into the semiconductor space, or they wanted to, you know, up their game in that space. And then both companies, right, we also have regional partnerships, right? Particularly in the communities in which we, you know, build facilities, we dedicate a lot of our effort. Partly because, you know, the reality is with technicians, you know, community colleges are only going to build technician programs for their communities if there are facilities nearby where their community members can work. You know, you don’t see community colleges far from semiconductor spaces actually bringing on semiconductor programs, you know, if there isn’t a job anywhere in that area for the community members who go to that school. So that is—so that is why we worked really closely with the local community colleges in Oregon, Arizona, New Mexico, now in Ohio, to build programs near the facilities. That said, you know, we are happy to share their certificate programs, the curricula, the—you know, the associate degree program curricula with any community college that wants to build that. You know, I’ll say we’re also partners with the American Semiconductor Academy, right? Which, you know, along with the SEMI Foundation, is working to try to build curricula that are shared across, you know, all universities so that, you know, again, universities, and community colleges, and other educational institutions can basically start or upgrade their semiconductor-related curricula much more easily. 
So I just want to say that, you know, there are—there are both opportunities near where we are, and national opportunities as well. FASKIANOS: Fantastic. So we have a written question from Shawn Neidorf. What is the career path for a person who comes in as a semiconductor processing technician? What does a career in semiconductors look like for a person with an associate’s or less education? And then a related comment/question from Alison Hicks, who is the mayor of Mountain View, a Silicon Valley city and home of Google headquarters. The big thing I hear from constituents regarding barriers to jobs is getting a first job after getting an engineering degree. People tell me there are 100 or more applicants for many, if not most, jobs, and they can barely even get interviews. They feel their resumes are being auto-screened out if they don’t have a degree from Stanford, Berkeley, et cetera. So they rarely make it even to the first step of the hiring process, let alone getting a job. Can your programming do anything about that? I know engineers who give up and don’t even work in the field. They’re not just applying in the Bay Area. They’re applying throughout the United States. So if you could speak to both of those, that would be great. SHAHOULIAN: Bo, do you want me to go first, or do you want to do it? MACHAYO: You can take it first. SHAHOULIAN: You know, I’ll just go very quickly. So, first of all, you know, at least for the engineers in the semiconductor space, particularly electrical engineers, I mean, the unemployment rate for electrical engineers right now is, I think, at 1 percent. I mean, it is full employment. So we are desperate for talent. (Laughs.) So I’m happy to have a conversation offline. I don’t know whether the engineers you’re speaking to have semiconductor skills or not. But, you know, we have strategic partnerships with many universities across the country. And that goes from the MITs and Berkeleys of the world to, you know, the Arizona States and Oregon States, or, you know, an Ohio State now, where we have two—we have partnerships with Historically Black Colleges and Universities and other MSIs to help build their engineering and computer science programs. And we hire directly from those, and we sponsor undergraduate research and things like that to really kind of build the talent pipeline. I would just say, for technicians, I—you know, the technicians I’ve met love the job, right? It’s a different lifestyle than I think many other jobs, right? It’s like, basically, they do these rotating weeks where they do three days on, four days off, or four days on, three days off, so you get like three or four days in a row off, and then, you know, they work either 36 or, like, 40-some hours a week in those jobs. They are jobs that, you know, we have—you know, we’re not paying six-figure starting salaries, but we have lots of technicians who do earn, with an associate’s degree or even less, more than six—I mean, you know, over 100,000 (dollars) a year. And that’s just base salary. You know, with us you’re getting stock options, you’re getting annual and quarterly bonuses. So it is, again, a really good life. And we have people with, you know, high school diplomas who are earning over six figures—you know, who are earning six figures. MACHAYO: Yeah, so, you know, I’ll add to what David was saying. For us, in terms of what does a career look like, you have your technician pathways, you’ve got your engineering pathways. 
But, you know, holistically for us, to attract this next generation of talent and to also be able to get folks who are looking to transition from an industry and come to Micron, you know, we want to make sure that, you know, the jobs that are available at Micron are skill-based. And so not necessarily looking at the levels of degrees of what folks have, but to be able to make sure that the skills can easily translate to work at Micron. So for example, you know, we’ve been really successful in this with the veterans community, where we hire veterans at about two times the national average of, kind of, other tech companies as well. And so being able to attract those folks, not only because they align with, you know, the skill set that we have, but also the values that Micron has and, you know, the values that are aligned throughout the entire semiconductor industry as well. We also are able to utilize our existing footprint to be able to have folks have the opportunities at different fab locations across the U.S. A great thing that we’ll be able to do is having our, you know, fab in Manassas—in Manassas, Virginia, our R&D site and our new manufacturing fab in Idaho, and then also our four fabs that would be in New York. Having the ability for folks to go from site to site, and to be able to learn the different aspects, both from the kind of legacy fabs to the—to the leading edge as well, on both the R&D. And then also our international footprint as well. And so, we have that—you know, we are looking at this as an opportunity to be able to ensure that we, you know, allow more folks to be a part of the semiconductor industry, but also, you know, making sure that we’re—you know, as we create, you know, the 50,000 jobs in New York, the, you know, 17,000 jobs in Idaho, looking at it from a regional approach. You know, Intel will be making—has made announcements across the country as well. So have other folks in the semiconductor industry. And so we know it’s going to need to be an all-hands approach that we’ll be able—that, you know, we need to make—think about things as regional, both northwest and northeast, and, you know, making sure that we’re incorporating, you know, everyone to be able to be a part of this industry. And that’s going to be, you know, us working with localities like the ones you’re part of, and the institutions as well, to be able to make sure that we are attracting talent early on, and then also making sure that, you know, we’re addressing, and having, and equipping the skill sets necessary to come and work in the industry. FASKIANOS: Fantastic. The next written question is from Gail Patterson-Gladney, Van Buren County commissioner in Michigan. Where do the materials come from for the semiconductors? Are they recycled after use? I do not know much about the semiconductor industry, but am willing to learn more. Where do I educate myself and community members about programs? VAN SLOUN: Can we go to David for that? And we’ll start with David. SHAHOULIAN: If I had the answers to those questions, I’d be happy to answer them. (Laughs.) I am the workforce policy lead. And so I don’t know about our materials, and I just—yeah. I’m happy to let Bo try to take it. MACHAYO: (Laughs.) Yeah, so from a supplier standpoint, you know, there’s going to be materials suppliers, there’s going to be, you know, chemical suppliers that will be needed for the semiconductor industry to be successful. 
A huge part of that will be, you know, how successful are we going to be—the Microns, the Intels, the Samsungs, the TSMCs of the world—at making sure that we’re investing in building up these fabs that are needed to manufacture chips. And then ultimately the suppliers will need to be able to kind of co-locate around us, and also make sure that we’re equipping those talent—those folks that are going to be at, you know, all of our fabs. And we’ll need all of those suppliers, both chemical and material suppliers, to be effective. And so, you know, those folks are constantly—I’ll speak for Micron, but I think this is probably true for Intel as well—will be at our fabs throughout the duration of our construction phases, and as we get chips out the doors. And are important to kind of continue to make sure that we have the leading-edge chips that are coming out of their facilities. So, you know, happy to—there’s a supplier page on Micron’s site that you’re more than welcome to visit to kind of learn about the suppliers. We’ve been doing webinars both kind of regionally and throughout the state as well, to be able to, you know, talk to folks about what’s going to be needed as we kind of implement our two projects, our two investments in the U.S. FASKIANOS: Thank you. I’m going to take a question from Eno Mondesir, who is executive health officer in the health department in Brockton, Mass. If you can unmute yourself. Q: Good afternoon. I am posing this question perhaps to Bo or to David, or anyone. I wonder if—how do you see AI affecting hiring human subjects? Maybe not now, but maybe two to five years down the road? SHAHOULIAN: Is your question—sorry, if you don’t mind, you know, is your question about AI in the hiring process when it comes to screening applicants, for example? Or do you mean AI, you know, potentially replacing— Q: I mean replacing the human labor force. SHAHOULIAN: Yeah. Well, let me just say, I mean, I think all of the semiconductor companies see AI as a value-add, right? You know, these are very complex—you know, designing and manufacturing semiconductors is the most difficult human endeavor on the planet, or among them, right? I mean, it is the most complicated process there is. So to the—to the degree that AI can help us perfect chip designs, perfect software and coding that goes with those, you know, discover flaws, those things, you know, those are absolutely beneficial to the industry. You know, at this point in time, we don’t foresee that, you know, really supplanting, you know—(laughs)—our employees, right? I mean, you need workers, again. You know, fabs, right—again, every factory, I just pointed out, creates at least 1,500 to 2,000 jobs. A lot of the work that’s done in the fab is already automated, right? You have robots that move the chips around. The lithography tools, you know, themselves—the etching tools, the chemical layering, you know, all of that happens basically automatically. The work for, you know, people, right, is all about maintaining that process, you know, troubleshooting, discovering flaws, tuning the machines. I mean, that work will continue, right? We’re not at a point where that work gets supplanted anytime soon. I don’t know if, Bo, you want to add anything. VAN SLOUN: Bo, do you want to add anything to that? MACHAYO: Yeah, you know, I agree. I think the job—the economic impact and the jobs that we’ve relayed on the figures for our investments in both Boise and New York, we anticipate, you know, will remain the same. 
And to make sure—and we know that, you know, AI is an important thing kind of moving forward in the semiconductor industry, and for Micron particularly. You know, memory chips are going to be important for AI, and in that conversation. But we really believe and have seen, you know, throughout the globe the economic impact that’s been made from the investment of the semiconductor industry in terms of jobs, both direct and indirect jobs, and believe that would continue. FASKIANOS: Great. So we have a written question—or, comment from David Di Gregorio, who’s an administrator at Tenafly High School, and also a councilman in Englewood Cliffs. And he wants to work with you all. He’s responsible for engineering and design. So I will share his contact information with you all after this. We have a written—or, sorry, a raised hand from Michael Semenza in the office of Representative Puppolo. If you want to go next, and unmute yourself. There you go. Q: Hello. Good afternoon. Are you able to hear me? FASKIANOS: We can hear you. Yes, we can. Go ahead. Q: OK, great. I apologize. Would you be able to repeat the question real quick? FASKIANOS: Oh, I thought you were asking a question. You had raised your hand? Q: Oh, I don’t know how that happened. I’m sorry. FASKIANOS: Oh, OK. No problem. That’s, you know, technology, it’s sometimes—we’ll go next to Senator Javier Loera Cervantes. Q: Hello? FASKIANOS: Yes, we can hear you. Q: Hi, my name’s Anelli (ph). I’m actually the digital director representing Senator Javier Loera Cervantes from the state of Illinois. First, I’d just like to say thank you to everyone who did come out today, because I know this is sort of the first step in taking initiatives to our curriculums, to our districts. We did discuss education a lot. And I just had a quick question. Especially for New York and sort of your approaches to discussing with principals how to bring these initiatives to the schools, when you essentially decide which districts to sort of work with, what does that—what does that approach look like? Do you sort of target low-income communities? Ones that just kind of tend to work more vigilantly with your company? Or just sort of what’s the approach that you take when you want to bring these initiatives and change of curriculums to the districts in New York? MACHAYO: Yeah, so it’s been a kind of an all-hands approach. Obviously, we want to make sure that we are investing in the community in which we are going to be at, but know that especially in New York it’ll be a kind of an all-hands and all-state effort, both kind of central New York, where we’re located, downstate in the city, and then also in Albany, and Buffalo, and Rochester, and really an all-encompassing approach. And so, you know, we both work with the New York State Department of Education and local—our local K-12 superintendents and school systems to be able to make sure that we’re identifying and sharing exactly what is needed in terms of curriculum development, but also how are we spurring the interest of—to make sure that we’re getting a diverse set of employers and workforce, not only to be interested in the semiconductor industry and working directly for Micron, but also for the suppliers and the other indirect jobs that will be associated with Micron that are going to be important for Micron to thrive and succeed there. And so it is working with kind of everyone, and identifying, in New York, you know, a handful of places right now that we can have a prototype. 
And knowing—and then expanding, and knowing, and understanding that this project is going to, you know, take a couple of decades to make sure that we’re—to make sure that we are implementing our project correctly, both kind of in New York and then also in Boise. And so knowing that it’ll expand, and the partnerships will expand as well throughout the entire state. VAN SLOUN: Irina, are there any more questions? FASKIANOS: Yes. We have a question from Ernest Abrogar, who is the—let’s see, I have lost it—the research specialist at the Oklahoma Department of Commerce. How can suppliers to semiconductor manufacturers participate to provide educational or practicum opportunities to those areas that don’t have a major fab facility nearby? VAN SLOUN: David, do you want to—do you want to take a first shot at that? SHAHOULIAN: Sure. Look, I mean, we have suppliers in every state in the union, and the territories as well. So, you know, we partner with our suppliers in many different ways. You know, we work with suppliers, you know, to grow their businesses, to improve their practices, to, you know, ensure compliance, right? And we work with them also on workforce, you know, development strategies as well. You know, we do that. A lot of our suppliers are co-located or located near our facilities, but a lot of them are not—I guess most are not. And so we are happy to partner with them on these efforts. Again, there are—you know, we’re happy to share, you know, the curriculum, the modules, the things that we have designed in partnership with the schools that have been our partners, right? We’re happy to share that with other educational institutions. So if there’s, you know, a curricula or something that you, you know, want to—you know, want to take or modify, you know, or expand on in Oklahoma, you know, we’re happy to assist with that. VAN SLOUN: Great. Bo, you have anything to add? MACHAYO: Yeah, no, I’d share that too. I mean, I think anything that you—anything that you’re doing in Oklahoma, or any state in the country, if you’re focusing on, you know, education and investing in semiconductor education, if you are focusing on, you know, incentives for suppliers in certain states, and are looking to attract that part of the industry, I think, you know, we’d be happy to talk to you and figure out how we can kind of partner together in states—in states that we are currently investing in for the manufacturing side. But understand that, you know, we’ll need to also work with other states to make sure that we have the suppliers and their downstream suppliers that will be helpful for us to be successful. FASKIANOS: So, we have one other question that just came in from council member Anita Barton. Do either of your companies plan to get together with any universities in Penna? SHAHOULIAN: Not sure I understand that. Universities—say the last part? FASKIANOS: Companies plan to get together with any universities in Penna. Maybe Pennsylvania? I’m not— VAN SLOUN: I’m thinking that’s what it is, yeah. FASKIANOS: Yeah, I’m thinking it’s probably Pennsylvania. MACHAYO: So I can take that. I mean, we—so we launched our—along with the NSF director, and Senator Schumer, and our CEO, Sanjay, and, you know, some of our other leadership team, we were able to launch the Northeast University Semiconductor Network. And there are universities that are a part of that network that are based in Pennsylvania. 
And we are kind of—again, understand that it’s going to be a regional approach to be able to attract the semiconductor folks—or, the next generation of semiconductor workforce to work at Micron. And so happy to partner in that way as well. And we also just recently launched a northwest one as well to kind of do the same thing, look at states within our footprint region to be able to make sure that we’re attracting the workforce that’s needed. FASKIANOS: Great. VAN SLOUN: David—(inaudible)—on Pennsylvania, or? SHAHOULIAN: You know, I know that we have been in some conversations with Pennsylvanian institutions. I cannot tell you right now which ones they are, because I have not been part of those conversations. But, you know, given our proximity—the proximity to Ohio, I know that in the western part of the state, there has been some interest. I would just say, again, we are participating with NSF in, you know, ensuring that there is funding available to, you know, schools nationwide. VAN SLOUN: Thanks, David. So I think we only have a few minutes left. And I’m going to turn to Irina to close this out. But I just wanted to say thank you to, you know, Becky, David, Bo. You guys have been fantastic in sharing information that’s going to help, I think, across the entire United States in thinking about semiconductors, and the need to build this pipeline and get excitement around this. And I’m really excited to hear about some of the programs you all have going on. So thank you so much. Irina, I’m going to turn to you to close us out here. But thank you for joining us. FASKIANOS: Yes. And thank you all. This was a great hour of discussion. We appreciate you taking the time, and for all the great comments and questions. We will be sending out links to the resources that were mentioned. And we will go back to Becky, David, and Bo, and Sherry for anything else that they want to include, along with a link to this webinar and the transcript. And as always, we encourage you to visit CFR.org, ForeignAffairs.com, and ThinkGlobalHealth.org for more expertise and analysis. And you can also email [email protected] to let us know how CFR can support the important work that you are doing in your communities. So thank you again for joining us today. We appreciate it. VAN SLOUN: Thanks, everyone.
  • Cybersecurity
    Schrödinger’s Hacking Law and Cyber Burnout: Capacity Building in U.S. Cybersecurity
    Recruiting problems in cybersecurity will continue until private and public sector organizations make defenders' mental health a priority and policymakers address the poorly written Computer Fraud and Abuse Act. 
  • Artificial Intelligence (AI)
    AI Meets World, Part Two
    Podcast
    The rapid emergence of artificial intelligence (AI) has brought lawmakers and industry leaders to the same conclusion: regulation is necessary to ensure the technology changes the world for the better. The similarities could end there, as governments and industry clash on what those laws should do, and different governments take increasingly divergent approaches. What are the stakes of the debate over AI regulation?
  • Technology and Innovation
    Reporting on AI and the Future of Journalism
    Play
Dex Hunter-Torricke, head of global communications & marketing at Google DeepMind, discusses how AI technology could shape news reporting and the role of journalists, and Benjamin Pimentel, senior technology reporter at the San Francisco Examiner, discusses framing local stories on AI in media. The webinar is hosted by Carla Anne Robbins, senior fellow at CFR and former deputy editorial page editor at the New York Times. FASKIANOS: Thank you. Welcome to the Council on Foreign Relations Local Journalists Webinar. I am Irina Faskianos, vice president for the National Program and Outreach here at CFR. CFR is an independent and nonpartisan membership organization, think tank, publisher, and educational institution focusing on U.S. foreign policy. CFR is also the publisher of Foreign Affairs magazine. As always, CFR takes no institutional positions on matters of policy. This webinar is part of CFR’s Local Journalists Initiative, created to help you draw connections between the local issues you cover and national and international dynamics. Our program aims to put you in touch with CFR resources and expertise on international issues and provides a forum for sharing best practices. Again, today’s discussion is on the record. The video and transcript will be posted on our website after the fact at CFR.org/localjournalists, and we will share the content after this webinar. We are pleased to have Dex Hunter-Torricke, Benjamin Pimentel, and host Carla Anne Robbins to lead today’s discussion on “Reporting on AI and the Future of Journalism.” We’ve shared their bios with you, but I will highlight their credentials here. Dex Hunter-Torricke is the head of global communications and marketing at Google DeepMind. He previously worked in communications for SpaceX, Meta, and the United Nations. He’s a New York Times bestselling ghostwriter and frequent public commentator on the social, political, and organizational challenges of technology. Benjamin Pimentel is a senior technology reporter for the San Francisco Examiner covering Silicon Valley and the tech industry. He has previously written on technology for other outlets, including Protocol, Dow Jones MarketWatch, and Business Insider. He was also a metro news and technology reporter at the San Francisco Chronicle for fourteen years. And in 2022, he was named by Muck Rack as one of the top ten crypto journalists. And finally, Carla Anne Robbins, our host, is a senior fellow for CFR—at CFR, excuse me. She is the faculty director of the Master of International Affairs Program and clinical professor of national security studies at Baruch College’s Marxe School of Public and International Affairs. Previously, she was deputy editorial page editor at the New York Times and chief diplomatic correspondent at the Wall Street Journal. Welcome, all. Thank you for this timely discussion. I’m going to turn it now to Carla to start the conversation, and then we will turn to all of you for your questions and comments. So, Carla, take it away. ROBBINS: Thank you so much, Irina. And thank you so much to you and your staff for setting this up, and to Dex and to Ben for joining us today. You know, I am absolutely fascinated by this topic—fascinated as a journalist, fascinated as an academic. Yes, I spend a lot of time worrying whether my students are using AI to write their papers. So far, I don’t know. So, as Irina said, Dex, Ben, and I will chat for about twenty-five minutes and then throw it open to you all for questions. 
But if you have something that occurs along the way, don’t hold back, and post it, and you know, we will get to you. And we really do want this to be a conversation. So I’d like to start with Ben. I’m sure everyone here has already played with ChatGPT or Bard if they get off the waitlist. I’ve already needled Dex about this. You know, I asked ChatGPT, you know, what questions I should be asking you all today, and I found it sort of thin gruel but not a bad start. But, Ben, can you give us a quick summary of what’s new about this technology, generative AI, and why we need to be having this conversation today? PIMENTEL: Yes. And thank you for having me. AI has been around for a long time—since after the war, actually—but it’s only—you know, November 30, 2022, is a big day, an important date for this technology. That’s when ChatGPT was introduced. And it just exploded in terms of opening up new possibilities for the use of artificial intelligence and also a lot of business interest in it. For journalists, of course, quickly, there has been a debate on the use of ChatGPT for reporting and for running a news organization. And that’s become a more important debate given the revelations and the disclosures of organizations like AP and CNET, and recently even insiders now saying that they’re going to be using AI for managing their paywall or in terms of deciding whether to offer a subscription to a reader or not. For me personally, I think the technology has a lot of important uses in terms of making newsgathering and reporting more efficient and faster. For instance, I come from a—I’m going to date myself, but when I started it was before—when I started my career in the U.S.—I’m from the Philippines—it was in June 1993. That was two months after the World Wide Web became public domain. That’s when the websites started appearing. And around that time, whenever I’m working nights to—you know, that was before websites and before Twitter. To get a sense of what’s going on in San Francisco, especially at night—and I’m working at night—I would have to call every police station, fire department, hospital from Mendocino down to Santa Cruz to get a sense of what’s going on. It’s boring. It’s a thankless job. But it actually helped me. But now you can do that with technology. I mean, you now have sites that can pull from the Twitter feed of the San Francisco Police Department or the San Francisco Fire Department to report, right, on what’s going on. And AI now creates a possibility of actually pulling that information and creating a news report that in the past I would have to do—like a short 300-word report on, hey, Highway 80 is closed because of an accident. Now you can automate that. The problem that’s become more prominent recently is the use of AI when you don’t disclose it. I was recently in a, you know, panel—on a panel where an editor disclosed—very high on the technology, but then also said, when we asked him are you disclosing it on your site: Well, frankly, our readers don’t care. I disagree vehemently. When you’re—if you’re going to use it, you have to disclose it. Like, if you are pulling information and creating reports on, you know, road conditions or a police action, you have to say that AI created it. And it’s definitely even more so for more—for bigger stories like features or, you know, New Yorker-type articles. You wouldn’t want—I wouldn’t want to read a New Yorker article and not know that it was done by an AI or by a chatbot. 
And then for me personally, I worry about what it means for young reporters, younger journalists, because they’re not going to go through what I went through, which in many ways is a good thing, right? You don’t have to call every police station in a region to get the information. You can pull that. You can use AI to do that. But for me, I worry when editors and writers talk about, oh, I can now write a headline better with AI, or write my lede and nut graf with AI. That’s worrisome because, for me, that’s not a problem for a journalist, right? Usually you go through that over and over again, and that’s how you get better. That’s how you become more critically minded. That’s how you become faster; I mean, even develop your own voice in writing a story. I’ll stop there. ROBBINS: I think you’ve raised a lot of important questions which we will delve into some more. But I want to go over to Dex. So, Dex, can you talk a little bit more about this technology and what makes it different from other artificial intelligence? I mean, it’s not like this is something where suddenly we just woke up one day and it was there. What makes generative AI different? HUNTER-TORRICKE: Yeah. I mean, I think the thing about generative AI which, you know, has really, you know, wowed people has been the ability to generate content that seems new. And, obviously, how generative AI works—and we can talk much more about that—a lot of what it’s creating is, obviously, based on things that exist out there in the world already. And you know, the knowledge that it’s presenting, the content that it’s creating, is something that can seem very new and unique, but, obviously, you know, is built on training from a lot of previous data. I think when you experience a generative AI tool, you’re interacting with it in a very human kind of way—in a way that previous generations of technology haven’t necessarily—(audio break). You’re able to type in natural language prompts; and then you see on many generative AI tools, you know, the system thinking about how to answer that question; and then producing something very, very quickly. And it feels magical in a way that, you know, certainly—maybe I’m just very cynical having spent so long in the tech industry, but you know, certainly I don’t think lots of us feel about a lot of the tools that we take for granted. This feels qualitatively different from many of the current systems that we have. So I think because of that, you know, over the last year, as generative AI—(audio break)—starts to impact on a lot of different knowledge-type industries and professions. And of course, you know, the media industry is, you know, one of those professions. I think, you know, lots of reporters and media organizations are obviously thinking not just how can I use generative AI and other AI tools as part of my work today, but what does this really mean for the profession? What does this mean for the industry? What does this mean for the economics over the long term? And those are questions that, you know, I think we’re all still trying to figure out, to an extent. ROBBINS: So I want to ask you—you know, let’s talk about the good for a while, and then we’ll get into the bad. So, you know, I just read a piece in Nieman Reports, which we’ll share with everybody, that described how the Finnish broadcaster Yle is using AI to translate stories into Ukrainian, because Finland now has tens of thousands of people displaced by the war. 
The bad news, at least for me, is Buzzfeed started out using AI to write its quizzes, which I personally didn’t care much about, and then said that’s all we’re going to use it for. But then it took a nanosecond and it moved on to travel stories. Now, as a journalist, I’m worried—I mean, as it is the business is really tight. Worried about displacement. And also about—you know, we hear all sorts of things. But we can get into the bad in a minute.  You know, if you were going to make a list of things that didn’t make you nervous, that, you know, Bard could do, that ChatGPT could do, that makes it—you know, that you look at generative AI and you say, well, it’s a calculator. You know, we all used to say, oh my God, you know, nobody’s ever going to be able to do a square root again. And now everybody uses a calculator, and nobody sits around worrying about that. So I—just a very quick list. You know, Ben, you’ve already talked about, you know, pulling the feed on traffic and all of that. You know, give us a few things that you really think—as long as we disclose—that you think that this would really be good, particularly for, you know, cash-strapped newsrooms, so that we could free people up to do better work? And then, Dex, I’m going to ask you the same question. PIMENTEL: City council meetings. I mean, I started my career— ROBBINS: You’re going for the boring first. PIMENTEL: Right, right. School board meetings. Yeah, it’s boring, right? That’s where you start out. That’s where I started out. And, if—I mean, I’m sort of torn on this, because you can use ChatGPT or generative AI to maybe present the agenda, right? The agenda for the week’s meeting in a readable, more easily digestible manner, instead of having people go to the website and try to make sense of it. And even the minutes of the meeting, right, to present it in a way that here’s what happened. Here’s what they decided. I actually look back—you know, like you said, and like I said, it’s boring. But it’s valuable. For me, the experience of going through that process and figuring out, OK, what did they decide? Trying to reach out to the councilman, OK, what did you mean—I mean, to go deeper, right? But at the same time, given the budget cuts, I would allow—I would accept a newsroom that decides, OK, we’re going to use ChatGPT to do summaries of these things, but we’re going to disclose it. I think that’s perfectly fine—especially for local news, which has been battered since the rise of the web.  I mean, I know this because I worked for the Chronicle and I worked in bureaus in the past. So that’s one positive thing, aside from, you know, traffic hazard warnings—things that may take a human reporter more time. If you automate it, maybe it’s better. It’s a good service to the community.  ROBBINS: Dex, you have additions to the positive list? Because we’re going to go to the negative next.  HUNTER-TORRICKE: Yeah, absolutely. I mean, look, I think that category of stuff which, you know, Ben might talk about as boring, you know, but certainly, I would say, is useful data that just takes a bunch of time to analyze and to go through, that’s where AI could be really, really valuable. You know, providing, you know, analysis, surfacing that data. Providing much broader context for the kinds of stories that reporters are producing. Like, that’s where I see systems that are able to parse through a lot of data very quickly being incredibly valuable. 
You know, that’s going to be something that’s incredibly useful for identifying local patterns, trends of interest that you can then explore further in more stories. So I think that’s all a really positive piece. You know, the other piece is just around, you know, exposing the content that local media is producing to a much wider audience. And there, you know, I could see potential applications where, you know, AI is, you know, able to better transcribe and translate local news. You know, you mentioned the Ukrainian example, but certainly I think there’s a lot of, you know, other examples where outlets are already using translation technology to expose their content to a much broader and global audience. I think that’s one piece. You know, also thinking about how do you make information more easily accessible so that, you know, this content then has higher online visibility. You know, every outlet is, you know, desperately trying to, you know, engage its readers and expose, you know, a new set of readers to their content. So I think there’s a bunch of, you know, angles there as well. ROBBINS: So let’s go on to the negative, and then we’re going to pass it over because I’m sure there’s lots of questions from the group. So, you know, we’ve all read about the concerns about AI and disinformation. There have been two recent reports—one by NewsGuard and another by ShadowDragon—that found AI-created sites and AI-created content filled with fabricated events, hoaxes, and dangerous medical advice. You’ve got that on one hand. So there was already, you know, an enormous amount of disinformation and bias out there. You know, how does AI make this worse? And do we have any sense of how much worse? Is it just because it can shovel a lot more manure faster? Or is there something about it that makes this different? Ben? PIMENTEL: I mean, as Dex said, generative AI allows you to create content that looks real, like it was created by humans. That’s sort of the main thing that really changes everything. We’ve been living with AI for a number of years—Siri, and Cortana, and all that. But when you listen to them, you know that it’s not human, right? Eventually you will have technologies that will sound human, and you can be deceived by it. And that’s where the concern about disinformation comes up.  I mean, hallucinations is what they call it, in terms of what they’re going to present you—I don’t know if you’ve ever searched yourself on ChatGPT and had it spit out a profile that’s really inaccurate, right? You went to this university, or whatnot. So that’s a problem. And the thing about that, though, is the more data it consumes, it’ll get better. That’s sort of the worrisome, but at the same time positive, thing. Eventually all these things will be fixed. But at the same time, you don’t know what kind of data they’re using for these different models. And that’s going to be a major concern.  In terms of the negative—I mean, like I said, I mentioned the training of journalists is a concern to me. I mean, I mentioned certain things that are boring, but I think—I also wonder, so what happens to journalists if they don’t go through that? If they start already at a certain level because, hey, ChatGPT can take care of that, so you don’t have to cover a city council meeting? Which, for me, was a positive experience. I mean, I hated that I was doing it, but eventually looking back that was good. I learned how to talk to a city politician. I learned to pick up on whether he’s lying to me or not. 
And that enables me to create stories later on in my career that are more analytical, you know, more nuanced, more sensitive to the needs of my readership.  Another thing is in journalism we know there is no such thing as absolute neutrality, right? Even, and especially, in analytical stories, your point of view will come up. And that brings up the question, OK, what point of view are we presenting if you have ChatGPT write those stories? Especially the most analytical ones, like features, a longer piece that delves into a certain problem in the community and tries to explore it. I worry that you can’t let ChatGPT or an AI program do that without questioning, OK, what’s the data that is the basis of this analysis, of this perspective? I’ll stop there. ROBBINS: So, Dex, jump in anywhere on this, but I do have a very specific technical thing. Not that I want to get into this business but, you know, I’ve written a lot in the past about disinformation. And it’s one thing for hallucinations, where they’re just working with garbage in so you get garbage out, which is—and you certainly saw that in the beginning with Wikipedia, which has gotten better with crowdsourcing over time. But from my understanding of these reports from NewsGuard and ShadowDragon, there were people who were malevolently using AI to push out bad information. So is this—how is generative AI making that easier than what we just had before? HUNTER-TORRICKE: I mean, I think the main challenge here is around how compelling a lot of this content seems, compared to what came before, right? So, you know—you know, I think Ben spoke to this—you know, a lot of this stuff isn’t exactly news. AI itself has been around for a long time. And we then had manifestations of these challenges for quite a long time with the entire generation of social media technology. So like deepfakes, like that’s something we’ve been talking about for years. The thing about deepfakes which made it such an interesting debate is that for years every time we talked about deepfakes, everyone knew exactly what a deepfake was because they were so unconvincing. You know—(audio break)—exactly what was a deepfake and what wasn’t. Now, it’s very different because of the quality of the experience.  So, you know, a few weeks ago you may have seen there was a picture that was trending on Twitter of the pope wearing a Balenciaga jacket. And for about twenty-four hours, the internet was absolutely convinced that the pope was rocking this $5,000 jacket that was, like, perfectly color-coordinated. And, you know, it was a sort of—you know, it was a funny moment. And of course, it was revealed that it had been generated using an AI. So no harm done, I guess. But, like, it was representative of how—(audio break)—are being shared. Potentially it could have very serious implications, you know, when they are used by bad actors, you know, as you described, you know, to do things that are much more nefarious than simply, you know, sharing a funny meme. One piece of research I saw recently which I thought was interesting, and it spoke to what some of these challenges might look like over time—I believe this was from Lancaster University—compared how trustworthy AI-generated faces of people were compared to the faces of real humans. And it found that actually, amongst the folks they surveyed as part of this research, faces of AI-generated humans were rated 8 percent more trustworthy than actual humans. 
And, you know, I think, again, it’s a number, right, that, you know, I think a lot of people laugh at because, you know, we think oh, well, you know, that’s kind of funny and—(audio break)—of course, I can tell the difference between humans and AI-generated people. You know, I’m—(audio break)—were proved wrong when they actually tried to detect the differences themselves. So I do think there’s going to be an enormous number of challenges that we will face over the coming years. These are issues that, you know, certainly on the industry side, you know, I think lots of us are taking very seriously, and certainly governments and regulators are looking at. Part of the solution will have to be other technologies that can help us parse the difference between AI-generated content and stuff that isn’t. And then part of that, I think, will be human solutions. And in fact, that may actually be the largest piece, because, of course, what is driving disinformation are a bunch of societal issues. And it’s not always going to be as simple as saying, oh, another piece of technology will fix that. ROBBINS: So I want to turn this over to the group. And I’ve got lots more questions, but I’m sure the group has—they’re journalists. They’ve got lots of questions. So the first question is from Phoebe Petrovic. Phoebe, can—would you like to ask your question yourself? Or I can read it, but I always love it when people ask their own questions. Q: Oh, OK. Hey, everyone. So, I was curious about how we might—just given all the reporting that’s been done about ChatGPT and other AI models hallucinating information, faking citations to Washington Post articles that don’t exist, making up research article citations that do not exist—how can we ethically or seriously recommend that we use generative AI for newsgathering purposes? It seems like you would just have to fact-check everything really closely, and then you might as well have done the job to begin with and not get into all these ethical implications of, like, using software that is potentially going to put a lot of us out of business.  ROBBINS: And Phoebe, are you—you’re at Wisconsin Watch, right? Q: Mmm hmm. And we have a policy that we do not—at this point, that none of us are going to be using AI for any of our newsgathering purposes. And so that’s where we are right now. But I just wonder about the considerable hallucination aspect for newsgathering, when you’re supposed to be gathering the truth. ROBBINS: Dex, do you want to talk a little bit about hallucinations? HUNTER-TORRICKE: Yeah, absolutely. So I think, you know, Phoebe has hit the nail on the head, right? Like, there are a bunch of, you know, issues right now with existing generative AI technology. You do have to fact-check and proof absolutely everything. So it is—it is something that—you know, it won’t necessarily save you lots of time if you’re looking to just generate, you know, content. I think there are two pieces here which, you know, I think I would focus on.  One is, obviously, the technology is advancing rapidly. So these are the kinds of issues which I expect with future iterations of the technology we will see addressed by more sophisticated models and tools. So absolutely today you’ve got all those challenges. That won’t necessarily be the case over the coming years. I think the second piece really is around thinking: what’s the value of me experimenting with this technology now as a journalist and as an organization? 
It isn’t necessarily to think, oh, I can go and, you know, replace a bunch of fact-heavy lifting I have to do right now as a reporter. I think it’s more becoming fluent with what are the things that generative AI might conceivably be able to do that can help integrate into the kind of work that you’re doing?  And I expect a lot of what reporters and organizations generally will use generative AI for over the coming years will actually be doing some of the things that I talked about, and that Ben talked about. You know, it’s corralling data. It’s doing analysis. It’s being more of a researcher rather than a co-writer, or entirely taking over that writing. I really see it as something that’s additive and will really augment the kind of work that reporters and writers are doing, rather than replacing it. So if you do it from that context and, you know, obviously, you know, it does depend on you experimenting to see what are all the different applications in your work, then I think that might lead to very different outcomes. ROBBINS: So we have another question, and we’ll just move on to that. And of course, Ben, you can answer any question you want at any time. So— PIMENTEL: Can I add something on that? It’s almost like the way the web has changed reporting. In the past, like, I covered business. To find out how many employees a company has or when it was founded, I would have to call the PR department or the media rep. Now I can just go quickly to the website, where they have all the facts about the company. But even so, I still double-check whether that’s updated information. I even go to the SEC filings to make sure. So I see it as that kind of a tool, the way the web—or, like, when you see something on Wikipedia, you do not use that as a source, right? You use that as a starting point to find other sources. ROBBINS: So Charles Robinson from Maryland Public Television. Charles, do you want to ask your question? Q: Sure. First of all, gentlemen, appreciate this. I’m working on a radio show on ChatGPT and AI. And one of the questions that I’ve been watching in this process is the inability of AI and ChatGPT to get the local nuances of a subject matter, specifically reporting on minority communities. And, Ben, I know you’re out in San Francisco—there are certain colloquialisms in Filipino culture that I wouldn’t get if I didn’t know them. Whereas, like, to give you an example, there’s been a move to kind of, like, homogenize everybody as opposed to getting the colloquialisms, the gestures, and all of that. And I can tell you, as a Black reporter, you know, it’s the reason why I go into the field, because you can’t get it if all I do is read whatever someone has generated out there. Help me understand. Because, I’m going to tell you, I write a specific blog on Black politics. And I’m going to tell you, I’m hoping that ChatGPT is not watching me to try and figure out what Black politics is. ROBBINS: Ben. PIMENTEL: I mean, I agree. I mean, when I started my career, the best—and I still believe this—the best interviews are face-to-face interviews, for me. We get more information on how people react, how people talk, how they interact with their surroundings. Usually it’s harder to do that if you’re, you know, doing a lot of things. But whenever I have the opportunity to report on—I mean, I used to cover Asian American affairs in San Francisco. You can’t do that from a phone or a website. You have to go out into the community. 
And I cover business now, which is more—you know, I can do a lot of it by Zoom. But still, if I’m profiling a CEO, I’d rather—it’d be great if I could meet the person so that I can read his body language, he can react to me, and all that. In terms of the nuances, I agree totally. I mean, it’s possible that ChatGPT can—I mean, as we talked about—what’s impressive and troubling about this technology is it can evolve to a point where it can mimic a lot of these things. And for journalism, that’s an issue for us to think about because, again, how do you deal with a program that’s able to pretend that it’s, you know, writing as a Black person, or as a Filipino, or as an Asian American? Which, based on the technology, eventually it can. But do we want that kind of reporting and journalism that’s not based on more human interactions? ROBBINS: So thank you for that. So Justin Kerr, who’s the publisher of the McKinley Park News—Justin, do you want to ask your question? Q: Yes. Yes. Thank you. Can folks hear me OK? ROBBINS: Absolutely. Q: OK. Great. So I publish the McKinley Park News, which is, I call it, a micro-local news outlet, focusing on a single neighborhood in Chicago. And it’s every beat in the neighborhood—crime, education, events, everything else. And it’s all original content. I mean, it’s really all stuff that you won’t find anywhere else on the internet, because it’s so local and, you know, there’s news deserts everywhere. A handful of weeks ago, I discovered through a third party that seemingly the entirety of my website had been scraped and included in these large language models that are used to power ChatGPT, all of these AI services, et cetera.  Now, this is in spite of the fact that I have a terms of service clearly linked up on every page of my website that expressly says: Here are the conditions under which anyone is allowed to access and use this website—which is, you know, for news consumers, and no other purpose. And I also list a bunch of expressly prohibited things that, you know, you cannot access or use our website for. One of those things is to inform any large language model, algorithm, machine learning process, et cetera, et cetera, et cetera.  Despite this, everything that I have done has been taken from me and put into these large language models that are then used in interfaces that I see absolutely no benefit from—interfaces and services. So when someone interacts with the AI chat, they’re going to get—you know, maybe they ask something about the McKinley Park neighborhood of Chicago. They’re not—you know, we’re going to be the only source that they have for any sort of realistic or accurate answer. You know, and when someone interacts with a chat, I don’t get a link, I don’t get any attention, I don’t get a reference. I don’t get anything from that.  Not only that, these companies are licensing that capability to third parties. So any third party could go and use my expertise and content to create whatever they wanted, you know, leveraging what I do. As a local small news publisher, I have absolutely no motivation or reason to try to publish local news, because everything will be stolen from me and used in competing interfaces and services that I will never get a piece of. Not only that, this— ROBBINS: Justin, we get—we get the—we get the point. Q: I guess I’m mad because you guys sit up here and you’re using products and services, recommending products and services, without the—without a single word about provenance, where the information comes from. 
ChatGPT doesn’t have a license to my stuff. Neither do you. ROBBINS: OK. Q: So please stop stealing from me and other local news outlets. That’s—and how am I supposed to—my question is, how am I supposed to operate if everything is being stolen from me? Thank you very much. ROBBINS: And this is a—it’s an important question. And it’s an important question, obviously, for a very small publisher. But it’s also an important question for a big publisher. I mean, Robert Thompson from News Corp is raising this question as well. And we saw what—we saw what the internet did to the news business and how devastating it’s been. So, you know, it’s life and death—life and death for some—life and death for a very small publisher, but it’s very much life and death for big publishers as well. So, Dex, this goes over to you. HUNTER-TORRICKE: Yeah, sure. I mean, I think—you know, obviously I can’t comment on any, you know, specific website or, you know, terms and conditions on a website. You know, I think, you know, from the DeepMind perspective, I think we would say that, you know, we believe that training large language models using open web content, you know, creates huge value for users and the media industry. You know, it leads to the creation of more innovative technologies that will then end up getting used by the media, by users, you know, to connect with, you know, stories and content. So actually, I think I would sort of disagree with that premise. I think the other piece, right, is there is obviously a lot of debate, you know, between different, you know, interests and, you know, between different industries over what has been the impact of the internet, you know, on, you know, the news industry, on the economics of it. You know, I think, you know, we would say that, you know, access to things like Google News and Google Search has actually been incredibly powerful for, you know, the media industry. You know, there’s twenty-four billion visits to, you know, local news outlets happening every month through Google Search and Google News. You know, there’s billions of dollars in ad revenue being generated by the media industry, you know, through having access to those platforms. You know, I think access to AI technologies will create similar opportunities for growth and innovation, but it’s certainly something which I think, you know, we’re very, very sensitive to, you know, what will be the impacts on the industry. Google has been working very, very closely with a lot of local news outlets and news associations, you know, over the years. We really want to have a strong, sustainable news ecosystem. That’s in all of our interest. So it’s something that we’re going to be keeping a very close eye on as AI technology continues to evolve. ROBBINS: So is—other than setting up a paywall, how does—how do news organizations, you know, protect themselves? And I say this as someone who sat on the digital strategy committee at the New York Times that made this decision to put up a paywall, because that was the only way the paper was going to survive. So, you know, yes, Justin, I understand that paywalls or logins kill your advertising revenue potential. But I am—yes, and we had that debate as well. And I understand the difference between your life and the life of the New York Times. Nevertheless, Justin raises a very basic question there. Is there any other way to opt out of the system? I mean, that’s the question that he’s asking, Dex. Is there? 
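For background on what opting out can look like in practice: some AI crawlers publish user-agent tokens that a site can disallow in its robots.txt. A minimal sketch follows; the tokens shown are examples, compliance is voluntary, and a disallow does nothing about content that has already been ingested.

```
# robots.txt sketch: ask known AI training crawlers to stay out.
# Example tokens only; coverage varies and honoring them is voluntary.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```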
HUNTER-TORRICKE: Well, you know, I think what that system is, right, is still being determined. Generative AI is, you know, in its infancy. We obviously think it’s, you know, incredibly exciting, and it’s something that, you know, all of us—(audio break)—today to talk about it. But the technology is still evolving. What these models will look like, including what the regulatory model will look like in different jurisdictions, that is something that is shifting very, very quickly. And, you know, these are exactly the sorts of questions, you know, that we as an industry—(audio break)—is a piece which, you know, I’m sure the media industry will also have a point of view on these things.  But, in a way, it’s sort of a difficult one to answer. And I’m not deliberately trying to be evasive here with a whole set of reporters. You know, we don’t yet know what the full impacts really will be, with some of the AI technologies that have yet to be invented, for example. So this is something where it’s hard to say this is definitively, like, the model that is going to produce the greatest value either for publishers or for the industry or for society, because we need to actually figure out how that technology is going to evolve, and then have a conversation about this. And different, you know, communities, different markets around the world, will also have very different views on what’s the right way, you know, to protect the media industry while also ensuring that we do continue to innovate. So that’s really how I’d answer at this stage. ROBBINS: So let’s move on to Amy Maxmen, who is the CFR Murrow fellow. Amy, would you like to ask your question? Q: Yeah. Hi. Can you hear me? ROBBINS: Yes. Q: OK, great. So I guess my question actually builds on, you know, what the discussion is so far. And part of my thought for a lot of the discussion here and everywhere else is about, like, how AI could be helpful or hurtful in journalism. And I kind of worry how much that discussion is a bit of a distraction. Because, I guess, I have to feel like the big use of AI for publishers is to save money. And that could be by cutting salaries further for journalists, and cutting full-time jobs that have benefits with them. Something that kind of stuck with me was that I heard another—I heard another talk, and the main use of AI in health care is in hospital billing departments to deny claims. At least, that’s what I heard. So it kind of reminds me that, you know, where is this going? This is going to be a way for administrators and publishers to further cut costs.  So I guess my point is, knowing that we would lose a lot if we cut journalists and kind of just—you know, and cut editors, who really are needed to be able to make sure that the AI writing isn’t just super vague and unclear. So I would think the conversation might need to shift away from the good and the bad of AI, to actually, like, can we figure out how to fund journalists still, so that they use AI like a tool, and then also to make sure that publishers aren’t just using it to cut costs, which would be short-sighted. Can you figure out ways to make sure that, you know, journalists are actually maybe paid for their work, which actually is providing the raw material for AI? Basically, it’s more around kind of labor issues than around, like, is AI good or bad? HUNTER-TORRICKE: I think Amy actually raises, you know, a really important, you know, question about how we think conceptually about solving these issues, right? 
I actually really agree that it’s not really about whether AI is good or bad. That’s part of the conversation and, like, what are the impacts? But this is a conversation that’s about the future of journalism. You know, when social media came along, right, there were a lot of people who said, oh, obviously media organizations need to adapt to the arrival of social media platforms and algorithms by converting all of their content into stuff that’s really short form and designed to go viral.  And, you know, that’s where you had—I mean, without naming any outlets—you had a bunch of stuff that was kind of clickbaity. And what we actually saw is that, yeah, that engaged to a certain extent, but actually people got sick of that stuff, like, pretty quickly. And the pendulum swung enormously, and actually you saw there was a huge surge in people looking for quality, long-form, investigative reporting. And, you know, I think quality journalism has never been in so much demand. So actually, you know, even though you might have thought the technology incentivized and would guide the industry to one path, actually it was a very different set of outcomes that really were going to succeed in that world.  And so I think when we look at the possibilities presented by technology, it’s not as clear-cut as saying, like, this is the way the ecosystem’s going to go, or even that we want it to go that way. I think we need to talk about what exactly are the principles of good journalism at this stage, what kind of environment do we want to have, and then figure out how to make the technology support that. ROBBINS: So, Ben, what do you think in your newsroom? I mean, are the bosses, you know, threatening to replace a third of the—you know, a third of the staff with our robot overlords? I promised Dex I would only say that once. Do you have a guild that’s, you know, negotiating terms? Or you guys are—no guild? What’s the conversation like? And what are you—you know, what are the owners saying? PIMENTEL: I mean, we are so small. You know, the Examiner is more than 150 years old, but it’s being rebuilt. It’s essentially just a two-year-old organization. But I think the point is—what’s striking is the use of ChatGPT and generative AI has emerged at a time when the media is still figuring out the business model. Like I said, I lived through the shift from the pre-website world to the World Wide Web world—and after—which devastated the newspaper industry. I mean, I started in ’93, the year that websites started to emerge. Within a decade, my newspaper back then was in trouble. And we’re still figuring it out. Dex mentioned the use of social media. That’s what led to the rise of Buzzfeed News, which is having problems now. And there are still efforts to figure out, OK, how do we—how do we make this a viable business model? The New York Times and more established newspapers have already figured out, OK, a paywall works. And that works for them because they’re established, they’re credible, and there are people who are willing to pay to get that information. So that’s an important point. But for others, the nonprofit model is also becoming a viable alternative in many cases. Like, in San Francisco there’s an outlet called Mission Local, actually founded by a professor of mine at Berkeley. Started out as a school project, and now it’s a nonprofit model, covering the Mission in a very good way. And you have other experiments. 
And what’s interesting is, of course, ChatGPT will definitely be used—you know, as you said—at a time when there are massive cuts in newsrooms; they’re already signaling that they’re going to use it. And I hope that they use it in a responsible way, the way I explained it earlier. There are—there are important uses for it, for information that’s very beneficial to the community that can be automated. But beyond that, that’s the problem. I think that’s the discussion that the industry is still having. ROBBINS: So, thank you. And we have a lot of questions. So I’m going to ask—I’m going to go through them quickly. Dan MacLeod from the Bangor Daily News—Dan, do you want to ask your question? And I think I want to turn it on you, which is why would you use it, you know, given how committed you are and your value proposition, indeed, is local and, you know, having a direct relationship between reporters and local people? Q: Hi. Yeah. Yeah, I mean, that’s really my question. We have not started using it here. And the big kind of question for us is that the thing that, you know, we pride ourselves on, the thing our audience tells us that it values about us, is that we understand the communities we serve, we’re in them, you know, people recognize the reporters, they have, like, a pretty close connection with us. But this also seems to be, like, one of those technologies that is going to do to journalism what the internet did twenty-five years ago. And it’s sort of, like, either figure it out or, you know, get swept up. Is there anything that local newsrooms can do to leverage it in a way that maintains its—this is a big question—but sort of maintains its sort of core values with its audience?  My second question is that a lot of what this seems to be able to do, from what I’ve seen so far, promises to cut time on minor tasks. But is there anything that it can do better than, like, what a reporter could do? You know, like a reporter can also back—like, you know, research background information. AI says, like, we can do it faster and it saves you that time. Is there anything it can do sort of better? ROBBINS: Either of you?  HUNTER-TORRICKE: Yeah, so—yeah, go ahead. Sorry, go ahead, Ben. PIMENTEL: Go ahead. Go ahead, please. HUNTER-TORRICKE: Sure. So one example, right? You know, I’ve seen—(audio break)—using AI to go and look through databases of sports league competitions. So, you know, one, you know, kind of simple example is looking at how sports teams have been doing in local communities, and then working out, by interpreting the data, what are interesting trends of sports team performance. So you find out there’s a local team that just, you know, won top of its league, and they’ve never won, you know, in thirty years. Suddenly, like, that’s an interesting nugget that can then be developed into a story. You’ve turned an AI into something that’s actually generating interesting angles for writing a story. It doesn’t replace the need for human reporters to go and do all of that work to turn it into something that actually is going to be interesting enough that people want to read it and share it, but it’s something where it is additive to the work of an existing human newsroom. And I honestly think, like, that is the piece that I’m particularly excited about. You know, I think coming from the AI industry and looking at where the technology is going, I don’t see this as something that’s here to replace all of the work that human reporters are doing, or even a large part of it. 
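Dex’s league-table example can be sketched as a small data-analysis script of the kind a newsroom might run to surface story angles. The data, column names, and threshold below are invented for illustration.

```python
# Hypothetical sketch: scan a league results table for a
# "first title in decades" story angle. All data is invented.
import pandas as pd

# One row per season: which team won the league that year.
results = pd.DataFrame({
    "season": [1992, 1993, 2021, 2022, 2023],
    "champion": ["Rovers", "Rovers", "United", "United", "Rovers"],
})


def title_drought(df: pd.DataFrame) -> int:
    """Years since the newest champion's previous title.

    Assumes the current champion has won at least once before.
    """
    latest = df.sort_values("season").iloc[-1]
    earlier = df[(df["champion"] == latest["champion"])
                 & (df["season"] < latest["season"])]
    return int(latest["season"] - earlier["season"].max())


gap = title_drought(results)
if gap >= 30:
    print(f"Possible story: first league title in {gap} years.")
```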
Because being a journalist and, you know, delivering the kind of value that a media organization delivers, is infinitely more complex, actually, than the stuff that AI can deliver today, and certainly for the foreseeable future. Journalists do something that’s really, really important, which is they build relationships with sources, they have a ton of expertise, and that local context and understanding of a community. Things that AI is, frankly, just not very good at doing right now. So I think the way to think about AI is as a tool to support and enhance the work that you’re doing, rather than, oh, this is something that can simply automate away a bunch of this. ROBBINS: So let’s—Lici Beveridge. Lici is with the Hattiesburg American. Lici, do you want to ask your question? Q: Sure. Hi. I am a full-time reporter and actually just started grad school. And the main focus of what I want to study is how to incorporate artificial intelligence into journalism and make it work for everybody, because it’s not going to go away. So we have to figure out how to use it responsibly. And I was just—this question is more for Benjamin. Is there any sort of—I guess, like a policy or kind of rules or something of how you guys approach the use of, like, ChatGPT, or whatever, in your reporting? I mean, do you have, like, a—we have to make sure we disclose the information was gathered from this, or that sort of thing? Because I think, ethically, that’s how we’re going to get to use this in a way that will be accepted not just by journalists, but by the communities—our communities. PIMENTEL: Yes. Definitely. I think that’s the basic policy that I would recommend and that’s been recommended by others. You disclose it—if you’re using it in general, and maybe on specific stories. And just picking up on what Dex said, it can be useful for—we used to call it computer-assisted reporting, right? That’s what the web and computers made easier, right? Excel files, in terms of processing and crunching data, and all that, and looking for information.  What I worry about, and what I hope doesn’t happen—to follow up on Dex’s example—is, you know, you get a sports event, and you want to get some historical perspective, and maybe you get the former record holders for a specific school, or whatever. And that’s good. ChatGPT or the web helps you find that out. And then instead of finding those people and maybe doing an interview for profiles or your perspective, you could just ask ChatGPT, can you find their Instagram feed or Twitter feed, and see what they’ve said? And let the reporting end there. I mean, I can imagine young reporters will be tempted to do that because it’s easier, right? Instead of—as Dex said, it’s a tool as a step towards getting more information. And the best information is still going face-to-face with sources, or people, or a community. Q: Yeah. Because I know, like, I was actually the digital editor when—for about fifteen years. And, you know, when social media was just starting to come out. And everything was just, you know, dive into this, dive into that, without thinking of the impact later on. And as we quickly discovered, you know, things like we live in a place where there’s a lot of hurricanes and tornadoes. So we have people creating fake pictures of hurricanes and tornadoes. And, you know, they were submitting it as, you know, user-generated content, which it wasn’t. It was all fake stuff. 
So, you know, we have to—I just kind of want to, like, be able to jump in, but do it with a lot of caution. PIMENTEL: Definitely, yes. ROBBINS: Well, you know, I thought Ben’s point about Wikipedia is a really interesting one, which is any reporter who would use Wikipedia as their sole source for a story, rather than using it as a lead source, you know, I’d fire them. But it is an interesting notion of—do you use this as a lead source, knowing that it makes errors, knowing that it’s lazy, knowing that it’s just a start, versus—and that is a—you know, that’s not even ethics. That’s just your basic sort of rule that we also have inside the newsroom, which then to me raises a question for Dex, which is do we have any sense of how often—you know, this term of hallucinations. I mean, how often does it make mistakes right now? Do you have a sense of with Bard how often it makes mistakes? Certainly everybody has stories of fake sources that have shown up, errors that have shown up. Do we have a sense of how reliable this is? And, like, my Wikipedia page has errors in it, and I’ve never even fixed it because I find it faintly bemusing, because they’re really minor errors.  HUNTER-TORRICKE: Right, yeah. I mean, I don’t have any data points to hand. Absolutely it is something that we’re aware of. I expect that this is something that future iterations of the technology will continue to tackle and to, you know, diminish that problem. But, you know, going back to this bigger point, right, which is at what point can you trust this, I think you can trust a lot of things you find there. But you do have to verify them. And certainly, you know, as journalists, as media organizations, I mean, there’s a much larger responsibility to do that than for folks, you know, who may be looking at these experimental tools right now and using them, you know, just to share for, you know, fun and amusement. You know, the kinds of things that you’re sharing are going to really have a huge societal impact. I do think when you look at the evolution of tools like Wikipedia, though, we will go through this trajectory where, you know, at the beginning people will—a lot of folks will think, oh, this is really, like, not that reputable, because it’s something that’s been generated in a very novel way. And there are other more established, you know, formats where you would expect there to be a greater level of fact-checking, a greater level of verification. So, you know, obviously, like, the establishment incumbent example to compare against Wikipedia back in the day was something like Encyclopedia Britannica. And then a moment was reached, you know, several years into the development of Wikipedia, where then research was finding that on average Wikipedia had fewer errors in it than Encyclopedia Britannica.  So we will absolutely see a moment come when AI will get more sophisticated, and we will see the content generally being good enough, with increasingly minor errors which, you know, again, technology will continue to diminish over time. And at that point, I think then it will be a very, very different proposition than what we have today, where absolutely, you know, all of these tools are generally labeled with massive caveats and disclaimers warning that they’re experimental and that they’re not, you know, at the stage where you can simply trust everything that’s been put through them. ROBBINS: So Patrick McCloskey, who is the editor-in-chief of the Dakota Digital Review—Patrick, would you like to ask your question? 
We only have a few minutes left. No, Patrick is—may not still be with us. So we actually only have three minutes left. So do you guys want to sum up? Because we actually have other questions, but they look long and complicated. So would you like to share any final thoughts? Or maybe I will just ask you a really scary question, which is: We’re talking about this like it is Wikipedia or like it is a calculator. And that, yes, it’s going to have to be fixed, and we have to be careful, and we have to disclose, and we’re being very ethical about it. We’ve had major leaders of the tech industry put out a letter that said: Stop. Pause. Think about this before it destroys society. Is there some gap here that we need to be thinking about? I mean, this is—they are raising some really, really frightening notions. And are we perhaps missing a point here if we’re really just talking about this as, well, it’ll perfect itself? Dex, do you want to go first, and then we’ll have Ben finish up?  HUNTER-TORRICKE: Yeah. So, I mean, the CEO of Google DeepMind signed a letter recently—I think this might be one of the several letters that you referenced—you know, which called on folks to take the potential extinction risks associated with AI as seriously as other major global existential risks. So, for example, the threat of nuclear war, or a global pandemic. And that doesn’t mean at all that we think that that is the most likely scenario. You know, we absolutely believe in the positive value of AI for society, or we wouldn’t be building it.  If the technology continues to mature and evolve in the way that we expect it will, with our understanding of what is coming, it is something that we should certainly take seriously, though, even if it’s a very small possibility. With any technology that’s this powerful, we have to apply the proportionality principle and ensure that we’re mitigating that risk. If we only start preparing for those risks, you know, when they’re apparent, it will probably be too late at that point. So absolutely I think it’s important to contextualize this, and not to induce panic or to say this is something that we think is likely to happen. But it’s something that we absolutely are keeping an eye on amongst very, very long-term challenges that we do need to take seriously. ROBBINS: So, Ben, do you have a sense that—I mean, I have a sense, and I don’t cover this. I just read about it. But I have the sense that these industries are saying, yes, we’re conscious that the world could end, but, you know, we’d sort of like other people to make the decision for us. You know, regulate us, please. Tell us what to do while we continue to race and develop this technology. Is there something more? Are they—can we trust these industries to deal with this? PIMENTEL: I mean, the fact that they used the phrase “extinction risk” is really, I think, very important. That tells me that even the CEOs of Google DeepMind, and OpenAI, and Microsoft know—don’t know what’s up ahead. They don’t know how this technology is going to evolve. And of course, yes, there will be people in these companies, including Dex, who will try to ensure that we have guardrails, and policies, and all that. My problem is, it’s now a competitive landscape. It becomes part of the new competition in tech. And when you have that kind of competition, things get missed, or shortcuts are taken. We’ve seen that over and over again. And that’s where you can’t leave this to these companies, not even to the regulators. 
I mean, the communities have to be involved in the conversations. Like, one risk of AI—it goes beyond journalism—that I’ve heard of, which for me is one of the most troubling, is the use of AI for persuasion—on people who don’t even know that they’re communicating with an AI system. The use of AI to, in real time, figure out how to sell you something or convince you about a political campaign. And, in real time, figure out how you’re reacting and adjust, because they have the data; they know that if you say something or respond in a certain way, or you have a certain kind of facial expression, they know how to respond. That, for me, is even scarier. That’s why the European Union just passed what could become the law called the AI Act, which would ban that—the use of AI for emotion recognition and manipulation, in essence. The problem, again, is this has become a big wave in tech. Companies are scrambling. VCs are scrambling to fund the startups or even existing companies with mature programs for AI. And on the other hand, you have the regulators and the concerns about the fears of what is the impact. Who’s going to win? I mean, which thread is going to prevail? That’s the big question. ROBBINS: So this has been a fabulous conversation. And we will invite you back probably—you know, things are moving so fast—maybe in six months. Which is a lifetime in technology. I just really want to thank Dex Hunter-Torricke and Ben Pimentel. It’s been a fabulous conversation. And everybody who asked questions. And sorry we didn’t get to all of them, but it shows you how fabulous it was. And we’ll do this again soon. I hope we can get you back. And over to Irina. FASKIANOS: Thank you for that. Thank you, Carla, Dex, and Ben. Just to—again, I’m sorry we couldn’t get to all your questions. We will send a link to this webinar. We will also send the link to the Nieman Reports piece that Carla referenced at the top of this. You can follow Dex Hunter-Torricke on Twitter at @dexbarton, and Benjamin Pimentel at @benpimentel. As always, we encourage you to visit CFR.org, ForeignAffairs.com, and ThinkGlobalHealth.org for the latest developments and analysis on international trends and how they are affecting the United States. And of course, do email us to share suggestions for future webinars. You can reach us at [email protected]. So, again, thank you all for being with us and to our speakers and moderator. Have a good day. ROBBINS: Thank you all so much. (END)
  • Taiwan
    U.S.-Taiwan Relations in a New Era
    Although a conflict in the Taiwan Strait has thus far been avoided, deterrence has dangerously eroded. To maintain peace, the United States must restore balance to a situation that has been allowed to tilt far too much in China’s favor.
  • Cybersecurity
    The Great Firewall of Montana: How Could Montana Implement A TikTok Ban?
    Montana banned TikTok a month ago. Enforcing this ban would require the creation of a surveillance regime that would be far more detrimental to privacy and civil liberties than TikTok could ever be.