Welcome to Why It Matters, the podcast where we delve into the crucial issues shaping our world. In today's episode, join me, your host Gabrielle Sierra, as we unravel the complex web of geopolitics surrounding artificial intelligence and its impact on global power dynamics.
Hi guys. So...that wasn't me. The voice you heard was an audio deep fake that we created using free online software. And the text it read was generated by ChatGPT from a very short prompt. I know, creepy.
Artificial intelligence has been having its coming-out party. Everyone is wondering how it will change our lives - and we’re right to wonder.
I mean, look, I work at the Council on Foreign Relations. I am absolutely used to hearing people talking about serious things. But, I don’t know about you, for me, the conversation about AI feels... different. There’s simply no precedent for it, and no way to know how far it will go. Many believe it’s going to change everything we know about the world. Depending on who you ask, it could upend our social order, solve profound scientific and economic problems - or even end human life altogether.
It’s a big topic, and the world is just starting to understand it. So we’ve decided to do a rare two-part episode.
In part one, we’ll outline the stakes of the AI race, particularly in terms of national security, economics, and small things like the meaning of life.
In part two, we’ll get into the choices governments face as they decide how much to regulate AI, and how much to push its development.
I'm the real Gabrielle Sierra - I think - and this is Why It Matters. Today, a picture of the world on the brink of an artificial intelligence revolution - part one.
Sebastian MALLABY: AI is going to influence everything including international economics. It's probably the biggest event in human history since the Industrial Revolution a bit more than 200 years ago. So whether you think about economics or whether you think about what it means to be a human being, AI is just a magnetic possibility for a researcher to look at.
This is Sebastian Mallaby. He’s the senior fellow for international economics at the Council. He’s currently writing a book on artificial intelligence.
MALLABY: The news since December last year has been dominated by chat, artificial intelligence chat. So OpenAI, this startup, which is affiliated with Microsoft, launched ChatGPT. This was the most downloaded product in software history, which is saying a lot. So faster than Instagram, faster than Facebook, faster than WhatsApp and so forth, even TikTok.
https://youtu.be/pOmpqdlVCoo?t=5
CNBC: ChatGPT...ChatGPT...ChatGPT...
https://youtu.be/aO39YvRwfkM?t=44
HJ Evelyn: I’m excited to let ChatGPT choose what we eat for today.
https://youtube.com/shorts/4begfsf9AQY?feature=share
Cleo Abram: The fun thing here is to see how far you can get it to go.
https://youtu.be/BWCCPy7Rg-s?t=5
BBC News: This promises to be the viral sensation that could completely reset how we do things.
MALLABY: And since then, others have launched competitor chatbots. But in actual fact, some of the most exciting things I think are not about chat. I mean, there's lots of stuff around, for example, protein folding, and that's in turn going to give rise to a bunch of medical and other breakthroughs where you build new medicines and therapies because you've understood the shapes of proteins in the human body. So I think that we've had a slightly narrow conversation about artificial intelligence in the last six months, although it's been an exciting time to be in the field because there's been so much stuff being rolled out all the time.
Protein folding is just one example. All over the world, at universities, research labs, and private companies, AI is already working on scientific problems with extraordinary power and speed. These research-focused models are particularly good at analyzing vast data sets that a human being could never hope to wrap their head around. As such, some experts believe AI will add rocket fuel to the pace of scientific discovery. And this makes pursuing AI seem almost irresistible.
MALLABY: Let's just take climate change and the desire to fight that. One of the exciting breakthroughs in artificial intelligence has been the creation of a control system for the plasma inside a nuclear fusion reactor.
Those who listened to our episode on nuclear power will remember that nuclear fusion is still a far-off possibility, but one that could provide the world with almost unlimited clean energy.
MALLABY: So in nuclear fusion, everything's operating at an insanely hot temperature, and you have essentially a bunch of molten matter which is as hot as the sun, the plasma. And it has to be suspended magnetically inside the reactor because if it touches the sides, it will lose heat and the reaction won't work. So you suspend it magnetically. But the problem is, at that temperature, the atoms are flying around in all directions in a chaotic state, and so you have to adjust the magnetism all the time to prevent it from hitting the wall of the reactor. Now with artificial intelligence, this group in London, DeepMind, which is a part of Google, has figured out how to dynamically adjust the magnetism and not only control the plasma and stop it from hitting the side of the reactor, but also actually even sculpt the plasma like an ice sculpture, which is the most amazing feat. And so that's not a solution to the challenge of bringing nuclear fusion into the world and giving us massively abundant energy with no climate emissions. But it is a piece potentially of the answer. And so I think for any country to say it doesn't want to be part of those sorts of scientific advances is a sort of crazy sacrifice.
It can seem like AI is new to the scene, but we’ve actually been living with it for years. That said, it’s evolving all the time, and there are a lot of ways to break down the different types. So, here’s a basic rundown:
At the simplest level, we have so-called “reactive AI,” which is used in products like video games or text autocomplete. It’s relatively simple and predictable, and requires a lot of human training. It will always give the same results in response to the same inputs.
Next up is something known as narrow or “limited memory” AI. The name sells it short though, because this AI is able to learn on its own by combing through troves of data and storing it to solve future problems. Most of the advanced AI products we’re seeing these days, like ChatGPT, are in this category. They can answer questions, they can do incredible research and many other feats, but they can't yet set their own agendas in pursuit of a goal.
Lastly, there’s strong AI, also known as Artificial General Intelligence, which doesn’t exist. But it could. This type of AI would not be built for any one application, but instead could investigate problems of all kinds. It would be adaptable, like a human being, but with superhuman capabilities, and it could pursue its goals without fresh input.
Janet HAVEN: As somebody who runs a research institute, I will say I think also we need to know more. And I think that most of what we are talking about in the public discourse around AI and its implications for society, is speculative.
This is Janet Haven. She’s the Executive Director of Data & Society, a nonprofit research institute. She’s also a member of President Joe Biden’s National Artificial Intelligence Advisory Committee.
HAVEN: It's not based in research. It's not based in a kind of deep empirical analysis of how people are actually interacting with these, not technical, but sociotechnical systems. We learn and adapt very quickly. We're not looking at a static situation. And I think that's also really critical to remember. And I guess in terms of the longer term risk I'm really much more concerned about how automation will impact society at large.
MALLABY: So if you think about the huge gains in productivity that I'm predicting from artificial intelligence, those gains come by displacing workers. If you imagine that many cognitive tasks, maybe even the majority will be better done by AI in 10 years or something from now, it means that companies will need to employ far fewer people to get the same stuff done.
Experts have been talking about job automation for years. But for a long time, the concerns tended to focus on blue-collar positions - one classic example is robots replacing factory workers. In recent years though, it has become clear that AI threatens white-collar jobs too.
A new report shows that AI replaced nearly 4,000 workers just last month. And, according to a survey from a group called AI Impacts, which studies existential risks from AI, more than half of experts believe AI will be able to do every task better and more cheaply than humans by 2061.
MALLABY: And as those people are displaced, they may be looking for positions in the remaining roles that have not been computerized. And so that competition for the remaining roles will drive down the wages. And we may need to contemplate ideas like universal basic income, which have been slightly on the fringe of the economic debate, but I think will now become central. There's also other kinds of risks, right? There's a kind of risk where AI falls into the hands of malevolent individuals and becomes a multiplier of their power to disrupt. So you could think about anything from creating deep fakes to manipulate elections or just manipulate public opinion to using AI to hack into people's bank accounts online or to just even commit theft. Imagine a world in which your front door is locked, but it's on a sort of home control system and the AI can pick the electronic lock. So that's one idea, the tech falls into the hands of bad actors. A more science fiction kind of idea, but not to be dismissed is that the tech itself becomes the bad actor.
https://youtu.be/Mme2Aya_6Bc?t=41
2001: A Space Odyssey: This mission is too important for me to allow you to jeopardize it… and I’m afraid that’s something I cannot allow to happen.
https://youtu.be/SDd49NF583g?t=45
Transcendence: Can you prove you’re self-aware? That’s a difficult question Dr. Tagger. Can you prove that you are?
https://youtu.be/_Wlsd9mljiU?t=251
Terminator 3: Skynet has become self-aware. In one hour, it will initiate a massive nuclear attack on its enemy. What enemy? Us! Humans!
MALLABY: In other words, superintelligent, autonomous agents may have a will of their own and may do things that... I mean, there are scientists out there who say this could mean the extinction of humanity. So you're talking about big ones. Yes, this is a big one, and therefore these are the extremes in both directions that make it such a compelling subject. It could be fantastic - cures for cancer, cures for climate change. It could be literally an extinction event.
Sebastian isn’t alone in considering this possibility. This May, over 350 leading AI developers and tech executives, including the CEO of OpenAI, signed a joint statement saying AI poses a ‘risk of extinction’ on par with pandemics and nuclear war. The very first signatory, Geoffrey Hinton, also known as the ‘Godfather of AI,’ announced that same month that he had left Google due to major concerns about the pace at which AI is being developed.
Yes, the idea that AI could end the world is theoretical. But experts have been gaming it out for a while, so there are some well-established models for AI risk.
MALLABY: The classic formulation is sometimes called a paperclip problem, but it comes in various forms where an intelligent machine is tasked by a human being to accomplish something that sounds benign. So I'll give you the example of let's get rid of the algae, which is choking the oceans and causing fish life to die off. And the AI figures out that the way to fix the problem of algae in the oceans is to pump oxygen in. So it sort of gets hooked up to an oxygen pumping machine, and in the process drains the atmosphere in the air of oxygen and human beings asphyxiate. Now, one might say “That's a ridiculous example. There's no such machine that takes oxygen from the atmosphere and stuffs it into the ocean. And anyway, even if there were, people would start to notice that they were not being able to breathe properly and they would switch the stupid AI off.”
SIERRA: Right...
MALLABY: But on the other hand, if you imagine a future in which there are thousands of AIs being tasked with goals where the means to the end might involve something that's bad for people, would people know which of these thousands of AIs is guilty of the fact that you can't breathe properly? Would we be able to trace it to the right AI and shut it off in time? So my bet is that this is too alarmist and isn't going to happen. But I've spent a lot of time with AI researchers recently, and there are very thoughtful people who do not rule that out. There is also a third bucket of risks associated with AI, which is maybe a little bit more subtle, but at the same time, one of the most likely to come to pass. And that is that AI becomes so good at generating content, including stories and narratives that it starts to shape the way we see ourselves as human beings.
https://youtu.be/LWiM-LuRe6w?t=355
Yuval Noah Harari: The most important aspect of the current phase of the ongoing AI revolution, is that AI is gaining mastery of language at a level that surpasses the average human ability. And by gaining mastery of language, AI is seizing the master key... the operating system of every human culture in history has always been language.
MALLABY: There is a longstanding recognition in philosophy that what sets Homo sapiens apart is the ability to tell stories, which then in turn organize us all into groups and around beliefs and into coalitions. In some sense, money is a story, right? Its value is only there because we believe it is there. The nation state is a story. We feel patriotic and loyal to the nation state because we've been told stories that encourage that. If the power of storytelling were to be dominated by machines, we would be believing the machine stories. And we don't quite know where that will take us.
Okay, so, we have... AI accidentally killing everyone in pursuit of a narrow goal or AI changing the narrative of human life so profoundly that it becomes hard for people to find meaning in their lives. It’s heavy stuff! And many prominent minds are really worried about it.
But, at the same time, you have experts who believe these questions are still too far in the future, and that in the meantime we would be best advised to focus on near-term risks that are already materializing.
HAVEN: So the idea of existential risk is something that is still quite far out. It's not something that we have seen yet. It is relatively theoretical. I certainly would not dismiss existential risk, but I think that to focus on existential risk only, to drive the policymaking conversation in that direction, is a huge risk in and of itself. So I'm concerned about the future of creativity. I'm concerned about the future of things like care work and the lack of space that we have as a society to talk about what we're willing to automate and what we are not willing to automate. Whether we want to see and live in a society in which elder care is entirely automated, that our parents are taken care of by robots at the end of their lives. If we want to live in a society where our children are cared for by automated systems, are educated by automated systems. And I think, in each of those cases, there isn't a right answer. There's a set of tradeoffs. And so what I worry about quite a bit is that so much of the decision making in these systems and how they are designed, how they're rolled out, who has access to them, is happening in a very tight circle of people and companies who have access to data, to compute, to money and to talent. And it is not a democratic process. It's not a space for societal deliberation. That is very worrying to me. And I see that as a great risk both in the near term, but truly in the longer term. There are near-term risks that I think we need to be worried about that are emerging, and I think of those as threats to democratic systems. One of the ways that I think that gets talked about the most is the sort of hyper-charged environment or expected environment around deep fakes and media manipulation in the upcoming 2024 election cycle.
https://youtu.be/KTmBfYmNdig
CNN: For the first time there may be another complicating factor in the upcoming elections. Artificial intelligence could play a major role in the 2024 election. Some tech experts are warning that the technology could change the entire landscape of campaigning.
https://youtu.be/wbWLEUiVHVQ?t=24
Fox 11 Los Angeles: The RNC responding to Biden’s campaign launch with an attack ad - the ad generated using artificial intelligence.
https://youtu.be/GryBbJLS444?t=18
WION: The 2024 Presidential race in America is about to get crazier than ever before. And what’s causing this whirlwind of chaos are deep fakes.
HAVEN: And I do think that that is a real concern. And so not losing sight of that as we become more concerned about the role of deep fakes and synthetic media and its impact on democracy, I think is really, really critical.
As if we didn’t already have enough to worry about with our elections, malicious use of AI is a serious problem, and we don’t yet fully know how the technology will be put to use by hackers, terrorists, rogue states, and others.
But AI doesn’t need malicious actors in order to turn our world on its head. And this seems particularly true when it comes to labor and economics. You know, our jobs and our money.
SIERRA: What happens in a world where, say, half the jobs disappear and half still exist? Is that workable?
MALLABY: So I think when that happens, we will have to find new ways of defining ourselves, of finding meaning and purpose and I think the main challenge there is psychological. We need to be telling our kids different kinds of stories about what it is to live a fulfilling life. If we inculcate our children with the idea that you should work hard and earn an honest living and you define yourself through work, it will be tough for them to live in a world where the artificial intelligence does much of the work and they receive some sort of free money from whatever is left of the government at that point. The problem with that is that the half of the jobs that have been taken over by machines means that half of the population is crushing into the job opportunities afforded by the remaining jobs. And so the wages in those remaining jobs are going to collapse, and it may be that, you know, you don't even earn a living wage from those things. So it's going to be healthy for society if at least a decent chunk of the population voluntarily chooses that they don't particularly need to work in order to find meaning and they can pursue these other creative or intellectual pursuits, not because of a paycheck, but because they believe in it.
While some analysts like Sebastian entertain the possibility of an end to work, Janet and others anticipate a more measured change, the early effects of which we are already seeing.
HAVEN: Historically, we know that, again, this feels to me it ends up being a bit of an ahistorical conversation. So the idea of machines replacing workers and bringing about economic catastrophe is not a new one. The beginning of the 20th century was all about that and the end of the 19th century, that was a core part of the narrative of industrial automation. I think we also have absolutely seen the idea that machines will free us. In the 1950s, the introduction of washing machines and vacuum cleaners into the mid-century home was sold as a way to give women back their leisure time or give them leisure time, not give it back. And what we saw was that society changed. Women entered the workforce in different ways. New roles emerged. We saw that happen in the 1990s, in the first wave of the 1990s and the early 2000s. The first wave of the internet as a commercial and societal phenomenon led to the creation of a huge number of jobs, of things that me as a kid growing up in the 1980s couldn't have imagined. Like, what do you want to be when you grow up? None of the things that I and my peers ended up doing were on the table in 1985. So I think that we will absolutely see that. A very likely way that this moves forward and already has moved forward is integration with AI systems. Because I think it seems safe and non-disruptive, that AI is a helper and a supporter to people in doing their jobs better, but is not a replacer of people. And it makes us all more effective and more productive, et cetera. The jury's out on that. We don't know that, and we do know that there will be some job loss. I think we are really remiss not to talk about the ways in which automated systems and AI are already and will continue to change the workplace. And this is particularly how it impacts low income workers. So we see already much greater workplace surveillance and control of workers. We see the rise of algorithmic management, so the management of people, both in more traditional factory jobs or warehouse jobs through algorithmic systems rather than through people. And that is worrying - looking at the ways in which surveillance systems in the workplace are becoming more and more widespread and the ways that they impact both worker agency, but also the ways in which a job is done and delivered.
That AI will reshape our jobs, and even the overall economy, seems inevitable, though just how and how much they will change remains unclear. And as countries consider all these possibilities, they also face an entirely different geopolitical concern raised by AI: weapons that can think.
MALLABY: There's been this possibility for a while that traditional weapon systems, which tend to be incredibly expensive per unit, the model is the aircraft carrier or you know, some next generation fighter jet or you know, some next generation missile. All of those things, the model is you invest billions and billions of dollars and then each unit you build costs billions and billions of dollars. It's super expensive, but with a small number of weapons, you hope to have supremacy. The difference with AI is that you can put tons of cheap drones into the sky, which are then commanded and controlled by AI, and each one of them might cost a few thousand bucks. And so you can afford to throw 10,000 drones controlled by AI at an attack on an aircraft carrier. And even if 95% of the drones are shot down or jammed or taken out of the air somehow, you only need 5% to get through to destroy the aircraft carrier. And so I think it changes the balance, and there's a real possibility that existing weaponry, state of the art as of five years ago, loses its potency relative to this new class of weapon. If you look at the Ukraine battlefield, and these videos of drones attacking tanks. It's kind of what I'm talking about, a small cheap weapon attacking a big expensive one. So I think the future of warfare will hinge on AI, and so therefore, the country that has the edge in AI has a big advantage.
SIERRA: We did an episode on automated weapons a little while ago, and there are so many experts with serious concerns about the possibility of unintended escalation.
MALLABY: And that gets to the human-in-the-loop debate where you might say, well, ideally, you don't want autonomous weapons to make any autonomous decisions. In other words, before that drone actually fires the weapon, there has to be a human pushing a button. Whether that's realistic, given the speed at which AI-driven warfare is going to take place, I'm not really sure. In a game theoretical way, one side can't afford to restrict itself if the other side isn't going to do that. And I think we are a long way from even beginning arms control talks on this stuff. And in the current geopolitical climate, is any arms control of any sort likely to work out? I sincerely hope the answer is yes, but I'm not certain it's right.
Long time listeners might remember that we have an episode exclusively devoted to the threat of autonomous weapons - or killer robots. Go check it out.
Here’s a memorable clip from AI expert Toby Walsh, from that episode:
https://www.cfr.org/podcasts/robots-kill
Toby WALSH: I am pretty convinced that we will ultimately see these weapons and decide that they are horrible, a horrific, terrible, terrifying way to fight war, and we’ll ban them. I’m actually very confident. What I’m not confident about is that we will have the courage and conviction and foresight to do that in advance of them being used against civilian populations. The sad history of military technology is that we have normally, in most instances, had to see weapons being used in anger before we’ve had the intelligence to ban them.
SIERRA: Okay, so AI has major advantages and major dangers. What does a country or, I guess, a company need in order to be competitive in the AI race?
MALLABY: Well, the components of making great AI come down to having an enormous amount of data to feed into the program to train it. And that's pretty much universally available because anybody can scrape the internet, which is a global commons thing. So that's not differentiating. Second of all, you need great scientists to develop the programs or implement the existing algorithmic innovations that exist. And there's a reasonably good distribution of those people. They are concentrated, number one in the U.S., probably number two in Britain, Canada's strong. China is strong, but it's not as if there's nobody doing this elsewhere. And then the third element you need is semiconductors, tons of high-end semiconductors. And because of the embargo placed on China last October by the Biden administration, which cuts off the export of advanced semiconductors to China, although China has a stockpile of GPU chips, which it can use for AI, and although it'll try to get around the embargo, I think its future path of AI development has been damaged by the embargo. And so economies that refuse to adopt AI are going to be left behind. And I think that does create a sort of race dynamic where controlling how we deploy AI into society becomes a little bit difficult. If you think about economic growth and human progress more generally, it's really the story of the progress of understanding and ideas. In other words, science and sort of applied science. And the way to think about AI is that there's a massive acceleration in that progress. And so everything from strategies to contain climate change to medical challenges, to making something like nuclear fusion work, almost any cognitive challenge you can think of is going to become more soluble thanks to artificial intelligence.
SIERRA: Overall, you sound optimistic about this. Are you looking towards the future, which many people are freaked out about? Do you feel good about where we're heading with AI?
MALLABY: Maybe we need to invent a new word here, which kind of is a combination of frightened and excited. I'm fre-sighted.
SIERRA: Fre-sighted, yeah...
MALLABY: But I'm definitely excited. I mean, the upside is amazing. The transformation, I think to science, to our knowledge base, to our understanding of the world is incredible. We have a chance to fix huge problems around climate and medicine and all that. Look, it's amazing. So I am excited, I'm realistic that there are significant risks, as I said earlier, but I'm hopeful that smart people of goodwill can help to manage them.
The thing is, AI is coming whether we like it or not. As individuals we will be facing choices about how much we want to embrace it, and how much we want to try to maintain our old ways of life.
Countries are in a similar position - but with far greater responsibilities and consequences. It’ll be up to governments, ultimately, to decide whether AI should be taking care of children and the elderly, whether it should take over labor from workers, and whether it should be wielded as a tool of war.
In the next episode, we’ll dive into the ways in which different societies are approaching regulation, and what long-term rules for AI might look like. Stay tuned!
For resources used in this episode and more information, visit CFR.org/whyitmatters and take a look at the show notes. If you ever have any questions or suggestions or just want to chat with us, email us at [email protected] or you can hit us up on Twitter at @CFR_org.
Why It Matters is a production of the Council on Foreign Relations. The opinions expressed on the show are solely those of the guests, not of CFR, which takes no institutional positions on matters of policy.
The show is produced by Asher Ross and me, Gabrielle Sierra. Our sound designer is Markus Zakaria. Our associate podcast producer is Molly McAnany.
Robert McMahon is our Managing Editor, and Doug Halsey is our Chief Digital Officer. Extra help for this episode was provided by Noah Berman and Kali Robinson.
Our theme music is composed by Ceiri Torjussen. We’d also like to thank Richard Haass, Jeff Reinke, and our co-creator Jeremy Sherlick.
You can subscribe to the show on Apple Podcasts, Spotify, Stitcher, YouTube or wherever you get your audio.
For Why It Matters, this is Gabrielle Sierra signing off. See you soon!