CFR-Brookings Institution Election 2024 Virtual Public Event: Technology and Electoral Dynamics
The Council on Foreign Relations (CFR) and the Brookings Institution Foreign Policy Program collaborated to convene an expert discussion that examined the role of technology and electoral dynamics in the 2024 election.
As part of a series of virtual events convened by CFR and Brookings in the lead-up to Election Day, the conversation examined how the perception of technology is influencing electoral credibility; cybersecurity and election integrity; and what is at stake for safeguarding our democratic processes in an era of disinformation.
The series is a part of Election 2024, a CFR initiative focused on exploring the United States’ role in the world, how international affairs issues affect voters, and the foreign policy issues at stake in November, and Election ’24: Issues at Stake, a Brookings initiative aimed at bringing public attention to consequential policy issues confronting voters and policymakers in the run-up to the 2024 election. Both projects are made possible in part by grants from Carnegie Corporation of New York.
MABRY: Welcome to today’s Election 2024 Virtual Public Event, cohosted by the Council on Foreign Relations and the Brookings Institution. My name is Marcus Mabry, and I will be presiding over today’s discussion on technology and electoral dynamics.
This is the second in a series of virtual events cosponsored by CFR and the Brookings Institution in the lead-up to Election Day. I mean the actual Election Day, because some of us are voting already. The series is a part of Election 2024, a CFR initiative focused on exploring the United States’ role in the world, how international affairs issues affect voters, and the foreign policy issues at stake in November; and Election ’24: Issues at Stake, a Brookings initiative aimed at bringing public attention to consequential policy issues confronting voters and policymakers in the run-up to the 2024 election.
Both projects were made possible in part by grants from the Carnegie Corporation of New York. And today’s conversation will examine how the perception of technology is influencing electoral credibility, cybersecurity, and election integrity; and what is at stake for safeguarding our democratic processes in an era of disinformation. We may not get to every issue in our time together this afternoon, but we hope you will bring up those most important to you during the question-and-answer session. So let’s get to it.
We are very fortunate to have with us today Kat Duffy, who is a senior fellow for digital and cyberspace policy at the Council on Foreign Relations. She previously directed the Task Force for a Trustworthy Future Web at the Atlantic Council’s Digital Forensic Research Lab. There’s a lot more to that bio, but you have it available, so look at it at your leisure.
And Elaine Kamarck, who is a senior fellow in governance studies and director of the Center for Effective Public Management at the Brookings Institution. She is also a lecturer in public policy at the Harvard Kennedy School of Government. And again, to get more on Elaine, you can look at the bios you all have.
As a reminder, today’s discussion is on the record, and will be posted on CFR.org and the Brookings website.
So, Elaine, let’s start with you. How would you describe the technological threats and landscape around this year’s elections?
KAMARCK: Well, thank you, Marcus.
I would describe this as a perfect storm. In the election of 2024, we’re finding several things coming together.
First of all, the technology for doing deepfakes, for spreading disinformation, for having disinformation moved by bots and other algorithms is better than ever. Anybody can use it, and it’s hard to detect what’s fake and what’s not fake. So that’s the first problem. And we don’t really have—we don’t really have an easy way to discern what we’re—what we’re seeing and where it came from.
The second thing that’s going on is that we are in the midst of a highly polarized period in our—in our country. And so if you’re on one side, you are absolutely willing to believe anything about the other side even if it’s kind of unbelievable itself. People want to think that the other side is even worse than maybe it is. And so we’ve got a very sort of ripe public for disinformation.
And finally, we have a very close election. The election is so close that even if hundreds of thousands of people say, oh, that story is nonsense, all it takes is a couple thousand to swing a state in a very tight race.
So let me start the discussion today by taking you through what I call the anatomy of disinformation. In other words, how do these things happen? And I want to start with a story that I’m sure everybody has heard, which is the story out of Springfield, Ohio, about how Haitian immigrants in Springfield were capturing people’s cats and dogs and eating them. Now, it has been widely debunked by many people, and yet, here’s how it spread.
Back in June, local Facebook groups in Ohio posted pictures of children chasing geese and ducks in the local parks. Then somehow rumors about these animals going missing, and possibly being eaten by Haitian immigrants, started showing up on these Facebook posts.
By August, a neo-Nazi group called Blood Tribe started posting rumors on Telegram and Gab, and members of the organization actually marched in Springfield on August 10. One Springfield resident, Anthony Harris, told the Springfield City Commission that migrants were eating ducks in the park. And the same claim was being spread on Gab, and then, of course, on X. The story then gets picked up by something called the @EndWokeness account, which is also on X, and it used a photo of a man holding a bird. There was no confirmation of the man’s nationality, no confirmation of where the photo was taken, or anything. Rumors about the geese and the ducks turned into conspiracies about cats and dogs and household pets. And false claims were spread by far-right activists; local Republicans in Springfield, Ohio; and neo-Nazis.
On the fifth of September, a Trump follower called LemmiesLuLu, at @buckeyegirrl, posted the story on X, and she had two thousand followers. By the sixth of September, it was picked up by a site called End Wokeness with three million followers. And on the seventh of September, a guy named Ian Miles Cheong, who has millions of followers, posted a photo of police arresting a woman with a dead cat. It was found out later, though, that the photo was taken in Canton, Ohio, not Springfield. The woman was not Haitian. In fact, she was mentally ill and taken into custody, where she faces animal cruelty charges.
On the ninth, a guy named Tyler Oliveira began posting interviews about people whose cats and dogs had disappeared. And that’s the day that J.D. Vance, the vice presidential candidate for the Republican Party, picked up the claim and posted it on X saying, “Reports now show that people have had their pets abducted and eaten by people who shouldn’t be in our country. Where is our border czar?” Obviously, a reference to Kamala Harris. And then the very next day, Trump made the baseless claim during the presidential debate. Trump and Vance are urging their supporters to keep the cat memes going, even though Vance acknowledged that the claims were false but thought it was worth it to keep the discussion about immigration going.
Now, what’s the result of the evolution of this story, classic disinformation? Springfield has had thirty-three bomb threats. It has had to evacuate public buildings, and state police have been put in all the schools. This is what my coauthor Darrell West and I mean in our new book, which we call Lies That Kill, because a lot of these things do, in fact, end up with somebody getting hurt, or somebody even dying. Let’s look back at the Hillary Clinton campaign in 2016. You’ll remember an episode called Pizzagate. And the story was this: that leaders of the Democratic Party were engaged in a child pornography ring, and that they were keeping abused children in the basement of a pizza parlor in Washington called Comet Ping Pong.
One well-meaning young father named Edgar Welch in North Carolina was incensed at this story, which had spread all over the internet. And, by the way, this was early in our experience with disinformation, so the Hillary campaign didn’t know what to do with it. They weren’t really on top of this. So this young man got in his truck with a shotgun and a handgun, drove to Comet Ping Pong, shot his way in, and couldn’t find the door to the basement, because there was no door to the basement; there was no basement. He finally found a closet with a very stunned and frightened pizza delivery guy unloading dough into the closet. And by that time, the police arrived, and Edgar Welch had to admit that, no, there were no children being held in this pizza parlor.
He could have killed somebody, right? He was armed. He could have killed somebody. He was ready to go. He thought he was going to be doing something great for humanity by rescuing these children. Where did this come from? OK, it came from John Podesta’s hacked emails, where he talked with his brother and his friends about having a cheese pizza party, CP, which 4chan conspiracy theorists claimed was really short for child pornography. And as this developed, they looked into John Podesta’s emails about having pizza parties and somehow turned this into a child pornography ring. That ended up with poor Mr. Welch shooting into a pizza parlor in Washington, DC.
Now think about this for a moment. And this is the essence of what we think needs to happen right away. We need common sense. The public needs to be very alert to this sort of thing, and they need to apply common sense. So start with Hillary Clinton. By the time the Pizzagate story broke, Hillary Clinton had been a national and international figure for twenty-five years. Twenty-five years. Every time—some of us can remember this—every time Hillary Clinton changed her hairstyle, it was a news story, OK? Her finances were gone over with a fine-tooth comb, and, actually, they found a minor scandal in Whitewater as a result of doing that. How on earth would Hillary Clinton be involved with an international crime ring for a quarter of a century, and nobody noticed, nobody knew anything? OK, that’s what I mean by common sense.
Again, don’t you think that if a lot of people in Springfield, Ohio were actually losing their pets, that, in fact, people would look into it, and that maybe you would find some evidence of pets being slaughtered in a park, or something like that? In other words, a lot of these things simply need common sense applied to them. And yet, in the polarized world we’re in, we are all ready to believe absolutely the worst thing about anybody else. And then you add in the algorithms and the bots. The more exciting, the more horrifying the story is, the more these stories move at the speed of light across the internet. And of course, the more they move, the more they get picked up, and the more they get picked up, the more they spread. And so it’s very hard to stop these stories and stop the disinformation. And sometimes they do result in actually hurting people or in killing people. And that’s why we call them lies that kill.
MABRY: Thank you, Elaine, for that very, very chilling testimony. We will come back to some of these antidotes, right, how to combat these challenges.
I want to go to Kat. And, Kat, your assessment of the greatest technological challenges and threats that we face in the current campaign.
DUFFY: So I don’t actually think the greatest threats are technological. I think the greatest threats are sociological. I think they are about the response of a population, not only in the United States but abroad, that feels unsettled and confused by a technology that is transformative. The rise of generative AI tools over the past two years has made it dramatically easier, and cheaper, and faster to generate seemingly real text, audio, photos, and videos. Videos, I would say, are not quite as far along as the other media, but when I speak with information researchers and scholars in this area, our greatest concern, frankly, is twofold.
One, it’s something called the liar’s dividend. The rise of the liar’s dividend. And this is a term that was created by scholars Danielle Citron and Bobby Chesney back in 2018. It’s not a new concept in information circles, right? But the idea of the liar’s dividend, it’s that there can be no accountability if nothing can be true. So if everything can be fake, then nothing can be true. And that means it’s very hard to be accountable for one’s actions. This is a nonpartisan issue. This is a sociological phenomenon. But a good example of it in the most recent memory is Mark Robinson’s response in North Carolina to a very clear history of having used the same social media handle across a number of pretty awful sites in which he was using pretty terrible rhetoric.
Now, first, I should say, his argument has been that there is a conspiracy against him using AI-generated tools, and that this was not real, that all these posts were AI-generated. Anyone who has been a social media researcher will tell you those were not AI-generated. Someone using the same social media handle for years and years and years and doing not-smart things with it is a far more likely scenario than someone going in and trying to fake social media entries from different platforms over decades. So none of that has the smell of an AI-generated campaign.
But even three years ago the discourse around those images or around those posts would have been, what does it mean about the candidate that he did this? And instead, the liar’s dividend allows Robinson to make the discourse did he or did he not do this? That is a fundamentally different debate. And when the discourse is whether or not it actually happened, you then don’t get to the substance of the actual candidate who is asking for your vote, right? You’re just having fact arbitrage.
Now, what’s interesting is that there has been an AI-generated campaign, if you will, by a Democratic PAC in North Carolina now against Mark Robinson. It’s run by a group, a PAC, called Americans for Prosparody, P-A-R-O-D-Y. And these are AI-generated ads where Mark Robinson’s words are sort of put into his own mouth in different contexts. But in these ads, he has something like fourteen fingers, right? And there’s humans in the ads with their heads coming out. It’s very clearly AI-generated, very clearly a parody, very clearly a spoof.
This takes me to the larger, again, sociological phenomenon, which is that we don’t have an agreement even within households, within schools, within communities, certainly not across states or across the country, on what an acceptable use of this technology is, and what is not. And so we are in the Wild West at the moment in terms of how this technology is being deployed. Interestingly, some states have tried to deal with that. Indiana passed a law banning the use of AI in campaigns unless it is disclosed.
And yet, just this week, as the first entry in a million-dollar ad buy, the Republican candidate for Senate in Indiana put out an ad against his opponent where campaign signs behind her at a rally were all digitally altered to reflect a totally different message, something about no gas stoves. That wasn’t true. It looks quite realistic in the photo. And it is, in fact, in violation of Indiana law. The campaign finally came back and said, no, no, no, that was a mistake; we meant to say that we had used AI on those images. And so those ads are still running, but the use of AI in them is now being disclosed.
First of all, disclosing it doesn’t mean that people conceptualize it differently, right? The neural pathways and the cognitive biases that shape the different ways we receive information are not something we fully understand yet. And so the way that people absorb these technologies, disclosure or not, is still not entirely clear to us. And that takes me to the second critical concern, which is a loss of faith generally in the credibility of the elections.
There’s a really interesting survey that came out from the Rainey Center this morning in which they polled a wide sample of individuals and asked for their top concerns in terms of technology. And it was privacy, kid safety, and electoral integrity as the top three results. There’s also a recent poll that came out from Issue One, where they were polling on questions of electoral integrity and credibility. And only 37 percent of Americans said they would have a lot of trust in the results of the 2024 election, regardless of who wins. And Trump supporters were substantially less likely than Harris supporters to say that they would trust the election results a lot, regardless of who wins.
So only 21 percent of Trump supporters said they would trust the election a lot if Trump lost, whereas 60 percent of Harris supporters said they would trust the election a lot if Harris lost. And so when we then have reporting around possibilities of foreign influence, around the possibility of cyberattacks, around the role of deepfakes, all of those elements then feed into a greater and greater fear that our elections are not, in fact, credible, that they will not be free, and that they will not be fair. And that, by extension, we cannot trust the republic that we have and the process that we have.
When, in fact, while there certainly have been attempts by China, by Russia, by Iran to involve themselves in our electoral ecosystem in terms of information, and while the U.S. government and major companies are certainly monitoring cyberthreats to U.S. electoral systems, there’s no evidence at all that this election would be any less secure or any less credible than any that we have had before. In fact, efforts to secure it and protect it are probably even greater than they have been before. But the feeling, the loss of trust—we all know that once you lose trust, building it back is so incredibly hard to do.
And so for me, the biggest concern is not the technology. It’s the use of the technology outpacing our societal agreement on what makes it OK, and where and how we trust it or would use it. And that is resulting in an overarching sense that we’re in a different moment in our democracy, which I think we actually may truly be.
MABRY: Again, incredibly chilling. And if you combine what both of you said, the perfect storm that Elaine started describing around election 2024, with disinformation, polarization, and what we know is going to be a very, very close election, likely decided by a few thousand people in half a dozen states, plus what Kat was talking about, no agreement, no law, no consensus on any of this, the technology and how to combat disinformation, it adds up to a very, very scary place for American democracy. I mean, I would also add that last night was the vice presidential debate, where J.D. Vance, the Republican vice presidential nominee, once again would not say that Donald Trump had lost the 2020 election.
And the difference between 2016, 2018, and now is that, at least back then, we all agreed there should be objective truth and we should try to search for it. We’re now in a place where we can’t even agree on objective truth anymore. And that is, wow, shocking. So, in that landscape, right before we throw it open to questions, I do want to give each of you the opportunity to say: is there hope in that reality? One could argue the bots and the businesses are doing their job. They’re increasing shareholder value. They are making money. They’re triggering all those little things in the user’s brain to get them to click. And that’s how they make their money. That’s their business model. Given all that, where do we find hope for our democracy and for ourselves? And, Elaine, you want to start with that one, and then we’ll go to Kat?
And, Elaine, you’re muted. There you go.
KAMARCK: Happy to start with it, sure. Look, there’s a short term and a long term, OK? In the short term, we really need to flood the airwaves with truth, to fight the false information coming out there. Now, to their credit, the secretaries of state in the United States are actually doing a pretty good job doing that. They are demystifying the election procedure, something that they never even thought to do before. So take the case of the conspiracy theory about the 2020 election in Phoenix, Arizona, where the Republicans alleged that the Chinese had sent in forty thousand ballots marked for Joe Biden. And everybody was looking at the ballots with a magnifying glass to try to see if there was bamboo in them, because apparently Chinese paper has bamboo in it.
And, I mean, first of all, there’s a common sense aspect to this. Do you really think that the Chinese would send bamboo ballots to the United States if they were trying to get them into the election system? I don’t think so. But more importantly, this told the officials in Arizona: we need to explain things like ballot security, like where ballots are printed, like how they are held under lock and key. We need to explain things. The simplest thing, which those of us who’ve been in politics for a long time understand but a lot of people don’t know, is that every time ballots are being counted, no matter where, there is a representative from the Republican Party and a representative from the Democratic Party, and, if there’s another party with enough votes, a representative from that party, in the room watching this, armed with the depositions.
At this moment, there are thousands and thousands of Democratic lawyers and Republican lawyers getting ready for Election Day. Now, unless one party is completely asleep at the switch, you hear about these things right afterwards. They go right to court, you know, and they move their way through the system. Of all the things that Trump said went wrong in 2020, there were only sixty-two lawsuits filed, and he lost sixty-one of them, because there was nothing to them. So there is a system in place.
And so what the secretaries of state have been doing is they’ve been educating people on election administration. Usually a boring topic. Not anymore. In Michigan, there was some false information put out on the internet. The secretary of state took that information, stamped false on it, and threw it back out on the internet, and on the airwaves, and every place. So you have to just simply fight back. And you have to understand that competition actually works. You know, competition in our marketplace—in our capitalist system generally works so that after a while people don’t buy cars where the doors fall off, right? We get pretty smart. There’s always somebody out there saying, don’t buy that, right? Well, competition works in elections too, because the political parties each have a stake, and they are each watching. So that makes it hard to manipulate the election system.
I think countering lies and using common sense are the only short-term options here. When we look longer term, we have to realize that disinformation makes people money. And I don’t think we really understood this as a society until Alex Jones was put on trial for broadcasting widely that the children murdered at Sandy Hook were in fact fake, that this was all a fake deal put on by people who supported gun control. Because of that, the parents of those children brought a lawsuit against Alex Jones. Turns out Alex Jones had a lot of money. Turns out that his show had subscriber money, that he sold merchandise, that he had speaking fees. There was a lot of money there.
Disinformation often is financially lucrative, which is why people keep on doing it. So I think that looking at the money angle is one way we can get at this in the long run. And then I think there are other things, like regulating the artificial intelligence companies, requiring algorithm audits of those companies, requiring, for instance, social media companies to have real-name registration for accounts, so that there’s an actual person who can be held accountable if something goes wrong, if somebody gets killed, OK? A real person who you can go after.
Realize this: a lot of the bad things, really bad things, that happen from disinformation are crimes, OK? They just happen to have started on the internet. So you don’t need new laws against defamation. We have laws against defamation. But somehow, they’re not being applied and prosecuted in this digital world. And ultimately, we have to prosecute bad actors, OK? There has to be a cost to this, particularly when the lies end up causing real harm to real people. And I think these are all things we’re going to have to work through the legal system, et cetera. But in the short run, it’s common sense and publicity.
MABRY: Kat.
DUFFY: I mean, we have, what, thirty-five, thirty-four days at this point? Is that where we’re at? I’m going to focus on, in the next thirty-four days, where should we have hope? I would say there is exceptional reporting and exceptional monitoring coming out, not only from CISA, the Cybersecurity and Infrastructure Security Agency within the Department of Homeland Security that monitors electoral security and looks for cyberattacks, but also from the leading companies. Microsoft has excellent reporting. Cloudflare sometimes does reporting. Google does reporting. So there is a tremendous focus. And there is no nation in the world that is more capable and more technologically sophisticated, in terms of its government and other actors, at monitoring for irregularities or monitoring for cyberattacks.
Now, I would say in the week or so leading up to the election, if we try to learn from elections that have happened around the world, I think there is a greater risk of some types of information being pushed out there as real when they may not be. Audio deepfakes, I think, are the highest risk, because we have the fewest cognitive tools to vet them. And we have seen that in a number of elections around the world. We also have an incredibly skilled population of individuals who can work on quick analysis and quick debunking of what has been falsified and what hasn’t. And so I think it will be incumbent on the media, frankly, not to report on rumors, but instead to report on facts. Because once you start reporting on the rumor, the rumor becomes the truth in people’s heads. It’s very hard to take that back.
I also think there is an enormous amount that can be done by simply going and volunteering at the polls, and supporting the poll workers, and supporting your secretaries of state, working with community groups to support the secretaries of state. What we see in global electoral monitoring is that a very significant percentage of someone’s feeling that an election was credible or trustworthy reflects their experience of voting, their human experience of Election Day or of casting their vote. Did it feel safe? Did it feel secure? And so making sure that our polling stations are well staffed, are friendly, feel safe, making sure that place, process, and timing are all incredibly clear. The secretaries of state are all working on that.
It’s interesting. There are a lot of mixed feelings about what information, if any, different social media platforms should carry about elections. But there’s actually not a lot of disagreement, according to polls. There’s pretty widespread consensus that social media platforms can do a lot to ensure that people know where their local polling station is, how to register to vote, when the voting window is, and what that process is. That isn’t something where people have a lot of anxiety in terms of that information being pushed out. So I also think that we’ll see a really concerted effort among those who do have information to get out good information as fast as they can in the coming weeks.
One of the challenges will be that some of that information won’t become clear until weeks after the election, right? It’s not always clear in real time. I think another challenge, and I’ve spoken with several secretaries of state about this, is that their protocols aren’t necessarily set up to generate a clear result on the day of the election. And the longer the window between Election Day and a clear winner, especially at the presidential level but also at state and local levels, the greater the window for bad information and conspiracies to build. And so I think the faster we can get to definitive results, and, to Elaine’s point, the more transparency we can have with that process, the more we can do to instill a stronger feeling of confidence and hope, if not full confidence, among the electorate.
MABRY: I know here at CNN one of the things that we talk about a lot, and that we’re committing ourselves to doing for our audiences between now and Election Day, is flooding the zone with that truth, as Elaine was talking about. So, all the transparency we can give about what voters and audiences can expect between Election Day and when we get a result, and why it may take some time. Last time, four years ago, it was more than a week. And so we can expect that same kind of delay. And, as you said, the dangers of false information and conspiracy theories rushing into that gap are huge. So we’re going to do all we can to educate the public on why they can expect a delay. We’re going to work really hard on that.
At this time—
DUFFY: And I would—I would be remiss if I didn’t say that CFR and Brookings will also be doing all that they can to educate the public. And so there will be resources available. There are, and there will continue to be resources on both of our websites that give people as much fact-based information as we can about the current dynamics.
I know we’re going to move to questions, but if I could add one quick thing on the idea of flooding the zone with true information. This question of what is true, again, is a very divisive one. And so the other way that this has been approached in other countries, and that has proven empirically to be useful, is through the concept of prebunking. And the idea of prebunking is that instead you try to increase media or digital literacy among a population by letting them know the tone and tenor of the unreliable information that is likely to be coming at them.
So a lot of the worst information will also be the information that is the most emotionally heightening when you see it. If you feel a sense of outrage, if you feel a sense of shock, if you feel a sense of anger, that’s actually a really good indication that you should probably try to double-check whatever it is that you’re reading against some other sources. And so there is actually a fair amount that can be done in terms of educating the public in the lead-up to the election to say, hey, don’t be a sucker. Don’t be manipulated. If you’re getting that kind of reaction, the odds are extremely high that this information is being fed to you to generate a reaction and to get you to share it, right? To get you to send it along to others. And that is how this spirals. So I also think the lessons we’ve learned about prebunking in other countries could be really useful and applicable in the United States.
MABRY: Excellent. Excellent.
At this time, I would like to invite all of you to join the conversation with your questions. We have about 1,500 people registered for this virtual event, so please limit yourself to one concise question. We’ll do our best to get to as many as possible before we have to wrap up. Deanna will now give instructions on how to join the question queue.
OPERATOR: (Gives queuing instructions.)
We will take the first question from students at Lewis University in Romeoville, Illinois, who ask: What would be a good way for the government to combat disinformation on social media platforms without jeopardizing the First Amendment’s freedom of speech? Is there a way to reach a balance of free speech while also minimizing the effects of disinformation? And how can U.S. voters be better prepared and educated on recognizing false information?
MABRY: Really great question to start. I think Kat’s last response does something for the second part of that question. But who wants to take the first part, Elaine or Kat?
KAMARCK: Oh, I’ll just say very simply, we need to know who says what. We need to know who is doing this. Are there real people behind this? It’s just too easy on the internet now to make fake names and to hide your identity. And the problem with hiding your identity is that if you cause something really bad to happen, if you engage in consumer fraud, if you engage in inciting violence, if you engage in targeting somebody who is then murdered or attacked, you need to be found and prosecuted. And right now, that doesn’t happen. So I’d say it’s very simple. Start by making sure that behind the accounts on these social media platforms there is a real human being who can be held accountable, if need be.
MABRY: Kat, would you like to add anything?
DUFFY: It’s certainly an excellent question. And it’s one that has, I think, pervaded this entire electoral cycle. We’ve had a lot of Supreme Court decisions on these questions, including Murthy v. Missouri earlier this summer. One immediate impact has been that CISA, again, the agency within DHS, no longer engages on questions of content. They only engage on questions of cybersecurity and electoral threats that we’re seeing in the networks. This is a very tricky issue. I mean, I think it’s important to remember that if you are a company in the United States, you have your own free speech rights vis-à-vis the U.S. government. You have no obligation as a company to protect or not violate the First Amendment. Only the government can violate the First Amendment. And this is something that I think occasionally gets a little bit lost in the discourse.
Every company and every platform within those companies has its own different policies and procedures, and those change with time as well. And so the way that you see Facebook, for example, handling this now, versus the way Facebook handled it four years ago, has shifted dramatically. The way that Telegram operates is different than the way that Reddit works, which is different than Discord, which is different than WhatsApp, which is different than your group chat, which is different than YouTube, which is different than Instagram, and on, and on, and on. And so I think that part of the problem that we’re seeing with these echo chambers, at least, is that people tend to gravitate to the platforms, and then to the groups within those platforms, that feed their own internal biases. And then they will also go to media outlets that do the same. And so we really end up in these strong echo chambers.
I think the best approach is this: certainly, if the government, in working with the companies and researchers, becomes aware that there is a foreign influence campaign at play, that is something the government can take on, because at that point you’re not talking about free speech, right? You are talking about national security and a foreign influence campaign. That’s not the majority of what’s happening, though, right? The majority of this—the call is coming from inside the house. And what we’re talking about is Americans—even if it’s a foreign-generated narrative—Americans picking up that narrative and amplifying it. And that’s when they start to have their own free speech rights, again, not vis-à-vis the companies. The companies have complete control over the content they put up and the content they don’t put up, with a very limited number of exceptions around things like child pornography, terrorist content, you know, fraud. There are a couple of exceptions there.
But we have Americans who are engaging in this speech and amplifying it. And so I don’t think it’s as simple as a content moderation question or a governmental policy. I think we’re just going to have to learn as a society how we navigate this collectively. There will be a bit of a caveat emptor situation here, and people will end up gravitating to the platforms they feel have the most use for them. I, for example, no longer post on X, because I have found it to have become such a toxic site at this point. So I primarily now live on LinkedIn, whereas I used to live on Twitter.
MABRY: Thank you.
Deanna, next question.
OPERATOR: We will take the next question from Sarah Valero.
Q: Hi, there. Thank you so much for this conversation. This is Sarah Valero from the Global Association of Risk Professionals.
Those of us who are on this webinar understand the nuance and importance of information and fact-checking online. I want to note, though, that, of course, the majority of this country are average citizens going about their daily lives. To bring up a point that Elaine was making, there is an individual behind the social media and technology. What actions can we take with our family, our friends, our community to share this in ways that avoid the challenge of being divisive? Because I think for some people that seems a bit intimidating. They may not want to even watch the news or see anything because of the divisiveness and the noise. So how can we, as individuals, reach those folks that we know and best communicate with them? Thank you.
MABRY: Another great question. Who’d like to take that one? I’ll just have one of you take it, and then we’ll go back to questions. So who’s—
KAMARCK: All right. I’ll try that one. Listen, that is a great question. And I will come back to common sense. And I’ll tell you another little story about this. You all may know of a football player called Damar Hamlin. Damar Hamlin just played for the Buffalo Bills this past Sunday. And in January of 2023, he got hit in the chest and he suffered cardiac arrest. He was taken off the field, and people didn’t know if he was going to live. A very scary situation. That spring, there was a producer making a movie called Died Suddenly. And the theme of the movie was that COVID vaccines were killing people in the prime of life. It wasn’t just old people. They were actually killing young athletes like Damar Hamlin. And they put Damar Hamlin in the movie. And the movie came out over the summer.
Well, in September, Damar Hamlin was back on the field playing for the Buffalo Bills. And somebody asked the movie producer, wait a minute, the guy didn’t die. Look, he’s right there. The producer said, oh, it’s a body double. Now, this is a sort of technique you can use with some people. Think about it for a moment. You could probably find a Black man in his twenties who looked like Damar Hamlin in America, OK. You probably could. Could you, however, take this person and put them on an NFL field and expect them to play in the NFL, where you have the most skilled and fabulous athletes in the world? Could you really? Don’t you think somebody would notice? (Laughs.) Don’t you think that maybe the athlete would say to themselves, oh, I’m not doing that? Don’t you think his teammates might notice?
In other words, there are so many of these things that just don’t add up. Just don’t add up. Look at the assassination attempt on Donald Trump in Pennsylvania just a few weeks ago. Immediately, left-wing social media was filled with: oh, he made this up, it’s all a fake, he did it to get sympathy, his campaign was lagging, he set the whole thing up. Now, wait a minute. If you were going to set this up, would you, in fact, hire some twenty-year-old nobody to shoot at you—(laughs)—to shoot at your head, OK? The guy wasn’t a sharpshooter from the IDF or anything like that, right? He was just some, you know, confused young man, right? That’s kind of a risky thing to do. Do you think somebody would really do that, OK?
So, again, you’ve got to take this back to common sense, because a lot of these things just simply don’t add up. And we’re in this very sensitive period where, as we’ve all remarked, the technology is really good, and people are really primed to believe anything. And I love Kat’s characterizing this as a sociological problem as opposed to a technological problem. As long as we’re in this period of time, the only thing you can do is to try to bring people back to a sense of truth and common sense. And that’s about as good as we can do until we get a better handle on the problem.
MABRY: Thank you, Elaine.
Deanna has our next question.
OPERATOR: We will take the next question from Shannon Raj Singh.
Q: Hi everybody. This is Shannon. I’m in California. I’m the former human rights counsel for Twitter, although I share Kat’s reticence about using the platform now. I think it’s a really different place.
Thanks so much for all of your remarks. And, Elaine, I thought it was really helpful how you articulated sort of the financial incentives around disinfo and just how lucrative the industry can be. And I wondered if you could just drill down on that a bit more, because it seems to me that the disinfo-for-hire industry is really the elephant in the room in addressing some of these dynamics. How do we—how do we address that as a community? What do we do about that, in sort of protecting electoral integrity? Thank you.
KAMARCK: Disinformation, the financial incentives that—
Q: Disinfo-for-hire, yeah, exactly.
KAMARCK: Yeah. Yeah. I think we have to prosecute and I think we have to sue, OK? We have a legal system. It is set up for this. If somebody commits consumer fraud, they can be sued. They can lose their money. I think that the Alex Jones trial, where he was fined hundreds of millions of dollars, OK, that’s actually very instructive. But somehow we don’t do that, right? Part of the reason we don’t do that is we can’t find these people. We can’t find the real human beings who are behind a lot of this stuff. And I think once we crack that nut, then we might be able to see the legal system moving more effectively into prosecution of things that are real crimes.
Now, you know, there’s always going to be a gray area, as there always is with these things. I was giving a talk like this the other day, and in the audience was a woman from Austria. And she said there is a satire magazine in Austria that everyone in Austria knows is a satire magazine. They make fun of politicians, they make fun of movie stars, et cetera. It’s very funny. But in the last couple of years, they’ve seen stories from their magazine showing up in newspapers and magazines throughout Europe—in France, and Portugal, and Germany—as if they were true. OK, so there’s a lot of difficult stuff to sort out here. And satire, of course, is one of them.
We have somebody here at Brookings who was just writing about memes. You know, memes can be really funny. I love memes. I mean, we all do. They can be really funny. But they can also be disinformation hiding under satire. And so these things get really, really tough. But we do have to start looking at this, and I think the way to begin is to look at consequences. If the consequences are serious, then you’ve got to peel it back. If the consequences are somebody’s hurt feelings, I’m not sure that that’s a crime.
MABRY: Kat, did you have something on that?
DUFFY: Yeah, if I could, there’s a little bit of nuance that I want to add to that in terms of the content farms, right? Because, Shannon, as you know better than anyone, it’s not just a disinformation economy. It’s an attention economy. It’s an engagement economy. And so it’s the vast production of clickbait. Some of that is disinformation, but often it’s not disinformation with any political incentive whatsoever. It is simply an attempt to get people to click, because it feeds into the sort of creator-economy revenue model that is behind a lot of these different platforms.
And so, you know, one of the challenges is that a lot of these farms are operating on the border of, like, Thailand and Myanmar, right? They’ll be operating out of Malaysia. Some of them are operating out of Latin America, out of Africa. And so we don’t have the ability to simply go and arrest these individuals who are sitting in these content production farms. And they are not engaging in anything with political drivers or incentives at all. They are literally just looking, algorithmically, at what is the stuff we can pump out that gets the most clicks and the most engagement. And then, by getting the most clicks and the most engagement, they become intriguing as a potential advertising revenue source for a platform. So then they start to feed into the financial ecosystem of that platform. They are now a creator on that platform. They have a certain rank on that platform. It makes it harder to get to them. And then they just add subscribers and links and more advertising revenue. And that is the whole economy behind this.
And so the challenge is that this isn’t just about disinformation. Especially with the rise of generative AI tools, the speed with which you can essentially pump pollutants into the information ecosystem has increased so wildly that, to some degree, what we’re going to have to start dealing with is these revenue models, and what examinations of these revenue models need to look like, and also how we develop a stronger suite of trust and safety tools that a range of platforms and new entrants into the space could use to identify these bots, to take them down, to share information about them, and to coordinate on who these actors are. Because these are actors who are operating across fifty platforms, not just one. And so if that information is trapped inside of one platform, as a society we’re not being served.
And so there’s also a lot that we can do in terms of building out stronger trust and safety tooling that is open source and that is available, that different companies can build off of. But also helping companies to develop the information sharing ecosystems that they need to, so that they can start to—as Elaine has very rightly pointed out—get to the source, and try to cut some of this off at the spigot, as opposed to trying to deal with just the flow of water across the board.
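One way to picture the kind of shared, open-source trust and safety tooling Duffy describes is a simple cross-platform coordination check. The sketch below is purely illustrative and assumes nothing about any company's actual systems: the Post type, the flag_coordinated_accounts function, and the thresholds are all hypothetical, intended only to show how identical content pushed by many accounts across many platforms might be surfaced for human review.

```python
# Illustrative sketch only: flagging possible coordinated inauthentic behavior
# by noticing when the same (normalized) content appears on several platforms.
# All names and thresholds here are hypothetical, not any platform's real API.

import hashlib
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Post:
    platform: str    # e.g. "platform_a"
    account_id: str  # account handle on that platform
    text: str        # post content

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivially edited copies still match."""
    return " ".join(text.lower().split())

def flag_coordinated_accounts(posts: list[Post], min_platforms: int = 3):
    """Group accounts that push identical normalized content on several platforms."""
    by_content: dict[str, set[tuple[str, str]]] = defaultdict(set)
    for post in posts:
        digest = hashlib.sha256(normalize(post.text).encode()).hexdigest()
        by_content[digest].add((post.platform, post.account_id))

    flagged = []
    for digest, accounts in by_content.items():
        platforms = {platform for platform, _ in accounts}
        if len(platforms) >= min_platforms:
            flagged.append({"content_hash": digest, "accounts": sorted(accounts)})
    return flagged

if __name__ == "__main__":
    sample = [
        Post("platform_a", "acct1", "Shocking!! Share before they delete this."),
        Post("platform_b", "acct2", "shocking!!  share before they delete this."),
        Post("platform_c", "acct3", "Shocking!! Share before they delete this."),
    ]
    print(flag_coordinated_accounts(sample))
```

Real systems would, of course, combine many more signals (timing, account age, network structure), but the design point matches the one made above: the value comes from sharing these signals across platforms rather than trapping them inside one.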
MABRY: Which are all huge business opportunities, you know, all those questions that you lay out.
I hope we can get in one more question. (Laughs.)
OPERATOR: We will take the next question from Makenzie Holland.
Q: Hi.
Kind of going off of this theme of cutting this off at the spigot, how much does the domain name system contribute to the issue of disinformation? Is there anything to be done about that? I’m thinking of the DOJ seizure of, I think it was, thirty-two Russian-backed domains last month in a disinformation campaign mimicking legitimate news sites. And there’s nothing stopping anyone from creating more domain names like that, where those articles are then spread on social media platforms. And organizations like the Internet Corporation for Assigned Names and Numbers, ICANN, do not moderate content. So how do you address that? How much does that domain name process contribute to disinformation?
DUFFY: OK, I actually think AI can help with this. And it’s very rare for me to say that, but in this instance I think it might, y’all. So with the DOJ taking those domains down—first of all, I just think we’re going to see, not only in the United States but in a lot of other countries, increased capacity to identify this, and then to report those domains and coordinate on those domains. Here I’m thinking in terms of coordinated inauthentic behavior and just completely false information getting pumped out through the lens of foreign influence, as opposed to, like, an American who’s generating these websites. And this is where it gets really tricky on the First Amendment issues, because, you know, a Russian cyber mercenary group has no First Amendment right in the United States, but an American citizen certainly does.
And so I do think we’ll see increased capacity here. I also know that at a lot of the largest companies and at the platforms that are hosting these sites, there are reporting mechanisms as well for saying that this site is fraudulent, this site is malicious. And so there’s also potentially something that can be done around those reporting features that allows the companies to work with each other to spot sites that are getting a lot of traffic and/or a lot of reports but that don’t have a long tail. So the back end and the analytics there are something where I think AI is going to be helpful in analyzing those sets of data more quickly than humans might have been able to. But it is going to be a learning process. And the minute we find one threat and shut it down, people are very creative about spinning up another one. But it’s such a great point. I’m really glad you raised it.
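To make the back-end triage Duffy gestures at more concrete, here is a toy heuristic. It is purely illustrative and not any company's or agency's actual method: the DomainSignals fields, weights, and threshold are all hypothetical, sketching how newly registered lookalike domains with sudden traffic and many abuse reports might be surfaced for human review.

```python
# Illustrative sketch only: a crude scoring heuristic for surfacing suspicious
# domains (new, high-traffic, heavily reported, lookalike names) for analysts.
# Field names, weights, and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class DomainSignals:
    domain: str
    age_days: int               # days since the domain was registered
    daily_visits: int           # recent traffic volume
    abuse_reports: int          # reports of fraud or impersonation
    mimics_known_outlet: bool   # near-duplicate of a legitimate news site name

def suspicion_score(d: DomainSignals) -> float:
    """Additive score: newer domains with spiking traffic and reports rank higher."""
    score = 0.0
    if d.age_days < 90:
        score += 2.0                       # very little history behind the site
    if d.daily_visits > 50_000:
        score += 1.5                       # traffic out of proportion to its age
    score += min(d.abuse_reports, 20) * 0.25
    if d.mimics_known_outlet:
        score += 3.0                       # lookalike of a real news brand
    return score

def triage(domains: list[DomainSignals], threshold: float = 4.0) -> list[str]:
    """Return domains that should be escalated to a human analyst."""
    return [d.domain for d in domains if suspicion_score(d) >= threshold]

if __name__ == "__main__":
    candidates = [
        DomainSignals("example-news-lookalike.com", 14, 120_000, 35, True),
        DomainSignals("long-running-blog.org", 4000, 800, 0, False),
    ]
    print(triage(candidates))  # only the lookalike domain is escalated
```

The point of the sketch is the workflow, not the weights: automated scoring narrows the field, and humans make the takedown or referral decisions, which is consistent with the learning process described above.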
MABRY: Well, that is a relatively upbeat point to end on, Kat. (Laughs.) So we’re going to take that win. And I’m going to thank you, Kat and Elaine, for leading this discussion, which was very insightful. As chilling as it was, it also gave us some green shoots, places where we can bring our ingenuity and our dedication to our democracy. And thank you all for participating. We hope you will visit CFR.org/election2024 for additional nonpartisan information and analysis on the election, and that you will join us for the next CFR-Brookings virtual public event, on the future of the Middle East, on Tuesday, October 8, at 4:00 p.m. Eastern Time. And I can’t imagine what would be more timely than that. So thank you all for joining us today. Have a terrific day.
KAMARCK: Thank you.
(END)