Social Media

  • Digital Policy
    Invisible Workers on the Global Assembly Line
    In her newly published book, Behind the Screen: Content Moderation in the Shadows of Social Media, Dr. Sarah T. Roberts discusses the world of content moderation, which increasingly plays a major role in keeping social media firms functioning.
  • Social Issues
    Invisible Workers on the Global Assembly Line: Behind the Screen
    In her new book, Behind the Screen: Content Moderation in the Shadows of Social Media, Dr. Sarah T. Roberts reveals the inner workings of the world of content moderation on social media platforms. 
  • Russia
    Disinformation Colonialism and African Internet Policy
    Russia’s recent disinformation campaign in African countries highlights the challenges that African states face in crafting internet policy that is responsive to both external threats and internal political dynamics. African countries will likely not push back against Russian disinformation campaigns, but rather will try to exploit the campaigns for their own international and domestic political goals. 
  • Media
    Distinguished Voices Series With Ted Koppel
    Ted Koppel discusses his distinguished career and the changing nature of journalism and social media. The Distinguished Voices Series focuses particular attention on the contributions made by a prominent individual at a critical juncture in the history of the country or the world.
  • Cybersecurity
    Cyber Week in Review: November 8, 2019
    Twitter suspends terrorist group accounts, backtracking from former exceptions; Russia strives for sovereign internet with uncertain future; The United States and Taiwan hold first joint cyberwar exercise; Dutch chipmaking supplier delays shipment to Chinese semiconductor manufacturer; and India’s space agency is the latest victim of suspected North Korean cyberattack.
  • United States
    Connecting the World: The Internet's Next Billion Users
    Panelists discuss the prospect of an inclusive internet, how the technology industry is reaching and interacting with developing countries, and the policy implications of a connected world. HAMMER: OK. Ladies and gentlemen, welcome to today’s Council on Foreign Relations meeting. The topic is “Connecting the World: The Internet’s Next Billion Users.” I am Craig Hammer. I’m a program manager at the World Bank, and I’ll be presiding over our discussion this evening. Our discussion will take two parts this evening. I’ll kick things off with a facilitated discussion until about seven p.m., and then we’ll open up to members for a thirty-minute Q&A, and we’ll end at 7:30 sharp. Let me begin by briefly introducing our distinguished speakers by their titles, since you have their detailed bios. Robert Pepper is head of global connectivity policy and planning at Facebook. Jen Spies is product manager of the Next Billion Users Initiative at Google. And Mike Pisa is a policy fellow at the Center for Global Development here in D.C. Welcome. The implications of adding a billion users to the internet from across the world, from a range of cultures and contexts, with all their infinite complexities, are vast. The pendulum swings a full 180 degrees, from the tremendous potential of the social and economic development implications of internet connectivity to the truly terrifying: from the collision between social media and surveillance, to the incitement of genocide through social media platforms. My sense is that we’ll cover a lot of ground in our discussion this evening. But I’d like to begin on a slightly more positive side of the spectrum. Jen, Google’s Next Billion Users Initiative is making products specifically with emerging markets in mind. Talk about the local benefits that can occur with the expansion of large digital platforms in emerging markets. SPIES: Sure. And thanks for having us tonight. I think from Google’s perspective one of the primary benefits of building for the next billion users is that in 2019 if you can make a product that succeeds in emerging markets, it’s really a product that can succeed globally at scale. And I think one of the key factors contributing to that is building a product that works on a mobile device. So I think one of the key trends that we see in this market is what we call the leapfrog effect. So if you think about who are these users getting online today, they’ve often never been on a desktop PC. If you think about my internet user journey, I came online first on a computer, and then around 2010 there was this huge shift to mobile where people actually started spending more time on mobile than they did on a desktop PC. And actually, the internal team structure of these companies, Google and Facebook, was completely pivoted to accommodate this shift in time spent that was seen. If you think about, you know, a twenty-year-old coming online in South Africa today, they’re coming online through perhaps a $20 smartphone, and that’s their experience of the internet. And so understanding how to build products that are mobile first and that are native to that platform is really critical to winning in NBU markets. So I think that that’s a key component that can contribute to success both in the U.S. and in these markets. HAMMER: In terms of the local development, though, I mean, speak to what NBU is thinking about in terms of local economic growth and social development. SPIES: Yeah.
One of our key tenets is to grow economies in NBU markets, and the internet is a true democratizing force that can help do that. I think if you think about—one of the really big engines of growth we’ve seen is small business empowerment. In many regions in NBU you have shopkeepers coming online. Oftentimes they’re inheriting these businesses from an older generation, and they really understand the power of digital and the power of using their phone and internet platforms to sell goods. And so using platforms like Google and Facebook and WhatsApp to help find new customers, to grow their business, to have secure payments and secure transaction is really something that we’re seeing happening in NBU markets that’s sort of growing the size of the digital economy, and helping local economies, and helping people participate in the digital economy in a way that is accelerating every year. HAMMER: Robert, Facebook is working with governments and others around the world to extend connectivity to the 49 percent of the global population that’s not yet connected. Can you talk about some of the major issues or barriers that you see and what’s happening as you address them? PEPPER: Sure. Thank you. And again, thanks for putting this together. So the 49 percent is a reference to the fact that there’s three-and-a-half—3.8 billion people connected and 3.8 billion people not connected. There’s a little bit more—the ITU just came out just recently and said it’s just crossed that line. That’s a real milestone. So we think of it not as the next billion, but as the next 3.8 billion. And so—and the reason that that’s really important goes to some of the reasons that Jen just talked about. Each year now for the last four—we’re into our fourth year—we do a study with the Economist Intelligence Unit called the Inclusive Internet Index. And it looks at over time 120 countries, fifty-three indicators for each country, looking at availability, affordability, relevance, and readiness. And one of the things that the Economist found in the most recent study, released earlier this year, was that the progress that we’ve seen over the last decade seems to have stalled out for the lowest-income countries. In other words, the upper-income, upper-middle-, and lower-middle-income countries are continuing to improve; the lowest-income countries did not improve year over year. So you look at that, and then you combine that with something we do as part of that project, a study called the Value of Internet Survey, and each year the theme shifts. The theme last year that we looked at was how do people view the internet in terms of livelihood. And this is across the globe, so this is, right, low income—I mean, it’s all one hundred countries. It didn’t make any difference: 75—73 to 78 percent, so call it 75 percent of people in a country said the internet helped them get a job, they used the internet to help them do better at work, they used the internet to learn more as it relates to their livelihood. If that’s the case, the fact that we have this, you know, stalling out at the lowest income, at the bottom of the pyramid, that’s a real problem, right? We don’t know whether it was a one-year data anomaly. We’ll find out when our—we get the latest data in. That’s bad enough, but if it’s a trend that’s really bad. And that’s why, if you’re looking at the economic development and the benefits—whether it’s small business or individuals or healthcare, education—it can’t be about the next billion. It has to be about everybody. 
And that’s why we’re looking at it in that respect. So in terms of the barriers, going back to the analysis, the barriers to getting another billion, two billion, four billion—by the way, out of the 3.8 (billion), probably a billion and a half are children you don’t want online. So it’s not quite as big a number, but it’s still important. It’s a combination of supply side and demand side. The prerequisite, you need the network. You need the connection. You need a minimum of 3G, preferably 4G connection. It’s all mobile. That’s the minimum that you need. If it doesn’t exist, you have to have it. But that’s only the necessary but not sufficient prerequisite. It’s all about the applications, the demand driving local content, local language, relevant content, including e-gov applications, e-commerce, e-entertainment. Those are all things that you need. And people on the readiness side need to know how to get on safely. So it’s about digital literacy. It’s about knowing how to set your settings. It’s knowing how to prevent not just spam, but fraud, phishing attacks. So all of those things need to be in place for people to be able to get online and then benefit from being online. So some of the things that, you know, we’ve been doing—and we work with government—those are the barriers. We work with governments, using the analysis from the Inclusive Internet Index as a diagnostic, on how to build digital—national digital strategies. But for example, in Uganda, there’s an—multiple operators. Airtel, which is an Indian company which has networks in Africa, has a 3G network. They’re building out a 4G network in Kampala and the cities, but not in the rural areas. In the rural areas, what prevents them from doing that is they don’t have what’s called backhaul, right? They’re using narrowband microwave, which won’t even support a single smartphone, right? They need fiber to the towers. So we coinvested with them and jointly built a 770-kilometer fiberoptic network. It’s an open fiber. We do everything on an open basis. Any of the operators can use dark fibers. They’ve now connected cell sites and they’re converting from 2G directly to 4G. They’re leapfrogging, all right? That’s supply side. That’s a prerequisite. We’ve co-built—we’re building a subsea cable from Brazil to Argentina which is the first new subsea cable to Argentina in eighteen years. The single cable is going to triple the international capacity to Argentina. Again, open fiber; it’s not just for us. So those are, you know—and we can talk more about some of the other infrastructure investments with technologies, but we’re also working on the applications side with zero-rated services, digital literacy, education applications, just like you are. I mean, there—you know, the internet companies all have programs to build local content, local language, relevant content, working with governments to create those kinds of applications. HAMMER: Let’s zoom out a bit and talk about the larger kind of governance context. So, Mike, we’ve had some discussion already of the pros of digital platform penetration to emerging economies. Let’s talk about the greater international cooperation, what it could or should look like. Is some version of some global governance mechanism feasible, from a policy perspective? 
PISA: Well, so let me start by—I know we’ve heard about the pros, but I think because I focus on governance, and often the discussions about governance kind of drift towards dealing with challenges and mitigating risks, they kind of have a negative—take a negative tint. So before I start talking about the governance issues, I do want to underline, especially from a lower-income-country perspective, the benefits that the things we have I think started to take for granted here—you know, reducing transaction costs, reducing the cost of search, enabling greater access to information—what this means. And I think oftentimes when we’re talking about this through a development lens we tend to forget that some of the benefits that we’re debating here in OECD and rich countries play out very differently in lower-income countries. So one quick example would be the gig economy that shared—that two-sided platforms enable. I mean, here we have concerns about what gig workers—are gig workers losing out in terms of the level of informality attached to their jobs, the level of worker protections? Whereas in poorer countries this is not a debate, right? First of all, those worker protections don’t exist. And they actually can be a step—and most of the economic activity is in the informal economy, so there’s actually a step towards formalization by just participating on one of these platforms. So I think another issue is that these benefits are quite diffuse, so they tend to I think get discounted in debates around the benefits of certain policies. So, actually, I want to start at a national level rather than go into global governance first, if you don’t mind, and that’s because I think you have to understand this from the perspective of national policymakers. And actually, I think where I want to start is the idea of why are governments in the position of assessing or reassessing how they approach data governance? Which inevitably means: Why are they assessing or reassessing their relationship vis-à-vis big tech platforms? And I think it’s two reasons. One is, you know, we’re more aware today of the risks of the misuse of personal data. And, two, there’s a growing recognition of the value of data as an economic input. And so I’ll start with the second, actually. From the perspective of policymakers—and let’s just say policymakers in poorer countries—again, the benefits are diffuse, but they also—there’s also an aspect of very concentrated wealth being generated from personal data. So if you look at—you know, one quick stat is 90 percent of the—let’s just start with market capitalization. Ninety percent of market capitalization of digital platforms in the world accrues to companies in the U.S. and China, and two-thirds of that value accrues to eight companies, and we all know their names. Now, if you’re a policymaker in a poorer country, and you’ve listened to the World Bank and others rightfully tell you that in order to take the next step in the global economy and the digital economy you need to build an enabling framework, and that will lead to wealth creation, you’re going to look—first of all, the benefits, again, diffuse, oftentimes not feeding into GDP statistics for reasons we can touch on, oftentimes not feeding into revenue. And yet, you see the pie growing elsewhere. And to me, I think that leads to—that’s why the data-as-oil metaphor has had so much staying—sticking power. It’s not—we all know that it’s not a very apt metaphor for a number of reasons, right? 
But there is a feeling among at least the policymakers we’ve spoken to at the Center for Global Development, where I work—and maybe it’s because we talk to folks from ministries of finance and central banks more often than not—but that they’re being shortchanged somehow in this—in this interaction. And I think the worst thing about the data-as-oil metaphor is it leads to this idea that data is inherently valuable or valuable on its own, right? And it’s kind of like the flip of the Hal Varian argument, that the real value that platforms provide is not the data—not the value they derive from the data, but it’s the knowhow and it’s the—you know, the proprietary algorithms and it’s the human capital. And of course, the truth is somewhere in the middle. But once you’ve kind of accepted this frame as data-as-oil, it becomes all about this idea of, oh, well, I have to somehow grasp or hold that data to have a better negotiating position in this discussion. And that leads to forced localization. And I think—I mean, and these localization policies have been around since the internet, but there has been an acceleration of their use and there also has been a change in the rhetoric around them. It’s much more—you see it much more frequently that countries justify the use of localization policy for economic reasons. I think the clearest example is the India—the draft data protection bill, which says we are arguing in favor of localization because it’s going to help India develop its digital infrastructure and its AI industry. And I think—my concern in this space is that right now the policymakers we talk to seem to think that there’s a continuum of policies: on one hand you let big tech platforms run free, and on the other you kind of pursue these forced data-localization policies, and there’s not much in the middle. I do think eventually you’ll have to have some enlightened taxation policies that will fill that middle, and that’s maybe something we can talk about in the future. HAMMER: So, Jen, how are technologies of the internet impacting users in sub-Saharan Africa specifically? SPIES: Yeah. We think about it in terms of a multistep journey. And in sub-Saharan Africa specifically, the first pillar of that is access and affordability. I think if you compare those economies to others in Asia, Southeast Asia, Latin America, data as a percent of income is the highest, there’s the lowest smartphone penetration. And so access to internet, access to smartphones is really the key pillar to participating in the digital economy, to getting online. That’s step one. Step two is what we sort of call ecosystem and platform development. So you can imagine you bought your first smartphone, you bought a prepaid data plan, you’re getting online; if there’s not content that’s locally translated, that’s relevant for you—if you’re going on YouTube to watch celebrities that you care about and you don’t recognize any of the names—that’s not a great experience. And like you said, there’s a lot of job hunting that happens online. I think the behaviors on the internet are fairly similar. Like, people are talking to their friends. They are watching content that they care about. They are searching for jobs or ways to make money. And if that is not comprehensible or relevant or localized to you, then it’s also inaccessible. 
So a lot of parts of Google are focused on making sure that there are local Indian content creators uploading videos on YouTube and using machine learning algorithms to translate into local dialects and making sure that Google Assistant can translate into vernacular languages that are maybe more intuitive for a first-time internet user to talk to and to access. And I think the third stage, and the pillar that we think about, is platform development and business creation. And so once you have all of these stages of, you know, you’re online, you can afford a device, you can afford internet, there is content in your local language, you can sort of participate in the internet as we experience it in the U.S., then there’s this opportunity to create businesses that are localized, that are taking advantage of trends like social commerce, of digital payments, of, you know, people really sort of hacking our products and using them in interesting ways. But you can build an ecosystem on top of that. So I think this trend of leapfrogging to mobile, of really leaning into social commerce and using platforms like WhatsApp to sort of run a business end-to-end on a mobile device, of developing content locally in local languages are all trends we see that are really pronounced in sub-Saharan Africa, but also apply to other NBU markets. HAMMER: Robert, maybe you can speak to some of the public goods that Facebook is doing to help support that leapfrogging process. PEPPER: Yeah. So just picking up a little bit on that. One of the things that we’ve looked at in terms of analysis goes to the devices. Out of the three-point-eight billion people, we believe that there’s about a billion people who actually could receive a 3G or 4G signal but cannot afford the device. In Africa, it’s 240 million people. So I mean, you don’t have to build a new network, right? You have to make the device more affordable, but it has to be a good enough device so that it’s actually either a very high-end feature phone or a low-end smartphone. But that actually works and that’s not hackable, in terms of the device. So you know, as we were discussing earlier, one of the things they are doing is working on that, plus other variables, with the World Bank on its Africa moonshot project, which was just launched. I guess it was at the annual meeting about ten days ago. And if you haven’t—if any people here haven’t seen that, you should take a look at it. It’s a great document. It was actually a working group paper that was released by the U.N. Broadband Commission, put together by the Bank in conjunction with the Commission, on the Africa moonshot. And it goes through, you know, what needs to be done. And, you know, it’s—you have to take a deep breath, right? The goal of the moonshot is to have everybody over ten years old in Africa connected by 2030—eleven years, right? They estimate it’s going to cost $100 billion. What’s interesting is about twenty-three, twenty-four billion (dollars) is capex for networks, supply—pure supply side. Eighteen billion (dollars) is the demand side. It’s building the applications, the local content, the e-gov applications. Creating local businesses that do that. There’s a small policy piece and some others. The biggest chunk, over forty-five billion (dollars), is what you don’t even think about, which is opex, maintenance, replacement, right? The really boring stuff that makes it work, right? And so the question is, where does that come from?
And that’s going to have to be—and the Bank’s talking about putting a lot of money into it. And the device affordability is a chunk, and that’s one of the things we’re working on. So there are these great opportunities. And in terms of some of the public good aspects, you know, what we’ve seen is that there are literally thousands of businesses that in just sub-Saharan Africa—that exist only online. There is no brick and mortar. So it’s not like there was a shop that then went online. These are businesses that would not exist if they were not online. And they’re selling things not just in their communities but broadly in their—in their region, or their country, and even sometimes to other countries in the region, or internationally outside of their continent. So we know these benefits, but they’re—again, I want to come back on to—there are these, you know, very legitimate concerns and questions. And I think you have to separate—and I thought it was really good the way you laid out, for example, data localization. Data localization in many places is really sort of an industrial policy that, you know, usually starts off as being dressed up as we need it for security. But localizing data actually makes your data less secure. We know that. And the irony, by the way, of the India draft localization bill, if you think about the India BPO industry, their back-office industry, literally it employs millions of people, it’s worth billions of dollars. And it only existed because Indian companies could process data and build call centers for companies based in the U.S. and Europe and have customers in the U.S. and Europe. If there was data localization, right, and it applied in the reverse, that industry would never exist in India, right? They don’t think about it that way. HAMMER: So, Mike, let me ask you one question. I think, just because these issues are very much alive in much of the work where we happen to be, it may not be the case elsewhere around the world, but let’s talk briefly about data privacy, disinformation. As the digital platform market concentration expands, let’s say, to emerging economies, have these issues like data privacy and disinformation manifested in lower-income countries? And how have policymakers in those countries begun to respond? PISA: So I think all countries are grappling with these issues. I mean, I’m not going to go through the list of how they’ve manifested themselves—(laughs)—but I think all countries are grappling with these issues in different ways. I’m actually—since I punted on your question on the global governance I’m going to punt back to that global governance question around these issues, because I think it’s going to point to why governance is going to be so tricky to do at the global level on these questions. So I think you can make an argument around data privacy and data protection. I’m going to stick with data protection, because I think privacy is such a culturally laden term. That there are efficiency gains to be made by having a harmonized approach globally, right, that would allow companies—big tech companies—to apply the same standards globally, rather than having bespoke models for different countries that they work in. And frankly, I mean, you know, countries have been willing—I’m sorry—companies have been willing to change, often at great cost, how they go about their business in order to comply with GDPR because Europe is the world’s second-biggest market. 
If Benin, or Bhutan, or another country asked—you know, enacts privacy regulations that are quite different and viewed by industry as quite onerous, then they probably won’t make that same distinction. So I think there’s a stronger argument for harmonization in that space. Other rationales for global governance. When you have cross-border spillovers. So I came from U.S. Treasury, where there’s a—you know, a large architecture around protecting against banking crisis spillovers, right? So we have the FSB, we have the Basel Committee, and many other structures. And we don’t have those similar things for digital platforms, even though there may be some spillovers—I think the spillovers are less acute and less immediate. But you can have some arguments around, you know, whether there are spillovers, about how companies managed or regulated in one country affect political stability in another country. And there are more clear spillovers in the areas like cybersecurity. But I think the one argument I want to make is I think the overarching need for global governance in this space is we need to make the experience for individual users, but also for governments of using the open internet better, in the sense that there is now a more clear challenge to the open internet. And actually, I watched Robert’s—his Turing speech in 2015 over the weekend, like we talked about it. He said in 2015, we’re at the crossroads because national governments are reasserting more control over internet policy, and they’re often going towards a more closed model. You said this two years before President Xi in China says, you know, we’re going to make China a cyber superpower, and we are going to treat—we are going to, you know, kind of set China as a model, in his words it was, for countries who want to speed development while preserving independence, which I think is code for closed—have a closed internet system. So I think we’re still at the crossroads. We’ve probably gone a few steps in the wrong direction. But these debates are still happening. And my view is, the reason you want to pursue—you might want to think about pursuing global governance over specific issues areas related to how you regulate digital platforms, is because you want to make governments more comfortable with the open internet model. And right now, there’s a lot of reasons to not be comfortable. Having said all that, I think the challenges of having effective global governance in this space are huge. I mean, just take disinformation. I mean, how countries go about regulating, or monitoring, or treating, or dealing with disinformation is going to depend on how they value privacy, how they value transparency, how they value freedom of speech. As we can tell from debates that have happened here in Washington over the last few weeks, and speeches, that we’re still figuring this out. And why do we have any belief that having, you know, a global discussion on disinformation on digital platforms would be any different than a discussion on freedom of press in the more analog world? So I don’t think we’re likely to actually have productive conversations unless we take very narrow, specific areas where cross-border spillovers are real, and the efficiency gains from cooperation are really clear. HAMMER: Thank you, Mike. PISA: Yeah. HAMMER: So at this time I’d like to invite members to join our conversation with your questions. So let me first begin by reminding everyone this meeting is on the record. 
When I call you, please wait for the microphone and speak directly into it. Please stand and state your name and affiliation before asking a question. And please do limit yourself to one question and keep it concise. This will allow as many members as possible to ask questions and share insights. So let’s begin here. Q: All right. Thank you. My name’s Dave Harden, and actually my colleague Sara Agarwal is over there. And we run a tech company that has its research and development lab in Ramallah. So imagine kind of all the big challenges that you all have described and think about, and then add the political overview on that. All of our employees have stock options, and there could be a dramatic impact if we were able to drive a big exit. One of the challenges that we have, and this goes to the question of how do you get to the next billion or three-point-eight billion, if they’re only users as opposed to developers, or creators, or value-adders into the internet. And so it’s very easy to pick up risk capital in Palo Alto, or in Cambridge, or in Shanghai. Not so easy to do it when you have operations in Ramallah, and Amman, and you’re struggling to do that. So how do you see kind of the nature of capital over the next—this next ten-year period where you’re trying to increase the number of actors that have access to the internet? Thank you. HAMMER: So I think we should be looking, I think, closely at Jen and Robert on this one. But I think we can start with you. SPIES: Yeah. I can answer somewhat narrowly, in terms of how the company has thought about expanding in a probably conservative but effective way in these markets. So we’ve actually opened offices globally. And we think about product development offices, which are probably the analog to the startups, and the engineers, and coders, and people building these products. We’ve opened tech hubs in Bangalore and in Singapore, in markets that are just closer than Mountain View to our next billion users. So I know there is a balance between rapidly expanding everywhere and trying to be in every market, and have engineers in every market, and trying to build centers where you have sort of hubs and knowledge that builds up over time. I know that’s one way that the company’s thought about just sort of expanding slowly and methodically and trying to find a product market fit with various initiatives before expanding too quickly. So it’s perhaps not like on the capital allocation side, but at least internally it’s how we’ve thought about expanding our footprint. PEPPER: No, we think about it very similarly in terms of, you know, where the engineers are, and growing globally. But that doesn’t—I mean, that’s toward Facebook. That’s not really your question. Your question is, you know, where are the—where’s the venture capital going to come from to fund startups in Ramallah, Kinshasa, you know, Cape Town, Bangkok? And that—and that is a challenge. In fact, earlier today I was having a conversation with a former minister from an African country, who is trying—he’s now in the U.S., at one of our major universities, on a two-year fellowship. And one of the things that he’s working on is trying to figure out exactly that. He was telling me that, you know, there’s capital available in Africa, but the—but the people in Africa who can write the checks are writing checks in Europe, not in Africa. And that really bothers him. The question is, why and what can be done about that.
And what he was saying is that, you know, you have to have the right—create an ecosystem in terms of law, local law, and also taxation, bankruptcy law, rule of—you know, rule of law—all of the things that we take for granted—before people are going to write checks, whether it’s at—you know, for a startup, an angel writing a small check, or a VC writing a large check. And that’s not always easy. So this is—this is one of the big questions. The only optimism I see, right, near-term optimism, is if you go back ten years, fifteen years—no, not even fifteen. Ten years, when I was having meetings with startups in China, when I was—actually I worked for Cisco then—they were asking the same question, right? They now have loads of money, capital markets. You went to Europe, you went to Berlin and talked to startups, they were struggling. They couldn’t get capital, right? At that point it was beginning to go to London, right? So I think that they’re—you know, the optimism is that it is happening, but it’s happening too slowly. But that is an issue, because we see huge talent globally. Huge talent—for example, there are startups in—and incubators, for example, in Kenya that are struggling to get capital to get to the next stage, right? It’s not for the lack of talent. It’s not for the lack of effort. It’s not for the lack of market, right? And so it’s—this is something that as a broader global community I think we need to—need to address. HAMMER: Let’s come here, please. Q: Thank you very much. Miriam Sapiro from— PEPPER: Put the mic closer, Miriam. Q: Oh! Can you hear me now? Is that better? Closer? I don’t—I have to eat it, like an ice cream cone? How’s that? HAMMER: There you go. (Laughter.) Q: (Laughs.) Mmm, delicious. Anyway, I want to focus for a minute more on the challenges because in some ways part of the conversation hasn’t really changed much in the last five, ten, fifteen years. Pepper’s nodding. And so, whether we’re talking about capital or infrastructure, we’re also talking about governments that either need to get out of the way or, better scenario, is to facilitate the kind of investment that’s needed. And yet, in the last few years, we’ve seen how in the developed world, especially in the U.S., the darker side of the internet has become more obvious. Whether it’s influencing elections, or cybercrime, or extremism, et cetera. And so how—you know, what do each of you, in your different roles, say to the governments, especially middle-income and lower where we really do want to be able to help more people get online, what is the ability that you think we have to try to make a difference in that regard and build, if not a governance structure, which is proving very challenging, at least some ways to acknowledge that we’ve learned some lessons from what we’ve seen in the developed world. And we will be able to try to help the less-developed world learn from them. PISA: So maybe I’ll start on that. I think—so the Pathways for Prosperity Commission from Oxford University’s Blavatnik School just did a survey of developing country policymakers, and what were their priorities around digital technology and the internet? And their first priority was job creation and the second was digital infrastructure. And much further down were these issues of data protection and data privacy. And then cybersecurity was up near the top, and also revenue collection—which I kind of alluded to before, right?
So they want to see some direct benefit into their own coffers, but again the highest priority for poorer countries—at least in the policymakers in that sample—were: How do we make the digital economy work for us? And I think the trust issue—you know, we’ve—my remarks have focused on governance, and I’ve talked about why I’m skeptical of a broad-based global governance framework. I think one step could be the U.S. taking useful steps to regulate the industry from here. And then the other, and I’ll turn this over to you guys, is the large platforms convincing their users and the governments where they operate, the countries where they operate, that they’ve kind of corrected from, learned from some of the missteps in the past. SPIES: And I can give a really specific, but hopefully enlightening, example. I’m a product manager on Google Station, which is Google’s free public wi-fi initiative in emerging markets. And we generally have, like, the most success when we go into a market and at launch we are partnering with local governments and, you know, the minister of telecommunications is on stage with us at launch championing the arrival of free public wi-fi to his country. So we like to look at this as a partnership with local government. And often we find partners in government who are really excited to bring this product to their country and champion it. I think on the—on the technical and regulatory aspect of it, we try to hold our product to the same data standards as GDPR, which means oftentimes when we’re evaluating new markets to go into if the local government has requests around data sharing or data—like, data regulation that is outside of the bounds of that threshold, it prevents us from going into market. And we just don’t have the—I think, the technical capabilities, because we do really value encrypted data and user privacy in a way that’s incompatible with some requests from governments that would like Station to be in market. So I think that’s just a concrete example, where we’re prevented in operating in certain markets where we are holding ourselves to the standard that’s perhaps different than, like, the local government. PEPPER: In terms of the product—very similar. We have something called Express Wi-Fi, which is very similar to the Google Station. And there are issues, right, because there are some countries in which they want it to be open, so they have access to the data. And that’s not what we do. But going to another way to think about your question is, you know, you’re all familiar with Mark’s op-ed back in April, where he called for sensible regulation. It’s not—it’s no—you know, the shift has been—again, from ten years ago to today it was the internet is different, the internet is special, don’t regulate the internet, all right? The internet has now grown up. And we realize that there will be regulation. The question is, what form, what type? And, you know, we think the regulation needs to be not only—you know, how do you—what do you mean by sensible, proportionate, targeted, and not sort of overreaching? And you have to get this balance between—you want to continue the innovation, but you’d need frameworks and guardrails. You also have a continuum of what we mean by regulation. There are some things in which we think it’s completely appropriate, it’s within the remit of government. One of the big issues is election advertising. We think actually just tell us what the law is; we will abide by it. 
We don’t think we should be making law based upon, or decisions about—we probably have greater transparency than any medium—broadcast, print, certainly more than print. We have more transparency on election advertising than anybody anywhere. You want to take a—you want to find an ad and find out who bought it, where it ran, how many people saw it—in political advertising, we make that available. But that’s not the regulatory side, so what should the framework be? What should the rules be? So there are some things—and we believe, by the way—again, picking up on what Mike said, having harmonized privacy regulation we think is actually very important, because you don’t want—in the U.S. we don’t want 51 different flavors. And globally, you can’t be a global company without having a framework. GDPR’s a good starting point. We abide by it. Could it be improved here and there? Absolutely. But as a place to start for an approach to being able to have a global framework of, in quotes, “governance for privacy,” yeah. And that’s a regulation. But there are other things that we believe are going to be sort of regulated in more traditional self-regulatory ways, but there’s still going to be—so there’ll be industry codes. And the analogies there in the U.S., for example, are the ratings from MPA—it’s now MPA; it’s no longer MPAA—the Motion Picture Association. So again, that’s a form of self-regulation. So you’re going to see, I think, a range of what the instruments are and the relative participation of governments in that process. But we’re already regulated and the question is what type of regulation makes the most sense to get the balance right between protecting users/consumers, protecting democracy and protecting and fostering continuing—to foster innovation and investment. HAMMER: More questions? Here, please? Q: Thanks. Hi. I’m Sabeen Dhanani with the Center for Digital Development at USAID. One thing we hear a lot about is 5G and how it’s going to completely again change the digital landscape. Given the infrastructure challenges you mentioned earlier and the regulatory challenges, what really—what is the realistic timeline on 5G in some of the emerging markets, and how should we all prepare to deal with the new set of challenges that that might pose? PEPPER: I’m somewhat of a 5G cynic. I think 5G actually is going to be transformational, but not in the way we’re hearing about the hype. Most people talk about—especially at a very publicity-driven level or certainly globally at the political level, it’s being characterized as this is going to be super 8K, high-definition video to your smart phone. No. Maybe eventually. That’s not the transformational part of it. To me, 5G’s transformational, first of all, in terms of the types of applications that will be enabled by 5G are, in the first instance, the most transformational ones are going to be on the industrial side. It’s going to be smart factories. It’s going to be augmented transportation. It’s going to be precision agriculture. In previous years people called it the internet of things. That will be enabled by 5G. That is going to have very significant transformational impact in terms of economies globally in the developed world and developing world. That architecture and the types of deployments that are going to happen with that are very different than sort of rebuilding our consumer-oriented mobile networks.
They are converting to 5G, but I can tell you that I’ve talked to CEOs of operators in developing countries, one in particular in the Middle East, and it’s a state-owned operator and he was told by the government, the king, this 5G thing, this is really important. I want you to build 5G right away. So he did. He converted his network to 5G. He said, I’m dying. I have no business case. Nobody’ll pay extra. My margins are crushed. I had to make these big investments. He said maybe, I hope someday, I’ll be able to justify that investment. So we hear a lot about 5G and I think it is going to be transformational, but not necessarily in the near term in the ways we’re hearing about. But I think it’s really important. But that’s going to be a very different type of deployment. The last point on that, what’s interesting is Germany just made some 5G spectrum available. Guess who won the auction for it? It was not a mobile operator. It was Siemens. They wanted that spectrum for 5G for smart factories and building smart factory systems. To me that was really interesting and a harbinger, I think, of what we may be seeing. HAMMER: We have a question here. Q: Welby Leaman, from Walmart. I totally buy the importance of social media, search, e-government to create that critical mass of demand for the next billion users to come online. Where do you rank retail in that, among the other sectors? Does retail have a big role to play in pulling people into the digital economy? And since the retail world’s now in sort of a big fight between Walmart and Amazon and others, between sort of a digital native approach and a Walmart approach which would be basically blending increasingly digitally enabled stores with e-commerce, do you see one or the other as more likely to pull people into the digital economy? Because our proposition has been that the blending gives people more on-ramps. So for example digital payment, you can go into a store, digitize your cash in Walmart Mexico and thus join the digital economy. SPIES: I think from everything I’ve seen it is—it can be really tough to go completely digitally native from a retail perspective in these markets. And the challenges often look very different than what a U.S. retailer would think about. So cash on demand, or cash on delivery, is still a huge percentage of payments in many of these markets and a lot of major retailers have built the option to pay through credit card, through online payments and then cash on delivery. I think Uber offered this option before offering it in the U.S. Second, trust and safety. So there’s a ton of issues with scams and fraud, and being able to have a verified checkmark by your badge or by your store and know that you’re not participating in a fraudulent purchase is incredibly important, more so than it is in the U.S. And I think lastly even just logistics. So there’s a ton of focus on being able to map out cities that previously maybe didn’t have a formal address system and it was informal directions that even the national post office wasn’t delivering to, like building an e-commerce company around those logistics is often something that’s going to be done, often it’s proprietary to local startups, if they’re able to crack that in a city. So yeah, I would say that for retail and e-commerce specifically, localization is incredibly important because the challenges are so unique to these markets.
Retail is still a much bigger percent of shopping than e-commerce, and often this blended approach is something that has to happen just by nature of business there. HAMMER: We’ll come here. Q: Mike Jobbins with Search for Common Ground. A lot of the societies that you’re talking about—Congo, Mali, Yemen, where these next billion people live, are as divided as Myanmar was five years ago and have the same level of internet penetration as Myanmar did. So what are the lessons that you take from Myanmar or any of these places as they come online to shift the social norms of accepted and expected behavior on your platforms? PEPPER: So Myanmar’s a good example, but it’s not alone. That goes to what I mentioned earlier about being prepared and digital literacy. Getting the first 3 billion people, 3.5 billion people online was relatively easy, compared to the next 3.5 billion. You take a look at basic literacy rates, you take a look at who’s connected and who’s not, even in emerging markets, and so the needs for even very basic digital literacy are extremely important. The training—and I mentioned earlier, giving people basic skills that when they go online, how do they set their settings on their phone? How do they—how can they know if there’s a phishing attack for information? How do they keep their passwords safe, or even know that they’re supposed to? So these actually, I think, are extremely important and—for example, we’re working—we have multiple digital literacy programs that we have in place and others, new ones that we’re building all the time. And in fact, one of the most effective ones which we used to call OTG, On The Ground, which nobody knew what that meant, and I kept saying, OK, it’s a great program but that’s a silly name. They actually now call it Internet 101, which I thought was—makes more sense. The first place they launched it was in Myanmar, and it’s been actually quite effective. And if you’re interested, I can get you information about that. But that to us is extremely important for a safe internet experience going forward, especially in places with low literacy and very limited digital literacy. We’re also working with NGOs, for NGOs to develop their own digital literacy programs in particular countries. SPIES: I’d say for us, often our values are baked into the product itself. With free wi-fi specifically, as we’ve looked into other companies, what you see is there’s a much lower standard of user data privacy and protection. A lot of people have gotten onto wi-fi networks that are not secure and they get hacked in some ways. And so we’ve talked a lot internally about how do you even shift the perception of this product experience in this industry, because what we’re offering is a much more secure encrypted experience that really prioritizes user privacy. HAMMER: (Off mic.) PISA: Sure, I think it’s a great question. I think if you look at the infrastructure that Facebook very belatedly put in place in Myanmar, the thing that’s striking to me is how expensive it must have been to deal with that problem ex-post. They have hundreds of local language content moderators. Obviously the development of new AI systems, those can scale rather cheaply. But I think as digital platforms broadly step into new countries and are used en masse in new countries where they have to kind of get up to speed with local language content moderation, it seems like to me that one question that arises is when is it not going to make sense as a business proposition?
Because if you look at—I know the Facebook average revenue per user last year in the United States was $27 per user, and in most—in sub-Saharan Africa was less than two dollars. So at a certain stage—and the next billion users, it’s a long game, right? And it’s a numbers game, but it’s not hugely profitable in the short term, or maybe even the medium term. And then the questions around how you do—how do you mitigate those risks, it seems like it can be a very costly exercise. So I wonder at some point does it become—do companies just kind of wash their hands of certain problematic situations? HAMMER: I think we have one more in the back, please. Q: Hi. Thank you for this session. My name is Ibrahim. I’m from Deloitte. My question is around cybersecurity and future threats with Google’s Sycamore quantum computer. Do we have things in place that ensure that these nations are secured in the future, especially around cybersecurity, encryption algorithms, and things like that? Also, looking at, like, the World Bank report, which is set from I believe 2020 to 2030, which is around the time that quantum computers and things like that will start being really widespread. Are there strategies in place to ensure that these nations are also secured from infrastructure and capacity building? SPIES: Yeah, and I wish I could speak more specifically to quantum computing and some of the cybersecurity efforts. I know on the policy side we are working mostly with some of the transnational bodies that are thinking about policies to put in place and what are global regulations that could be a template for countries on this. Some of the points raised earlier, any time you can get a framework that is globally applicable and scalable, that’s easier for a company whether you’re talking about privacy or cybersecurity. So I know we’re participating in those forums, but I don’t have more detail on them. Maybe Robert can speak to the Facebook efforts. PEPPER: We’re not doing quantum at this point. (Laughter.) SPIES: Yeah, just to pass that over to you. PISA: Neither is CGD, so—yeah. (Laughter.) HAMMER: Neither is the World Bank. And so, with that—(laughter)—I am so grateful to each of you for coming this evening. Thank you very much for spending your time and please thank our panelists for the insight they shared. (Applause.)
  • Russia
    Stemming the Tide of Global Disinformation
    Panelists discuss the extent of disinformation, its impact on democracy, and what can be done to prevent, mitigate, and stop its spread. THOMPSON: Welcome to today’s Council on Foreign Relations meeting on “Stemming the Tide of Global Disinformation.” I’m Nicholas Thompson. I’m the editor-in-chief of Wired. I’ll be your moderator today. Let’s get crackin’. Rick, how are you? STENGEL: Good. How are you? THOMPSON: Great. You have just spent the last three years writing about disinformation. He has a new book; it will be available later. You spent the last three years thinking about disinformation. Tell me how your thoughts deepened as you went along, because we all know why disinformation’s a problem. There’re some obvious reasons why it’s a problem. But now that you’ve spent more time thinking about it than anybody else, tell us what you learned that we don’t know. STENGEL: I don’t think I’ve spent more time thinking about it than the president has. (Laughter.) What a way to begin! THOMPSON: Yeah. (Laughter.) STENGEL: The other false premise of your question is that my thinking has deepened about it. So my book, Information Wars, is about the time I spent at the State Department countering disinformation, countering Russian disinformation, countering ISIS propaganda. And I had never really seen it before. I’d been a—I was editor of Time for a bunch of years, had always been in media, and after the annexation of Crimea by Putin in 2014, we saw this tsunami of disinformation around it, you know, recapitulating Putin’s lies about it, and it was a kind of a new world. And the idea of disinformation as opposed to misinformation is disinformation is deliberately false information used for a strategic purpose. Misinformation is something that’s just wrong, something that we all, you know, can get in the habit of it. And I saw this whole new world being born. I don’t mean to steal your thunder with the question, but inside we were talking about whether there’s more disinformation relative to correct information now in history than ever before. I don’t know the answer to that, but what I do know is it’s easier to access it. And once upon a time the Russians, who pioneered something called “active measures,” which was their idea that warfare, the future or the present of warfare is about information, not just kinetic warfare. The way they used to do it in the ’50s was they bought out a journalist in a remote newspaper in India to put out a false story about something and then the Russian media would start echoing it and then it would get into the mainstream. Now, they hire a bunch of kids to work in a troll farm in St. Petersburg and put it up on social media with no barrier to entry, no gatekeepers to prevent it from happening. And I don’t know the answer to whether there’s more of it, but there’s easier access to it. And I do think as we approach 2020, part of the other problem of disinformation is it’s not just a supply problem; it’s a demand problem. People want it. You know, confirmation bias means we seek out information that we agree with. If you’re likely to think that Hillary Clinton is running a child sex trafficking ring out of a pizza parlor in Washington, D.C., you’re likely to believe anything and seek out information that confirms that. That’s a problem, and that’s a human nature problem. THOMPSON: Paul, let me ask you a variation on this, having just listened to Rick. You’ve just published a report on this very topic. You could have written reports on lots of topics. 
BARRETT: I suppose. (Laughter.) THOMPSON: You’ve got a varied career. Look at the man’s bio. You know a lot of things. Why are you so worried about disinformation right now? BARRETT: Because it is afoot in the land, it is pervasive, and without a good distinction between real facts and fake facts, we can’t run a democracy in an effective way. People can only make honest political choices with real information. And I think we have at key moments and in key places a lot of false information, and intentionally false information. THOMPSON: And is it getting worse? Is it worse today than it was yesterday? Is it worse today than it was four years ago? BARRETT: I think that’s hard to say. I mean, I think it is—it was present and significant in 2016 and has not stopped since, and I think there’s every reason to think that we’ll see it kick up again as we get closer to the next election as well. THOMPSON: Amanda, how worried are you? BENNETT: You know, I’m going to—I’m going to be the dull and boring person at this party, because nothing I do has—I was going to say has anything to do with tech or bots or deep fakes or anything like that. THOMPSON: Help people set up zingers. (Laughter.) BENNETT: Well, let’s hope that that’s true. But my argument is that we are undervaluing the pursuit of straightforward, truthful, honest news and information in our fight to push back this other thing, these fake things. And that disinformation, misinformation—give me your definition of disinformation again. STENGEL: Deliberately false information used for a strategic purpose, nefarious purpose. BENNETT: So I’m the director of Voice of America and yes, we still exist; no, I don’t wear the funny hats anymore; and no, we don’t do propaganda. Thank you. But that definition right there, I will maintain that half of the world at least lives under that condition daily, with no other—no other news or information. And so this is nothing new, the thing that we’re talking about. If you’re talking about the kind of technologically sophisticated things, I’m going to be the boring person to say that this exists in great proliferation throughout the world already, and that what seems to be an antidote to it in many ways is putting something straightforward in front of people. THOMPSON: So I totally buy that. Are you also saying that we talk about disinformation too much in this country? BENNETT: No, I’m saying—I stipulate that everything you guys say is true. Everything is true, we should be worried, these deep fakes are a problem and we should— THOMPSON: And we should talk about it, or does talking about it so much make us think there’s more of it than there is? BENNETT: And this—the article that we just read back there that talks about cynicism, I think if you talk about this type of disinformation and misinformation and its goal being to breed cynicism and confusion, the fact that when you talk about straightforward truthful news and information being possible or desirable people roll their eyes at you, says to me that they’ve already kind of won, that we’ve already come to the idea that we—that that is not an effective way of pushing back at things, but actually we need technological solutions to things. STENGEL: I mean, the thing that has exponentially increased is user-generated content. Remember, the biggest platform in the world, in the history of the world, is all created by content that we put on it, not professional journalists. It’s not vetted.
I mean—and so that has—is the thing that has exponentially increased, and because it is created by regular folks and isn’t professional content, the possibility for disinformation, misinformation, anything wrong is that much higher. And one of the things we’ve also—and we will chat about—is the fact—is that the law, the Communications Decency Act that created all these platforms, does give primacy to third-party content and doesn’t give them any liability for publishing false content if it’s put on by a third-party person, as opposed to a professional journalist like we all are or were. THOMPSON: All right, but let me give each of you a hypothetical. So let’s assume I have two kids; they’ve just graduated from college. They’re really interested in this problem. One of them says, I’m going to go into deep fake detection. I’m going to figure out how to get rid of disinformation. I’m going to help Facebook fix their algorithms so they can identify disinformation. And the other says, I’m going to go be a reporter and I’m just going to tell the truth about everything and I’m going to tweet out all my stories. BARRETT: Well, one will have a job and the other one likely won’t. (Laughter.) THOMPSON: True. But who’s doing the more important work, the work that we need right now? BARRETT: Well, it’s—seems like equally important work. Sorry not to—(laughs)—not to go for your bait, but— BENNETT: I would have said I’ve been a really good parent if that’s—if I have—I have that choice there, I’d say I’ve been a really good parent, that both those things—(laughter)—are incredibly, incredibly useful. THOMPSON: Probably my kids are going to work for troll farms. But anyway—(laughter). STENGEL: You wouldn’t say the reporter. BENNETT: No, not necessarily. I’m not—I’m not saying that one of them is more. I’m saying that right now all this attention is going onto things that, actually, a lot of people out here in the room, it’s more scary because we can’t touch it—we can’t do anything about it—when, in fact, I think what’s happening is that your attention is being turned away from the fact that really truthful—people can distinguish truth from lies. One of the ways they can do it is by seeing things head-to-head and they can make decisions. I mean, the famous example is the Chinese trail (sic; train) derailment. Remember that, when they said nothing here to watch, nobody hurt? And the people that were there doing their—you know, uploading their photos, were showing that there actually was. And that caused a lot of dissonance in the Chinese media ecosystem. So I’m maintaining that not just there’s too much of it or we shouldn’t be doing it, just that there’s something else out there. There’s something else. STENGEL: I would—I’d actually tell my kid to do the deep fake detection, and I’ll tell you why. Because disinformation warfare—information warfare is asymmetrical warfare, right? It’s like a bunch of young people in a troll factory in St. Petersburg, which costs a relatively tiny amount of money compared to an F-35, can do more damage than an F-35. And so it’s asymmetric warfare in that countries that can’t afford missiles or jets or tankers or whatever can engage in this. So that, in that sense, what has also happened is the offensive weapons in disinformation war have matured and evolved faster than defensive weapons. We actually need better defensive weapons, and we need to spend more money on it.
So I would argue that somebody who could figure out a system to detect deep fakes instantly would be doing a lot of good for the world. THOMPSON: I appreciate that answer and I also appreciate something you said in there, which is offensive weapons. Should the United States have offensive weapons when it comes to disinformation? STENGEL: (Pause.) Are you talkin’ to me, Nick? (Laughter.) THOMPSON: Based on the way you paused, I am 100 percent talking to you. (Laughter.) STENGEL: Well, I mean, we do have—I mean, I’m not in government anymore, but I think I’m still—have to abide by the strictures of classified information and all of that, or I’ll be prosecuted by the State Department. But I—you know, we do have offensive weapons. I mean, there are—there are well-publicized examples of us using them in Iran, for example. I actually think— BARRETT: But those are cyberattack weapons, as opposed to actually spreading— STENGEL: Yes. BARRETT: —you know, spreading bad information. So what the U.S. Cyber Command does is actually pretty distinct from what we the United States could be doing, which is matching what the Russians are doing with information operations. And we—so far as I know, we don’t do that, at least not anymore, and I think that’s a good policy. I don’t think we should do that. I don’t think we should be in the truth-twisting business. STENGEL: Yeah. So Paul makes a good point, and I talk about this in the book. There’s a—on the spectrum of hard and soft power of information, the hard end of information war is cyberattacks, malware, things like that. The soft end is propaganda, content and this and that. On the soft end we don’t do—I mean, I was involved in the creation of the—of what is now known as the Global Engagement Center, which is a not-completely-funded department which is a kind of whole-of-government department residing at the State Department to combat disinformation. But again, it’s all done in a non-classified way. All the content is labeled U.S. Government. It doesn’t create false information or disinformation. THOMPSON: So what about—OK, so let’s take another example. So Amazon has this thing where they pay tons of people, some of whom work for Amazon, and they pay them to tweet what an amazing place it is to work at Amazon, and they give them scripts, right? And so there’s this kind of this steady flow of—it’s not false; these people may genuinely like to work for Amazon, particularly since they’re being paid to tweet. And so they tweet out, but just kind of garbage. Should the U.S. pay people to tweet out positive things about the U.S. image and tweet The Star-Spangled Banner in Russia? STENGEL: (Pause.) You’re still lookin’ at me. (Laughter.) Well, I— THOMPSON: We got a definitive answer of no. STENGEL: So Amanda and I— BARRETT: But that’s a little different. I mean, the idea of someone—you know, tweeting the United States is great and its enemies are not great. And doesn’t the State Department set up projects and programs that essentially do that? STENGEL: Look, once upon a time, the U.S. Information Agency, which was then folded into the State Department, did create what I would call positive propaganda about the U.S. I was dinged here on another panel a couple of years ago for saying that there’s such a thing as good propaganda as well as negative propaganda. I don’t think propaganda just is automatically a terrible thing and that nations do practice it. So all those trolls will get upset again. 
But we don’t really do what USIA used to do anymore, you know, in terms of Frank Capra “Why We Fight” documentaries and documentaries about great black athletes and things like that. I mean, all of which was true content, it just was used to give people a better picture of the United States. And I always argued when I was in government that we do that already. I think U.S.—I would always want to make people around the world be able to see U.S. media and not only what we say about ourselves that’s good, but what we say about ourselves that’s critical, so people see that we have an open press and what that’s like. I think that sends a great message, which is essentially the message that you send, Amanda. BENNETT: Hmm. The word “message” is a very, very dirty word at the Voice of America because that implies that you are deliberately moving your content in order to achieve a particular end. Yes, I say that we have an offensive weapon, and I do say that this whole argument has in fact won a little bit, because I’m going to tell you what I think our offensive weapon is and I’m going to see a collective eye-roll around the room, which is our most effective weapon is our First Amendment. And I say that we export the First Amendment and that people can tell the difference. Not completely. I stipulate that everything you guys are saying is true, that deep fakes and all this stuff in troll farms are bad and dangerous and hazardous. I’m glad that you guys are paying attention to it. But I’m also saying that—and I’m so glad you brought up that F-35, because my personal budget at the Voice of America is less than two F-35s. If anybody out there listening would like to help fix that problem anyplace, that would be great, because I think that we reach, you know, hundreds of millions of people around the world for a very small amount of money. And so the First Amendment, neutral news, truthful news, not messaging, independent of a government. People can tell if it’s—if it’s being moved around. Here’s my—here’s my question right now. You guys all read newspapers still, right? In paper? Any of you in this room? Somebody? Thank you. And sometimes you see these inserts like from the China Daily or from, you know, Abu Dhabi, the City of the Future, that kind of stuff? How many of you read them? Have you ever read a single word of them? One word? OK, a couple words out there. And why don’t you read them? Because you know that they are moving something, they are trying to sell you something. You’ll read the newspaper that surrounds it, but you’re not going to read the thing inside. That’s what I’m saying that propaganda is like, and that you can tell the difference. Maybe not if you have a good deep fake that’s doing things, but—so I agree that you guys are good, but people can tell the difference and it’s a worthwhile thing to do. It’s a very worthwhile thing to do. (Applause.) THOMPSON: All right, let’s move to the platforms, the social media platforms. That was a good answer. BENNETT: That wasn’t the eyeroll I was expecting. (Laughter.) THOMPSON: Standing up for truthful news? Journalists are going to—I’m certainly going to applaud that. All right, let’s talk about the technology platforms. Paul, you’ve just published a report on what they’re doing, what they need to do. Last time when we talked about the 2016 election, we mostly complained about Facebook and Twitter. After 2020 when we’re all diagnosing what went horribly wrong on the social media platforms, which ones will we be looking at?
BARRETT: Well, I say in my report that Instagram, which is owned by Twitter, a photo- and video-based— STENGEL: Owned by Facebook. THOMPSON: Owned by Facebook. BARRETT: Excuse me. Forgive me. By Facebook, I apologize—deserves more attention. And the main reason for that is because we already know that it is a disinformation magnet. The Russian Internet Research Agency, the main trolling operation that the Russians ran in 2016, had more engagement on Instagram than it did on either Facebook or Twitter. And experts in this area have pointed out to me that increasingly, disinformation is being conveyed visually, and that is Instagram’s specialty. And I think that’s the platform to focus one’s attention on, at least initially. STENGEL: It’s also harder to find— BARRETT: Harder to detect, that’s a very good point. THOMPSON: Rick, would you agree? STENGEL: I do agree. I mean, I—Paul’s report, by the way, is absolutely terrific, and it’s a great primer, I think, on disinformation, both what happened in 2016 and going forward. The Senate Intelligence Committee report that came out, I think two days ago— THOMPSON: Yeah. STENGEL: —you know, had a lot that Robert Mueller had, and the stuff in my book that Robert Mueller didn’t have—I’m just telling you that too. But one of the things that they did have is that the Russians have actually increased in terms of volume what they’ve been doing since 2016, and largely on Instagram and other platforms that we probably don’t even know about. What Mueller didn’t have—and I want to get to the platform things in a second—is that what the Internet Research Agency was doing was completely integrated with what Russian mainstream media was doing, with Russia Today and Sputnik and TASS. And with the Russian, you know, foreign minister, who used to echo canards and misinformation that was created from the Internet Research Agency and start talking about it at a press conference, and then it was covered worldwide. So it had a much greater impact than just the audiences that the Internet Research Agency was going for. But in terms of the platforms, I do think—and we—and Paul also talks about this in his report—they need to have more responsibility and more liability for the content that they publish. They cannot escape this idea that they’re—that they’re not publishers anymore. The gentleman from NewsGuard here, which is a fantastic new organization that is fact-checking information on the web. I actually stole some language from you about what the companies need to do. They can’t be liable the way Time magazine or Wired is for every word that they publish, but they have to make a good-faith effort to take down demonstrably false content, as Paul talks about. I would argue hate speech, speech that leads to violence, those—there’s no excuse for that, even if it’s framed as political speech. That should just be off, and they should be liable if they don’t take it off. THOMPSON: So let’s do an example. Let’s talk about, I don’t know, the famous example that came up was the video of Nancy Pelosi slowed down so it looked like she was slurring her speech and drunk. So you can make the argument that’s demonstrably false or you can make the argument it was satire. Satire’s got to be a protected form of speech. What do you guys think? Would you take that down if you were Mark Zuckerberg, would you knock that off the internet? STENGEL: I— BARRETT: Well—I don’t want to—I say yes. STENGEL: I say yes. 
I mean, and I think also one of the things that they did, so they slug-did (ph) or—“slugged” is a journalism word. They had a—you know, a chyron up saying this is not true content, or this is manipulated content. One of the things that influences all of this, and I write a little bit about it in my book, are these cognitive biases. And there’s a terrific dissertation, and I forget the young woman’s name who wrote it, about belief echoes, she called it, which is that this idea that if you see something false, even if you then immediately are told that it’s false, and even are persuaded that it’s false, it creates a belief echo in your head that never gets erased. So to me, part of the problem of putting a caption under the Nancy Pelosi video is that you can’t un-ring the bell. You can’t un-see that. That stays in your brain. It should not—it should not have been on the platform at all. THOMPSON: So you would knock Andy Borowitz off the platform too? I mean, political satire, making fun of things that—pretending that Trump said things that he didn’t say? Because there could be belief echoes with that, even though it’s slugged as humor. STENGEL: You’re trying to trick me now, Nick. I am—(laughter)— THOMPSON: I’m just trying to get some of the complexities here. BENNETT: Do you—do you remember when the People’s Daily re-ran the story about Kim Jong-un being the world’s sexiest man, that was written as satire? And they were like, “world’s sexiest man declared by U.S. publication,” right? THOMPSON: I ran traffic analytics at the New Yorker and sometimes Andy Borowitz’s post would be picked up as true in China, and the traffic spikes we got were killer. (Laughter.) BENNETT: Yeah. Yeah. THOMPSON: All right, so let’s—so we’re kind of ragging on the platforms right now and talking about some of the problems they have. 2016, obviously lots of problems. We had a 2018 election and as far as I can tell, wasn’t a whole lot of misinformation. The only thing that I read about was a bunch of Democrats running a test to try to take—to criticize Roy Moore in Alabama, right? We had very different disinformation problems. So maybe it’s under control. Maybe we’re over-indexing on 2016. BARRETT: Maybe, but I don’t think we should take the risk that that’s the case. You’re absolutely right that the Russians’ level of interference was negligible immediately around the time of the election. We don’t know exactly why that is; they’re keeping their powder dry for 2020, a much more important engagement perhaps. Perhaps the platforms deserve some credit for having gotten more on the stick and more in the business of taking down phony accounts which they are now doing in some numbers, whereas in 2016 they were completely asleep to that. The Cyber Command that we mentioned earlier reportedly ran an operation that shut down the IRA, at least for a few days, around the election itself so that they were taken off the internet temporarily. All those things may have played a role. But the general problem continues. There is disinformation flowing from abroad, not just Russia but also Iran. And I just don’t think this is the kind of problem that you say, well, we had one good outing, so we’re done, all our problems are taken care of. THOMPSON: But are the signs that you’re seeing, right—we’re a year out. Are you starting to pick up a sense that it’s going to be like 2016 or are you picking up a sense that’s going to be like 2018? 
STENGEL: One of the things in the Senate Intelligence report that I found interesting was this idea that the Russians masquerading as Americans would seduce or entice actual Americans to do their bidding on the Web. You wrote about some examples that they did in 2016. BARRETT: Right. STENGEL: The one that still kills me that actually wasn’t in the final Mueller report—it was in the first Mueller indictment, and I think you mentioned it in your report—that from St. Petersburg the guys from the Internet Research Agency create—did a rally, a pro-Trump rally in Palm Beach where they hired a flatbed truck and an actress to play Hillary Clinton in a prison cell on the back of a flatbed truck, and they did that from St. Petersburg. That was in the first Mueller indictment. I don’t know why he didn’t put it into the Mueller report. But in terms of them using Americans to do their bidding, I would worry about that in 2020. That’s very hard to detect. Because if you persuade somebody in Palm Beach to do something like that again, then that’s an American person expressing their First Amendment rights to, you know, say Hillary Clinton should be in prison. THOMPSON: All right. Let’s spend the last five minutes we have before we go to Q&A, coming up with an agenda for the United States of America, for citizens of America, for the government of America, to lessen the risk of disinformation. Because, as Paul said at the very beginning, democracy can’t function if nobody believes anything. So we should have engineers looking for deep fakes. We should have true and faithful news. The platforms should be looking for this stuff much harder. What else do we need to do? BARRETT: And cooperating with each other to a greater degree than they do, and cooperating with the government to a greater degree than they do in order to exchange information and, you know, sort of suss out threats sooner than otherwise they might. And they need to do a lot of what—a lot more of what they’ve already been doing, hiring more people to review content and continuing to improve their artificial intelligence filters. THOMPSON: Amanda, what else do we put on the agenda? BENNETT: You know, I would go back to the same thing, which is keep your eye on the ball. What are you trying to push back disinformation for? What is—what is the thing you are trying to push it away from? And that, I would definitely strengthen that, and I would not roll our eyes at the 1999 concept that this stuff actually has value. And that it—and it can be believed, that people can believe it. STENGEL: I agree with all that we’ve said. I think vetting mechanisms like NewsGuard and others are valuable. I also think a long-term solution—I mean, one of the things I say in the book is we don’t have a fake news problem; we have a media literacy problem. Lots and lots of people—once I left journalism I realized wow, lots and lots of people can’t actually tell the provenance of information and where it comes from and what’s a reliable source and what’s not a reliable source. It has to be taught in schools, starting like in elementary school. And that’s the reason that so much of this has purchase is that people can’t tell that it’s false and they’re more susceptible to believe it. THOMPSON: All right, so let’s give a lesson to everybody in this room. We’re all going to—at some, point we’re going to see information that might be false. How should people evaluate it? How can we learn media literacy? 
Members of the Council on Foreign Relations, well educated, but they’re not going to go back to school for this. STENGEL: Well, actually, one of the proposals I have is about—is about journalism, digital journalism being way, way, way more transparent, right? So when—in the day when we did stories, we did interviews, we did research, we talked to people, it was fact-checked, we wrote an outline. I think all of that—you should be able to link to that, that you write the story, in the New York Times there’s a link to “here’s my interview with the national security adviser.” “Here are the photographs we took that we didn’t use.” “Here’s the research I did, this chapter from this great new book by Rick Stengel.” (Laughter.) Oh, sorry. And would every reader look at that? No, but it would show the kind of the—how the building is created and it would create more confidence in the result. THOMPSON: How about changing the law? Should we make the social media companies liable if there’s an excessive amount of disinformation on their platforms? STENGEL: I think so. BENNETT: And I will say what I always say, is write the laws as if your adversaries are going to be the ones implementing them. Just make sure you know what’s going on. You can write them because you think of what you want, but think about—think about a law like that in the hands of somebody you don’t like. BARRETT: And interestingly, Mark Zuckerberg has actually proposed something roughly along those lines, has talked about having some type of government body that would assess the prevalence of bad content on the sites and sort of superintend whether the sites were making progress. I doubt he would go for actually creating, you know, private liability and litigation to flow from that, but the idea is not as far out as you might think. THOMPSON: But he might go for that, because the only company to be able to comply with those laws is his. STENGEL: Is his. Exactly. THOMPSON: And any start-up would be wrecked because they won’t be able to hire all the lawyers and lobbyists they need, which is one of the problems with these laws is locking in monopolies. But, Rick, you said yes, we should change the law. Which laws? STENGEL: Section 230 of the Communications Decency Act, which basically gives all of these companies zero liability for the content that they publish, because it’s third-party content. Now, when it was written—when you write a law to incentivize some behavior, like you write a law saying hey, we need to have more people go to Staten Island, let’s—you know, I’m going to create a law where you can build a bridge, you can have a toll for it for ten years, but then you change the law. The law from 1996 did incentivize this, in a massive way, in a way that unintentionally created all of this other stuff. Needs to be changed now. These platforms need to make a good-faith effort to do that. And one reason they don’t take content down is because if they took content down Congress would go, oh, you’re an editor after all, so you should have liability for the stuff on your content. That’s why—one reason that Facebook is so loath to take things down, because they don’t want people to say, hey, you’re performing an editorial function. THOMPSON: All right. It’s 1:30. I’d like to invite members to join our conversation with their questions. A reminder, the meeting is on the record. Second reminder, the Council on Foreign Relations is not liable for any defamatory statements that you put in your questions. (Laughter.)
Please wait for the microphone. Speak directly into it. Stand. State your name and affiliation. Please ask just one question and keep it concise so we can get as many as possible. All right. In the back, in the light blue. Q: Hi. Kathryn Harrison, CEO of the Deep Trust Alliance. You talked about media literacy. That’s like telling everyone who drives their car poorly that they need to go back to school. STENGEL: I agree with that too. (Laughter.) Q: An important—an important part of the solution for sure. But as the equivalent of cars, as the technology for creating videos, images, text get better, faster, stronger, cheaper, is there not an opportunity to make in the technology itself standards, labels, or other elements that would provide the guardrails, the seatbelts, or the airbags for consumers who are viewing that content? STENGEL: What would that be? Q: You could have a very simple labeling system, human-generated, computer-generated. You need to be able to track the provenance—what’s the source, how is it manipulated—but that would at least give you a signal, much like when you go to the movies, you know if you’re going into an R-rated movie that there’s going to be violence or sex or language, versus if you go into a G-rated movie. That’s the first place where we’ve shown kind of information that isn’t real. How can we use some of the models that we already have in society to tackle some of these problems? Because it definitely needs technological as well as human remedies. BENNETT: I often thought that was really interesting. You know, like, I’ve got friends who forward really stupid things like the one-cent tax on emails. How many of you have got friends that forward the one-cent tax on email thing? I think, oh, guys, get a grip, you know? (Laughter.) But on the other hand, I would really love to see something. This thing was posted by something that in the last thirty seconds posted ten thousand other things. I just think that would be a really useful thing to have and it wouldn’t be that hard to do. I mean, Facebook and Twitter can both do that right now. STENGEL: So what—I’m a little wary about the content purveyor creating the definition. Now one of the things that a lot of bills that are out there, like the Honest Ads Act for political advertising, or almost any advertising, is to show the provenance of the advertising. Why were you selected to get this particular ad? Well, it turns out that you bought a pair of Nikes last year and they’re looking for people who bought Nikes in Minnesota. I think all advertising that—and I actually think advertising has a role to play in the rise of disinformation, because automated advertising, when people started buying audience as opposed to brands, that allowed disinformation to rise. So I think the kind of transparency in terms of political advertising and other advertising insofar as that could be applied to content, without prejudging it, I would—I would welcome that. THOMPSON: All right. In the back, who also might be in turquoise—slightly misleading my initial calling. Yes. Q: My name is Aaron Mertz. I direct the Aspen Institute Science and Society Program. A lot of the examples you gave came from very large entities, governments, major corporations, often for quite nefarious aims. I’m thinking about individuals who might have ostensibly good intentions, parents who want the best for their children, but then who are propagating false information about things like vaccines. 
How do you counteract that kind of disinformation that’s coming from individuals who then can band together, form these groups and then potentially even lobby governments to change policy? BENNETT: I think you’ve just put your finger on one of the real—the real, you know, radioactive things about this whole discussion. How far do you go from vaccines which we don’t agree with to a form of religion we don’t agree with? Let’s talk about Christian Scientists. Would you like to ban that from the internet? I mean, that’s—you’ve just put your finger on the third rail. THOMPSON: So how do we solve the third rail? BARRETT: Well, I would encourage the platforms to diminish the distribution of or take down altogether phony life-threatening medical information. So, I mean, you have to do it carefully, you have to be very serious-minded about it, but I— THOMPSON: Who determined—who gets to determine what’s phony? BARRETT: Hmm? THOMPSON: Who determines what’s phony? BARRETT: I would go with doctors and scientists. (Laughter.) BENNETT: Me. BARRETT: You? BENNETT: I’m going to do it, yeah. BARRETT: Well, I’m less impressed by you. (Laughter.) STENGEL: But to say something that will also be unpopular, when I went into government, and having been a journalist, I was as close to being a First Amendment absolutist as you could be, you know? Justice Holmes, the First Amendment doesn’t just protect ideas that we love, it protects ideas that we hate. And traveling around the world, particularly in the Middle East, and people would say, why did you allow that reverend in Florida burn a Quran? Well, the First Amendment. There’s no understanding of the First Amendment around the world. It’s a gigantic outlier. All of these societies don’t understand the idea that we protect thought that we hate. I actually think that, particularly the platforms, the platforms have their own constitutions; they’re called terms of service agreements. They are not—they don’t have to abide by the First Amendment as private companies. Those need to be much stricter about content closer to what the—what the EU regards as hate speech and other countries do. There’s a phrase called dangerous speech, which is speech that indirectly leads to violence. I think we have to be stricter about that, and I—and the platforms can do that because they are private entities. THOMPSON: All right. I’ve got so many follow-ups. We’ve got a lot of questions. George Schwab in the front center here. Q: Thank you. George Schwab, National Committee on American Foreign Policy. From the perspective of international law, does state-sponsored misinformation constitute aggression? BENNETT: Not my thing. STENGEL: Well, one of the things I’ve been saying for a long time is that the Russians didn’t meddle in our election; they did an act of cyber warfare against the foundation of our democracy. That’s not meddling. I think when there’s state-sponsored disinformation, I think there should be repercussions for it. And part of the reason there’s more and more is that no country pays any consequences for it. I mean, yes, we sanctioned the Russians, or a few Russians, but it’s not a disincentive for them to do more. THOMPSON: So what should we have done? STENGEL: I’m sorry? THOMPSON: What should we have done to the Russians after 2016? We’re not going to nuke them, right? (Laughter.) 
Like, where’s the line that we’re going to— STENGEL: Well, I think we should have declared—there’s a—something akin to a kind of information national emergency, that our election is being interfered with by a foreign hostile power in ways that we still don’t know, and people have to be wary. THOMPSON: OK. Far right here. Q: Peter Varis, from TechPolis. Richard, you mentioned two cases that you actually worked on, the ISIS misinformation and the Russians after Crimea. It’s obvious that we have a lot more misinformation because the cost has declined. But what’s the difference from a terrorist group or, ditto, an insurrection like ISIS, and a state-sponsored little campaign of misinformation, which is—both are linked to actual kinetic warfare. STENGEL: Yeah. Q: But what’s the difference? Because that helps us to understand the budget difference. With $50 you can have a lot of impact with targeting on the internet, but what did you feel, hands-on, on those two experiences? STENGEL: So I write about both trying to counter ISIS messaging and Russian disinformation. And the former is easier in the sense that the ISIS disinformation, they weren’t masquerading. They weren’t pretending to be other people or Americans. They were digital jihadis, and when they advocated violence, right there was stuff that you could take off. I mean some—and I, in the book I talk about how—what great things that Facebook and Google and YouTube did in taking down violent extremist content. In fact, someone at Facebook likened it to child pornography, where the image itself is the crime; you’re under arrest. Promotion of violence, you’re out. The problem with the Russians is they pretended to be Americans. They pretended to be other people. They were hidden in plain sight, and that is—that’s a lot more difficult, and it’s still more difficult. THOMPSON: All right, let’s get some questions on the left. As far left as we can go. Right here. Q: Speaking of far left. (Laughter.) Peter Osnos with Public Affairs Books. So some of us grew up with Russian propaganda. Then it was called Soviet propaganda. And what we all agreed was that it was incredibly clumsy. So in 2016 and beyond, suddenly those same Russians, now a new generation, managed to create vast amounts of bits and pieces that were considered effective. And you referred to the stuff up in St. Petersburg, and there are people who say it was in Moldavia or some other places. Who was doing all that stuff? Who—low-paid trolls? Who created tens and tens of millions of these bits and pieces, many of which were, I’m sorry to say, very effective? BARRETT: Well, there was a—the main engine for the information operation side of it, as opposed to the cyberattack against the DNC computers, which was brought off by the GRU, the intelligence wing of the Russian military. The information side, the IRA, was run like a company that was owned by a crony of Putin’s and allegedly, according to Robert Mueller and U.S. intelligence agencies, was something that Putin himself approved of. So— Q: That’s not the answer. BARRETT: Not the answer? STENGEL: But Peter, I’d make a distinction between effective and sophisticated. What they did was effective; it wasn’t sophisticated. I was a recipient of all the stuff from trolls. I can’t even—I can’t say the words that they said. They couldn’t even spell them. The grammar was atrocious; they had terrible English. We looked at the handbook that the trolls would get when they went to the Internet Research Agency; it’s laughable.
But as someone said to me, a marketing guy said to me, you know the emails you get from the Nigerian prince who needs $20,000 to get out of prison and you’re going to get $10 million? I said, yeah. He said, and you know they’re like filled with spelling errors and grammatical mistakes? And I said, yeah. He said, that’s deliberate. Why? Because if you respond to it, they know they’ve got a live wire. So the stuff that the Russians did was for people, as I said before, who will believe these strange conspiracies, people who don’t really know about the Oxford comma. (Laughter.) So they don’t really care about it, and that’s why it’s effective. THOMPSON: All right. Let’s go to the back. The very, very back. Q: Steve Hellman, Mobility Impact Partners. Do you expect more vectors of interference in the 2020 election, particularly Chinese, for example? Do we expect foreign adversaries to weigh in on both sides of the election at this stage? What do you think? BARRETT: Possibly. I mean, I think the Chinese are a possibility. We’ve just seen them active in Hong Kong, where they used Facebook and Twitter accounts, some of them English language, to try to undermine the democracy protestors in Hong Kong. I see shifting the attention over to the United States as only a minor potential adjustment. I think the Russians could be back and the Iranians have already test-driven their information operation. So I think there’s every possibility that there could be more vectors, as you put it, coming from abroad. And in terms of volume, we should remember that the vast majority of dis- and misinformation comes from right here at home where we’re doing this to ourselves, in a sense. So there’ll be that aspect of it as well. THOMPSON: But isn’t one of the interesting questions when you try to think about what countries will try to influence our election is which country has a clear goal in the outcome, right? So who—will China want Trump or his Democratic opponent to win? Like, Russia had a clear goal in ’16— BARRETT: In promoting Trump, and presumably China would have the opposite goal. THOMPSON: Perhaps, unless they think that the backlash Trump has created is beneficial to them. I mean, I’m not a China foreign policy expert, but— BARRETT: Me either. THOMPSON: Who is going to—who has a clear interest in the outcome? STENGEL: One of the things that we saw about Chinese disinformation and propaganda operations was that it wasn’t directed outward. It was much more directed inward, both for the Chinese audience itself and also for marketing the Chinese miracle around the world. They weren’t trying to effect particular political outcomes. I mean, that may have changed, and what’s going on in Hong Kong is evidence that they’re getting more sophisticated about it. But they were not nearly as aggressive as the Russians, of course, and the Iranians, who do also have an interest. But I also would quibble a little bit with—the Russians did end up of course helping Trump, but in the beginning, I mean, their whole goal, and has been— THOMPSON: Helping Bernie first. STENGEL: Well, but their whole goal was sowing disunity, discord, grievance. That’s what they’ve been doing since the ’40s and ’50s and ’60s. It was only when they saw Trump starting to lead the pack and praising Putin to the skies that they turned and started marshaling resources about it. I mean, one of the things I write about is that in the beginning, the first six weeks, you know, Trump was made fun of by the Russians just like people here were doing.
THOMPSON: All right. Do we have a Chinese foreign policy expert who wants to raise their hand? BENNETT: This poor lady’s been right in front waving her hand. It’s driving me crazy. (Laughter.) Q: I’m Lucy Komisar. I’m a journalist. In the New York Times yesterday there was a story with the headline Ukrainian President Says ‘No Blackmail’ In Phone Call With Trump by Michael Schwirtz. He said Mr. Zelensky also said “he ‘didn’t care what happens’ in the case of Burisma, the Ukrainian gas company that once employed” a son of former Vice President Joe Biden. “In the phone call, President Trump had asked Mr. Zelensky to do him a ‘favor’ and investigate the debunked theory that Mr. Biden had directed Ukraine to fire” an anti-corruption “prosecutor who had set his sights on the company.” “Debunked” was the word of the author, not of Trump. Well, go back to January 23, 2018. In this room, Joe Biden, speaking to the Council, on the record. “And I went over, I guess, the twelfth or thirteenth time to Kyiv and I was supposed to announce that there was another billion-dollar loan guarantee, and I’d gotten a commitment from Poroshenko and from Yatsenyuk that they would take action against the state prosecutor, and they didn’t.” I’m eliminating a couple of paragraphs just for time, just to get to the nut-graph. “I looked at them and said, I’m leaving in six hours. If the prosecutor is not fired, you’re not getting the money. Well, son of a bitch—(laughter)—he got fired.” Now what would you say about this disinformation in the New York Times yesterday? And do you think that they should take down this demonstrably false information? STENGEL: What are you saying is false about it? Q: Well, the writer says that it was a “debunked theory” that Biden directed the Ukraine to fire an anti-corruption prosecutor who had his sights on the company. In this—in the Council here, Biden says exactly that he said we would not give the billion-dollar loan guarantee unless you fired this prosecutor. It seems to me that Biden in one place is telling the truth and in another place he’s not. Maybe we have to figure out that, but I don’t think he lied to the Council. It’s all online; anybody can see it. Therefore, it seems to me the Times wrote fake news and they should be asked to take it down. BENNETT: I think the point that you’re—that you’re actually making the larger point I think people would be interested in is that a reputable organization that does this looks at errors and puts—researches them and corrects them when they make them. If it in fact is an error, then people should correct it. But that’s a generalized principle, and I don’t know anything about the truth or falsehood of what you just said. I’m just saying that’s one of the things you want that Rick’s talked about, is transparency and correction. THOMPSON: Let’s not—I don’t think we want to litigate this, because we don’t— BENNETT: Yeah, we— THOMPSON: We’re not experts on that particular statement. BENNETT: We’re not expert on that. We don’t— STENGEL: If I could just go in the weeds for a second, having gone to Ukraine several times at the same time that Vice President Biden was there—he was there twelve or thirteen times; I went three times. That prosecutor was a corrupt prosecutor who was shaking down the people he would potentially prosecute who already had exonerated Burisma, the company that his son worked for. So he was saying the prosecutor that exonerated Burisma needed to be fired. And you know who else was saying it?
The IMF, the World Bank, the EU, everybody else. It was a corrupt prosecutor. Q: He now says he—(off mic). THOMPSON: All right. Woman at the table behind. Right there. Yes, you. Yes. Q: Going back to the question of whether there was disinformation— THOMPSON: Oh, and your name and affiliation. Q: Oh, sorry. Absolutely. Ann Nelson, Columbia University. The question of disinformation in the 2018 campaign, I wonder whether you were looking at U.S. intermediaries at state-level campaigns. So specifically the National Rifle Association, which has its own apps and its own dedicated social media platforms and they have repurposed Russian memes and as the Senate Commerce Committee minority report pointed out last week, the NRA, Maria Butina, were very heavily involved with the Russian campaigns over a few years, including supporting her attendance at the Council for International Policy. So looking at campaigns such as Heidi Heitkamp and Claire McCaskill, where the NRA was extremely involved both online and on the ground, do you still think they weren’t very involved in 2018? BARRETT: Not sure exactly how to answer that. The NRA was active in—I mean, the Russians had certain contact with the NRA. I’m not sure that that is—fits in exactly the same frame as the information operations that we’ve been talking about, but certainly you’re right that the NRA is reputed, certainly by its foes, to stretch the truth on a regular basis and they have that intertwining with certain Russian agents, namely that woman. Beyond that, I don’t really have the—know what else to say. THOMPSON: OK. Gentleman in the far back, in the blue jacket. Q: Hi. Jamaal Glenn, Alumni Ventures Group. What’s your prescription for how to deal with information that doesn’t fall in the demonstrably false category? I want to challenge this notion that some of the Russian operation weren’t sophisticated. I would argue—maybe not technically sophisticated, but incredibly sophisticated if you look at their ability to identify American political fault lines and play to those. Things like race. I have friends exceptionally well educated who played right into the hands of some of these actors. And many of these things weren’t technically false. So I’m curious. What’s your prescription for these things that sort of fit in this non-demonstrably false gray area? BARRETT: Well, I was going to say the platforms, but mainly Facebook, already has a mechanism for what they end up calling false news, which would be broader than in my—in my thinking than demonstrably false information, and they down-rank it and label it, if they—if their fact-checkers have found it to be false, they label it so that when you go to share it, you’re told with a little pop-up that what you’re trying to share here is false, so, you know, think twice before you do it. I think that mechanism, for something that’s determined to be false, but where there’d be some difficulty in calling it demonstrably false, might be the way to deal with that. A certain amount of misleading information, you’re not going to be able to do anything with because you’re not going to be able to know in the first instance where it came from or who’s manipulating it. THOMPSON: But what if it’s true? Q: But what if it’s true? BARRETT: OK, well— THOMPSON: So what if the Russian government is spending money to promote stories that are irrefutably true. 
Say they’re about— BARRETT: Yeah, then you’re looking for categories of behavior that indicate that there’s some inauthenticity to the accounts that are sending it. The platforms have been moving more in that direction, taking down accounts on that basis. But all of this points to the fact that you’re not going to be able to get everything. No matter how aggressive you are, and not everyone wants to be that aggressive, this environment is going to be shot through with material of questionable provenance. THOMPSON: OK. Right here on the right, gentleman in the orange tie. Q: Michael Skol of Skol and Serna. Isn’t this partially a generational problem? I am one of those who does read the morning papers on—in paper: the Times, the Journal, the Post when there’s a funny headline. But I don’t—I don’t think there’s a lot of people a lot younger than I am who follow this, and which—what are the implications of this, that this problem is only going to get worse because the younger people who don’t pay attention, who don’t prioritize demonstrably true media outlets, are growing up and they overwhelmingly, possibly, there will be a population that’s worse than it is now. BENNETT: Again, let me—let me be the cheerful, non-cynical person in the room. Because we are able to look at digital behavior around the world, and let’s just stipulate that based on what you said, paper is for our generation; digital’s for everybody else. One thing we are finding that is fascinating is that people are coming to look for news and coverage from other countries, and I’ll give you one specifically. In China, what we found in the last six months or so is that the volume of traffic coming out and looking for news on Venezuela has just gone through the roof. Now, why would that be, and who is it? I think it’s because they’re trying to find out things that they’re not being told at home. I think that is a really interesting thing. It says to me that these things are true that we’re saying here. It is also true that people want to know what’s really going on and they have a search for truth. I know this is, like, 1990s, 1980s, but I still believe that that is true. And we’re watching our digital behavior. When there were the street protests in Iran, our traffic went crazy. Our Instagram traffic went crazy. This is all people coming off of cell phones, so it’s young people carrying their cell phones. They were looking for stuff. So we saw this happening. And so I’m saying that I’m not sure you can say that everybody under the age of 65 is kind of undiscerning and stupid. I don’t actually believe that. Well, sometimes I do, but— BARRETT: Some of us are. (Laughter.) BENNETT: But not often. Anyway— THOMPSON: I would just add that the data from 2016 shows that there is a real generational problem with fake news. But it’s the older people. (Laughter.) BARRETT: Yeah. BENNETT: Yeah. THOMPSON: On the left. (Laughter.) Q: Jove Oliver, Oliver Global. My question is with your journalist hats on, when you see, say, a public figure, maybe the president of the U.S. breaking the terms of service on a certain platform, whether that’s by spreading, you know, disinformation on maybe Twitter or something, what’s the—what’s the remedy for that with your journalist hat on? It’s a public figure. Arguably, what they’re saying is in the public interest. At the same time, they could be causing violence against people or certainly spreading disinformation, which is against the terms of service of these platforms? Thank you.
THOMPSON: Or we could even make it more specific. Rick, you sit on the board of Snapchat. Should you kick Trump off? STENGEL: Well, I’ll—(laughter)—I’ll answer that in a second, but I’m going to—the previous question. It’s a well-known fact that stories on paper are more factual than stories on telephones. Wasn’t that the implication of your question? That’s a joke. Q: Depending on which paper. (Laughter.) STENGEL: OK. I think the highest order of magnitude—and again, one of the things that’s been great about this panel, Nick, is you’ve actually caused us to have to think while we’re up here, which is usually not allowed on panels. But to me, the highest value is whether something is demonstrably true or false, rather than the news value of a certain story or the news value of a certain news figure making that statement or the higher protections that political speech has than regular speech. So that was the—that was the story about Facebook and the—now taking off that ad. They were privileging political speech over regular speech, and they—basically they were saying, to me, was that political speech, even if it’s false, is protected, whereas regular speech, if it’s false, is not protected. I would say the highest order is the falseness or trueness and even if it’s a public figure, then that content should be taken off. THOMPSON: Banning Trump from Snapchat? STENGEL: You know, not everything he says is false. And there is a—he is a newsmaker, I believe, and one of the things that—and as Nick mentioned, I’m an adviser to Snapchat. Snapchat does more of a traditional curation of news where the news is linked to a brand, rather than a topic or audience. And in fact, one of the things that I also say in the book is that the rise of automated advertising where you buy an audience, as opposed to buying an ad in Time magazine or the Economist or Wired, is one of the reasons that all of this disinformation becomes out there. And I’m going to say something very unpopular now among my news brethren, that I actually think the movement toward subscriptions also creates a greater volume of disinformation because the true content is now behind a paywall that very—that relatively fewer people can get, whereas the bad content is open and free. So talking about this age discrepancy, young people are now going to think well, I got to pay $68 a month to subscribe to the New York Times but I can get all this other stuff for free, free is a very powerful word in our society. And in fact, I used to say in the early days was, you know, when people used to say information wants to be free, I would say people want free information and we gave it to them and that’s why they are biased in favor of it. So I think the subscription paywall model is also a recipe for the increase of disinformation. THOMPSON: Well, there’s only one way to solve that problem and that’s for everybody in this room to subscribe to Wired. (Laughter.) All right. It’s 2:00. We’re done. Thank you very much to this panel. Please turn on your phones and spread some true information. (Applause.) (END)
  • Cybersecurity
    Cyber Week in Review: September 20, 2019
    Australia concludes China responsible for cyberattack; North Korean hacking groups sanctioned; Facebook heads to Washington; and U.S. blacklisting still hurting Huawei.
  • Cybersecurity
    Hey LinkedIn, Sean Brown Does Not Work at CFR: Identity, Fake Accounts, and Foreign Intelligence
    A fake LinkedIn account of a Sean Brown claiming to work for CFR highlights the issues with fake accounts.
  • Women and Women's Rights
    Behind the Screen: Gender, the Digital Workforce, and the Hidden World of Content Moderation
    Podcast
As user-generated content on the internet continues to increase in popularity, the question of who moderates this content comes to the forefront when discussing the future of social media. Dr. Sarah Roberts, author of Behind the Screen: Content Moderation in the Shadows of Social Media and assistant professor of information studies at the UCLA Graduate School of Education and Information Studies, joined the Women and Foreign Policy program to speak about the world of content moderation and the importance of this invisible work. POWELL: We’re going to go ahead and get started. I’d like to welcome everyone. And my name’s Catherine Powell. I’m with the Women and Foreign Policy Program here. And I want to acknowledge the Digital and Cybersecurity Program which we’re co-hosting with tonight. So some of you may have gotten the email through that program as well. It’s with great pleasure that I introduce Sarah Roberts, who is a professor at UCLA School of Education and Information Studies. I’m not going to read her whole bio because you have it in front of you, other than to say that she came to my class today—I’m a professor at Fordham Law School where I’m teaching a new seminar this semester on digital civil rights and civil liberties. And she came to speak with my students about this fantastic new book she has out, Behind the Screen: Content Moderation in the Shadows of Social Media. And it was a very engaging discussion. So Sarah’s going to outline some ideas for about ten minutes or so, and then we will open up for discussion because we have a number of experts in the room and the discussion is always the most fun part. Just as a reminder, this is on the record. Without further ado, let me turn it over to Sarah. ROBERTS: All right. Thank you so much. Thanks to everyone for choosing to spend time here this evening. It’s certainly a delight to be a part of this series, and to be present with you. So thank you. I am going to do my best. I’m a professor, so I have a problem with verbosity. We try to keep it short and sweet. I’m going to try to speak quickly so we can get to discussion. So if there’s anything that seems like a bit of unpacking, we can return to it. But I’m going to do my best to give an overview, assuming that you have not all spent as much time as I have with the subject. So basically I’ll talk a little bit about the research that’s contained in the book, and then I want to tee-up some issues that I think are pertinent to the present moment particularly, because this work is the culmination of nine years of research. We like a slow burn in academia, so it’s been simmering for some time. When I began this work in 2010, I was myself still a doctoral student at the University of Illinois, but I had a previous career in the IT field, although I had, you know, perhaps made the unfortunate decision of finishing my French degree—French literature degree rather than running out to Silicon Valley during the first kind of net bubble in the mid-’90s, so there you have it. But I have fun when I go to France, I guess. Anyway. So I was working in IT for about fifteen years before I decided to go back to school. It was going to just be a quick in and out sort of master’s degree. And I became enthralled with really feeling like I needed to pursue some of the issues that I had to live through first-hand, mainly the widespread adoption and also commercialization of the internet.
I had been a user of the internet at that point for almost twenty years in 2010, and I had considered myself a late adopter. I thought I kind of missed the wave of the social internet. But anyway. So in the—in the summer of 2010, I always want to give credit where it’s due, I read a brief but powerful report in the New York Times tech section. It was sort of what we would consider below the fold. I didn’t say that to your students today because I didn’t know if they’d know what I was talking about. (Laughter.) But it was a below the fold kind of piece, a small piece about a firm in rural Iowa. I was sitting at the time in the middle of a corn field at the University of Illinois, so I could relate to these people who were described in the article as working in really what, for all intents and purposes, seemed to be a call center environment. And they were working not taking service calls for, like, your Maytag washer or your Sears home product, but in fact what they were doing was looking at material from unnamed social media sites that had been uploaded by users and which had been flagged by other users as having some issue, being problematic. And this typically fell around issues of perhaps being pornographic, or obscene, gratuitously violent, all the way to things like child sexual exploitation material, images of abuse of other sorts, and the list goes on and on. And I won’t belabor it with examples, but you can sort of imagine what one might see. What I wanted to do upon learning that was really get a sense of to what extent this need was in fact a fundamental part of the, at that time, ramping up but very significant social media industry emanating from Silicon Valley. So I should just contextualize this by saying I’m talking about American companies that are based in Silicon Valley. I’m not an expert, unfortunately, on some other parts of the world. But these companies, of course, cover the globe. And in fact, last I knew the stat, Facebook’s userbase is 87 percent outside the United States. So it’s quite significant that these American firms and their norms are circulating around the globe. The findings are detailed in here. It’s also a bit—I have to admit, I guess this is a first-time book writer’s thing where you sort of go into your own autobiography and you really wax poetic. That’s in there too. You don’t have to take that too much to heart, but I think what I wanted to do was contextualize the way in which from that period in the—in the early to mid-’90s to where we are now, the internet has become—and what we consider the internet, which is our social media apps usually on our phones, right—has really become an expectation, a norm, a part of our daily life in terms of interpersonal relationships, in terms of maybe romantic relationships, business relationships, political discourse, and the list goes on and on at how these platforms are a part of—a part of the fabric of our social life. And how those companies that provide these essentially empty vessels rely upon people like us to fill them with our so-called content. Content is a funny word, because it just stands for, evidently, any form of human self-expression you can think of. And so I often come back to that as an interesting thing to unpack. I’ll tell you a little bit about what we found, and we’ll buzz through this, and then we’ll—I’ll get off the mic for a minute.
But essentially what I discovered over the subsequent years was that this activity of content moderation on behalf of companies as a for-pay job—something that I came to call commercial content moderation—was viewed by the firms that solicited it as a mission critical activity. In other words, these firms viewed this practice as so important as to be really unwilling to function without this kind of stopgap measure to control the content on their sites. This—you know, we can think of this as a gatekeeping mechanism, which means it’s also a mechanism by which content is allowed to stay up as much as it is a mechanism to remove. But what was really important to me to understand about the impetus for this particular activity, and then the creation and shoring up of a global workforce, was that the activity was taking place primarily as a function of brand management for these firms. What do I mean by that? Well, I mean that just as, I don’t know, CBS Studios is unlikely to flip on the camera, open the door, and ask New Yorkers to just come in and get in front of the camera and do what they will—without any control—neither are these platforms. But one of the biggest differences about the ways those kinds of relationships have come to be understood in our—in our everyday life is that I think the expectation about the former is much clearer than it is about the latter. These platforms have come to take up and occupy such an important space, in large part because they were predicated or sold to us on a—on a claim that essentially it would be us, to the platform, to the world. In fact, YouTube’s on-again, off-again slogan has been: Broadcast Yourself. I mean, they say it better than I can, right? You just get on there and emote, and do your thing, and it’s going to broadcast all over the world. So what I came to find was that in fact there was a workforce in the middle. And to me, that was revelatory, and it was shocking. I had never considered it. And I was supposed to be getting a Ph.D., right, in this stuff. And I had worked for fifteen years in this area. I actually started asking other colleagues around campus—esteemed professors who shall remain nameless, but who are victimless here—they also said, gosh, I’ve never heard of that. I’d never heard that companies would hire people to do that. That’s the first thing they said. Then they said, don’t computers do that? Now, if these are the people who are—have their fingers on the pulse of what’s going on in social media, why didn’t they know? Well, that led me to speculate that in fact this practice was intended to be, to a certain extent, hidden. That actually is the case. So I’m just going to talk for a minute about what this workforce looks like, and then we’ll go into some of the maybe provocations, I guess we can call it. As we speak today, I would—it’s difficult to put numbers on what kind of global workforce we’re talking about, but I would estimate that we’re thinking about maybe 100,000 at this given moment. The number I arrive at for that may be conservative. But I take that number from looking just at the public numbers that Google and Facebook now offer up around their workforce, which are in the tens of thousands. Those are two platforms out of how many? The Snaps, the Instagrams—they’re not counting Instagram—the TikToks of the world, right, whatever the latest thing is. I’m starting to show my age and I don’t even know what’s going on anymore. 
But anyway, so any—essentially any company that opens the opportunity and—as a commercial entity—opens the opportunity for someone to upload is going to introduce a mechanism to control that. And that’s where we can arrive at these numbers. The thing about this globalized workforce is that it’s diverse, it’s dispersed. You can find it in a number of different industrial sectors. But there are some things we can say about them overall that they share in common. And those characteristics that I think are important to mention is that this work, and the workers who undertake it, are typically viewed as low status. They are typically low wage earners. And they are typically limited term workers for a firm. So the expectation is not that one would make a lifelong career at this work. We can think about why that maybe is. It may in fact be because you wouldn’t be able to stomach this beyond this—right? We’ve got the shaking heads. It’s like, no thank you. I—personally, I couldn’t do it for a day, much less a year. But it’s often limited term. The work is often also to some extent done at a remove from the platform that actually needs the service. So how do they do that? Well, no surprise, it’s going to be contracting, outsourcing, and other sorts of arrangements that look other than permanent and look other than direct employ. They often, of course, in the case of the United States, for one, given that circumstance, lack significant workplace benefits. Now, when we start thinking about the fact that this work can put workers in harm’s way psychologically because of what they view as a precondition of the work, that lack of benefits, that lack of—and even under the Affordable Care Act people might not be able to afford mental health support, because we know that’s often extra. I mean, I know even in my health care plan as a UCLA professor that’s something I would have to pay for, to a certain extent, out of pocket. How might a worker, who’s contractual, and low wage, and low status, go about obtaining that? Now, when we think about this work being global, we also know that there are places in this country and in other parts of the world where mental health issues are highly stigmatized. And so seeking that help is also encountering barriers just based on cultural norms and other sorts of attitudes towards that kind of—that kind of support. And so really, what we’re looking at is a system of work that has been essentially outsourced and devalued. And yet, those who are near to this kind of operational activity within firms know that it’s mission critical. They know that this gatekeeping mechanism and ability to control what’s on the platform has a fundamental function in their operations. And they really wouldn’t go forward without it. As one person quite candidly put it to me once: If you open a hole on the internet, it gets filled with, blank. And so that was her way of telling me, therefore every time we allow someone to upload something into essentially this empty vessel, we have to have a mechanism to control it. OK. So I’ll talk a little bit about the outcomes here. I’m just going to list them. We can come back to them. But the primary findings in the book, I would say, are as follows. We’re talking about a fractured, stratified, and precarious workforce, as I’ve mentioned. 
You will find this workforce not sort of in a monolithic site that can be easily identified as this is where commercial content moderation is done, but instead in a variety of industrial sites and sectors, some of which might not be recognizable to workers who are actually doing the same work because of the setting or because of the nature of the work. What do I mean by that? Well, some people go to work every day at Silicon Valley. They sit next to engineers, or maybe down the hall as the case may be. But they have a different color badge. They’re contractors. While others do this work disembodied over a digital piecework site, like Amazon Mechanical Turk. It maybe even has a different name. One person might be doing work called “community management,” and another person is doing dataset training for machine learning algorithms. And guess what? They both might be doing some form of commercial content moderation. So this—when we think about how workers might self-identify, these are the features that make it difficult. There is a patchwork approach to covering this labor need. So, again, global. And, again, using these different industrial sectors, because there’s simply often not enough people available to just be taken up into the apparatus and be put on the job, just going straight through one particular firm or one particular place. This is where we’re now seeing big labor provision firms in the mix. The Accentures of the world and others are now in this—in this field. Workers, again, globally dispersed. And one final thing that I’ll say that I think is actually very key, again, it is often secretive. The work is often under a nondisclosure agreement. Now, many of you know that Silicon Valley asks you to sign a nondisclosure agreement every time you turn around. It’s sort of a cultural norm. But this is taken actually very seriously for these workers in particular. So I had to guarantee a certain level of anonymity and use pseudonyms and other things when talking about particular cases in the book. I talk about a case called Megatech. And I was speaking to the class earlier today, one of the funniest things that I never would have expected is that when I meet people from industry and Silicon Valley, and have over the last few years, they say to me: Well, we’re sure that our company is Megatech. We know you’re talking about us, Megatech. I’ve had, like, six different firms tell me they’re certain that they were the field site. (Laughter.) Now, I can neither confirm nor deny that, so that leaves them a little anxious. But I find it fascinating that so many companies see themselves in what I reported here. I never would have expected that. That’s the beauty of doing research, I guess, and having it out in the world. OK. I just want to give a few provocations or thoughts about—I know everyone here is quite interested in policy implications and things of that nature. So I want to give a couple highlights about that. I’ll say I’m not a lawyer. I kind of like to hang around with them a lot. They seem to be a good crowd for the most part. (Laughter.) But I’m not one myself. But the nature of this work, and the operationalizing of this work, means that I have to have a direct relationship to what’s going on at a policy level, and then ever further at a regulatory level. And that’s sort of been an interesting trajectory for me that I might have not expected originally nine years ago. So what’s going on? What are the pressure points on industry around this type of work? 
Well, I would identify them as follows—and I guess this is sort of in a sort of order, but not really. The first one I would say is regulation. Regulation is the big one, right? That could mean European maneuvers at the EU level. So we’ve seen things like that already—GDPR and other regulations passed sort of at a pan-European level, but also at the state level. Germany, for example, or Belgium, or France, where they have pushed back on firms. We have heard about, and are seeing, movement around antitrust in the United States, for example. We have seen discussion and invocation of perhaps a need to revisit Section 230 in the United States, which is what has allowed these firms to grow to where they are now, because it has given them the discretion to both decide to keep up, but also to remove, again at their discretion and to their benefit, content over the years. And then there’s this—I guess the next kind of layer I would talk about would be litigation. We have seen a number of interesting cases crop up just in the last few years—this is a new trend—of current or former content moderation workers, working as commercial content moderators, who are filing lawsuits. So there have been a couple of lawsuits that have been individual cases. One that was very interesting was about Microsoft. They had a hard time claiming these workers were not direct employees because, as you may know, Microsoft got the pants sued off of it a couple decades ago around this issue of having long-term employees they called contractors. So that was an interesting thing with that case, where the people were unequivocally direct employees. But also there is a—there is a class-action suit that’s in the state of California right now. There are also rumblings of some cases being filed in places like Ireland which, as you may know, is a huge operations center for Silicon Valley firms, for no small reason because it’s a tax haven. OK. What else? Journalism. Negative journalistic coverage. This has been a big one, a big pressure point on the firms. Exposés around working conditions for content moderation. We’ve seen them over the years. I’ve partnered with many journalists, and many journalists have broken stories themselves around this. It tends to focus on the negative working conditions and the impact on the workers. Doesn’t often go to some of the deeper policy dimensions, but it’s the kind of headline that shocks people, and it brings people into a position of, first of all, knowing that these people exist and, secondly, taking issue with their—with their treatment. And that leads us to consumer concern. Of course, the biggest fear for platforms is—well, maybe not the biggest—but a huge fear for platforms is losing their userbase. They want to gain a userbase and they want to keep people coming back. Of course, they’re advertising firms and they connect these users to advertisers. But if consumers become dissatisfied enough they may leave the platform. So when that sort of rumbling occurs, they respond. And then finally, last but not least—well, I guess I would say also academic research has had an impact to a certain extent here. But last but not least, labor organizing. This is seen as a huge threat. Again, the same with the regulatory pushback. I think labor organizing they’re trying to avoid at all costs. I think it goes without saying that these firms site their content moderation operations in places that are probably on the low scale for strong organized labor—places like the Philippines, for example. 
Places where the United States might have had a long-standing colonial relationship, and therefore firms there can say things like, our workers have great colloquial American English, as if that just happened by happenstance. (Laughs.) It didn’t, right? All right. So I think I’ll leave it there and we can just open it up? Is that good? All right. I tried to be short, sorry. (Laughs.) POWELL: So as is our way here, please just turn your card sideways if you would like to ask a question. I certainly have questions of my own, but I’m going to first turn to you. And I’ll just jump in later. So let’s start with Anne (sp). Q: OK. So I just want some facts. ROBERTS: Yes. Q: Where are these people geographically? What is their demographic? Are we talking about Evangelical Christians? What are their value sets? What is their filter? Because—you know, how hard is it to control what they do? ROBERTS: That’s right. OK. So the first company that I came in contact with was this Iowa firm. And this firm’s tagline was quite literally, “Outsource to Iowa, not India.” So they were setting up this relationship of don’t go to the racialized other somewhere around the world. You want your content moderation homegrown, our good Iowa, you know, former farm family workers. Of course, their farms are gone, so now they’re working in call centers. So that was something that they actually saw value in and they were willing to treat as a commodity, to a certain extent. What’s going on now with the larger firms is that—so these are—these sites can be found in places like the Philippines, especially for American firms, but also in India. Then for each country that sort of introduces legislation that’s country-specific—for example, Germany. Suddenly, there needs to be a call center in Germany, because they need to respond quickly to German law, and those people have to be linguistically and culturally sensitive to the German context. So these places are springing up, frankly, like mushrooms all over the world to respond to the linguistic and cultural needs. How do they homogenize the responses? This is the difficulty. Well, you would not believe the granularity of the policies that are internal. If there are—if the percentage of the image is flesh tone to this extent, delete. If not, leave up. If the nipple is exposed, delete, except if it’s breastfeeding. You can now leave that up. Except if it’s sexualized, delete that. So these are the kinds of decisions that have been codified— Q: From headquarters? ROBERTS: From headquarters, correct. And the expectation is that the workers actually have very little agency. But what they do have is the cognitive ability to hold all these things in their mind at once, which, guess what can’t do that very well? Computers. Algorithms. Not successful in the same way on all of this content. Some things computers can do well, but the cost of building the tools to do this and the worry of having false positives, or losing control over the process, means that humans are filling the gap. I think there’s a sensibility in Silicon Valley that this is just for now. That soon we’re going to have computers that can do this. Q: But— ROBERTS: Right? Thank you. That’s what I say too. And if you talk to folks close to the operations, you know, in a candid moment they’ll say something, like, look, there’s never going to be a moment where we let the machine loose without some sort of engagement in human oversight. 
In fact, when the algorithmic tools are unleashed on some of the content, what has been happening is that it goes out and aggregates so much content that they actually need more human workers to sift through the stuff. So it’s actually not eliminating the humans out of the pipeline at all. Hopefully that answers— Q: But in the U.S. case, Facebook, Twitter, they are using Filipinos and Indians? It’s an outsourcing industry right now? ROBERTS: And, again, that’s— Q: In some instances. ROBERTS: Yeah. Yeah. I mean, there’s—again, it’s like a patchwork, right? So there might be folks who are local. There might be people who have specific competencies who are employed to look at certain types of content, or certain cases. An example I would give is Myanmar, which Facebook took a lot of heat for not having appropriate staffing. You know, they’ve staffed up. So there are people who are, you know, kind of country-specific, like the way we think about people who work in policy work, actually, right? But there is often a fairly significant gap between those people who are—who are putting into operations the rules, and those people who are making the rules. And that’s another big kind of tension point, if you will. POWELL: Let’s go to Joan next. Q: Hi, Sarah. ROBERTS: Hi, Joan. Q: I’m Joan Johnson-Freese, a professor at the Naval War College. ROBERTS: Hi. Q: Thank you for a great presentation. I’m wondering if you could talk a little more specifically about the gender aspect. ROBERTS: Yes. So actually in my research I found that it was fairly gender-equal in terms of who was doing the work. One of the interesting things however is that in talking to some of the workers who were female or female-identified, in particular one woman who was working on a news platform, she talked about the way in which her exposure to hate speech that was particularly misogynist in nature, or that would typically include rape threats or other kinds of gender-derogatory terms, was affecting her personally to the point that she described herself as—I’m sorry you heard this already—as a sin-eater. And she was supposed to be employed part time, but she found herself when she would be maybe out to dinner, out to a restaurant, sneaking away to check her computer to see what had filtered in. And she talked—she was a person. She’s female-identified. She self-identifies as queer, working class, and oriented towards racial and social justice although she’s white herself. And she talked about the way that misogynist language in particular and threats, homophobic speech and threats and behavior, and racially insensitive and hostile material was starting to affect her so much that she felt like when she was not even on the clock she would go in and check the site, because if she wasn’t there doing it she felt like others who weren’t prepared to see the material were being exposed. Right? So she described herself as a sin-eater to me. And she said, I call myself a sin-eater—as if I knew what that was. I didn’t know what it was, I admit. So I asked her to describe it, and I looked into this later. And for those who don’t know, it’s a figure—something of a folkloric figure. But it’s a person who in the context of England and Wales was typically a poor villager, man or woman, someone destitute in a particular community, who would upon the death of someone more prominent volunteer to take a loaf of bread or other kind of food that had been passed over that individual, was imagined to be imbued with their sins, and would eat it. 
That person would therefore be absolved of the sins and go to heaven, and the person who was eating the sins would, I guess, suffer the consequences later. So that’s how she described it. And she—in the book we go into detail about her experience and how it became very difficult for her to separate her multiplicity of identities. But especially as a woman, and as a queer-identified woman, dealing with the kind of vitriol that she was responsible, essentially, for cleaning up. So that was a pretty stark one. (Laughs.) That was—that was tough. Yeah, thanks. POWELL: Let’s go to Catherine (sp). Q: Yeah. This is super interesting. And I actually have an experience as an early comment moderator myself, because I was the sixth employee of the Huffington Post, who would get phone calls from heads of—like Dick Cheney’s office, calling and saying: Could you please take this negative comment down about the vice president? And we would—you know, it was from the Secret Service. So, anyway, lots of stories there. But my bigger question is, what—like, it sounds like you’re talking about the labor force and this unrecognized labor force. But then from what you just said, it’s the fact that we have this unbridled comment stream of hate and how are companies ever going to really reconcile? Like, when is the moment where they finally say: We have to do something bigger than just moderate all day? ROBERTS: Well—(laughter)—what—if we can solve that this evening we can go find VC investment and we will—we’ll resolve it. But I think—you know, if I can sort of read into what you’re saying, I mean, I think your discomfort is on a couple of levels. One is, this is the function of—good, bad, or ugly, however you feel about it—Section 230’s internet intermediary definition of these platforms as being able to decide to what extent and for what purposes they will moderate. So that’s the first thing. But I think the second thing is a little less apparent. And it has to do with business model. It’s not as if it was a foregone conclusion that the whole world would just flood channels with cat pictures, and this was my sister-in-law’s wedding, and whatever they’re posting or, you know, Nazi imagery or other—you know, terrorist material, child sexual exploitation material. But there’s actually a direct relationship on these platforms to the circulation of material that we call content—which already, again, I would say is a ridiculous, too-general category—and monetization, and treating that material as commodity. So what I’m getting at here is that the platforms are a little bit in a bit of a pickle, to say the least, about how they have developed a business model that’s predicated on a constant influx of new material. Why? Well, because they want us to come back. If it’s just the same stuff all day every day, they don’t think we’re going to come back. What is—what is going to get the most hits from viewers? Is it going to be something really boring and uninteresting, or is it going to be the thing that’s just maybe this side of bearable and everyone’s talking about it because it’s viral, right? So these are the kind of economics and logics that have been built up around the constant influx of content. 
And so it’s gotten to the point where this computer scientist that was at Dartmouth, and he’s now at Stanford, who developed one of the primary AI tools to combat child sexual exploitation material, and it actually does work very well in that use case, he pointed out in a paper that he wrote, and then I cited him heavily in a paper I wrote recently, where he said: Look, what’s never on the table when I’m in boardrooms is, what if we slow down the hose of the influx of the material? That’s never under question. And he’s—for heaven’s sake, he’s the developer of this tool. And he’s the one thinking, hello, the always on, constant on, kind of unvetted uploading from anyone in the world is maybe not an awesome idea, right? Like after the Christchurch shooting in New Zealand, which was a horrible massacre, that was maybe the first time you heard Facebook seriously question, maybe we shouldn’t just let everyone in the world turn on a livestream and go for it. Maybe it should only be trusted users, or people whose info we have or something, right? So we get back to this problem of the business model. And it’s the thing that it’s kind of like the elephant in the room. It’s, like, the thing that they don’t want to touch because that’s how they make their money. They monetize the content that we provide. I’d also say that we are unfortunately fairly implicated. And I mean, like, look, I’m sitting here with my phone, tweeting, doing all of the things, right? We are implicated ourselves as users and being a part of the economy. But I can’t in good conscience tell everybody to throw out their phone and get off the platform, because I can’t do it. So they’re—I don’t know. There’s got—you know, there’s a slow food movement that came up a number of years ago because people were sick of the scary supply chain industrialization of their food, right? And I often think about, who’s going to come up with slow social media? Q: Yeah. No, that’s sort of my—I have a friend who’s pretty high up at Facebook. And they’re complaining about how the guy who wrote what the Zucc, or something—or, Zucked, advertises on Facebook all the time. Like, the very— ROBERTS: Yeah, right? Q: But then they’re making money off of that. Which is like a terrible cycle. ROBERTS: Which is, like, also—yeah. And these people are probably completely disembodied from that ecosystem anyway, right? So I think one of the other things I just throw in the mix to think about is that we’ve hardly tapped any control mechanisms that might be at our avail in other realms. So things like—again, like some of these regulatory things. Or even the fact that these firms have, for fifteen years, been able to self-define almost exclusively, without intervention, as tech firms. It’s not just because they have an allegiance to the tech world that they call themselves that, but what if they called themselves media broadcast companies? Well, what happens when you’re in broadcast media? Can you just air anything you want? I mean, George Carlin made a career out of lampooning the fact that you can’t, right? So, you know, one day at some point years ago I thought, let me just go look at the FCC’s rules on broadcast media and what you can and can’t do. Let me go find the comparable thing for social media—oh, right? And yet, they’re all engaged not only in soliciting our material, but now they’re engaged in production of their own material too. 
I think about YouTube as, like, the prime example of that business model relationship, where we have literally people getting checks cut to them if they—if they get a lot of views. So there’s a whole economy now, and the logic of the platform, that almost goes unquestioned and seems innate. And yet, it hasn’t been that long that it’s been this way—which is one of the things I’d like to think about. I don’t have the solution, however. Remember, I— Q: More like, is there going to be a tipping point? I mean, that’s what I—yeah, if you’re seeing it. ROBERTS: Yeah. I mean, I don’t—I’ll tell you this. Like, I don’t like to do prognostication because, again, I decided to do my French degree and not go to Silicon Valley in the ’90s. (Laughter.) But I don’t think—if I had to bet, I don’t think the pressure will come from the U.S. I think the pressure is coming from Europe. Yep, and they’re very, very worried about that. Q: Did you see that the Danes have an ambassador to Silicon Valley? ROBERTS: Yes, they do. I saw that. Indeed. Q: I was just in Denmark. And you know, these people think differently. And they’re going to think harder about the regulation issues. ROBERTS: But you’ll also see—you’ll also see social media CEOs be received as though heads of state. I mean, we’re talking about policy that rivals legal code. Q: And economies that rival maybe the GDP of some small countries as well. ROBERTS: Correct. Correct. POWELL: So we’ve got Rufus (sp), and then Kenneth (sp), and Abby (sp). Let’s go to Rufus (sp). Q: So, a two-part question. And they kind of play with each other. So this is mission critical from a brand point of view, and it supports their advertising, and, you know, you want to have control over your platform. But I’m curious in terms of is the—is it somewhat a resource problem? Like, are they just not investing enough in it, and therefore you have very bad labor practices, and that’s the problem? And then the second part of that, of my question, actually has to do with maybe how it’s different in China, because it seems like they moderate their content real well. (Laughter.) And they have social platforms— ROBERTS: Yeah. Let’s copy that model, right? Yeah. (Laughs.) Q: Yeah, no, but, you know, I’m just curious. Like, clearly they have control over their social platforms in a way that we don’t. And I wonder if there’s anything to learn from that or be afraid of in terms of we should control more. Does that— ROBERTS: Well, to answer the first question, I think it—I can’t just say yes or no, right? I’m going to have to— Q: Sure. ROBERTS: Sorry. (Laughter.) I’m sorry. I think it is a resource problem, but it’s also a problem of prioritization. So how can I put this? This function, although it’s been present in some form, I would argue, since the platform started, was never thought of as central. So it was always a bit of an afterthought, playing catch up. And I think that that position of the activity within the firm has always lagged, essentially. There’s an interesting moment in this film called The Cleaners that I was involved in, where Nicole Wong, who was at the time the general counsel at Google, was up one night making content decisions. So there were people in the firms who knew—I mean, at those high echelons—who knew this was an issue and a problem. But, you know, it was sort of, like, someone else’s problem? And it wasn’t a problem that was seen as—it wasn’t—it wasn’t a bucket that was going to generate revenue, right? It was a cost center. 
I mean, there’s a lot of ways to slice that. I think you could argue, for example, that, well, a PR disaster in the absence of this activity would be immensely costly, or you could say that a company that has good, solid practices and has an identity that maybe they even build around their content moderation that gives a character or flavor to the platform could even market on those grounds. But the decision was made early on to sort of treat this activity as secondary at best in terms of how it was presented to the public. I think that was also because they didn’t want to be accountable. They wanted to make the decisions and have the discretion to make the decisions. So because it’s always been laggard, it’s like there’s been this huge resource shift within the firms to figure out, go figure, you know, if all you have is a hammer, everything looks like a nail. So the answer is let’s get computation on it to solve it. Well, one of the reasons that they want to use computation is, of course, the problem of scale. So despite there being maybe a hundred thousand people working in this—in this sector, that pales against the amount of content that’s produced. It means that just some portion, some minuscule portion of content is ever reviewed by humans. That’s one of the reasons why they want to use computation. But another reason—there are a few reasons. Another reason is because that’s what they’re in the business of doing. And that, again, also takes out this worry about rogue actors, people resisting, people making their own choices, making errors, disclosing to the media or others—academics, such as myself—what they’re up to, disclosing to regulators or others who might want to intervene. So there are other—so we should be suspicious about some of the motives around the computation. But I think functionally, at the end of the day, there are very few companies that could actually build the tools. I mean, we’re talking about bleeding edge AI implementation. When I started this research I went over to the National Center for Supercomputing Applications at Illinois. We were in the—in the cheaper side of campus, so I went over to the monied side where the big computational stuff was going on. And I went to this computer vision lab. Now, again, this is 2010, to be fair. But I went into this computer vision lab and I spoke to this research scientist. And I said, look, here’s the problem. Let me sketch out the problem for you. Can computers do this? Is that reasonable? And he said, see that over there? And he pointed at an oak table in the middle of this darkened cube—visualization cube kind of space. I said, yeah. He said, right now we’re working on making the computer know that the table is a table. Like, controlling for every—(laughs)—you know, aspect of the—we’re way beyond that today. But it kind of shows the fundamental problem. First of all, what does it mean for the computer to know? Usually it’s pattern matching, or it’s some kind of matching. So the best kinds of computational tools for content moderation are matching against something known. This is why the problem of child sexual exploitation can be effectively treated with a computational tool, because for better or for worse people who traffic in that material tend to recirculate a whole lot of material. So it can be put in a database and can be known. 
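The "matching against something known" approach described here can be illustrated with a minimal, hypothetical sketch: an upload's fingerprint is checked against a database of previously identified material, exact matches are flagged automatically, and everything novel still goes to a human queue. This is not any firm's actual tooling; production systems rely on perceptual hashes that survive resizing and re-encoding rather than the exact cryptographic hash used below, and the sample hash value is simply the SHA-256 of the word "test".

```python
# Minimal sketch of matching uploads against a database of known, already-reviewed material.
# Hypothetical example only: real deployments use perceptual hashing, not exact SHA-256 matching.
import hashlib

# Assumed stand-in for a database of fingerprints of previously identified material.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # SHA-256 of b"test"
}

def fingerprint(content: bytes) -> str:
    """Return a hex digest used as the content's fingerprint."""
    return hashlib.sha256(content).hexdigest()

def route_upload(content: bytes) -> str:
    """Automatically flag exact matches; route everything else to human review."""
    if fingerprint(content) in KNOWN_HASHES:
        return "auto_flag"       # previously identified material
    return "human_review"        # novel or ambiguous content

if __name__ == "__main__":
    print(route_upload(b"test"))        # matches the sample hash above -> auto_flag
    print(route_upload(b"new upload"))  # unknown -> human_review
```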
But for stuff that doesn’t exist anywhere else in the world, or is hard to understand, or has symbols and meanings that you have to be a cultural insider to understand, or you have to just be a human being to understand, there are only a few firms in the world that have the staff, money, know-how, and the need to put workers on it. For many firms, it’s just cheaper to get humans. Now, your second question about China, I confess to being an ignoramus when it comes to China. But I would say that, you know, just off the cuff, a huge difference is that Chinese companies don’t just spring up and do what they want from the start. I mean, they are—(laughs)—I mean, they are fostered by the state and they’re typically quite intertwined with the state at first. There is no Section 230 in China, in other words, right? And there’s probably a lot more labor to put on this in China, and more of a sensibility that it’s going on, I think, than in the United States. But people have creative ways around it, always. POWELL: I guess it would be harder to carry out your research in China too, to document what’s going on there. ROBERTS: I mean, yes. Although, you know, I should tell you, I have a new Ph.D. student coming in a matter of weeks. And he’s coming to work with me because he told me he wants to do comparative studies of the Chinese case of content moderation versus the United States case. And we’re on Skype and I’m, like, dude—shut up, dude. (Laughter.) You know? Like, we’ll talk about it when you get here, man. I’m, like, all nervous, because I don’t know who’s listening. Yeah. So I think that work will come. And I think we need comparative studies, because I am limited by my cultural context, which is the American one. But that is an important one to understand right now, because of the global impact. POWELL: Kenneth (sp). Q: To what extent can you offer specific normative suggestions on how to improve content moderation towards the ideals that you have? ROBERTS: Well, I think—yeah, it depends on what we consider an improvement. I think for the purposes of the book, it has to do with working conditions. So let’s take that as the goal. And to get ideas around that, I’ve often relied on the workers themselves, since they’ve thought so much about what would help them. I think there are a few things—well, I think there are a number of things that we can think about. The first thing that comes out of everyone’s mouth, you won’t be surprised to learn, is: Pay us more. I mean, it’s sort of a flip response, but I think it says a lot, because when I hear workers say that I hear them say: Value our work more. I also think the secretive nature of the work is something that impacts the psychological difficulty of dealing with the work. So— Q: Excuse me. What are they paid? What’s the range? I mean, are we talking— ROBERTS: So I’ll give you a very recent example. In May, Facebook made a big announcement, sort of leading the way in this arena of how to better support content moderation workers. They’ve taken a lot of heat, so that’s part of the reason. And they announced that for all of their American-based content moderators who are in third-party call centers, or wherever they are in the chain of production, the base rate of pay would be $15. And in other metro areas, New York, kind of—San Francisco, high-expense areas, it would be a higher rate of pay. So fifteen’s the floor, and then going up from there. Q: My maid makes twenty (dollars). ROBERTS: So, right. So this raises some important issues. 
Q: That’s like basic minimum wage now. ROBERTS: For—right. We know that also, again, they’re a step out from basic minimum wage that will be enacted in California, first of all. So again, thinking about how this—there’s a strategy of being ahead of regulation a lot of times. Q: But without benefits? ROBERTS: Well, right. And then the other thing that this brings up—there was sort of, like, the deafening silence from other industry players. I thought maybe some of them would follow suit. Q: That was way too high, yeah. ROBERTS: Yeah. But they haven’t. Google went on record and they said that, I think, by 2022 they were going to get everyone there. Also, this was American only, but we know that there is so much of this work that’s outside of the United States. Unless it’s a place where the mandatory minimum wage is higher, which might be in some European cases— Q: Not the Philippines. (Laughs.) ROBERTS: Correct. So it’s usually very low wage. The other thing that companies have started doing—Facebook is one, and others—is bringing on psychological support on site. Workers told me a bit about this in their case. And they said that while on the one hand that was a welcome improvement, because they didn’t really necessarily have access to those services, it was in some cases voluntary. And what ended up happening was that the therapist, the psychological services person would come at the appointed time, take a room in the department, and anyone could come and speak to him or her. So that worker who’s struggling and having a hard time has to get off the queue, tap out of his or her work, stand up, walk through the department, walk past the boss, walk past the coworkers, and go in and sit with the therapist—thereby letting everyone know: I’m struggling with looking at content, which is the precondition of my job. So some of them said: It would be nice if that were mandatory, and if everybody had to visit with a therapist at some—at some prescribed time. That’s another thing. I think benefits are another big thing. And I would also add that very little has been done by way of developing tools that could be supportive or assistive. When I talked to some of the workers, they were using outmoded kind of homebrew solutions. Or, in the book, we talk about a firm that was using, like, Google tools—like, Google Docs, Google Chat, like, sort of kitbashed or kind of—kind of quasi-internally developed but really, like, just commercially available stuff. I think there’s a market for tools that would allow workers to do things like specify a queue that I’m not comfortable being in today. Like, today I just—if something comes in and it’s flagged child abuse, I just can’t see that today. I’m going to tap out. I’ll take the one that’s, yeah, take your pick, right? Rape threats. I’ll take that one. But, you know, when we—when we as users report content, we usually go kind of triage that material. So that could be used proactively on the worker side to allow them to opt out. And it’s not—you know, some days you can handle it, some days you can’t. These were kinds of things that the workers reported to me. You know, usually I’m OK with animal abuse. That day I just couldn’t do it. One guy said, I just can’t take it when there’s someone screaming in the video. So maybe he could look at videos with audio off. So there’s, like, little things that we could do. Making the screen black and white rather than color or making the screen fuzzy might be a tool. Again, based on and maybe tailored to a worker preference. 
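The assistive tools described here, such as a grayscale or blurred view that the moderator chooses whether to reveal, are straightforward to prototype. The snippet below is a minimal, hypothetical sketch using the Pillow imaging library; the file name and blur radius are illustrative and not drawn from any company's software.

```python
# Sketch of an opt-in "reveal" viewer: queue items start desaturated and blurred,
# mirroring the squinting / black-and-white / fuzzy-screen ideas workers described.
# Hypothetical example using the Pillow library; paths and settings are illustrative.
from PIL import Image, ImageFilter

def soften(path: str, blur_radius: int = 12) -> Image.Image:
    """Return a desaturated, blurred preview of the image at `path`."""
    img = Image.open(path)
    return img.convert("L").filter(ImageFilter.GaussianBlur(blur_radius))

def reveal(path: str) -> Image.Image:
    """Return the full-detail image only when the moderator explicitly asks for it."""
    return Image.open(path)

if __name__ == "__main__":
    preview = soften("queue_item_001.jpg")   # hypothetical queue item
    preview.show()                           # moderator sees the softened version first
    # reveal("queue_item_001.jpg").show()    # uncomment to opt in to full detail
```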
Workers told me that they would do things like they would look at the screen by squinting so that they would only get—you know, they would know it was gory and they could tell just by squinting if it was too much blood, according to the rules, or too much kind of violence, and then they wouldn’t have to, like, experience the whole thing. We could develop tools that could do that for them, right? And maybe if they felt like, I need—unfortunately I need a closer look, I’ll press the thing to unveil the entire image. So there are—I think there’s a lot of things we can do that it’s just frankly not been prioritized, right? It’s not the thing that’s going to—it’s not the new function that they’re going to blast around. POWELL: So we have two more questions. I think we can—oh, OK. Q: Sorry. POWELL: No, it’s fine. (Laughter.) So let’s see. We might have to get the last two together. But let’s go to Abby (sp). Q: Sure. So just—it’s a—it’s a bit of an expansion on the question that Kenneth (sp) just asked. But what do you think the changes to the labor workforce would be on the actual product, which is the moderation? So let’s hypothetically say we have a workforce that is appropriately compensated, that is centered, maybe directly employed. How would the product of content moderation change, in your view? What would look different to the user? What would look different to the company? ROBERTS: Well, again, I think there’s sort of a fundamental missed opportunity in the fact that the work was rendered secret, whereas again there were all sorts of experiences we have in our daily life where we look for expertise and curation. So what if we thought of people who did content moderation not just as cleaners or janitors, or people who sweep up a mess—which, of course, are important activities but are typically undervalued in our daily life. But what if we thought about them as people who were curators, or tastemakers, you know? I don’t know, sommelier of the internet. I’m just making stuff up, so please don’t—(laughter)—don’t say, that woman said sommelier of the internet. But, you know, people who can help be a guide rather than an invisible agent. I think that that has really hamstrung the circumstances for the workers in a lot of ways. I think thinking about—I didn’t—wasn’t able to get into this in the talk, but there’s a whole host of metrics—productivity metrics that are laid on these workers in terms of how much stuff they’re supposed to process in a given shift, for example. When I was in the Philippines, the workers described to me that they used to have something like thirty seconds per review, per item that they were looking at. And it had been cut to more, like, ten to twelve seconds. Now, another way of thinking about that is their productivity had been more than doubled, the expectation. Or that their wage had been cut in half vis-à-vis productivity. So I don’t think anyone benefits from a ten-second look. I don’t think the workers benefit. I don’t think users benefit. I don’t think the ecosystem benefits. Ultimately, I mean, from just, like, just a cost-benefit analysis on a balance sheet, I guess that comes out looking good for the firms. But I don’t think in a perfect world that we get any kind of quality any more than we think of a McDonald’s hamburger and, you know, a—I don’t know, a farm-to-table meal as the same thing. They’re fundamentally different. Q: What you’re saying is things get through that shouldn’t and things that should go through don’t? 
The famous image of the girl in Vietnam. You know, you all know that. ROBERTS: That’s right, The Terror of War. Q: Right. ROBERTS: Now— Q: You know, just don’t do it very well. ROBERTS: Right. And you have ten seconds, and you’re a twenty-two-year-old college graduate in Manila, you’re educated—you asked a bit about demographics. All of the workers I talked to were college grads. That has shifted somewhat now, but the workers in Silicon Valley in particular were grads of places like Berkeley and USC. But they had, you know, such as yours truly, made the unfortunate decision to major in econ, or history, these other—I’m kidding, right? (Laughter.) Like, I mean, I think these are very important disciplines. But they—you know, to be employed in STEM or to be employed in the valley, they were, like, kind of not prized disciplines. And yet, they actually had the acumen and the knowledge to make better decisions than some of their peers would. POWELL: So let’s collect Lawrence (sp) and Donna (sp) together, and then let you make concluding remarks. ROBERTS: OK. Q: So you’ve been discussing the irregularities, inconsistencies in the workforce in terms of particular categories of content, which need some measure of moderation—sexualized images, violence, and hate speech. But in all these, I think there’s some margin of error. In the case of sexualized imagery, A, it’s—they’ve been able to quantify it, to some extent. I’ve had pictures I took at major museums that were censored because they thought the content was oversexualized. I thought it was silly, but so what that they censored it. Which way they err doesn’t bother me very much, unless it’s child pornography, and you say that they have pretty good methods for that. In the case of violence, again, I hope they err on the side of eliminating violence. It’s not a First Amendment concern or something like that. In the case of ethnic—things that stir up ethnic discord, such as what happened in Myanmar, again, I hope they err on the side of eliminating that kind of hate speech. But what really concerns me is inaccurate—is false content, often spread by governments, the Russian and other manipulation of the U.S. elections, the Chinese in the Taiwan elections, others in Europe, where it’s a question of facts. And here you have huge competing values. You have—here, you’re talking about real political issues, and governance issues, and it really should be a First Amendment right to speak on these issues. And yet, this false information is doing—particularly deliberately spread false information—is doing enormous damage to democracies around the world. So how do you begin to train people to moderate that, which is far more critical? If anything there’s, to me, less room for error. Less room for error in censoring what should be allowed. And it’s quite a tragedy that so much of this is being propagated and that we’re unable to control it. So how do you begin to deal with that? How do you train a workforce to deal with that? POWELL: So we’re going to move to this question. We’re going to collect Donna’s (sp) question as well, and then you can answer them together. ROBERTS: All right. Q: Sarah, I don’t know how you remember that. I’ll make mine, I guess, kind of simple. You are familiar with the Verge—the Verge articles that have come out? ROBERTS: Yes. Q: I guess one question I have for you is I’m trying to get my head around listening to this and saying: What is it that’s really concerning you? 
Because part of this conversation has been about the worker, about the human piece of it. You have asserted—and I’ll say it’s an assertion—that technology can’t clear—can’t significantly reduce the gap, it seems. And then we’re talking about the social media companies, but we know that this is an internet issue. It is not just a social—it is not just Google, YouTube, and a Facebook issue. So it’s like, when you sit there and look at that—so I was trying to figure out too, OK, is your angle, you know, this is—we’ve got to go after—is this about Facebook and Google? Because if you think about it, right, they’re cleaning—they’re required, in essence, because they are commercially operating a channel, to keep that as clean as they can. And we do regulate that a little bit, right? But the fact of the matter is, our challenge in the content era is this content can show up anywhere on the internet, on any—you know, any website. And that’s the challenge. I’m sure if you followed, you know, child pornography, right, they’re not just looking on social media channels. They’re going to find it anywhere, including the Dark Web. You know, anywhere, parse video. So I guess it’s, like, who are we as a society looking to, to address this issue? And I guess, is it the worker piece that you’re—are you—and I understand there’s a big issue with humans, you know, involved in the processes. POWELL: You have approximately a minute and a half to answer both questions. (Laughter.) ROBERTS: So the answer to your question is, yes, it’s the worker welfare piece that first compelled me, yeah. And I think I wanted to address my remarks for an audience that I thought would have maybe more direct relationship to policy issues and regulation. But that’s—the book is concerned with the worker welfare, and that’s what my concern has always been, and that was my point of entry. I think what I found is that you can’t really carve that out somehow from the other issues. So for me, that was a foot in the door to now I have to understand the ecosystem. So what I tried to do was also map that out to a certain extent. I’m not certain that—(laughs)—I mean, I’m not sure I would necessarily agree with you, per se, in the way that you framed up the issue of it’s not an XYZ issue, it’s an internet issue, in the sense that I would say this: I find it difficult to, in the American context, locate many internet platforms or services that are not commercial. And that’s part of my—you know, that’s part of the claim that I make of why there is an ecosystem of this work going on. It’s because there was great profit to be made in setting up channels that encouraged people to upload, and to do it all the time, and to actually, in some cases indirectly but in other cases directly, monetize that activity. And that is fundamentally different from what the internet used to look like, which was not—I’m not Pollyanna about it. It wasn’t the halcyon days. In fact, it was a real mess for a lot of the—a lot of the interaction. But it was a different kind of mess and a different set of problems. So that’s sort of the conceit here. But it’s not some—you know, it’s not—it’s not a simple case of exploitation writ large without any other complexities. And it’s not a simple case of Facebook is trash, and sucks, and should close down either. Which has put me in the weird position of, like, working with these people, right, to problem solve. The other question was about basically veracity of information and gaming of the platforms. 
The one soundbite I’ll give you with that is I think that the issue that you raise is fundamental to the protection of democracy around the world. And I would also say that it’s much harder to make determinations about those issues than it is to know if too much of a boob is showing. And so what the companies tend to do—and I call them on this all the time. I say, you are—your level of granularity on things that maybe don’t matter is in the absence of your ability—or your willingness, let’s say, to articulate your own politics. Because guess what? Other countries where these platforms are engaged don’t have the same commitments to democracy, or to freedom of expression, or whatever it is. And they want to be in the Turkish marketplace, and they want to be in China. And that’s put them on the ropes, and put others in the position of making demands on the firms of, like, well, what are your commitments? Well, they’re very mushy middle. And so then it’s easier to look for and take care of, in a way, some of this content that is obviously bad, versus sitting and spending time, and money, and energy figuring out is this truthful or false? Is this from a vetted source, or is this propaganda? And I think, just to close out, your point that state actors are the ones who should be scaring everybody the most is a great point, for sure, because those are the folks, like you said, who are calling up Facebook and saying: Take down blah. POWELL: Yeah. We should end it there, but please join me in thanking Sarah Roberts. ROBERTS: Thanks. (Applause.) (END) This is an uncorrected transcript.
  • Democratic Republic of Congo
    Disinformation and Disease: Social Media and the Ebola Epidemic in the Democratic Republic of the Congo
    The proliferation of online disinformation amid the DRC’s Ebola outbreak is a serious threat to global health. Efforts to curb false information and conspiracy theories about the disease on social media have been no more successful in health than in other contexts.
  • Development
    Last Month, Over Half-a-Billion Africans Accessed the Internet
    Last month, more people in Africa accessed the internet than did in Latin America, North America, or the Middle East. There were 525 million internet users in Africa, 447 million in Latin America and the Caribbean, 328 million in North America, and 174 million in the Middle East. About 40 percent of all Africans were online last month, but usage varies from country to country. In Kenya it was 83 percent; in South Africa, 56 percent; and in Nigeria, 60 percent. However, Nigeria is so much bigger in population than any other African country that its citizens comprised about 20 percent of all African internet users. Though Africa is behind only Asia and Europe in the absolute number of internet users, it lags behind every other region in the proportion of internet users. In June, internet users comprised 52 percent of Asians, 87 percent of Europeans, and 89 percent of North Americans. The good news, simply put, is that 40 percent of all Africans have access of some sort to the internet. On a continent in which, by and large, newspapers are expensive, telephone landlines are underdeveloped, authoritarian governments seek to manipulate the media, and most people have traditionally received news from the radio, often broadcasting in local languages, the internet provides access to a new and much bigger world. The downside, of course, is that the internet is unfiltered, with both wisdom and garbage. There are also fewer ways to verify internet stories than in other parts of the world where other forms of media are more developed. Internet penetration in Africa is likely to grow at a faster rate than elsewhere in the world, and the fact that there are already more than half a billion internet users on the continent raises the possibility of profound social, political, and economic changes. Internet usage may be a sign that the African giant is awakening.
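    The regional shares quoted above can be sanity-checked with simple arithmetic. In the sketch below, the user counts come from the post, while the population figures are rough 2019 estimates added only for illustration.

```python
# Quick arithmetic check of the shares quoted in the post.
# User counts are from the text; population figures are rough 2019 estimates
# added here as assumptions for illustration.
users_millions = {
    "Africa": 525,
    "Latin America & Caribbean": 447,
    "North America": 328,
    "Middle East": 174,
}

population_millions = {   # assumed, approximate 2019 populations
    "Africa": 1320,
    "Nigeria": 200,
}

africa_share_online = users_millions["Africa"] / population_millions["Africa"]
print(f"Share of Africans online: {africa_share_online:.0%}")  # roughly 40%

# Nigeria: about 60% of an assumed 200 million people online, as a share of all African users.
nigeria_users = 0.60 * population_millions["Nigeria"]
print(f"Nigeria's share of African users: {nigeria_users / users_millions['Africa']:.0%}")  # ~23%, close to the 20% cited
```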