Centennial Speaker Series Session 8: Will Technology Save Us or Threaten Us?
Fei-Fei Li discusses artificial intelligence and other emerging technologies that are certain to have enormous implications for this country and the world.
This meeting is the eighth session in CFR’s centennial speaker series, The 21st Century World: Big Challenges & Big Ideas, which features some of today’s leading thinkers and tackles issues that will define this century.
This event series was also presented as a special podcast series, “Nine Questions for the World,” in celebration of CFR’s centennial. See the corresponding episode here.
HAASS: Well, thank you and welcome, everyone, to today’s Council on Foreign Relations virtual meeting with Fei-Fei Li. I’m Richard Haass, president of the Council and I will be presiding over the conversation today, which is another way of saying I will be learning from the conversation today. This meeting is the eighth in our series The 21st Century World: Big Challenges and Big Ideas, which commemorates our 100th year, which we are now five-sixths of the way through. And the whole idea has been to feature some of the leading thinkers and tackle some of the big questions that will define the 80 percent of the 21st century that remains.
Well, today we have exactly what we intended to do when we put this program together, which is have one of the leading thinkers in the country tackling some of the biggest questions about the century. Fei-Fei Li is the Sequoia Capital Professor of Computer Science at Stanford University. And she’s also the Denning Family Co-Director of the Stanford Institute for Human-Centered Artificial Intelligence, or HAI for short. Dr. Li, I want to thank you for all you do. And I want to thank you in particular for joining us for the next hour.
LI: Thank you Dr. Haass. I’m so excited and really honored to join this. And please call me Fei-Fei. (Laughs.)
HAASS: Only if you’ll call me Richard, then. We’ll have our first deal.
LI: Yes.
HAASS: Great. So I want to begin with one line in your bio. And it says the following: Li’s current research interests include cognitively inspired artificial intelligence, machine learning, deep learning, computer vision, and AI plus—I suppose that means—health care, especially ambient intelligent systems for health care delivery. And the only thing it doesn’t mention probably is quantum computing or some type of robotics. But we’re going to throw those in too.
So what I’d love to do, before I get into the policy questions, Fei-Fei, is—my background in college was Middle Eastern studies. There was a science requirement at Oberlin, and I took geology. And I learned a little bit about rocks and a little bit about tectonic plates. And actually, the tectonic plates and continental drift were useful images for social science. But I don’t have a whole lot of science background. And I’ll just hazard a guess that I’m not unique in this. We probably have a fairly wide range of knowledge and skill sets here.
So just to even out the playing field a little bit, let’s just go through that. Cognitively inspired artificial intelligence. What is that?
LI: I think, Richard, first of all these are great questions. Let’s just define artificial intelligence first before we describe cognitively inspired.
HAASS: Good. I was going to go there, so thank you.
LI: Yeah. (Laughs.) So AI is a big word these days, but it is really rooted in the science of computing, combined with mathematics. It is a computing science that takes data of some form—like you said in geology, you know, data of Earth images, or data of texts, data of X-rays—and then intelligently computes on it so that we can discern patterns and help make decisions. And if you put that capability into an object like a car or a robot, the decisions can even involve actions, like do I turn right or do I stop? That is the result of intelligent computing.
So in one sentence, if I can make it a long sentence—(laughs)—it’s a computing science that takes in data, learns to discern patterns and reason with the data, so that it can help make inferences, make decisions, and even make actions happen.
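As a concrete illustration of that one-sentence definition—a minimal sketch only, not any system Dr. Li describes, using scikit-learn’s built-in handwritten-digits dataset as a stand-in for any labeled data—the pattern is: take in data, learn to discern patterns, then make inferences on new examples.

```python
# Minimal sketch: "take in data, learn to discern patterns, make inferences."
# Hypothetical illustration only; the digits dataset stands in for any labeled data
# (images, text, X-rays).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)                      # data of some form, with labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=2000)                 # the "intelligent computing" step
model.fit(X_train, y_train)                               # learn to discern patterns

print("accuracy on unseen data:", model.score(X_test, y_test))
print("inference for one new example:", model.predict(X_test[:1]))
```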
HAASS: OK. Thank you. My next cocktail party I will be much more successful than the previous ones. Machine learning. Because often I hear these used somewhat interchangeably, but I assume they’re not synonymous. So how do I understand the phrase “machine learning”?
LI: You’re not wrong. Actually, to help move this conversation forward: there is a bunch of words—machine learning, deep learning, including the cognitively inspired, which we haven’t touched on—and all of these are just ways to emphasize different kinds of tools. I actually started as a physics major. And think about the world of physics, right, from Newton to Maxwell to Einstein to Feynman—these physicists used different tools, from calculus to partial differential equations to statistics and quantum mechanics. So in a way AI is similar. It’s a much younger field, but our tools did change.
We started as a field using tools that are logic based or rule based, and then we started to use tools that are what we call machine learning based, which use probability and statistics. And then recently, when the public became aware of AI, it’s mostly because the latest wave of tools became so effective; they’re called deep learning tools. For those of you with a slightly more technical background in the audience, deep learning is another word for a neural network. It’s, yet again, just a different set of tools. So whether they’re rule based, machine learning based, deep learning based, or other tools, they all help the field of AI pursue the questions and tasks that we aspire to do.
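For readers curious what the “deep learning” tool set looks like in code—a rough, hypothetical sketch assuming PyTorch, not any specific system mentioned here—a neural network is a stack of learned layers trained by backpropagation on example data.

```python
# Rough sketch of a tiny neural network (the "deep learning" tool), assuming PyTorch.
# Illustrative only: learns to map 10 input features to a yes/no decision.
import torch
import torch.nn as nn

model = nn.Sequential(               # a stack of layers = a (small) "deep" network
    nn.Linear(10, 32), nn.ReLU(),
    nn.Linear(32, 2),                # two output classes
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

X = torch.randn(100, 10)             # toy data standing in for real measurements
y = (X[:, 0] > 0).long()             # toy labels with a learnable pattern

for _ in range(200):                 # training loop: gradients via backpropagation
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()                  # backpropagation computes the gradients
    optimizer.step()

print("final training loss:", loss.item())
```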
HAASS: OK. So let’s move from the definitional to the consequential. So there’s deep learning, and machine learning, and AI, and at some point we’ll get to quantum computers and the rest. As a first effort, what’s the answer to the so-what question? To what extent are these incremental improvements over what we could do before? To what extent are they fundamental—orders of magnitude, if you will? And what will they conceivably allow us to do? What can we do now? Where might we be heading with this set of tools?
LI: Yeah. That really, Richard, is a great question. The so-what question—the answer to that is a big deal. So, you know, to be a little bit more epic, looking at human civilization: we as a species, ever since our activities were documented in, you know, cave drawings, have never stopped innovating. From the discovery of fire, to using, you know, a sharpened stone to cut animal bones, all the way to today—innovating tools to improve our lives is in the DNA of humans as a species.
But once in a while—an economist friend said to me that once every 200 years or so—we invent some tools that take a huge leap forward in our abilities. I use the example of fire. Obviously steam, steam engines. Obviously, electricity. Obviously, the invention of cars. And then PCs, biotech. You can see there are points in human innovation where the tools we created fundamentally change the way economics and society work. They change productivity. They change people’s lives.
So the so-what of AI, in my opinion and that of many of our colleagues, is that it’s as big as that—that level of fundamental change in human society. Why? Because AI as a tool creates very, very powerful computational machines that can understand the world’s data—whether it’s medical, or transportation, or financial, whatever it is—in such a powerful way that even the human brain sometimes cannot compete. Once you have that capability to discern patterns, to make inferences, and to help make decisions, you change the way people work and you change the way people live. We can have a lot of examples, if you want me to give some, but that’s why the so-what is big. It is a transformative force for our economy and for the way people work and live.
HAASS: Just so I understand, there are those things where we do them, and something comes along, an invention, and we can do those things differently or better than we could do them—more efficiently, faster. And there are those inventions that come along and we can actually do different things. It’s qualitatively different. Where does AI fit in that? Is it both? Or how do I understand it?
LI: Yeah, great question. I think it’s both. And let me use health care as an example, starting with the things we do now but can do better with AI. Take a radiologist, right? A radiologist on call today takes, say, urgent care or ER data and tries to help the doctors in the ER triage patients. If a patient has, you know, a life-threatening condition versus a patient who has early signs of pneumonia, which is not as life threatening, the radiologist has to read that X-ray and make a decision on how to triage and rank these patients in terms of priority. That’s already happening.
But a radiologist, even the most seasoned, takes, let’s say, seconds or minutes to do this. On top of that, humans will make mistakes and humans will have moments of fatigue. Now imagine that process is helped by machines that have, for the practical purposes we’re talking about, infinite computing capacity—that don’t fatigue, that don’t need dinner or lunch, and that can help the radiologist triage or make some inferences faster. Suddenly, the existing work of triaging patients based on radiology reading is much improved. Mistakes are reduced. Efficiency is increased. This is one way of helping with existing work.
Let me take you to another extreme, which is work that humans cannot do today but that we can imagine. Here’s one example. In our ICUs, patients are fighting for their lives. Our nurses and doctors are working extremely hard. American nurses are fatigued. Our ICU nurses are extremely fatigued. Yet we still require 24/7 continuous monitoring of the patient, because their condition can just, you know, go sideways so fast. Even one possible event—a patient becoming delirious because of drugs and falling out of the hospital bed—can be a fatal injury to our patient.
So now what do we do? Well, frankly, not much, because our nurses are overworked, and these things just happen. Imagine there is an extra pair of—I wouldn’t say eyes, but sensors—that continuously helps our nurses monitor the physical mobility of our patients. And as soon as there’s a sign of a dangerous move, or a predicted early sign of a dangerous move, the nurses are alerted. This is something that doesn’t happen in our ICUs today. This is part of my research. I talk to nurses. They’re worried. They’re constantly worried about this. But they don’t have a way of really staying on top of it. And if that happens, that is a new technology that can help our health care workers take care of our patients better. So that’s something we haven’t been able to do today, but we can imagine it.
HAASS: That’s an obvious example where this emerging technology is a positive—it can save lives, whether by helping us read MRIs or, in this case, sensing some disturbance in a patient’s situation that could be life threatening.
LI: Yes.
HAASS: What are the potential applications of this that are, shall we say, going in the other direction—that keep you up at night because, whether for individuals or at a social or even international level, you worry they could have really negative or destructive consequences?
LI: Yeah. Richard, actually, a lot of it. Because if I weren’t worried, I wouldn’t have co-founded this Human-Centered AI Institute at Stanford. (Laughs.) So honestly, as a scientist, ever since my days as a student of physics, I have learned that technology is a double-edged sword. It is invented by humans and used by humans, and depending on the value system and all that, it can be used badly, right? So for example—even in medicine. Let’s say a piece of AI technology helps our dermatologists predict skin—let’s say, a skin cancer condition. That sounds so benevolent, and that’s what we wish for.
But if we don’t train this algorithm in a fair way, we could suddenly be in biased territory, where we think this technology is helping everybody, except it’s trained with biased data and people with certain skin tones were not well represented in that data. And then we use this technology downstream. Suddenly, we create a very unfair and actually life-threatening application for some members of society. So bias is something that keeps me awake, whether it’s intentional or unintentional. Of course, privacy. Again—even in our example of patient sensing, what if that is hacked? What if that capability comes into the hands of adversarial players who use the information in adversarial ways that violate privacy and other protections? Those can be individual adversaries as well as, you know, organized adversaries. So that’s another area of concern.
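One simple practice implied by this concern—sketched hypothetically below in Python, with made-up group labels standing in for skin-tone categories—is to evaluate a trained model’s accuracy separately for each subgroup rather than only in aggregate, so that poor performance on an under-represented group is visible before deployment.

```python
# Hypothetical sketch: evaluate a classifier per subgroup, not only in aggregate.
# The predictions and the group labels below are toy stand-ins, not real data.
import numpy as np

def per_group_accuracy(y_true, y_pred, groups):
    """Return accuracy for each subgroup so under-represented groups stay visible."""
    results = {}
    for g in np.unique(groups):
        mask = groups == g
        results[g] = float(np.mean(y_true[mask] == y_pred[mask]))
    return results

# Toy stand-in data: predictions for 8 patients in two hypothetical groups.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 0, 1])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(per_group_accuracy(y_true, y_pred, groups))   # e.g. {'A': 0.75, 'B': 0.5}
```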
Labor market change and macroeconomics are also something we need to double down on studying, because history has told us that whenever a transformative technology is introduced to our society, it really upsets the traditional composition of labor. And in this process, people might lose jobs. Jobs might shift. And how do we deal with that macroscopic-level issue as well as individuals’ livelihoods? And of course, there is the whole, you know, military aspect of a technology. And again, history has seen that. As students of physics, we learned about that, you know, in the early days of our study. And AI is another example. So there are many ways this technology can be used, whether intended or not, in adversarial ways.
HAASS: I agree, that I think almost all technologies have the potential to be used in benign or malign ways, domestically or internationally. Given the nature of the technology, the speed at which it is changing, the number of places where research is going on, does government stand a chance? Or will the technology inevitably outpace any attempts to regulate either areas of research or areas of application?
LI: Yeah, great question. At Stanford HAI we actually discuss this a lot. One thing I want to say is that anything I say is a result of learning from so many multidisciplinary experts. So in the past few years, this is a topic we have talked about a lot. I think it’s actually both. I think it’s not about government standing a chance or not. Government is part of the ecosystem. And it plays an important role in our society. There are two aspects to this. You talk about regulation. As we have seen—think about, for example, cars with seatbelts, or how clinical studies are regulated through the FDA—government has always participated in the proper regulation of technology, putting guardrails in place to protect people. And I think AI is a technology where government needs to participate in that kind of regulatory aspect.
In the meantime, government also plays an important role in invigorating the ecosystem of innovation. And this is especially true for me as a proud American scientist. We have seen over the past decades how the U.S. government has played a positive role in invigorating our country’s innovation. And that’s why we’re the most innovative country in the world, whether it’s biosciences or computer sciences or physical sciences. And I think in the age of AI, we—we being those of us in the public sector and in academia—are eager to see that happen. So much resource is siloed in a small number of big tech companies. And our talent is flowing in a disproportionate way into these companies. It’s important that government participate in invigorating this ecosystem. And I actually serve on a task force by the White House OSTP and NSF establishing a national AI research resource. These are the efforts I strongly believe government should participate in.
HAASS: Can I just press you on that a little bit? If Goldilocks were joining our conversation, what would be too much or too little government participation in the ecosystem? How does one right-size the government role? Or, to put it another way, what’s the optimal balance between the universities, private companies, and government? Because we obviously have a much more what I would call bottom-up ecosystem than other countries—China, others—which have more of a top-down ecosystem. What do you see as the right mix in this ecosystem?
LI: Wow, Richard, I almost wish I knew the answer. (Laughs.) But I hear you and I know why you ask this question. It’s an important one. You know, I don’t think I can tell you the exact right mix, but I think there is a methodology that’s really important and special to America’s success, which is the multistakeholder methodology: we do want to bring civil society, higher education and the public sector, private industry, and government to the table and help invigorate this together. I think that is what’s unique about America. In a way, having less regulatory or top-down force than many other parts of the world is part of the fundamental reason we are so much more innovative, because we have a lot more freedom as a society to innovate. But in the meantime, we have seen that government has played important roles in innovation.
I’ll tell you, all the important early studies of AI technology—things you might not have heard of, such as backpropagation and neural network methodology, or some of the data-driven work—came out of academia, largely supported by government grants. So even in AI, in the early days, the foundational days, we needed government support. And we’re still just at the beginning. We continue to need the multistakeholder approach.
HAASS: Can I ask you to put on—take out your crystal ball for a minute, and let’s talk a little bit about trends and futures. There’s a lot of us who think the greatest challenge to the United States today is democracy. It’s our own political system, its ability to function, and so forth. When you look at these emerging technologies, do you see them as contributing to, if you will, the solution, or contributing to the problem?
LI: Richard, I see them as both. And I see them as an important influencing factor in both directions. The right use of this technology can strengthen our democracy, can strengthen the way government and policymaking work. We’ve got colleagues at Stanford HAI who work with local and state governments to make policymaking much more efficient, to understand data so that government can make much better decisions. We’ve got a lot of colleagues who work on different aspects of policy recommendation, whether it is national security or economics. And this tool set, this AI as a tool, really is very, very useful.
In the meantime, if we don’t use it in the right way, or we don’t understand the adversarial usage of it, it might exacerbate the problem, right? Look at today’s social media, the recommendation systems, and deep fake technology, which is deeply disturbing and might undermine the democratic process. So in my opinion, it’s both.
HAASS: I don’t know how much you have focused on military applications, but when you look at the future of conflict and the future of warfare, I guess the question is: something like AI, which you’ve thought about as much as anybody, to what extent do you see it as revolutionizing warfare? Can we already tell, in ways that would potentially have real consequences either for the individual soldier or for platforms, such as ships, airplanes, tanks, what have you? What is the generic or directional impact, if I had to describe it that way, of AI, as best you can see it?
LI: So I think the military and intelligence use of AI—intelligence here meaning the national intelligence community—is inevitable, you know? For example, you mentioned robotics earlier, right? And in defense scenarios, whether we’re talking about things running on the ground, or in the water, or flying in the air, all of that can technically, if you think about it, be related to robotics and robotic capability. And AI is the technology that is basically the brain of the robot. So whether you’re talking about a civilian self-driving car or a militarized vehicle or airplane or ship, this technology will be deeply, profoundly impactful.
My colleague at the Hoover Institution at Stanford, Dr. Amy Zegart, is also working on how AI and algorithms affect intelligence. I cannot speak for her. (Laughs.) I don’t have that expertise. But I know that some of her research goes very deep in analyzing the impact on national security and the intelligence community as well.
HAASS: Directionally, though, is the implication, would you think—given the level of detail, the quantity of information that AI can contend with, the speed at which it can do so, the reduction of errors—that it inevitably moves toward, what, a reduction in the human role in a lot of these enterprises, whether your enterprise is warfare or something else? That, all things being equal, the labor model, if you will, becomes less human-centric?
LI: That is a great question, Richard. That question applies to both military and civilian use of this technology. What is the human role here? In fact, at HAI, when we established this institute, we designed three pillars that are fundamental to human-centered AI. And one of them is what we call human enhancement. It is really a way to think about responsible use of AI that is a true reflection of the important values of human centeredness and human rights. So when you say human role—technology might be changing the physical labor role, but it should not change human values, human rights, and human centeredness.
You know, a friend of mine just had a surgery where the surgeon didn’t even touch her. The entire surgery was done by a surgical robot. But the surgeon was in the room. The whole process was human centered and human serving, right? As an AI technologist, I was actually still a little terrified hearing this, because I care about my friend. I said: Are you OK allowing this robot to work on you? And she gave me the most compelling answer. She said: Well, the best surgeon’s hand has a resolution of, say, five millimeters. But the robot can have a resolution of one millimeter, or even less. And that part of the physical labor the robot can do better. But the entire design of the system and the process is human designed and human centered. So I think the human role, the human values, and the human rights in the application of AI systems must be there, must be preserved and enhanced.
HAASS: Somewhere in there there’s a joke about robots having better bedside manner than some surgeons, but I won’t go there. (Laughter.) Maybe the robots will make house calls.
So let me—one other technology question, then I want to end with some education questions, which is about quantum computing. I read a lot about that also. And we—but we haven’t really talked about it. Again, how much of a game-changer is that, beyond, if you will, quote/unquote “traditional” computing? How do we understand that?
LI: Yeah, so this is getting outside my realm of expertise, despite my Princeton undergraduate degree in physics. (Laughs.) But I’m very excited. From a technology point of view, quantum computing, when it works, can fundamentally increase the computing capability of our machines by orders of magnitude. And this is once again the same trajectory of human innovation: we innovate tools that outpace ourselves. When wheels were invented, humans were outrun. Airplanes outfly humans. Computers outcalculate people. AI outcomputes humans. And quantum will add to that. So that is an inevitable trend. It will be a game-changer because of the orders-of-magnitude change in compute.
Imagine climate, right? Everybody’s worried about climate. Climate computing is extremely, prohibitively large, because we’re talking about atmospheric, water, and Earth data that comes in petabytes. Even today’s biggest computers crunch these numbers, you know, with great difficulty, not to mention the energy that consumes. Quantum computing can be a game-changer when that amount of computing can happen much more efficiently.
HAASS: Interesting. Fei-Fei, I want to end with two educational questions. The Council on Foreign Relations in recent years, really over the last decade, has become much more of an educational institution. We don’t have students in the sense that Stanford does, but we are in the business of trying to educate and being a resource. So one is—I want to ask the same question from two different directions. One is, you know, Stanford’s churning out all these wonderful young graduates in computer sciences, and engineering, and the like. What is it, though, they also need to know, do you believe, that goes beyond engineering and computer sciences?
What do you believe is the—what does every Stanford graduate also need? Because they’re dealing with these technologies. And we’ve been speaking for over half an hour. And these technologies obviously have all sorts of impacts on our societies, on our lives. So one would ideally want graduates to be somewhat informed—to think through, almost philosophically, or ethically, or morally, the potential uses, or economically, what the implications would be for labor and so forth. What is your sense, then, of what someone who’s concentrating or majoring, as we used to say, in computer science or what have you—what else do they need?
LI: Yeah, Richard, this question really—it just touches my heart. It really is the entire foundation of what I believe in, especially after my sabbatical at a major technology company in Silicon Valley, at Google. I met so many young engineers who had just come into the workforce and who came to me and, in a way, cried for help—some literally—because they were seeing the deficiency in their education. They are struggling with the seismic social impact of the technology they create, and they don’t even know how to contextualize what they’re doing in a social, moral, philosophical, you know, ethical framework.
So whether it’s CFR educating leaders, or Stanford or a community college educating the workforce and tomorrow’s generation, I really, really believe in what we call a new bilingual education. And this is not between Spanish and English. (Laughs.) This is really between humanity—humanism—and technology. Because we cannot pretend this technology is just a bunch of numbers and equations. It impacts our society. Even when you’re writing that line of code for X-ray reading for radiologists, it’s important as a technologist that you understand how the multistakeholder methodology works, that you understand the implications of your technology for radiologists and for patients, and that you understand the bias—the human bias—that comes into your data and its downstream implications. And that takes a bilingual, human-centered education. So I agree with you.
At Stanford we are already starting what we call embedded ethics in the CS program, where not only do we have CS courses on ethics and technology, but even a hardcore technology course—like the deep learning course I teach for—(inaudible)—will have an ethics unit. And our research lab engages with legal scholars, ethicists, and bioethicists—because we do a lot of health care—who guide the design of our projects. So it’s already happening but, in my opinion, not fast enough. We should do more.
HAASS: I think it’s great that it’s happening. I would also hope there would be some embedded study of international relations and some embedded study of citizenship and American democracy.
LI: Yes.
HAASS: Because engineers are also going to be full citizens in this society; they’re going to be participants in a 21st-century world. What about the other direction of bilingual—people like me? I studied social science, not science science—international relations, in my case. You know, one of the best reasons I know to have children is that they can help with the gadgets around the house. Very quickly I’m out of my depth. Given what we’re talking about—and I don’t need to write computer code—what do I need to understand? What is, if you will, the basic level of literacy in science that non-scientists need to have, given the importance of the issues we’re discussing?
LI: Richard, I cannot agree more. I believe the science of computing is the new foundational knowledge of the 21st century and beyond. Just as there is a basic requirement of math and natural science for any undergraduate degree or high school diploma in this country, I think some basic understanding of computing should be required. I remember when I was an undergrad at Princeton, we actually had a course called Physics for Poets. (Laughs.) And I think we—
HAASS: I took rocks—I had rocks for jocks, is what geology was in my day. But physics for poets is also offered.
LI: Yeah. (Laughs.) So we need to have a computer science for humanists. And it’s happening, but, you know—listen to congressional hearings with Silicon Valley business leaders, and our policymakers ask questions that reflect a lack of basic understanding of how internet businesses work, or how AI-based products and services work. And I think that’s more and more of a problem. We need policymakers, artists, teachers, many parts of our civil society to understand the fundamental science of computing, because that’s just going to matter more and more.
HAASS: I like your idea so much of bilingualism between science and the humanities, in both directions. And I’m going to do you the ultimate honor. I’m going to steal it.
LI: Awesome. (Laughter.)
HAASS: But thank you. And thank you for this conversation. What I’d like to do now is invite our members to join us. A reminder that this is a virtual meeting, but also it’s on the record. And Kayla, I think you’re going to instruct people how to ask questions of Dr. Li. So over to you.
OPERATOR: Thank you.
(Gives queuing instructions.)
We’ll take the first question from Hamid Biglari.
Q: Can you hear me?
HAASS: Yes, sir.
Q: OK. This is Hamid Biglari, Point72 Asset Management.
Fei-Fei, speaking also as someone who was trained as a physicist, sometimes it’s helpful to drastically simplify concepts to get at the essence of their implications. In that spirit, I’m sure you’ve heard the argument that AI is the ultimate technology for authoritarian governments, because it gives them enormous control over their people, while blockchain-based technologies are their greatest foil, because such technologies distribute control and remove the need for central authority. Where do you come out on this argument? And what do you see as its implications?
LI: Hamid, thank you for the question. I think in your question you have assumed that we—or I—have a deep understanding of blockchain, and also assumed this technology is going to work. From what I have read, which is not a lot—so I defer to more expert members of the audience—it is very exciting to look at blockchain decentralizing technology and its implications for data and for AI. So I would definitely continue to—I don’t know if I should say place hope in—or invest in, I guess, as a society, this kind of blockchain decentralizing technology. I also think technology alone cannot solve our society’s problems. Policymaking and governance are a critical part of it. That includes international cooperation and established norms. And I think a lot of effort is needed, whether it is AI or blockchain—especially among the parts of the world that we believe share some of our values—to come together and work in a cooperative way and set the norms and governance models for any technology, AI for sure.
HAASS: We’re actually doing a lot of work on that here—work on global governance. And it’s hard to get past the fact that there will always be significant outliers or exceptions. And there will be things going on—research and applications—that are potentially worrisome. I wanted to follow up, though, on Hamid’s question in one particular way. All things being equal, Fei-Fei—there were two visions of the future when it came to technology. One was Orwell’s, of technology being centralizing. And then a different vision is one that’s more decentralized, distributed, proliferated. All things being equal, does what we’re talking about here—AI and other related technologies—essentially centralize or decentralize more? What’s the general trajectory?
LI: (Laughs.) Great question. So I think AI as a technology is more and more powerful when you have more and more data and compute capability. And that is a force toward aggregation. And this is a legitimate concern, whether it’s about huge companies or top-down players like states and governments. So from that point of view, this technology right now has shown that trend. In the meantime, this technology is changing. For example, in my own health care technology, everything we do is focused on what we call edge computing. And edge computing is actually going in the opposite direction. Instead of doing learning and computing on centralized servers in centralized, you know, locations, we do it on device, and we distribute and decentralize it. Of course, it’s not yet mature, but that trend is also happening, let alone other trends like the one Hamid just mentioned.
So in a way, I think that’s what’s exciting, especially in an innovative democratic society: there are just so many flowers blooming. And if we can invigorate that fertile playground—I actually don’t think the jury has decided that all technology is going toward centralization. I don’t think so at all, especially as edge computing, federated learning, and just, you know, device-based products are all starting to happen. And we’re seeing that.
HAASS: Which will be a mixed blessing in and of itself, I expect.
LI: As always.
HAASS: As always. Hey, let’s get another question.
OPERATOR: We’ll take the next question from Sarah Danzman.
Q: Thank you very much for a really fascinating discussion. I want to return to the topic of increased government support of emerging and foundational tech. Oh, and I’m from Indiana University in Bloomington.
So this is an area that I’m really interested in. And I’m curious if Dr. Li can talk a little bit about what she thinks this increased government support will mean for the regulation of cross-border flows of technology and foreign financing of innovating firms, and how you think about the policy problem of striking the balance between protecting national security and fostering an open innovation economy.
LI: Yeah. Great question, Sarah. I’ve been to Indiana University at Bloomington. Beautiful campus.
So, yeah. First of all, as I said earlier, in this new AI age, working in academia and observing first-hand the disproportional flow of talent into the private sector—and its being more or less siloed there—as well as the concentration of resources in industry, in a small number of companies, I definitely think government should play a huge role in resourcing our American universities and public sector. And on the task force that I’m currently on, the National AI Research Resource Task Force, the inclusion, diversity, and representation aspects are actually strongly discussed, and we believe in that—in democratizing that resource from the government.
In terms of the international dimension, you know, America’s science has always benefited from international collaborations. I enjoy watching the Nobel Prize announcements every year. Almost every year, a lot of the awarded projects are a result of international collaboration. And as AI is also becoming a big science because of the growth of this field—just like physics and biology and chemistry—that international collaboration aspect, I think, will be an important piece of this.
I think it will be important that scientists like us receive proper guidance from our government—not necessarily micromanagement, but really guidance through government and universities on how to properly collaborate. A lot of scientists in labs just do not have that understanding and training, right? Richard, we talked about this bilingualism. Things are changing fast, and I think it’s important we work together with the government, and possibly industry, to formulate those guidelines and to understand how to implement them.
HAASS: Thank you. Kayla, we’ve got—let’s get another question teed up.
OPERATOR: We’ll take the next question from Marc Rotenberg.
Q: Hi, Richard, Dr. Li. This is Marc Rotenberg at the Center for AI and Digital Policy.
First of all, thank you very much for organizing this session today. And thank you, Dr. Li, for your comments. We at the Center are particularly interested in the relationship between artificial intelligence and democratic values. And I think, as Richard knows, democratic values is a phrase now that U.S. policy leaders use frequently in international settings when talking about technology. So I have two related questions. The first is, as you think about the relationship between AI and democratic values, what metrics would you look at to try to assess progress or to compare countries in terms of their alignment with their AI policies and democratic values? And my second question is, should there be red lines on certain AI applications, such as the social scoring credit system in China, for example, or the use of facial recognition for mass surveillance?
LI: Thank you, Marc. Your first question is about metrics. I’m not an expert in democratic values, so my answer will be a little bit vague. At Stanford, we actually have a program called the AI Index, led by our faculty and researchers, where annually we report a whole bunch of metrics measuring AI trends and progress. They’re not necessarily measuring democratic values—I don’t believe there’s a specific line item for that yet—but depending on what you want to measure, there are many different kinds of trends and evaluations.
In terms of what you specifically ask about democratic values, I have to confess I don’t have a concrete answer yet. (Laughs.) This is where the bilingualism comes in. I, myself, need to learn more. But I do think, for example, at Stanford HAI we are engaging more and more with our Freeman Spogli Institute for International Studies, our institute for economic policy out of the Hoover Institution, and our political science department, because we recognize the importance of this, and we want to participate and invest in these areas of research as well as policy studies. So that’s an ongoing effort. You come from a very interesting organization; I would love to engage more and understand what you do.
The second question is about red lines for technology. I think, yes. I think government and international regulation of this technology has to come in. I mean, any other technology has red lines, right? For example, electricity: we have a lot of guidelines on how to use it. Cars, the same thing. In terms of AI, my colleagues at Stanford have an amazing piece of research on pediatric—wait, what’s the word—palliative care, the last moments of a patient before they die. My AI and machine learning colleagues have started to realize that, algorithmically, there can be machine learning or AI indicators that can predict palliative care’s trajectory or, to put it bluntly, can predict death.
And common sense says—if we think about this—it’s not that shocking, because a person’s vital signs, their lab readings, even their behaviors can sometimes tell us when death is near. But the question is, should we have such a technology? Where is the red line? My colleagues have courageously written pieces in their research essays discussing the ethical red line of this technology. And I think this applies to facial recognition, and applies to, as Richard mentioned, militarized uses, and all of this. So, yes, I think there needs to be work in this area.
HAASS: I’m going to give a slightly different answer to that, Marc, because I’m not optimistic at all. Take, for example, authoritarian systems: they will obviously place a premium on what they would consider to be social order or stability. And if these technologies, from their point of view, enhance them, they will employ them as much as they can. If they thought certain technologies detracted from their definition of social order—the political control of this or that individual and party—they will do their best to repress them. Our ability to have much influence in these areas, I would say, is extraordinarily limited, as we’ve seen in any other domain dealing with any other policy or technology when it comes to trying to affect the internal trajectory of other countries, particularly when they believe their vital interests are at stake.
So, again, I think this is a fruitful area for intellectual exploration. And I like the idea of bringing scientists and social scientists together. But I’m profoundly skeptical of the ability to come up with norms when it gets into the political or intelligence or military domain, as opposed to, say, something with health. And I’m very—I’m skeptical about the ability to enforce those norms and the rest. We don’t have a very successful track record across the board there. But that’s just one man’s opinion.
Kayla, we probably have time for one more question so let’s get it in.
OPERATOR: We’ll take our next question from Cathy Taylor.
Q: Thank you. Cathy Taylor, Kentanna Group.
Thank you very much, Dr. Li, not only for your contribution but also for the role you play as a role model to young women interested in political science.
LI: Thank you.
Q: I have two daughters. We really appreciate that.
My question goes back to your conversation with Richard on ecosystems. Much has been made of the fact that the U.S. is at a competitive disadvantage relative to a country like China, given that we don’t have—or, let’s say, we’re not using—the vast amount of data for AI. And AI is only as powerful as the data that we have. Is there a mechanism, some sort of public-private partnership between corporate data and public data, that could be put in place to help close—or, let’s say, shorten—this competitive distance, and that would still be respectful, albeit quite tricky, I’m sure, of the privacy laws that we have in place?
LI: Yeah. Thank you. Thank you, Cathy, for the great question. I’ll answer the two aspects of your question. One is, is there a protocol for working with private data—private company data? The other one is competitive advantage. On the first—on working with industry and also government on data—I actually am hopeful. Not that all data can be shared. Of course not. But I think that’s actually happening, and that’s going to happen, because, again, America’s ecosystem of multistakeholder collaboration is about this—you know, a lot of bottom-up innovation. And we can see, for example, companies holding, say, OSHA data that have the need to understand what’s happening.
I think there’s great promise and potential for collaboration. Of course, you know, how do you collaborate with a hospital? I face that every day. But even there, we are starting to find ways, right? Respect HIPAA, respect PHI. Can we onboard some of the computer scientists onto the internal, say, compute cloud of the hospital, so that this collaboration can be secure and private? And also, can we imagine a regulatory framework that helps incentivize collaboration but protects privacy and other interests? So the short answer is that I think more and more people are thinking about it. It’s still gruelingly painful, but it’s happening, and people are trying.
On the competitive advantage, I get this question a lot: Is data the singular factor? You know, if person A has more data and person B has less, is person A’s AI therefore better than person B’s? I really disagree with that. I’ll give you one example. For those of you who have kids, you observe the way kids learn. Does it impress you that you can show one or two examples of a tiger in a storybook, then take the kid to the zoo, and the kid can recognize a tiger—despite the fact that the kid has seen, what, one or two tigers on a page that bear no resemblance to the tiger in the zoo? This is an example to illustrate that intelligence, whether human or machine, is not purely data driven. There are a lot of other aspects.
You know, in machine learning we have what we call few-shot learning, one-shot learning, and others—transfer learning, self-supervised learning. These are all very geeky words to show you that data is not the single factor that determines the advancement and innovation of AI technology. There are so many fruitful directions, and America’s advantage is our fertile ground for innovation. Our base is the largest in the world: our higher education, our researchers, our government-funded labs. We are still the best in the world, and we should continue to invigorate that.
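For readers curious what “transfer learning” looks like in practice—a sketch under the assumption that torchvision and its pretrained ResNet-18 are available, not a description of any Stanford project—the idea is to reuse patterns learned from large data and fine-tune only a small final layer on a handful of new examples.

```python
# Sketch of transfer learning, assuming PyTorch and torchvision are installed.
# Reuse a network pretrained on ImageNet; retrain only its final layer
# on a new task with very few labeled examples ("few-shot" in spirit).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")   # patterns learned from big data
for p in model.parameters():
    p.requires_grad = False                        # freeze what was already learned

model.fc = nn.Linear(model.fc.in_features, 2)      # new 2-class head (e.g., tiger vs. not)
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A handful of toy examples standing in for real images (batch of 4, 3x224x224).
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 0, 1])

for _ in range(5):                                 # a few fine-tuning steps on the new head
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
print("fine-tuning loss:", loss.item())
```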
HAASS: Fei-Fei Li, thank you for ending on such a positive, reassuring note, particularly the part about intelligence and data not being so tightly linked. I breathed a sigh of relief at that point. Again, I want to thank you for all you do day in, day out, week in, week out, year in, year out. And I want to thank you for spending an hour with us today. I want to let all of our members know that the video and transcript of this conversation will be posted on our member services portal within a short amount of time. And the audio will be released as part of a podcast later in 2021.
But again, Dr. Li, I want to thank you. I’m not bilingual yet, but I feel I could do the equivalent now of what we call restaurant French. I can now do restaurant AI. And I can fake it a little bit. So thank you for getting me to this point. And, again, thank you for all you do.
LI: Thank you, Dr. Haass. And it’s—as a technologist, I’m learning every day. And I do call for all technologists to learn the human societal aspect of our technology as well.
HAASS: Good. Thank you all for joining us. Stay safe and stay well.
LI: Thank you. Bye.
(END)