Artificial Intelligence (AI)

  • Russia
    Cyber Week in Review: February 15, 2019
    This week: Russia unplugs itself from the internet; a new AI might be cause for concern; Trump's big AI plan is here; and Moldova becomes the latest arena for stopping disinformation. 
  • Technology and Innovation
    Transformative Technology, Transformative Governance: A New Blog Series on the Future
    A flood of technological innovation has left global governance floundering. A new blog series explores this inundation and how to strengthen the levees. 
  • Cybersecurity
    Cyber Week in Review: January 4, 2019
    This week: Germany's political establishment is hit by a data leak; Iran bans Instagram; internet shutdown in the DRC; the U.S. Congress opens federal data sets to AI companies.
  • China
    Exporting Repression? China's Artificial Intelligence Push into Africa
    China's increasing investments in AI across the African continent, especially in countries with poor human rights records, should be treated with caution.
  • Technology and Innovation
    Moderating Online Content With the Help of Artificial Intelligence
    This panel examines AI's role in moderating online content and its effectiveness, particularly with respect to disinformation campaigns.
  • Artificial Intelligence (AI)
    A Conversation With Richard H. Ledgett Jr.
    This symposium convenes policymakers, business executives, and other opinion leaders for a candid analysis of artificial intelligence’s effect on democratic decision-making.
  • Technology and Innovation
    Deep Fakes and the Next Generation of Influence Operations
    This panel identifies guidelines tech companies can follow to limit the negative use of deep fakes and offers views on how governments should react, if at all.
  • Artificial Intelligence (AI)
    Pairing AI and Nukes Will Lead to Our Autonomous Doomsday
    As we commemorate the 100th anniversary of the end of World War I, which transformed how wars are fought and won, the world again stands on the precipice of a dramatic revolution in warfare, this one driven by artificial intelligence. While both AI and the debate about the implications of autonomous decision capabilities in warfare are only in their early stages, the one area where AI holds perhaps the most peril is its potential role in deciding how and when to use nuclear weapons.
    Advances in AI are on a fast track, and the United States is indisputably in an AI arms race with two of our most formidable competitors, both nuclear powers: China, whose nuclear arsenal is smaller but growing in size and sophistication, and Russia, which, along with the U.S., possesses 90 percent of the global nuclear weapons stockpile. Early and determined U.S. leadership is essential to ensure that we are not just competing with our nuclear-capable adversaries but also cooperating with them so that AI does not destabilize nuclear command and control. The stakes are high; the consequences potentially existential.
    While an autonomous nuclear command-and-control program might be easily dismissed as unrealistic, the past is prologue. The history of the Cold War is riddled with near misses in which accident, mistake or miscalculation due to computer errors in both the Soviet Union and the United States almost triggered nuclear war. But perhaps one of the most stunning revelations of the post-Cold War period relevant to today's AI revolution is detailed in David Hoffman's 2009 book "The Dead Hand."
    In the early 1980s, the Soviet Union actually considered deploying a fully automated retaliation to a U.S. nuclear strike, a Doomsday machine, in which a computer alone would issue the command for a retaliatory nuclear strike if the Kremlin leadership had been killed in a first-strike nuclear attack by the U.S. Eventually the Soviets deployed a modified, nearly automatic system in which a small group of lower-ranking duty officers deep underground would make that decision, relying on data indicating that the Kremlin leadership had been wiped out in a U.S. nuclear strike.
    The plan was not meant to deter a U.S. strike by assuring the U.S. that if it attacked first, even with a limited strike against the Soviet leadership, there would be a guaranteed nuclear response; indeed, the Soviets kept the plan secret from the United States. It was meant instead to ensure an all-out, nearly automatic nuclear response, which would have had existential consequences.
    As AI develops and confidence in machine learning increases, the U.S. needs to lead diplomatically by reinvigorating strategic stability talks with both China and Russia. Those talks should include this issue and ensure that this type of nuclear planning does not make its way back into the thinking of our nuclear adversaries, or our own, whether secretly or as a form of deterrence. While concerns have been raised in Congress about leaving the decision to use nuclear weapons solely in the hands of the commander in chief, an even more ominous, impending threat is placing that command and control in the hands of AI.
    The potential of this developing, powerful technology to increase stability and the effectiveness of arms control, in areas such as early warning, predicting decision-making by bad actors, tracking and stopping the spread of nuclear weapons, and empowering verification for further reductions, is as yet unknown.
    Potential stabilizing applications need to be a defense-funding priority, as well as a private-sector and university funding priority, similar to the public-private efforts that underpinned and propelled the nuclear arms control and strategic stability process during the Cold War. But the destabilizing potential needs to be addressed early and jointly among the nuclear powers, and here U.S. leadership is indispensable.
    AI is projected to change the world rapidly and disruptively within a very short time frame, on the economic side perhaps even eliminating 40 percent of U.S. jobs in as little as 10 to 15 years. Yet leading AI experts agree that machine learning should enhance, not replace, human decision-making. This must be a central tenet of nuclear command and control.
    One of the heroes of the Cold War, Soviet Lt. Col. Stanislav Petrov, later became known and honored as the man who saved the world from nuclear war. In 1983, he overrode repeated, sequential computer warnings of a U.S. nuclear missile attack and did not pass them on to his superiors. In a 2010 interview with Der Spiegel, Petrov explained that he correctly determined, in the few minutes he had to make a decision, that the warnings were a false alarm, reasoning: "We are wiser than the computers. We created them."
    We are entering this disruptive period of rapid technological change knowing the consequences of nuclear war and the need for U.S. leadership to guide the use of technology so that it taps that wisdom and enhances the control and reduction of these very dangerous weapons. The most immediate priority for the U.S. must be to lead the process of ensuring that these rapid advancements in AI strengthen the command and control of nuclear weapons, and do not repeat the past by relinquishing it to an automatic or nearly automatic Doomsday machine.
  • China
    The Global Artificial Intelligence Race
    Panelists discuss the global race for leadership in artificial intelligence and provide an analysis of major AI legislation and initiatives in China, the European Union, and the United States.
  • Technology and Innovation
    The Artificial Intelligence Race and the New World Order
    Kai-Fu Lee discusses the advances in artificial intelligence technology, the effects on the future of work, and the technology race between the United States and China.
  • China
    Trump is Rising to the China Challenge in the Worst Way Possible
    China has 800 million Internet users and is overtaking the United States in areas such as drones, mobile payments, bike sharing and artificial intelligence. But Trump is responding to the China challenge in the worst way possible.
  • Defense Technology
    Killer Robots and Autonomous Weapons With Paul Scharre
    Podcast
    Paul Scharre, senior fellow and director of the technology and national security program at the Center for a New American Security (CNAS), discusses autonomous weapons and the changing nature of warfare with CFR's James M. Lindsay.