Defense and Security

Intelligence

  • Cybersecurity
    Germany’s Cybersecurity Law: Mostly Harmless, But Heavily Contested
    Dr. Sandro Gaycken is a senior researcher at the European School of Management and Technology in Berlin, a former hacktivist, and a strategic advisor on cyber matters to NATO, some German DAX companies, and the German government.

    This summer, Germany adopted a new law, known in German as the IT-Sicherheitsgesetz, to regulate cybersecurity practices in the country. The law requires a range of critical German industries to establish a minimal set of security measures, prove they’ve implemented them by conducting security audits, identify a point of contact for IT-security incidents and measures, and report severe hacking incidents to the federal IT-security agency, the BSI (Bundesamt für Sicherheit in der Informationstechnik). Failure to comply will result in sanctions and penalties. Specific regulations apply to the telecommunications sector, which has to deploy state-of-the-art protection technologies and inform its customers if they have been compromised. Other tailored regulations apply to nuclear energy companies, which have to abide by a higher security standard. Roughly 2,000 companies are subject to the new law.

    The government sought private sector input early on in the process of conceptualizing the law—adhering to the silly idea of multistakeholderism—but that input hasn’t been helpful in heading off conflict. German critical infrastructure operators have been very confrontational and offered little support. Despite some compromises from the Ministry of the Interior, which drafted the law, German industry continues to disagree with most of its contents.

    First, there are very few details clarifying what is meant by “minimal set of security measures” and “state of the art security technology.” The vagueness of the text is somewhat understandable: whenever ministries prescribed concrete technologies and detailed standards in the past, they were mostly outdated by the time the law was enacted (or soon after), so some form of vagueness prevents this.
    But vagueness is inherently problematic. Having government set open standards limits market innovation, as security companies will develop products that narrowly meet the standards without considering alternatives that could improve cybersecurity. Moreover, the IT security industry is still immature. It is impossible to test and verify a product’s ultimate effectiveness and efficiency, leading vendors to promise a broad variety of silver-bullet cybersecurity solutions—a promise that hardly lasts longer than the first two hours of deployment. The vague language gives significant leeway to the Interior Ministry, leaving companies unsure whether they meet the law’s requirements. Will the ministry be happy with a chosen product? Will it be sufficient to meet the standards? Would a lesser product have done the trick? And what exactly is “state of the art in security technology” when everyone claims to be state of the art and there is no way of verifying those claims? The lax formulation of the law creates clear risks for German industry. It’s possible that the ministry will never be sufficiently satisfied and will use the absence of concrete formulations to continually raise standards, which in turn means continuous costs for the companies. Furthermore, German companies will be forced to buy something in the absence of clarity about the quality of a product, which may drive some of them into costly path dependencies.

    Second, the mandatory severe-incident reporting requirement is problematic. It has been a major point of contention since the draft law was introduced two years ago. There are three problems.

    It is not clear what constitutes a “severe” incident. This is especially difficult as many incidents have some potential to be severe in some way. Verifying severity also means additional costs for forensic analysis.

    The costs of reporting are unclear. The ministry has made some assumptions regarding costs—namely that a company usually has about seven severe incidents per year, with reporting costs of €660 per incident, amounting to total costs of €9.24 million to implement the law across the roughly 2,000 companies to whom it would apply. It’s unclear how the ministry came up with those numbers. For its part, the German industry alliance BDI (Bundesverband der deutschen Industrie) estimated the total cost of implementation at €1.1 billion.

    There are reputational damages that come with reporting incidents. If only German companies have a reporting requirement while competing foreign companies do not, customers may get the impression that German companies and products are particularly careless and vulnerable, hurting Germany’s economy.

    Accordingly, industry groups have sought a compromise to avoid naming and shaming. They proposed making reporting anonymous and submitting reports to a trusted third party, similar to U.S. information sharing and analysis organizations, rather than to the government. Under existing German law, reporting incidents to the government may require law enforcement agencies to open a criminal case, bringing unwanted attention. Not reporting to the government would also minimize the risk of incident reports being accidentally leaked to the public. While the government conceded on making reporting anonymous, it did not accede to third-party reporting, as that may have been inconsistent with German law.

    This solution, however, is somewhat half-baked. The government is not entirely happy with keeping reports anonymous, since it is interested in knowing more about incidents and security laggards. Industry is skeptical that the government will be able to keep reports anonymous and fears further regulation following the large number of severe incidents expected to be reported annually.
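The gap between the ministry’s cost arithmetic and the BDI counter-estimate above is easy to quantify. A minimal sketch, using only the numbers given in the text (the per-company breakdown of the BDI figure is my own illustration, not an official number):

```python
# Ministry assumptions, as reported: ~7 severe incidents per company per
# year, €660 in reporting costs per incident, ~2,000 companies covered.
INCIDENTS_PER_COMPANY = 7
COST_PER_REPORT_EUR = 660
COMPANIES_COVERED = 2_000

ministry_total = INCIDENTS_PER_COMPANY * COST_PER_REPORT_EUR * COMPANIES_COVERED
print(f"Ministry estimate: €{ministry_total:,}")  # €9,240,000, i.e. €9.24 million

# BDI counter-estimate: €1.1 billion in total implementation costs.
BDI_TOTAL_EUR = 1_100_000_000
print(f"Implied BDI cost per company: €{BDI_TOTAL_EUR // COMPANIES_COVERED:,}")  # €550,000
print(f"BDI estimate is ~{BDI_TOTAL_EUR / ministry_total:.0f}x the ministry's")  # ~119x
```

The two estimates differ by roughly two orders of magnitude, which is part of why the reporting requirement remains so contested.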
    In the end, the law will probably not be very effective, will require a lot of follow-up regulation, and will remain controversial. Nevertheless, there’s a silver lining. At least there is a law to improve cybersecurity. It will serve as a beachhead for future regulations and incentivize the private sector to be less negligent when it comes to cybersecurity. From this perspective, the law has already changed a lot.
  • Cybersecurity
    Avoiding Escalation in Cyberspace
    Brandon Valeriano is a Senior Lecturer at the University of Glasgow and Ryan C. Maness is a Postdoctoral Researcher at Northeastern University. They have recently published Cyber War versus Cyber Realities with Oxford University Press.

    Restraint is the strategic underpinning of how many states confront cyber actions. Despite calls for a response to cyber aggression, the U.S. government still has not decided on a viable reaction given limited options. As David Sanger recounts in the New York Times, “in a series of classified meetings, officials have struggled to choose among options that range from largely symbolic responses … to more significant actions that some officials fear could lead to an escalation of the hacking conflict between the two countries.”

    Strategic restraint tends to defy a form of conventional wisdom that sees the future of cyberspace as a lawless wild west where anything goes and offensive capabilities need to be built up in order to deter an adversary. That conventional wisdom defines the tone of the New York Times story. In fact, some of the most cantankerous states in cyberspace tend to behave in a responsible manner because to act otherwise would invite terrible consequences.

    Why do governments tend not to respond to cyber actions? According to our research, despite the massive influx of cyber operations that we are aware of, we find little evidence of the escalation processes inherent in typical conflicts. In fact, we might be witnessing an era of cyberpeace. States operating in cyberspace react differently than in most strategic domains, a reality that drastically differs from perception, given the way the news media reports each new cyber violation as if it were the spark of a new onslaught and validation of the concept of cyberwar. There are two reasons for this: the dynamics of restraint and the development of cyber norms.
    Restraint Dynamics

    It’s easy to assume that the United States and other nations would “hack back” when their systems are targeted by adversaries. In fact, many private companies are moving towards this position after their networks are compromised. Yet government officials tend to understand something that private individuals do not: the inner workings of a bureaucracy are complex and dangerous. Needlessly provoking an escalatory response in a domain where both sides are wholly unprotected and borderline incompetent would be strategic suicide. For this simple reason we often see restraint. There is also the reality that states will spy on each other, and sometimes even admire their adversaries’ work.

    The U.S. government has so far refrained from responding to the OPM hack. If there is a response, we predict it will likely come through criminal charges against individuals, not the Chinese state. In fact, the great majority of cyber incidents in our data go without a response in either the cyber or the conventional domain. A total of seventy-eight percent of the cyber actions we code go without a counterstrike. Of those with responses, seventeen (fifteen percent) come in the form of a cyber response—with only two cases of escalation in severity—and seven (six percent) come as conventional responses. The non-response is the typical response, by an overwhelming margin.

    Building a System of Norms

    The lack of escalatory activity can also be explained by a system of norms the United States and others seek to enforce in cyberspace. Like traffic laws, a basic understanding of how things work and what limitations exist benefits everyone. Of course there will be violators, but everyone needs to understand the rules of the road first. Even China and Russia appear to be willing to work within some system of norms, though they disagree with the United States on what the norms should be.
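The response rates quoted above are internally consistent, which a few lines of arithmetic can confirm. A quick sketch (the dataset total of 111 incidents is my inference from the reported counts and percentages, not a figure stated in the text):

```python
# Reported counts: 17 cyber responses, 7 conventional responses, and the
# rest unanswered. A total of 111 incidents (an inferred, hypothetical
# figure) reproduces all three percentages quoted in the text.
TOTAL_INCIDENTS = 111
CYBER_RESPONSES = 17
CONVENTIONAL_RESPONSES = 7
no_response = TOTAL_INCIDENTS - CYBER_RESPONSES - CONVENTIONAL_RESPONSES

for label, count in [("no response", no_response),
                     ("cyber response", CYBER_RESPONSES),
                     ("conventional response", CONVENTIONAL_RESPONSES)]:
    print(f"{label}: {count} ({100 * count / TOTAL_INCIDENTS:.0f}%)")
# no response: 87 (78%), cyber response: 17 (15%), conventional response: 7 (6%)
```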
    Nevertheless, Russia and China are engaging in norms-setting institutions and processes, such as the devolution of the Internet Corporation for Assigned Names and Numbers, recognizing that a rules-based framework is important to manage the growth of global connectivity. While many may scoff at the idea of norms, they can be an effective means to control the basic behaviors of the majority of actors. Of course there will always be deviants, but as long as we have clear systems of norms, deviancy will be seen as just that—out of the norm.

    What Does the Future Hold?

    This all bodes very well for our cyber future. While there is fear that the Internet will be the primary threat vector for future societies, this alarmism is a bit premature and based primarily on a lack of understanding of how cyberspace works. We fear what we do not understand. Cyberspace can be controlled and made safe, but this requires us to understand it, to be aware of the possible escalation dynamics at hand in each conflict, and to take in all available sources of information instead of relying on a few. Given the convergence of the basics of restraint and norms, even the most aggressive states can be shown to be peaceful actors in cyberspace, even when being poked.
  • Cybersecurity
    Retaliating Against China’s Great Firewall
    David Sanger has a very interesting article in Saturday’s New York Times, reporting that the United States has decided to retaliate against China for the hacking of the Office of Personnel Management. According to Sanger, how the United States will respond is still a matter of debate. The White House is uncertain whether the response will be symbolic or something more substantial; whether it will be public, known only to the Chinese, or secret; and whether it will happen soon or sometime in the future. Over at Lawfare, Jack Goldsmith argues that the White House’s inability to craft a response highlights the challenges of deterring an adversary through counterstrikes, and that deterrence through resilience and defense may be a better option.

    I am going to pick up on one of the policy responses mentioned in the article, what Sanger calls "one of the most innovative actions" discussed in the U.S. intelligence agencies: finding a way to breach the Great Firewall so as to demonstrate to the Chinese leadership that the thing they value most—"keeping absolute control over the country’s dialogue"—could be at risk.

    First, a quibble. I am not sure that the idea of attacking the Great Firewall is innovative. I have heard it raised at conferences and other discussions since at least 2010. It may also have happened before. The drop of the Shanghai stock market by 64.89 points on the 23rd anniversary of the Tiananmen massacre (which occurred on June 4, 1989, or 6/4/89) may have been a weird coincidence, or the type of innovative policy Sanger is describing—an effort to show the Chinese leadership that their control was vulnerable. Even if this is an old idea seeing new light, it is hard to see how it would deter future Chinese attacks, if only because Beijing appears to believe that the United States is already using the Internet to undermine domestic stability and regime legitimacy.
    As an article in PLA Daily put it in May (translation by Rogier Creemers):

    Cybersovereignty symbolizes national sovereignty. The online space is also the security space of a nation. If we do not occupy the online battlefield ourselves, others will occupy it; if we do not defend online territory ourselves, sovereignty will be lost, and it may even become a “bridgehead” for hostile forces to erode and disintegrate us.

    Sanger’s article does not get into details, but there are at least three types of attacks that could be considered: hacks that expose information embarrassing to the leadership; hacks that allow Chinese users access to blocked websites outside of China; and hacks that lessen or dismantle controls on information within China. Beijing is likely to believe that Washington is already engaging in the first two types of attacks. A hack that exposes corruption or offshore bank accounts, for example, will not be seen as any less a hostile act than the New York Times’ reporting on the hidden wealth of former prime minister Wen Jiabao’s family or Bloomberg’s on the assets of Xi Jinping’s family. In addition, the State Department has spent over $100 million to help develop anti-censorship technology and train online activists, and some of that funding has gone to groups trying to give Chinese users tools to jump over the Great Firewall.

    Given this perception, counterattacks may not look like tit-for-tat retaliation for the OPM hack but instead like part of an ongoing battle in and over cyberspace. In the best-case scenario, the Chinese would simply react with more hacking of U.S. targets. In the worst-case scenario, attacks directed at the Great Firewall risk significant escalation. Despite the White House’s framing of Chinese cyberattacks as a threat to the U.S. economy and the bilateral relationship, Beijing has probably discounted the importance of the issue to the United States.
    China’s leadership probably calculates that Washington does not want to scuttle Beijing’s cooperation on a range of global issues over cybersecurity. It also views the United States as the predominant power in cyberspace, willing to use claims of Chinese hacking as a precursor to and justification for more cyberattacks on others. Beijing would likely view the types of responses being debated by U.S. intelligence agencies as disproportionate to the OPM hack, and deem them new threats to national security that call for a Chinese response.

    This is not to argue that the United States should not retaliate for Chinese state-sponsored cyberattacks. Rather, it suggests trying to keep the responses as proportionate as possible—economic sanctions for the cyber-enabled theft of intellectual property; counterintelligence operations for political and military espionage—and, perhaps most importantly, improving defenses and making it much harder for an attacker to breach U.S. networks.
  • Intelligence
    Guest Post: Reevaluating U.S. Targeting Assistance to the Saudi-led Coalition in Yemen
    Samantha Andrews is an intern in the Center for Preventive Action at the Council on Foreign Relations.

    As the United States provides targeting assistance to the Saudi-led Gulf Cooperation Council coalition in Yemen, it should consider that its allies’ standards for target selection may be less rigorous than its own. The United States is nonetheless partially responsible for airstrikes enabled with its intelligence. Contrary to the official U.S. position that it remains in a “non-combat advisory and coordinating role to the Saudi-led campaign,” this enabling support makes the United States a combatant in the Yemen air campaign. Even if the United States is not pulling the trigger, the “live intelligence feeds from surveillance flights over Yemen” that “help Saudi Arabia decide what and where to bomb” are indispensable for the launch of airstrikes against Houthi rebels. Recognizing U.S. responsibility and its enabling combat role could help limit the unacceptably high number of civilian casualties resulting from coalition airstrikes by increasing accountability among allies.

    Shortly after the Saudi-led coalition began Operation Decisive Storm in Yemen on March 26, the United Nations reported the first civilian casualties. On March 31, an airstrike hit a dairy factory, killing 31 civilians. Since then, the United Nations has reported on at least nine other airstrikes that killed a total of 329 civilians, including at least 35 children, and wounded 356 others. This includes an incident on July 6 when coalition airstrikes hit two separate provinces, killing 76, but excludes the initial estimates from July 24 that airstrikes reportedly killed at least 120 civilians. Even though U.S. intelligence is indispensable for coalition airstrikes, the Obama administration’s response to reports of civilian casualties has been to shift the blame to the Saudis or the Houthi rebels.
    On May 6, White House Press Secretary Josh Earnest responded to a question about concerns over civilian casualties by stating, “We certainly are pleased that the Saudis have indicated a willingness to scale back their military efforts, but we haven’t seen a corresponding response from the Houthi rebels.” Later, on July 6, State Department Spokesperson John Kirby responded to a direct question about the Saudi-led coalition’s “pattern of attacks, destroying civilian homes and resulting in scores of civilian deaths and injuries” by suggesting that he would let Saudi Arabia “speak to their operational capabilities and performance.” In both cases, there was no mention of the U.S. role.

    There is precedent for providing joint targeting assistance to U.S. allies while avoiding culpability for civilian casualties resulting from airstrikes. In 2007, the United States established the Combined Intelligence Fusion Cell in Ankara, Turkey, to provide real-time intelligence feeds to the Turkish military targeting Kurdistan Workers’ Party, or PKK, members in northern Iraq. Similar to the advisory role that the United States plays in the Saudi-led coalition, the United States worked “side-by-side” with the Turkish military “to analyze incoming intelligence.” In total, U.S. intelligence facilitated over two hundred cross-border air and artillery strikes. Yet, when the Turkish military used surveillance from a U.S. Predator drone on December 8, 2011, to mistakenly drop a bomb that killed thirty-four civilians, a senior Pentagon official announced, “The Turks made the call. It wasn’t an American decision.”

    This response reveals consequential flaws in joint target selection. According to a former senior U.S. military official involved in sharing intelligence with Turkey before the December attack, he and his fellow officers were troubled by the Turks’ notion of “guilt by association” in selecting targets. Whereas U.S. standards for target selection seek a high degree of confidence in discriminating between combatants and non-combatants, the Turkish military blurred this distinction. Further, the U.S. Predator drone that initially identified the civilian caravan was asked to fly offsite before the airstrike. Only when the drone was out of range, and could no longer monitor the attack, did Turkish warplanes strike. Subsequently, any potential intelligence the United States could have provided to the Turkish military about the civilian nature of the caravan was missed. Startlingly, U.S. officials admitted that compliance with Turkish requests was standard procedure.

    To minimize civilian casualties in Yemen, the Obama administration should consider the lessons of the Combined Intelligence Fusion Cell. Specifically, it should reevaluate its assistance to Saudi Arabia and make it contingent upon greater involvement in joint target selection and approval. Live intelligence feeds from drones should be used to conduct damage assessments, including confirming the impact of the weapon. In an environment where the United States and its allies have limited intelligence on the ground, these measures would encourage allies to exercise greater discrimination and alleviate potentially negative consequences for the United States.

    As the Obama administration seeks “to mobilize allies and partners to share the burden” of military action, it should bear in mind that U.S. allies will not always apply the same rigorous standards to avoid civilian casualties. When airstrikes are enabled with U.S. intelligence, the United States should acknowledge its enabling role, accept at least partial responsibility for the collateral damage, and hold its partners to higher standards. Only then can the United States begin to put in place joint targeting measures to minimize civilian casualties.
  • Cybersecurity
    Espionage in the Digital Era: Lessons From the OPM Hack
    Brandon Valeriano is a Senior Lecturer at the University of Glasgow, and recently published Cyber War versus Cyber Realities with Oxford University Press. Stephen Coulthart is a Senior Lecturer in the National Security Studies Institute at the University of Texas at El Paso.

    It has been almost a month since the Office of Personnel Management (OPM) infiltration was made public, and shockwaves of the hack still reverberate in Washington, D.C. and beyond. In response, officials have shut down the E-QIP background investigation system. Security and privacy professionals seem united in their demands that OPM director Katherine Archuleta be held accountable for the security lapses in the organization. Commenter after commenter diagnoses the problems in our systems, institutions, and infrastructure, demanding accountability and change. While we continue to extract negatives from the story of the OPM hack, three lessons emerge that might give us hope for a secure future.

    Lesson #1: Security is not assured in digital systems

    The incident should remind us that every networked system is vulnerable. Cyber espionage is a reality and a problem every institution will have to deal with. The events of the last few months only make this clear, as U.S. government officials admitted the State Department was hacked, which then led to an intrusion that even included some of Obama’s personal emails. The Syrian Electronic Army hacked the army.mil website and public relations portal. Of course, to top it off, records for 4 million (or possibly many more) federal workers were stolen from the OPM, likely by the Chinese. Included in this massive amount of information is the background form that every employee who seeks a secret clearance must fill out, which includes some of the most intimate details about one’s personal life. Searching for someone to blame is not really the answer. Rethinking what we make available and networked is, since the Internet was never designed with security in mind.
    Yet we continue to trust it with our deepest and darkest secrets. Once the vulnerabilities and weaknesses of our systems are made clear, we can move forward with fixing the problems and altering the nature of how we share information. The simple conclusion is that we have entered an era of cyber espionage, not necessarily cyber war.

    Lesson #2: U.S. human intelligence will need to adapt to the digital age

    Some have gone so far as to call the OPM hack a greater national intelligence failure than the Snowden affair. Make no mistake, the hack was large and comprehensive, but we must also move beyond the spy fantasies that pervade analysis of the OPM hack. The typical story is that this information could be used as a stepping stone to siphon off state secrets. Using cheap and available data-mining tools similar to the NSA’s, the opposition could use the information to build a profile of individuals susceptible to blackmail, such as a federal employee with a history of extramarital affairs and ties to Chinese nationals, information that is all contained in the stolen SF-86 forms. Once identified, these targets could be subject to honeytraps, a threat that MI5 has previously warned about in other contexts.

    Whatever the Chinese do with the data, not all is lost. As Knake writes, “I don’t think we are giving the CIA enough credit here, but if it’s true, the harm can be mitigated since we know what data was lost.” For example, while it may now be very difficult to establish cover for an agent already working in the intelligence system, this does not prevent the intelligence community from hiring new agents or converting current government employees who have not requested security clearances into assets in the future. The United States has not lost all of its HUMINT capabilities because of the hack and information leak, but it will need to adapt to take into account OPM-style attacks in the future.

    Lesson #3: The main vulnerability to security systems remains external to U.S. government networks

    The perpetrators hacked the OPM by stealing the credentials of an outside contractor. There are things being done to increase security in U.S. government systems, yet vulnerability will remain through external contractors with access, like Edward Snowden. This is why it is important to do more than constantly monitor systems; we must hunt those who already have access and are using it maliciously, or who might do so in the future, as Richard Bejtlich advises. The deeper need is to rethink how we store critical information. That the director of the OPM described its systems as a “hackers’ dream” in November 2014 should give us pause and prompt us to rethink our reaction to this latest violation and the need for basic cyber hygiene. There is a collective incompetence in the digital security management of the United States that needs to be rooted out. Merely hiring a new computer security manager for the OPM will not fix the deeper problem of failing to understand the security needs of our infrastructure.

    At the strategic level, the exploit of OPM’s four million records means very little. It has not and will not change how the United States conducts the business of foreign policy, but the entire intelligence community needs to reevaluate how it conducts its mission. It is important to keep the real issue of cyber espionage in mind as we debate the future of conflict. Our current focus on war in an era of dramatic peace can be counterproductive if we do not first focus on defense and on protecting our networks from exploitation. These continued attacks reinforce the point that our security starts with reforming how we protect information.
  • Cybersecurity
    The Brazil-U.S. Cyber Relationship Is Back on Track
    Alex Grigsby is the assistant director for the Digital and Cyberspace Policy program at the Council on Foreign Relations.

    Brazilian President Dilma Rousseff was in Washington, D.C. this week to meet with President Obama. The trip came two years after she famously cancelled a state visit in 2013 in protest following allegations that the NSA had spied on Brazil and on Rousseff personally. At the time, the Brazilian president was very public and vocal in her denunciations, calling the espionage "manifestly illegitimate" and expressing her outrage at the United Nations.

    While the U.S.-Brazil cyber relationship hit the rocks in the immediate aftermath of the Snowden disclosures, it seems that time and some deft diplomacy have helped patch things up. At this year’s Summit of the Americas, Rousseff indicated that she’s moved on. Things have improved so much, in fact, that the Rousseff-Obama joint communiqué dedicated five paragraphs to Internet issues. Most importantly, both leaders have agreed to resume the Brazil-U.S. cyber working group:

    The United States and Brazil share the understanding that global Internet governance must be transparent and inclusive, ensuring full participation of governments, civil society, private sector and international organizations, so that the potential of the Internet as a powerful tool for economic and social development can be fulfilled. Both countries acknowledge the agenda approved by Netmundial conference (São Paulo, April 2014) as a guide for discussions regarding the future of the global internet governance system. Both countries reaffirm their adherence to the multistakeholder model of Internet governance and, in this context, reaffirm their commitment to cooperate for the success of the Tenth Internet Governance Forum (João Pessoa, November 10 to 13, 2015), and extension of the IGF mandate.
    Likewise, they reaffirm their interest in participating actively in the preparatory process of the High-Level Meeting of the UN General Assembly for the Ten-Year Review of the WSIS outcomes, to be held in New York in December 2015. Bilateral cooperation on cyber issues will be resumed by the convening of the Second Meeting of the Working Group on Internet and Information and Communications Technology in Brasilia in the second semester. The meeting will offer the opportunity of exchanging experiences and exploring possibilities for cooperation in a number of key areas, including e-government, the digital economy, cybersecurity, cybercrime prevention, capacity building activities, international security in cyberspace, and research, development, and innovation.

    The resumption of the working group is significant. While the United States and Brazil don’t always see eye-to-eye on cyber issues—particularly on the Internet governance front—they recognize the importance of an unfragmented and open Internet. That makes them important allies in pushing for a renewal of the IGF’s mandate and adherence to the multistakeholder model in the face of opposition from countries that question its worth. It will also help the two countries coordinate their negotiating positions in the run-up to the negotiation of the WSIS+10 outcome document, expected to be issued later this year, where the usual suspects are likely to push for a greater decision-making role for UN institutions in Internet governance. While some may call out Rousseff for flip-flopping, the resumption of the working group is unequivocally a good thing for the prospects of an open, global Internet.
  • Cybersecurity
    The OPM Hack: Weighing the Damage to U.S. Intelligence
    I finally got my letter from the Office of Personnel Management (OPM). What a relief. I was worried my credibility as a commentator would be damaged if my data wasn’t stolen. Imagine how Dan Rather would have felt had he not received anthrax. In any case, in the weeks since the story broke, OPM has yet to change its story. The official number is still 4.2 million, and the data that was lost is still, according to my letter, “your name, social security number, date and place of birth, and current or former address.” News outlets, however, are putting the number as high as 18 million and are reporting that the Electronic Questionnaires for Investigations Processing (EQIP) system that contains information collected on form SF-86 was compromised. That would be worse than just the loss of employee personally identifiable information. It would include information on employees and contractors as well as their friends and families.

    Ian Bremmer is worried about the impact on U.S. human intelligence. He even posted a comment on CFR’s Facebook page about it in response to my last piece. I’m not as concerned. Here’s why:

    1. We know about it

    Intelligence is about decision advantage and relative gains. If the Chinese had stolen this information without us finding out, we would be truly screwed. It might take us years to piece together why they seemed to have so much more information on us. But this is a GI Joe moment: knowing is half the battle. Because we know what they took, we can mitigate the losses. Who knows, we might even find a way to use it to our advantage. Welcome to the wilderness of mirrors.

    2. The CIA is pretty good at what it does

    The big concern is that the information stolen from OPM could be used to identify U.S. intelligence operatives and the people they meet with.
Under this theory, being bandied about in the blogosphere, if the Chinese had a complete list of all federal employees, including those who work at the State Department but excluding those who work in the intelligence community, they could identify CIA case officers under official cover who don't have State Department employment records and didn't fill out SF-86s with OPM. If the Chinese then knew which of their officials were meeting with these case officers, they could roll up our network in China. That's a pretty scary scenario. Is it true? I don't know. I have a lot of respect for the CIA. It's filled with smart, dedicated public servants who are the best in the world at what they do. And there is nothing, absolutely nothing, the CIA protects more than its sources and methods. If it's really possible that China now owns our human intelligence network, that's really bad. But let me take General Hayden's comment a step further. If it is indeed true that CIA case officers' covers could be blown by hacking into an OPM database, it's not shame on China and it's not even shame on OPM; it's shame on CIA. Anything that the twitterati can figure out in a week is something CIA counterintelligence should have addressed long ago. This kind of data is the digital equivalent of pocket litter. Again, I don't think we are giving the CIA enough credit here, but if it's true, the harm can be mitigated since we know what data was lost. It might mean a lot of case officers will be riding desks for the rest of their careers. Luckily, this data couldn't expose any No Official Cover case officers. We will end up relying more on signals intelligence and open source intelligence. Who knows, it might even lead to more work for the Eurasia Group. In short, we can manage the losses.

3. Password resets were already weak

The data could certainly be used for password resets.
Even more advanced systems that don't rely on answers the user provides but pull data from public records could be accessed with the information in the SF-86. But let's not pretend that password systems were secure before this data was lost. We've needed to kill off the password for twenty years. The National Strategy for Trusted Identities in Cyberspace is aimed at doing just that. Most major consumer online services now offer two-factor authentication. I've never run into an online password reset process at a federal agency for anything critical, but any system using one should put stronger controls in place.

4. Spearphishing is already pretty effective

Could the information be used to target spearphishing e-mails? Sure. When targeting someone with a spearphishing e-mail, the more information, the better. On the other hand, spearphishing is already pretty effective—it's the threat vector for most significant cyber incidents, and LinkedIn and Facebook make it pretty easy. It's also a problem that some companies are effectively managing through a combination of user training (PhishMe, Wombat Security) and next-generation threat detection (Palo Alto Networks, Fidelis, FireEye). At most, access to this data will make a bad problem worse.

5. Blackmail is an overstated threat

There are too many would-be spy novelists in Washington, D.C. conjuring up fanciful scenarios in which information in the EQIP database could be used to blackmail government employees. There are two things to keep in mind when considering blackmail scenarios. First, there are very few known cases in which blackmail was involved in getting government employees to give up classified information. As far as we know, Edward Snowden and Chelsea Manning were both politically motivated. Aldrich Ames and Robert Hanssen were both financially motivated. The lone case I am aware of in which blackmail was involved since World War II was that of Clayton Lonetree, a Marine stationed at the U.S.
embassy in Moscow. The details of Lonetree's case make for fun reading—he was seduced by a KGB agent named "Violetta Seina" and caught in a honeypot. But the harm he did was, according to the then-commandant of the Marine Corps, "minimal." He was released from prison after serving only nine years. Second, one of the main policy justifications for the security clearance process is to mitigate the possibility of blackmail. By collecting the information up front, the federal government ensures that foreign spies can't threaten clearance holders' jobs with it. Let's say you admit to past drug use on your SF-86. If you are still granted a clearance, no one can threaten to tell the government about your past drug use. While the clearance process does capture information on finances and even romantic affairs, problems in these areas quickly get your application rejected. There is no benefit of the doubt. With about 5.5 million clearance holders today, the system most certainly isn't infallible. But a foreign intelligence agency is going to have a much harder time identifying cases where security investigators made a mistake than the U.S. government will in what I am guessing is a massive review currently underway.
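Point 3 above argues that knowledge-based password resets were weak long before the breach, and that the alternative is two-factor authentication. The mechanism behind most consumer two-factor codes is the time-based one-time password (TOTP, RFC 6238), which is just an HMAC-based one-time password (HOTP, RFC 4226) computed over a time counter. As a rough sketch only (the test key below comes from RFC 4226; a production deployment would also handle clock-drift windows and constant-time comparison):

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, interval: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password: HOTP over a time-step counter."""
    return hotp(key, int(time.time()) // interval, digits)

# RFC 4226 Appendix D test key; authenticator apps usually receive it base32-encoded.
key = b"12345678901234567890"
encoded = base64.b32encode(key)    # what a QR-code enrollment would carry
current_code = totp(key)           # six digits, changes every 30 seconds
```

Because the code depends on a shared secret and the current time rather than on biographical answers, nothing in an SF-86 helps an attacker predict it.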
  • Defense and Security
    Top Ten Findings of the CIA Inspector General’s Report on 9/11
    Last week, in response to long-standing FOIA requests, the CIA declassified—with significant redactions—five documents related to the terrorist attacks of September 11, 2001. The most notable was a June 2005 Office of the Inspector General (OIG) report into CIA accountability regarding the findings of the Report of the Joint Inquiry into the Terrorist Attacks of September 11, 2001, which was produced by the House and Senate intelligence committees. That joint inquiry was published in December 2002—long before the 9/11 Commission report—and served as the most comprehensive public investigation into Intelligence Community (IC) shortcomings. The 2005 OIG report reviewed the joint inquiry's central findings to determine whether senior CIA officials should be reprimanded for their actions. Most attention on the OIG report has focused on the now-declassified finding about allegations of Saudi Arabia's support for al-Qaeda. Those who believed that the CIA had intentionally hidden evidence of Saudi Arabia-al-Qaeda connections were surely disappointed by this key passage: The [OIG Accountability Review] Team encountered no evidence that the Saudi Government knowingly and willingly supported al-Qa'ida terrorists. Individuals in both the Near East Division (NE) and the Counterterrorist Center (CTC) [redacted] told the Team they had not seen any reliable reporting confirming Saudi Government involvement with the financial support for terrorism prior to 9/11, although a few also speculated that dissident sympathizers within the government may have aided al-Qa'ida. (p. 440) Beyond this brief declassified portion of the OIG report, there were many fascinating findings and insights that are useful for understanding the IC's approach to terrorism before 9/11, as well as the types of constraints and organizational biases that make intelligence analysis so inherently difficult.
One wonders which of these pre-9/11 shortcomings—so obvious with the benefit of hindsight—are having a comparable impact on intelligence collection and analysis today. The following consists of the top ten insights from the June 2005 CIA OIG report, with context and commentary provided for each: The Team found:

  • No comprehensive strategic assessment of al-Qa'ida by CTC or any other component.
  • No comprehensive report focusing on UBL since 1993.
  • No examination of the potential for terrorists to use aircraft as weapons, as distinguished from traditional hijackings.
  • Limited analytic focus on the United States as a potential target.
  • No comprehensive analysis that put into context the threats received in the spring and summer of 2001.

That said, the CTC's analytic component, the Assessments and Information Group (AIG), addressed aspects of these issues in several more narrowly focused strategic papers and other analytic products. The personnel resources of AIG were heavily dedicated to policy-support and operational-support activities. Analysis focused primarily on current and tactical issues rather than on strategic analysis. (pp. xvii-xviii) (3PA: Intelligence analysts, then and now, rarely have the amount of time that they would like to conduct the sort of in-depth strategic analyses that policymakers want. Another document that the CIA published last week, with fewer redactions than previous versions, was an August 2001 OIG Inspection Report of the DCI Counterterrorism Center (CTC) Directorate of Operations.
That report found that the CTC's primary analytical arm, the AIG, devoted "a significant amount of time—interviewees estimated between 30 and 50 percent—to counterterrorism operations support, working closely with their colleagues in the [redacted] operations groups on targeting and planning aimed at penetrations, recruitments, renditions, and disruptions." In other words, they were helping to capture and kill terrorists rather than undertaking long-range analysis of terrorism.) …Differences between the CIA and the Department of Defense over the cost of replacing lost Predators also hampered collaboration over the use of that platform in Afghanistan. The Team concludes, however, that other impediments, including the slow-moving policy process, reduced the importance of these CIA-military differences. (p. xxii) (3PA: The pre-9/11 fight between the Pentagon and CIA over who would pay for the Predator, and who would have final authority to approve drone strikes, was what led to the CIA becoming the lead executive agency for drone strikes after 9/11. This historical act of bureaucratic happenstance was never intended to be a permanent solution, yet it remains with us today.) …The intelligence priorities in place on 11 September 2001 were based on Presidential Decision Directive (PDD)-35, signed by President Clinton on 2 March 1995. [redacted] The Intelligence Community's approach to priorities in the years following PDD-35 was to add issues to the various tiers, but not to remove any. Nor was there any significant effort to connect intelligence priorities to resource issues–providing increases to some while decreasing the resources provided to lower-priority issues. The 9/11 Team believes that a formal reprioritization of intelligence priorities in the years leading up to 9/11 might have provided important context for resource decisions relating to counterterrorism.
A number of senior leaders, including the DCI, have stated that the IC had to deal with major challenges that competed for available resources. Indeed, as a Tier 1B issue, terrorism remained at the same level–at least in the formal prioritization [redacted] until after 9/11. (pp. 137-139) (3PA: It is amazing to think that this was the haphazard manner by which the IC provided guidance for intelligence collection and analysis, with such incomplete prioritization. Starting in February 2003, this process was replaced by the more rigorous National Intelligence Priorities Framework (NIPF), which is reviewed at least annually by the president and senior national security officials.) …The budget-related actions of the former Chief of CTC might also have contributed to a negative perception of CTC within OMB. [redacted] stated that OMB officers were skeptical when the Agency pushed for permanent increases in its counterterrorism budget because they thought a lot of it was "crying wolf;" this officer recalled that the then-Chief of CTC would give what he called the "dead baby" speech. (pp. 191-192) (3PA: Employing the threat of terrorist fatalities to obtain more resources remains a consistent tactic, as does the disbelief of budget officials and congressional appropriators that more money is needed by the military and IC.) …CTC also hired contractors to help address personnel needs. The number of term employees (blue-badge contractors) was relatively small; however, the Team was unable to obtain reliable data on the number of independent and industrial contractors working in the CTC during FY 1996-2001. Thus, the Team was not able to assess the full extent of CTC's efforts to augment its staff using contractors. (p. 197) (3PA: Just as today, the military and IC use contractors to enhance their workforce, but fail to compile good data about those contractors, or to use that data to employ contractors in a more effective and less costly manner.
In July 2010, when Director of National Intelligence James Clapper was asked how many contractors were in the IC, he replied, "We can certainly count up the number of government employees that we have, absolutely. Counting contractors is a little bit more difficult.") …The Team therefore cannot accurately determine the overall size of the analyst cadre devoted to working on al-Qa'ida immediately prior to 9/11. The Team believes it is probably somewhat higher than the Joint Inquiry's figure of "fewer than 40" but, based on our analysis, below the 49 (34 in AIG plus 15 in the DI) that CTC offered in 2002 as the number of analysts working on al-Qa'ida prior to 9/11. (p. 228) (3PA: Before 9/11, there were only forty to fifty analysts working on al-Qaeda.) …AIG analysts in [redacted] who worked most closely on al-Qa'ida undertook only limited alternative analysis in the years prior to 9/11, however. In a review of analytic papers produced by [redacted] the Team found only one example of such analysis; [redacted]. In interviews, most of the analysts in [redacted] recalled using no alternative analysis, and "did not have the luxury to do so." That said, the [redacted] FY 02 Research Plan listed a paper, "Key UBL Assumptions Check," which was to take a comprehensive look at the key assumptions underlying analysis of the entire Bin Ladin issue and which likely would have employed alternative analysis techniques; 9/11 occurred before the branch could get to this paper. Probably in response to this dearth of pre-9/11 alternative analysis, on 12 September 2001, the DCI created the Red Cell, a unit of senior DI analysts and other IC officers tasked with thinking "outside the box" on counterterrorism. In short order, the Red Cell's nontraditional approach began receiving praise from the President, Vice President, and other senior policymakers. Later, the Red Cell's mandate expanded to other intelligence topics. (pp.
247-249) (3PA: IC analysts and policymaker-consumers constantly express their desire to produce and receive alternative analysis, which is distinct from mainline authoritative analysis. However, the "tyranny of the inbox" often makes this an impractical task—a concern that remains prevalent in the CIA and elsewhere to this day. Also, my forthcoming book, Red Team: How to Succeed by Thinking Like the Enemy, includes a case study on the creation and development of the CIA's Red Cell.) …The price of the dashed hopes of action against UBL by the tribal assets became clearer in December. On 20 December, [redacted] reported Bin Ladin's return to Kandahar, where the tribal assets could most easily monitor his activities. This set up an intense period during which the assets considered a ground attack on Bin Ladin, while policymakers and CIA considered a cruise missile attack. [redacted] After some delays, on 23 December they picked either 25 or 26 December for their operation to capture Bin Ladin. The Chief of the UBL Station, [redacted] meanwhile, reported [redacted] on 22 December that [redacted]. He predicted that the interest in this [redacted] among the CIA leadership and at the NSC would be "intense." Asset-reporting problems on the 22nd prevented any action on that date. On 23 December, at noon and then again at 7:30 pm Islamabad time, [redacted] reported that Bin Ladin [redacted]. With this information in hand, the CIA and policymakers, including the National Security Advisor, the Chairman of the Joint Chiefs, the Attorney General, the DCI, and the Assistant DCI for Collection (ADCI/C), among others, deliberated on launching a cruise missile attack against Bin Ladin while he slept. Recollections vary slightly in detail but agree on the most important point: the outcome of the meeting. The CIA presented its information derived from the tribal assets.
CTC appeared confident that the information was accurate, initially offering an estimate of 75-percent reliability; under questioning from the National Security Advisor [Sandy Berger], however, it lowered the estimate to 50 percent. According to one participant [in the White House meeting], in response to concerns from some Principals about the potential for killing innocent civilians, the military presented two estimates that diverged greatly on the anticipated extent of collateral damage. One participant recalls that the J-2 asserted that the collateral damage would be tolerable. Then officers from the [redacted] gave numbers perhaps three times as high as those from the J-2. The session ended with the decision not to act. The DCI in his testimony later said that, in this case, as in others where policymakers contemplated missile attacks against Bin Ladin, information on the Saudi terrorist's location was based on a single thread of intelligence, and they decided it was not good enough. Others told the Team that the estimated collateral damage was also an important factor. The ADCI for Collection recalls that the Chairman of the Joint Chiefs of Staff [Gen. Hugh Shelton] pulled out charts estimating a high number of deaths and injuries and said that the proponents of launching the missiles "want me to kill 600 people." (pp. 375-376) (3PA: The CIA lowering its probability estimate of the accuracy of the intelligence from 75 percent to 50 percent after questioning from Berger violates every principle of providing analytical support to policymakers. Similarly, different military officers providing differing estimates of the collateral damage from a cruise missile strike against bin Ladin is not the sort of sound military advice that civilian officials require when authorizing the use of force.)
…A more general explanation for deciding against action, one applicable to all the contemplated cruise missile attacks, was the inherent delay in the process. According to UBL Station's late May 1999 assessment of the 13-16 May period, even with improvement in timeliness, the normal delays from the time [redacted] observed events to the time the information about those events arrived at CIA Headquarters would be from one to three hours. The time needed first to process the information at Headquarters and to get a National Command Authority decision to launch missiles, and then for the cruise missiles to arrive at their target, made for great uncertainty about whether the person targeted (Bin Ladin) would remain in place long enough to be hit. (p. 378) (3PA: The impossibility of knowing bin Ladin's location one to three hours into the future was what compelled the Clinton administration to push the CIA and military to find a weapons platform that would "collapse the kill chain." Thus, by February 16, 2001, the United States had successfully developed and tested the armed Predator drone.) …Judging by the available evidence, the Agency was unable to satisfy the demands of the top leadership in the US military for precise, actionable intelligence before the military leadership would endorse a decision to deploy US troops on the ground in Afghanistan or to launch cruise missile attacks against UBL-related sites beyond the 1998 retaliatory strikes in Afghanistan and Sudan. The military demanded a precision that, in CIA's view, the Intelligence Community was incapable of providing. Military interviewees offered their views on why senior military officials (and policymakers) generally were reluctant to act on intelligence. One senior military interviewee [redacted] told the 9/11 Team that the military had a cultural reluctance and distrust about working with CIA prior to 9/11.
[redacted] said some in the military feared that, if the military went into Afghanistan and the going got rough, the CIA would leave them in the lurch. (p. 401) (3PA: Again, it is difficult to remember that the military and CIA once had an inherent dislike of each other, and routinely protected their own bureaucratic interests at the expense of achieving shared national missions.)
  • Cybersecurity
    Taking Stock of Snowden’s Disclosures Two Years On
    Last Friday marked the second anniversary of the start of Edward Snowden's disclosures. The days preceding this anniversary highlighted Snowden's continued prominence. On June 1, Section 215 of the USA PATRIOT Act—the legal basis for the domestic telephone metadata surveillance program Snowden revealed—expired. On June 2, the Senate passed and President Obama signed the USA FREEDOM Act, which the House of Representatives previously approved. This legislation transforms how the U.S. government will access domestic telephone metadata for foreign surveillance. On June 4, the New York Times published a story based on Snowden-disclosed documents claiming the NSA secretly expanded "Internet spying at the U.S. border." Also on June 4, Snowden published an op-ed claiming that "the world says no to surveillance." It was a good week for Snowden. But has it been a good two years for the rest of us?

Section 215 and the Domestic Telephone Metadata Program

Snowden's signature achievement involved exposing what the U.S. government did under a secret interpretation of Section 215. He defended the principle that the government should not exercise power under secret laws. Although oversight bodies found no NSA abuses, this conclusion did not overcome the rule-of-law defect Snowden emphasized. However, Snowden's challenge was not the only factor in Section 215's death. The metadata program was ineffective as a counter-terrorism tool, which led some in the intelligence community to welcome its demise. Had the program contributed to foiling terrorism, its utility might have overcome the taint of its secret jurisprudence.

Section 702 Surveillance Against Foreign Targets

Snowden also exposed programs operated under Section 702 of the Foreign Intelligence Surveillance Act (FISA). For example, the Times article on June 4 used Snowden-provided documents to disclose that the U.S.
government began conducting surveillance for malicious cyber activities suspected to originate from foreign governments. Section 702 authorizes surveillance against foreign governments, so the cyber surveillance fits within this legal authority. The NSA was interested in conducting cybersecurity surveillance without identifying a foreign target. Such a step might have secretly expanded Section 702, but the Department of Justice blocked the idea. Like Snowden’s other Section 702 revelations, this disclosure did not reveal secret activities that break the law or abuse legal authority. Instead, Snowden’s disclosures provided transparency about Section 702 programs. Information released by the intelligence community and contained in oversight reports brought even more transparency. Controversies about the scope of Section 702 surveillance, the scale of incidental collection of communications of non-targeted persons, and government uses of incidentally collected information existed before Snowden came along. The new transparency rekindled these controversies, but also revealed how valuable Section 702 surveillance is to the U.S. government. President Obama imposed additional restrictions on U.S. government use of incidentally collected information but did not curtail the surveillance. Congress has not, so far, amended Section 702. At the two-year mark, Snowden’s impact concerning Section 702 is less definitive. Section 702 surveillance continues with robust support, leaving advocates of civil liberties lamenting the lack of curtailment of these programs. Further, the new restrictions on the use of incidentally collected information have not placated domestic opponents or foreign governments and nationals. In many ways, pre-Snowden debates about Section 702 continue because the transparency Snowden triggered provides all sides with ammunition. 
The Global Context

Snowden intended to spark global debate by framing expansive surveillance and espionage as threats to universal human rights. His June 4 op-ed claimed a "change in global awareness" is underway and "the balance of power is beginning to shift." However, the gap between these claims and reality is great, suggesting his impact globally has been weak, if not counterproductive. The latest Freedom on the Net survey does not support Snowden. Between May 2013 and May 2014 (roughly the first year of his disclosures), Internet freedom declined "for the fourth consecutive year, with 36 out of 65 countries assessed . . . experiencing a negative trajectory[.]" Little has happened since May 2014 to suggest this trend has been reversed. Increased surveillance by many states, including democracies, contributed to this trajectory's momentum. For example, governments in France, Turkey, and the United Kingdom said "yes" to increased surveillance. In the midst of this decline, Snowden damaged the U.S. government's international standing, created rifts among democracies, and harmed U.S. technology companies. The Snowden-triggered move by tech companies toward stronger encryption pits democratic governments against the private sector and civil society in a looming zero-sum brawl. Meanwhile, unperturbed by Snowden, autocratic countries exploit the disarray within and among democracies, bash the hypocrisy of Internet freedom's champions, conduct intrusive surveillance at home and abroad, and strengthen their manipulation, control, and censorship of digital communications. Given these facts, the UN resolution on the right to privacy in the digital age, which for Snowden represents global progress, does not reflect consensus among states on the relationship between surveillance and human rights. An unprincipled but ineffective program is dead. Long-standing controversies about large-scale surveillance programs targeting foreigners continue.
Government surveillance powers are increasing, democracies are bitterly divided, and Internet freedom is in retreat. Whether these outcomes mean we have, as a country and an international community, reached a better place is hotly debated—a reminder that history’s arc is longer than two years.
  • Intelligence
    The Messages the Federal Court of Appeals Sent to Congress and the Executive Branch on Metadata Surveillance
    Last week, a federal appeals court ruled that Section 215 of the PATRIOT Act does not authorize the NSA's telephone metadata surveillance program. Since Edward Snowden disclosed it in June 2013, the program has been so controversial that its fate has taken on historic significance. The decision in American Civil Liberties Union v. Clapper arrived as Congress must decide whether to reform the program, continue it by re-authorizing Section 215, or let Section 215 expire on its June 1 sunset date. The judgment provided the program's defenders and critics with ammunition in this debate. Moreover, the court, through its decision, seems to be sending the political branches explicit constitutional messages about what should happen next.

Troubling Aspects of the Decision

This case began in August 2013 when the ACLU filed suit in response to the program's disclosure. In December 2013, a federal district court denied the ACLU's request for a preliminary injunction, reasoning that federal law precluded judicial review of Section 215 and that the program did not violate the Fourth Amendment. The appeals court overruled the district court. It decided Congress did not preclude judicial review of Section 215, and it held Section 215 did not authorize bulk collection of telephone metadata because this activity was not, and could not reasonably be interpreted as being, relevant to authorized counter-terrorism investigations. The court did not issue a Fourth Amendment ruling. Nor did it grant the preliminary injunction the ACLU sought. Commentary on the appeals court's decision has mostly focused on whether the court was legally correct or persuasive and what impact the decision might have on Capitol Hill. However, the decision has troubling features that have received less attention but deserve examination. To begin, the court compared the firestorm over the program to scandals in the 1970s concerning surveillance within the United States.
Like federal courts did in the 1970s, it held that the phone metadata surveillance program was illegal. Yet, in not issuing an injunction, the court allowed the program to continue because of the "national security interests at stake." Under constitutional law, surveillance should have a legal basis. After the court's interpretation of Section 215, that basis could only be the president's constitutional national security powers. But federal courts in the 1970s rejected claims that these powers justified the domestic surveillance at issue. The Bush administration turned to Section 215 precisely to avoid continuing to rely on presidential powers to justify the metadata program legally. So, with presidential authority suspect, what is the legal basis for the program as it continues to collect phone metadata on Americans? Concerns multiply when we consider the privacy implications of government collection of metadata in the age of ubiquitous digital technologies. The court acknowledged that dependence on these technologies raises difficult questions about the "third-party doctrine," under which data is not protected by the Fourth Amendment if it is shared with a third party, such as a phone company. Given this acknowledgment, is the court allowing a surveillance program to continue that not only lacks a legal basis but also might violate the Fourth Amendment?

Making Sense of the Decision

In its decision, the court is sending two strong messages to the legislative and executive branches about their responsibilities to protect national security and safeguard individual rights. First, the court believes the best outcome for the Section 215 program is agreement between the political branches. Issuing a preliminary injunction because the metadata program had no legal basis, or making a Fourth Amendment ruling because of the impact of digital technologies, would take federal courts deeper into volatile national security, privacy, constitutional, and political controversies.
The court asserts that legislation provides the most effective way to design metadata surveillance programs for counter-terrorism and to signal what the political branches deem permissible under the Fourth Amendment. In short, the political branches can directly authorize metadata surveillance to protect national security (avoiding the surreal interpretive brawl Section 215 became), tailored to reflect privacy concerns about government collection and analysis of metadata in the digital age (avoiding potentially divisive judicial decisions on the Fourth Amendment). Second, the court's reasoning contains warnings to the political branches as they consider their next steps. Its interpretation of "relevance" in Section 215 sends the message that invoking national security should not contort laws in ways that defy their language and intent. The court also rejects the argument that Congress ratified the executive branch's expansive definition of relevance when it reauthorized Section 215 in 2011. In doing so, the court communicated that secret legislative review of secret interpretations of public laws is not legitimate. Finally, the court signaled its view that changes in communication technologies raise serious constitutional concerns about the third-party doctrine, suggesting that it might have held the metadata program in breach of the Fourth Amendment had it reached this question. In sending these messages, the court recognized the constitutional prerogatives of the political branches in national security but provided rule-of-law guidance to Congress and the president in crafting the new legislation the United States so badly needs. Whether the political branches live up to these responsibilities in the coming days will signal to the world whether the United States understands how to protect the security and rights of a free people.
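The privacy concern the court flagged turns on how revealing "mere" metadata is. As a minimal illustration (the call records and names below are invented for the sketch; real bulk collection operates on phone numbers at vastly larger scale), a handful of who-called-whom records is enough to reconstruct a contact graph and rank its hubs, without any call content at all:

```python
# Hypothetical call detail records: (caller, callee, duration_seconds).
# No content is included -- this is "just metadata."
records = [
    ("alice", "bob", 120),
    ("alice", "carol", 45),
    ("bob", "carol", 300),
    ("dave", "alice", 60),
    ("dave", "bob", 90),
]

def contact_degree(records):
    """Count distinct contacts per party -- a crude centrality measure."""
    contacts = {}
    for caller, callee, _ in records:
        contacts.setdefault(caller, set()).add(callee)
        contacts.setdefault(callee, set()).add(caller)
    return {who: len(peers) for who, peers in contacts.items()}

degrees = contact_degree(records)
hub = max(degrees, key=degrees.get)  # the most-connected party in the graph
```

Run over months of records and joined with location and timing fields, the same trivial aggregation exposes associations, routines, and relationships, which is why the third-party doctrine's premise that such data is unprotected sits uneasily with ubiquitous digital technologies.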
  • Politics and Government
    Cybersecurity Legislation in Congress: Three Things to Know
    The debate over cybersecurity is heating up again in Washington. Congress is considering multiple pieces of legislation intended to enhance the ability of the private sector and government to share information about digital threats. Meanwhile, the White House has put forth its own proposal, which diverges from those measures on major issues.  Keeping track of all the legislative measures can be confusing. In this video, I provide some background on three things you need to know about the debate on cybersecurity information sharing.
  • Cybersecurity
    The UN GGE on Cybersecurity: How International Law Applies to Cyberspace
This week, Net Politics is taking a look at the work of the UN Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security (GGE), which is meeting this week in New York. From its first incarnation in 2004, the GGE tussled over whether international law applied to the use of information and communication technologies (ICTs) by states. This struggle explains why some experts considered it a breakthrough in June 2013 when the GGE stated that “[i]nternational law, and in particular the Charter of the United Nations, is applicable, and is essential in maintaining peace and stability and promoting an open, secure, peaceful and accessible ICT environment.” With that resolved, the GGE has turned to address how international law applies to state use of ICTs.

International Law Applies to State Use of ICTs ... Really?

The significance accorded to the GGE’s celebrated statement exceeds its actual importance. Recall what happened in and after June 2013. The United States was preparing to confront China over economic cyber espionage, but Snowden’s disclosures about the NSA’s cyber espionage derailed that plan. Yet international law does not prohibit or regulate espionage. So at the moment the GGE agreed that international law applies to state use of ICTs, international law did not (and still does not) apply to one of the most important state uses of ICTs that cause international security problems. The GGE recommendation has not fared better where international law does have rules. Since the release of the 2013 GGE report, the United States has refused to discuss many activities Snowden disclosed, such as offensive cyber operations against foreign nations, let alone explain how they complied with international law.
The United States has argued that its international legal obligations to protect privacy do not apply to its foreign surveillance activities, which angered allies. China continues to cite the principles of sovereignty and non-intervention in dismissing human rights concerns about its Internet censorship. Russia used cyber operations in its annexation of Crimea and intervention in Ukraine. Iran hacked a Las Vegas casino, and North Korea launched a cyberattack against Sony. Given such behavior, it is fair to ask whether the rules of international law really apply in any meaningful way.

Challenges to GGE Efforts on How International Law Applies

As the GGE considers how international law applies, it has to navigate legal, technological, and political challenges. Legally, many rules relevant to ICT use in international security contexts are general in nature. For example, international law prohibits the use of force by states except in self-defense in response to an armed attack. Assessing how this rule applies requires fact-specific, case-by-case analyses of incidents. Was Stuxnet a use of force or an armed attack? International lawyers have tackled this question, but the controversies surrounding how the law applies to Stuxnet demonstrate the difficulties associated with such legal analysis. Technologically, assessing how international law applies requires identifying how the technological features of ICTs affect the functioning of legal rules. ICTs allow states to obscure their involvement in cyber operations, which complicates the application of international law on state responsibility. Technology also offers states ways to calibrate effects so that their actions stay under key legal thresholds. Thus, states can target and disrupt civilian computers during armed conflict as long as the effects do not qualify as an “attack” under the law of armed conflict.
These capabilities make cyber attractive to states and create disincentives for adjusting pre-cyber rules to account for what cyber technologies make possible. The elasticity and utility of cyber technologies explain why the U.S. Director of National Intelligence predicted that, rather than massive cyber attacks, the United States confronts “an ongoing series of low-to-moderate level cyber attacks from a variety of sources over time, which will impose cumulative costs on US economic competitiveness and national security.” Similarly, the Commander of U.S. Cyber Command emphasized the need for more offensive cyber power to deter persistent and growing threats that defenses and international law are not stopping. Politically, focusing on how international law applies reveals that states have different interests and compete for influence in cyberspace. As the Snowden-triggered controversies show, serious disagreements exist among leading democracies about how international human rights law applies to state use of ICTs. The gap is more profound between democracies and authoritarian states. These problems are deeply political, and the differences among states—especially between democracies and authoritarian governments—inform the larger competition for power and influence intensifying in international relations. This context means that GGE discussions carry political subtexts, particularly between the United States and China, that extend beyond ICTs and that will make reaching anything more than superficial consensus difficult. Given that it took the GGE nearly a decade to agree that international law applied to state use of ICTs, it is hard to see this process easily overcoming the legal, technological, and political problems inherent in assessing how international law applies.
Consensus that cyber activities might, in unspecified situations, violate sovereignty, the principle of non-intervention, the use-of-force prohibition, rules on discrimination and proportionality in the law of armed conflict, or human rights would merely restate that international law applies. This outcome would be no more impressive or important than it was the first time.
  • Political Transitions
    CIA Director: We’re Winning the War on Terror, But It Will Never End
Last night, CIA Director John Brennan participated in a question-and-answer session at Harvard Kennedy School’s Institute of Politics. The first thirty-seven minutes consisted of an unusually probing exchange between Brennan and Harvard professor Graham Allison (full disclosure: Graham is a former boss of mine). Most notably, between 19:07 and 29:25 in the video, Allison pressed Brennan repeatedly about whether the United States is winning the war on terrorism and why the number of al-Qaeda-affiliated groups has only increased since 9/11: “There seem to be more of them than when we started…How are we doing?” Brennan replied: If I look across the board in terms of since 9/11 at terrorist organizations, and if the United States in all of its various forms. In intelligence, military, homeland security, law enforcement, diplomacy. If we were not as engaged against the terrorists, I think we would be facing a horrendous, horrendous environment. Because they would have taken full advantage of the opportunities that they have had across the region… We have worked collectively as a government but also with our international partners very hard to try and root many of them out. Might some of these actions be stimulants to others joining their ranks? Sure, that’s a possibility. I think, though it has taken off of the battlefield a lot more terrorists, than it has put on. This statement is impossible to evaluate or measure because the U.S. government has consistently refused to state publicly which terrorist organizations are deemed combatants, and can therefore be “taken out on the battlefield.” However, according to the State Department’s annual Country Reports on Terrorism, the estimated strength of all al-Qaeda-affiliated groups has grown or stayed the same since President Obama came into office.
Of course, non-al-Qaeda-affiliated groups have arisen since 9/11, including the self-proclaimed Islamic State, which the Central Intelligence Agency estimated last September to contain up to 31,500 fighters, and Boko Haram, which has perhaps 10,000 committed members. However, the most interesting question posed to Brennan came at the very end from a Harvard freshman who identified himself as Julian: “We’ve been fighting the war on terror since 2001. Is there an end in sight, or should we get used to this new state of existence?” Brennan replied: It’s a long war, unfortunately. But it’s been a war that has been in existence for millennia, at the same time—the use of violence for political purposes against noncombatants by either a state actor or a subnational group. Terrorism has taken many forms over the years. What is more challenging now is, again, the technology that is available to terrorists, the great devastation that can be created by even a handful of folks, and also mass communication that just proliferates all of this activity and incitement and encouragement. So you have an environment now that’s very conducive to that type of propaganda and recruitment efforts, as well as the ability to get materials that are going to kill people. And so this is going to be something, I think, that we’re always going to have to be vigilant about. There is evil in the world and some people just want to kill for the sake of killing…This is something that, whether it’s from this group right now or another group, I think the ability to cause damage and violence and kill will be with us for many years to come. We just have to not kill our way out of this because that’s not going to address it. We need to stop those attacks that are in train but we also have to address some of those underlying factors and conditions.
I’m not saying that poverty causes somebody to become a terrorist, or a lack of governance, but they certainly do allow these terrorist organizations to grow and they take full advantage of those opportunities. To summarize: the war on terrorism is working, compared to inaction or other policies, but the American people should expect it to continue for millennia, or for as long as lethal technologies and mass communication remain available to evil people.
  • Cybersecurity
    Sanctioning Hackers
President Obama signed an executive order today that allows the U.S. Department of the Treasury to sanction individuals or entities involved in "significant malicious cyber-enabled activities" (you can read the order here). The sanctions, which could involve travel bans to the United States or the seizure of funds, would be levied against those who engage in attacks that disrupt or destroy critical infrastructure networks, or who steal intellectual property or trade secrets. State-owned enterprises or entities that benefit from cyber espionage could also be the target of sanctions. This is, as others have noted, a big deal. For years, pundits, including myself, have been saying that the costs of hacking had to be raised, and that the next steps would be sanctions targeting individuals. In a June 2014 Asia Unbound podcast, Special Advisor to the President and Senior Director for Asian Affairs at the National Security Council Evan Medeiros hinted that after indicting the five PLA hackers, the Obama administration was thinking about how to penalize the state-owned enterprises that were the recipients of the stolen intellectual property. Three quick questions. First, how, and how often, will the order be implemented? It is unlikely to have much effect on North Korean and Iranian hackers, since both of those countries are already under substantial sanctions regimes. The same might be said of Russia, since the United States and its European allies have levied sanctions in the wake of the crisis in Crimea. Does that mean China is the main target? Even if it is China, the idea might be to deter the next generation of hackers rather than prevent the current wave of attacks. That is, a PLA attacker may not think much about travel to the United States now, but a college student who has not yet traveled down that road might think twice. They may still have dreams of visiting Los Angeles. If China is the main target, what does Washington think Beijing’s response will be?
Right now, the two sides are involved in a complicated dance, where each step seems to be matched by the other. The United States claims China is behind attacks on U.S. networks; China claims the United States is the real evil empire in cyberspace, hacking the entire world. Beijing uses Washington’s demands for backdoors in encryption as justification for similar demands in its revised anti-terror law. Chinese and U.S. tech companies are blocked from each other’s markets because of security concerns. If the United States places a travel ban on a Chinese hacker, should NSA employees think twice before they book a tour to see the Forbidden City? Where does the tit-for-tat end? Finally, the executive order says any case must be supported with evidence that can withstand a court challenge. Does that mean we should expect to see the government roll out even more technical details to be used for the attribution of attacks? In the past, the intelligence agencies have hesitated because they wanted to protect their sources and methods. Will the order result in the NSA and others burning more intelligence to levy sanctions? We will have to wait and see how these three questions play out, but there is no doubt that this is a major policy development.
  • Cybersecurity
    The Relationship Between the Biological Weapons Convention and Cybersecurity
Today, the Biological Weapons Convention (BWC)—the first treaty to ban an entire class of weapons—marks the 40th anniversary of its entry into force. Reflections on this milestone will examine the BWC’s successes and travails, such as its ratification by 173 countries, its lack of a verification mechanism, and what the future holds. Although not prominent in these discussions, the BWC relates to cybersecurity in two ways. First, the BWC is often seen as a model for regulating dual-use cyber technologies because the treaty attempts to advance scientific progress while preventing its exploitation for hostile purposes. Second, the biological sciences’ increasing dependence on information technologies makes cybersecurity a growing risk and, thus, a threat to BWC objectives.

The BWC as a Model for Cybersecurity

The BWC addresses a dual-use technology with many applications, including the potential to be weaponized. Similarly, cyber technologies have productive uses that could be imperiled by the development of cyber weapons. Those concerned about cyber weapons often turn to the BWC for guidance because of characteristics biology shares with cyber—the thin line between research and weaponization, the global dissemination of technologies and know-how, the tremendous benefits of peaceful research, and the need to adapt to new threats created by scientific and political change. The BWC supports actions to prevent weaponization and foster peaceful exploitation of the biological sciences, including:

• Prohibitions on weaponization and on transferring the means of developing bioweapons;
• Requirements to implement domestic measures to prevent weaponization;
• Obligations to cooperate and provide assistance in addressing BWC violations; and
• Undertakings to facilitate exchange of information, materials, and technologies for peaceful research.

However, the BWC maps poorly against cybersecurity problems.
Cyber weapons, weaponization, and attacks by states and criminals have become ubiquitous. The BWC required destruction of stockpiles of bioweapons, but many countries accepted this obligation and the weaponization ban because they concluded bioweapons had little national security utility. The same cannot be said for cyber technologies. States find cyber exploits useful for multiple national security tasks, including law enforcement, counter-intelligence, espionage, sabotage, deterrence, and fighting armed conflicts. Tools used to prevent biological weaponization, such as imposing licensing and biosecurity requirements on biological research facilities, make little sense for cyber given the nature of cyber technologies and their global accessibility. Experts have called for a norm requiring countries to assist victims of cyberattacks, which echoes the BWC’s provision on assistance in cases of treaty violations. However, political calculations, not normative considerations, determine whether governments offer assistance to countries hit by cyberattacks—behavior consistent with other contexts where states provide discretionary assistance, such as after natural disasters. Nor have countries embraced export controls on cyber technologies in the manner seen with biological technologies. Countries harmonizing export controls on dual-use technologies through the Wassenaar Arrangement added "intrusion software" to this regime in December 2013. However, this decision reflected human rights concerns about authoritarian governments using such software, a reason having no counterpart in export controls supporting the non-proliferation of bioweapons. Perhaps led by the United States, the Wassenaar Arrangement might create more export controls for cyber technologies, but here the BWC offers a cautionary tale. 
Developing countries have long considered that export controls on biotechnologies imposed for non-proliferation reasons violate their BWC right to gain access to equipment, materials, and information for peaceful purposes. Whether a similar controversy emerges if Wassenaar participants agree to more export controls on cyber technologies remains to be seen, but this path is not one the BWC suggests would be easy or effective.

The Cybersecurity Challenge in the Biological Sciences

The more important aspect of the BWC-cyber relationship involves the biological sciences’ increasing exploitation of, and dependence on, information technologies (IT). In describing scientific developments for the BWC review conference in 2011, the BWC Implementation Support Unit noted that "[i]ncreasingly the life sciences are referred to as information sciences. Digital tools and platforms not only enable wetwork but are increasingly able to replace it." Cybersecurity problems increase as dependence on information technologies deepens. Biological research enabled by information technologies is vulnerable to cyber infiltration by foreign governments, criminals, or terrorists, and to theft of data or manipulation of facilities. The cybersecurity challenge has been recognized in some policies. In the United States, Executive Order 13546 (2010) identified the need for cybersecurity in facilities handling dangerous pathogens, which led to amended regulations. As the biological and information sciences converge, cybersecurity becomes increasingly important for responsible biological research. Despite awareness of this dependence, the BWC process has not focused on cybersecurity. Neither the 2011 review conference nor the meetings in 2012-14 identified the security of information and the ubiquity of IT systems as issues arising from developments relevant to the BWC.
As planning for the next BWC review conference in 2016 unfolds, cybersecurity should be included to ensure the BWC’s next chapter does not ignore a problem the biological sciences face now and in the future.