Social Issues

Privacy

  • Cybersecurity
    Cyber Week in Review: October 28, 2016
    Here is a quick round-up of this week’s technology headlines and related stories you may have missed:

1. Fallout from the Dyn denial of service incident. In the wake of last Friday’s attack on Dyn, a Chinese electronics firm issued a recall of all webcams containing its circuit boards. The company, Hangzhou Xiongmai, says that the issue is that users haven’t changed the (often unchangeable) default passwords on their devices, allowing hackers to take control of them for nefarious purposes. The real issue is that security considerations are often an afterthought in internet of things (IoT) devices, and unlike car and other manufacturers, software and hardware companies are often not liable should a product malfunction. The European Union is preparing to issue new regulations on IoT devices that may help mitigate future incidents, and the U.S. government is looking to issue guidance for IoT manufacturers. By contrast, China is threatening legal action against those who make “false claims” about the integrity of Chinese-manufactured devices. Director of National Intelligence James Clapper stated on Tuesday that last week’s attack was likely carried out by a nonstate actor, and Dyn has confirmed that the attackers exploited the same Mirai malware that has been used in many of the recent DDoS incidents, including the one that targeted Brian Krebs’ website.

2. U.S. director of national intelligence on Russia and Dyn incident. James Clapper sat down for a Q&A session with Charlie Rose at the Council on Foreign Relations, where he talked about the Dyn incident and responding to Russian cyber activities, among other things. On Russia, Clapper stuck to the script of the official statement he and the secretary of homeland security released three weeks ago. However, he did note the challenges associated with responding, including revealing U.S. intelligence capabilities, controlling escalatory activity, and ensuring the legality of a response. 
The recent compromise of e-mail accounts of people close to Russian President Vladimir Putin has led to speculation that the Obama administration has begun responding. Putin continues to deny that the accusations are anything more than anti-Russian propaganda.

3. Well that was fast. The Privacy Shield agreement that governs the transfer of personal information between the European Union and the United States is facing a legal challenge from the privacy advocacy group Digital Rights Ireland on the grounds that its privacy protections are insufficient. The online case filing is sparse, detailing only the parties, the date of filing, and that the subject concerns an “area of freedom, security and justice.” Privacy Shield, which has only recently begun to be implemented, has attracted criticism on privacy grounds since the draft text was first released.

4. Law enforcement access to data. Microsoft, which owns Skype, was fined €30,000 by Belgium yesterday for failing to assist investigators in 2012 by intercepting users’ communications over the messaging service, a request the company says was impossible to fulfill. Courts in certain jurisdictions have taken to fining or banning messaging providers for failing to hand over data they don’t have, as happened in Brazil four times over the last two years with WhatsApp. In related law enforcement news, Yahoo released its newest transparency report, which indicated a slight decline in law enforcement requests for user data. The Yahoo report is an outlier, given that most tech companies are seeing law enforcement requests rise.
  • Germany
    Cyber Week in Review: August 26, 2016
    Here is a quick round-up of this week’s technology headlines and related stories you may have missed:

1. Further developments in the Shadow Brokers hack. Since an individual or group calling itself the "Shadow Brokers" posted hacking tools widely believed to have been created by the National Security Agency, there has been some speculation that an NSA insider, like Edward Snowden, was behind the leak, perhaps physically smuggling the tools out of the NSA on a flash drive. However, the evidence suggests it is more likely that a careless NSA operator left the tools on an unsecured server, where the Shadow Brokers were able to find them by tracing back from a system the NSA had compromised. Researchers digging into the tools this week also found a script that would hide an attacker’s presence in Huawei firewalls, on top of the Cisco, Juniper, and Topsec exploits already discovered. Another group of researchers was able to update one of the leaked exploits, which appears to have been stolen in 2013 and targets older versions of Cisco’s ASA firewall, so that it works on a much more recent version. And according to a linguist, despite the bad English used in the Shadow Brokers’ online posting, the person who wrote the statement appears to be a native English speaker who deliberately introduced errors.

2. France and Germany not ready to concede crypto wars. Meeting in Paris on Tuesday, the interior ministers of the two countries discussed plans to regulate encryption in the fight against terrorism. They called on the European Commission to require so-called "over-the-top" telecommunications providers (video, voice, and text chat apps such as Skype, WhatsApp, or Telegram) to maintain the capability to decrypt encrypted messages and turn over the communications of suspected terrorists to law enforcement authorities. 
The commission is currently considering extending existing privacy regulations, which stipulate how traditional telecoms handle customer data, to such over-the-top services. The ministers cited recent terrorist attacks in Europe as the reason they need access to encrypted communications, despite it being unclear whether the perpetrators of those attacks actually used encrypted systems. Privacy advocates were quick to criticize the ministers’ proposal.

3. WikiLeaks: good on transparency, not so good on privacy and security. According to a report by the Associated Press, WikiLeaks’ releases have included the personal information of several hundred people. While the website’s founders have said in the past that they have a "harm minimization policy" that aims to protect "legitimate secrets" in the documents they leak, such as medical records, this policy does not seem to have been followed in recent leaks. The documents, most of which were released as part of a dump of Saudi Arabian foreign ministry files, include medical records of children, refugees, and individuals with psychiatric conditions. Other documents identify victims of sexual assault, couples going through divorces, and individuals who are deeply in debt. Separately, a Bulgarian security researcher announced last week that he had discovered several hundred pieces of malware among documents released by the transparency organization.

4. Journalists in Russia the target of hacks. According to CNN, journalists with the New York Times and other news agencies have been the targets of cyberattacks by Russian intelligence agencies in recent months. The Times subsequently announced that its Moscow bureau was the target of an attempted hack, although a spokesperson for the paper said they had no evidence that any of their networks had been breached and that they had not hired any outside firms to investigate the issue. Both outlets report that the Federal Bureau of Investigation is looking into the issue. 
While alarming in light of recent breaches of the networks of the Democratic National Committee, such attacks are old hat, both for the New York Times and for Russia. Russian intelligence services have long targeted domestic journalists with cyberattacks, and the New York Times’ networks were breached by Chinese hackers in 2013.
  • Europe and Eurasia
    The UK Investigatory Powers Bill: Adding Much Needed Transparency
    James Pooler is a political science student at New York University and an intern for the Council on Foreign Relations’ Digital and Cyberspace Policy program.

Following David Cameron’s resignation after the United Kingdom’s vote to leave the European Union, Theresa May became the country’s second female prime minister. While most of the attention has focused on the political reshuffling at the top (the new cabinet, committees, and Brexit negotiators), considerably less attention has been given to the Investigatory Powers Bill, a surveillance bill meant to consolidate and formalize the authority of British intelligence and law enforcement agencies, which the new prime minister championed in her previous job as Home Secretary. In March 2016, the bill passed the House of Commons by a landslide 281 to 15 vote. The Liberal Democrats voted against it, dubbing it the “Snooper’s Charter.” The bill recently underwent a second reading in the House of Lords and is expected to pass by the end of this year. While it has been significantly amended, privacy advocates remain alarmed over its provisions on the collection and retention of data, encryption, and law enforcement warrants, which they argue are not explicitly defined.

The bill has four major provisions. First, it requires communication service providers (CSPs), such as telcos, messaging app providers, and social networking sites, to collect and retain the internet connection records of all their users for up to a year. The records include, but are not limited to, IP addresses, browsing history, the names of services consulted, and other metadata, but exclude the content of communications. 
A cabinet minister would have the power to require CSPs to make information readily available to law enforcement so as to identify “which individual has used a specific internet service, how a subject of interest is communicating online, or whether an individual is accessing or making available illegal material.”

Second, it explicitly legalizes government-enabled equipment interference, colloquially known as lawful hacking. Intelligence agencies, the armed forces, and law enforcement would be able to apply for targeted equipment interference warrants valid for six months, and could require compliance from CSPs to facilitate interference. Warrants are required for both targeted and bulk equipment interference, the latter applying when intelligence agencies cannot identify a specific target, a practice the Federal Bureau of Investigation has been pushing for in the United States with its request to amend Rule 41.

Third, it empowers cabinet ministers to issue “technical capability notices” to CSPs, which would “impose any obligations relating to the removal by a person of electronic protection applied by or on behalf of that person to any communications or data.” While the bill does not mandate backdoors, it expects CSPs to maintain the ability to decrypt end-to-end encrypted communications. Given the current debate about encryption, this provision has garnered some of the most controversy.

Fourth, the draft legislation reforms the oversight of these new and existing powers by creating a dedicated investigatory powers commission and a new way of authorizing warrants. The commission would be an independent body dedicated to the oversight of communications data, interception, equipment interference, and the related work of law enforcement and intelligence agencies. The new warrant procedure would require that interception warrants granted by a cabinet minister be approved by the investigatory powers commission. 
Facebook, Google, Microsoft, Twitter, and Yahoo unanimously expressed their opposition in a written statement to the British Parliament. Their primary concern is the legal conflicts such a law would create when tech companies seeking to comply with it take measures that have extraterritorial effect. For example, it is not inconceivable that UK authorities could ask a company like Microsoft to hand over customer data held in another country whose laws forbid such disclosure. Apple also expressed concern over the bill’s “technical capability notices” and their consequences for encryption, noting that “companies should remain free to implement strong encryption to protect customers.”

Opposition has also come from the Court of Justice of the European Union (CJEU), which has deemed bulk data collection lawful only when used to investigate serious crimes. A CJEU representative declared that data retention laws should “limit interference with fundamental rights” to what is strictly necessary. Moreover, the CJEU has a history of rejecting bulk collection. In the wake of Edward Snowden’s revelations, the court invalidated the EU Data Retention Directive, adopted in 2006 following the bombings in London and Madrid, deeming that bulk collection’s “wide-ranging and particularly serious interference […] with the fundamental rights at issue is not sufficiently circumscribed to ensure that that interference is actually limited to what is strictly necessary.”

For all of the opposition the bill has received, there are a few silver linings. First, it formalizes and brings to light powers that the UK government had once exercised in the shadows. The new transparency and accountability mechanisms, overseen by the judiciary, will help ensure that these issues remain in the public domain. Furthermore, the Home Secretary will be required to provide a review of the legislation’s implementation five years after it enters into force, providing a built-in mechanism for reassessment. 
Second, the bill isn’t nearly as intrusive as other interception and surveillance regimes, such as France’s year-old intelligence law and Russia’s new anti-terrorism laws, both of which have weaker oversight mechanisms. Despite its seemingly Orwellian features, the Investigatory Powers Bill introduces few new powers that UK authorities didn’t already have, brings transparency to those powers, and provides the oversight and accountability mechanisms that such extraordinary powers deserve.
  • Russia
    Cyber Week in Review: July 29, 2016
    Here is a quick round-up of this week’s technology headlines and related stories you may have missed:

1. WikiLeaks publishes DNC emails, surrealism ensues. All eyes were on Russia this week as the political and national security worlds tried to make sense of the allegation that Russian intelligence services had hacked the Democratic National Committee (DNC) and then provided a trove of emails to WikiLeaks, which published them to damage the Clinton campaign. According to the New York Times, the U.S. intelligence community has "high confidence" that Russia was behind the hack but is unsure whether Russian intelligence passed the emails along to WikiLeaks. Thomas Rid over at Vice lays out the case for Russian state sponsorship, pointing out that some of the command and control infrastructure the attackers used is the same as that used in the cyber incident against the Bundestag, which German intelligence attributed to Russia. Not everyone, however, is convinced. A number of commentators have suggested that the United States should respond forcefully if it eventually emerges that Russia deliberately tried to influence the election. Determining Russia’s intent is a challenging task and will be critical to shaping a U.S. response.

2. If there’s something strange in your computer network, who you gonna call? In a strange coincidence given the DNC hack, the White House issued a Presidential Policy Directive clarifying the responsibilities of various U.S. departments and agencies when responding to cyber incidents. In essence, the Federal Bureau of Investigation is tasked with investigating a cyber incident, and the Department of Homeland Security (DHS) is responsible for providing mitigation measures to remedy and recover from a breach. The White House also published a “cyber incident severity schema,” which ranks cyber incidents on a color-coded scale from 1 (unlikely to have an impact) to 5 (poses an imminent threat). 
CFR Senior Fellow Rob Knake assesses the directive and shoots back at critics who have argued that the schema is a repeat of the heavily criticized color-coded terrorism threat chart DHS managed during the George W. Bush administration.

3. EU privacy regulators: let’s see how Privacy Shield works before we challenge it. The new EU-U.S. Privacy Shield will remain legally unchallenged for its first year, according to EU privacy regulators. Isabelle Falque-Pierrotin, head of the French data protection office, who last week issued a formal notice against Microsoft’s data collection practices, told Reuters that she still wanted evidence that the United States would "not conduct mass and indiscriminate surveillance" before challenging it. Data protection agencies argue that by next year they will have sufficient evidence to determine whether a challenge is necessary. Privacy Shield will go into effect on August 1, 2016.
  • Cybersecurity
    Cybersecurity in the Health Sector: Mounting Problems, Uncertain Politics
    Recent ransomware attacks on hospitals have elevated awareness of the cyber threats health care providers face. The attacks forced hospitals into technological regression, relying on hard-copy records, and revealed aspects of the health sector that make cybersecurity difficult. These episodes also highlighted ways in which the health sector reflects problems experienced across the U.S. cybersecurity ecosystem. Improving health-sector cybersecurity requires addressing the sector’s unique features and integrating the sector into efforts to strengthen U.S. cybersecurity generally. However, concerns about health-sector cybersecurity have intensified just as the politics of U.S. cybersecurity face uncertainty. Providing health services puts individual well-being in the hands of governmental bodies and private-sector enterprises. Increasingly, such services depend on digital technologies, devices, and data. The benefits of the digital revolution are so significant that the responsibility, recognized in modern versions of the Hippocratic oath, to apply scientific advances for patient health encourages exploitation of information technologies. A ransomware attack on a hospital is not just another cybersecurity incident; it encroaches on matters of life and death. Integrating digital technologies into health services creates repositories of sensitive and valuable patient, financial, physician, pharmaceutical, and insurance information, as well as vulnerabilities in the networks used by physicians, hospitals, and insurance companies. The expanding use of digital information, communications, services, and medical devices means the health sector’s attractiveness as a target for malevolent cyber activities, and its “attack surface,” will grow. This trajectory is global and forms a disturbing part of what the World Health Organization calls the “health internet.” The need for health-sector cybersecurity was recognized before the recent ransomware attacks. 
The Obama administration’s 2009 Cyberspace Policy Review highlighted the need to protect patient data as the use of digital technologies advanced. The Department of Health and Human Services (HHS) developed a cybersecurity primer for health-sector activities. The U.S. government classifies the health sector as critical infrastructure subject to, among other things, the Framework for Improving Critical Infrastructure Cybersecurity (Cybersecurity Framework). The Food and Drug Administration (FDA) issued recommendations in 2013 for medical device manufacturers and health-care facilities to “take steps to assure that appropriate safeguards are in place to reduce the risk of failure due to cyberattack.” The American Hospital Association (AHA) has been educating its members about cybersecurity threats, including through its 2014 publication Cybersecurity and Hospitals. Both the FDA and AHA utilize the Cybersecurity Framework in efforts to strengthen cybersecurity. Federal law requires health-service providers to protect electronic health information and to notify individuals in cases of breaches of such information. Despite these efforts and laws, cyber incidents involving the health sector have increased. Data indicate that, in 2015, health care was the most attacked and affected industry, suggesting that progress has been inadequate. The ransomware attacks in 2016 underscore problems with cybersecurity in the health sector. Many problems are familiar to every sector struggling with cyber threats, including:

  • Difficulties with making cybersecurity an enterprise priority;
  • Dependence on software, systems, and devices developed without sufficient attention to security;
  • Inadequate use of protection measures (e.g., encryption);
  • Threats arising from employee behavior and malicious insiders;
  • Challenges in public-private cooperation, including information sharing; and
  • Frustration with the U.S. government’s perceived failure to better protect U.S. cyberspace. 
In response, Congress mandated in the Cybersecurity Information Sharing Act of 2015 (CISA) that HHS report to Congress by December 2016 on the preparedness of HHS and health-industry stakeholders in responding to cyber threats. CISA also required HHS to establish a Health Care Industry Cybersecurity Task Force to:

  • Analyze the cybersecurity challenges the industry faces, including those from networked medical devices;
  • Examine how other sectors have implemented cybersecurity strategies;
  • Provide HHS with information to disseminate on strengthening health-sector cybersecurity; and
  • Establish a plan to facilitate information sharing between health-sector entities and the federal government.

The Task Force held its first meeting in April and will have three more meetings before its mandate ends in March 2017. In its work, the Task Force will evaluate ideas on how to improve health-sector cybersecurity and its contribution to overall U.S. cybersecurity, such as the suggestion by CFR’s Robert Knake that tax credits for cybersecurity investments could benefit critical infrastructure sectors, including health. The Task Force will complete its work after a new president and Congress take office, and whether its analysis will matter to the new administration’s and legislature’s cybersecurity priorities is not clear. Neither Donald Trump nor Hillary Clinton has issued, or is likely to release, a position on cybersecurity in the health sector. Rather, efforts in this sector will fall under the general policies the new president and Congress pursue, such as Clinton’s strategy against cyberattacks. In this transition, the burden of turning the Task Force’s recommendations into action will fall on the government agencies and industry leaders that have grappled with this problem. Whether new initiatives on health-sector cybersecurity will take root in the next phase of American politics remains to be determined.
  • Digital Policy
    A New Framework for Cross-Border Data Flows
    Introduction

The flow of data across international borders creates jurisdictional challenges, as the data itself and the person generating it may be subject to different countries’ laws. International tensions result when law enforcement seeks evidence stored on a foreign server during a domestic criminal investigation or when individuals expect domestic privacy protections for data hosted abroad. Increasingly, countries have responded by imposing new requirements to store data locally, threatening cross-border data flows, which generate approximately $2.8 trillion of global gross domestic product each year. The United States should explore new avenues to prevent these restrictions on the free flow of data. Given that the majority of the world’s largest Internet companies are headquartered in the United States, tensions erupt most frequently when foreign citizens’ data is held by U.S. companies or stored on U.S. soil. The United States can both raise international data privacy standards and promote the norm of the free flow of information. It can do so by building on several recent diplomatic successes, including the Privacy Shield agreement between the European Union (EU) and the United States, as well as ongoing U.S.-UK negotiations to streamline access to data in criminal cases. The United States should act with great care to ensure its efforts raise overall privacy protections rather than subvert them. It can do so in three ways. First, the U.S. government should promote a common approach to data protection that is gaining traction through regional agreements such as the Privacy Shield and the Asia-Pacific Economic Cooperation’s (APEC) privacy framework in order to lessen growing privacy concerns. Second, the United States should finally update the mutual legal assistance treaty (MLAT) system, increasing the legitimacy of legal methods for obtaining cross-border access to evidence in criminal investigations. 
Third, the United States should leverage its willingness to take these actions in diplomatic negotiations to seek international endorsement of the norm of the free flow of information. The time to do this is now, when increasing numbers of countries are imposing requirements that data be stored locally, also known as “forced localization,” and when digital issues are on the agenda of the Group of Twenty (G20). This framework would reduce tensions between national sovereignty and the borderless Internet, on which the U.S. economy relies heavily, while strengthening respect for human rights, privacy protections, and the rule of law online.

Background

The jurisdictional conflicts arising from cross-border data flows usually involve foreigners’ data on servers belonging to U.S. companies. U.S. communication privacy law prohibits electronic communications companies from disclosing communications content except in certain situations, such as when compelled to do so in response to a U.S. warrant, court order, or subpoena, even when it is sought by a foreign country investigating a crime committed on its soil by a non-U.S. citizen. As a result, the principal recourse for foreign law enforcement is the system of MLATs, which protects the due process rights of the individual. However, U.S. procedures for complying with a request are opaque and take an average of ten months to complete. Edward Snowden’s disclosure of the extent of National Security Agency (NSA) surveillance has fueled privacy concerns among individuals whose data winds up on U.S. companies’ servers. The fifteen-year-old EU-U.S. Safe Harbor agreement, under which U.S. companies transferred personal data across the Atlantic by certifying that their privacy procedures complied with EU data protection laws, was struck down by the Court of Justice of the European Union in October 2015. 
The court acted in response to an Austrian student’s complaint, based in part on incorrect press accounts, that personal data he provided to Facebook was readily accessible to the NSA, in violation of European privacy laws. Fortunately, the United States has taken steps to ease these frictions. In February 2016, the European Union and United States agreed to replace Safe Harbor with Privacy Shield, which provides the European Union with assurances that data on its citizens that is transferred to the United States will be handled in accordance with EU privacy norms. In March 2016, the U.S. attorney general revealed that the United Kingdom and the United States are negotiating a deal to allow UK law enforcement agencies expedited access to data held in the United States. The United States can attempt to build on these efforts to further ease international concern about privacy and law enforcement access to data when it travels to the United States. The United States can also build on the Trans-Pacific Partnership provisions designed to protect the movement of data. The G20’s focus on digital issues this year is an opportunity to gain acceptance of the norm of the free flow of information. A statement in favor of such a norm would not obligate countries to remove data localization requirements, and therefore might be achievable if coupled with good-faith efforts on the part of the United States to ease tensions.

Challenges to a New Framework

Despite these modest successes, there are considerable challenges to resolving the tension between national sovereignty and international data flows. Privacy law scholars Peter Swire and Justin D. Hemmings argue that the increased use of encryption on consumer devices leads foreign law enforcement agencies to seek access to the same data in unencrypted form, which in some cases is hosted on a server in the United States. When confronted with the dysfunction of the MLAT system, countries may attempt to compel U.S. 
companies to hand over data in violation of U.S. law, require that data be stored locally, or mandate backdoors to unlock encrypted devices. Therefore, increasing foreign law enforcement’s access to data held by U.S. companies, if accomplished with the appropriate safeguards, could have the counterintuitive effect of strengthening privacy protections. However, care should be taken not to grant access too broadly or to the wrong regimes and thereby risk weakening human rights protections. Gaining a statement of support in the G20 for the norm of the free flow of information will be challenging, despite the benefits of cross-border data flows to international collaboration and economies of scale and despite goodwill gestures by the United States. Concerns from individual users and foreign governments regarding the treatment of data held by U.S. companies are behind some countries’ requirements that companies store personal data domestically. However, several G20 countries, notably Russia and China, have other interests in restricting data flows. Through multilateral diplomacy, the United States can explore whether it can garner enough support from countries eager for the United States to address their privacy and law enforcement concerns that the few remaining countries would be reluctant to oppose a broadly supported agreement.

Recommendations

In order to preserve the openness and global reach of the Internet, the United States should encourage the adoption of an international framework for increasing privacy and human rights protections while safeguarding the free flow of information. First, the United States should, to the extent practicable, encourage countries to adopt an approach to data protection that raises privacy protections when data crosses international borders to approximate international norms or the individual’s domestic laws. 
Successive agreements and reports, such as the revised privacy guidelines of the Organization for Economic Cooperation and Development (OECD), the Asia-Pacific Economic Cooperation privacy framework, and the new Privacy Shield, have endorsed this approach, referred to as “interoperability.” The United States should encourage broader adoption of these agreements. In addition, the U.S. Judicial Redress Act, part of Privacy Shield, grants EU citizens standing to sue the U.S. government concerning its collection of EU data. The U.S. government should add partners to the list of countries whose citizens can make similar claims, under the new law’s provision allowing the U.S. attorney general (with the agreement of the secretaries of State, Treasury, and Homeland Security) to do so. Second, the United States should undertake two separate reforms to address foreign law enforcement’s frustration with the MLAT process, thereby discouraging attempts to circumvent the system and its due process protections. The U.S. government should expedite and simplify the MLAT process through a variety of measures, such as increased funding for the Department of Justice’s Office of International Affairs and the introduction of standardized, online requests, as recommended in the 2013 report by the President’s Review Group on Intelligence and Communications Technologies. This would make the current system more legitimate and user-friendly without weakening its protections. In addition, the United States could allow countries with high human rights standards to join the eventual U.S.-UK agreement. Such a partnership would reward nations that respect due process and human rights. The system would safeguard against abuse by operating with stringent criteria, including those proposed by legal scholars Jennifer Daskal and Andrew K. 
Woods (such as the submission of targeted, particularized requests subject to robust minimization procedures and authorized by an independent adjudicator, and committing to transparency reports). Digital privacy expert Greg Nojeim has outlined additional restrictions that should also be considered, including that the crime be wholly committed in the requesting country, that the only connection to the United States should be the headquarters of the company holding the data, and that the U.S. company would not be required to release the information but would be required to notify the Department of Justice, which would ensure the information is not sought to restrict speech or undermine human rights. Third, the United States should work to obtain G20 leaders’ endorsement for the OECD’s Internet policymaking principles, which include allowing cross-border information flows and respecting human rights, as well as endorsement of interoperable privacy protection, such as the OECD privacy guidelines, APEC’s privacy framework, and the EU-U.S. Privacy Shield. Gaining Chinese and Russian support will be difficult, but digital issues are on this year’s G20 agenda and China, as this year’s host, is seeking deliverables, which provides the United States with some diplomatic leverage. Russia’s stated ambition to join the OECD, which would require acceding to the principles, may provide additional leverage. Recent, modest successes provide the United States with an opportunity to help resolve conflicts over privacy protections and law enforcement access to data through interoperable agreements. Forging these agreements would take flexibility on the part of the United States, but offers the opportunity to promote U.S. norms in support of an open, global, and secure Internet.
  • Global
    Privacy and Security in the Digital Age
    Play
    Experts discuss the trade-off between privacy and security in the debate over government access to encrypted data, and the implications for business, counterterrorism, and user security.
  • Cybersecurity
    Encryption Explained: A Council on Foreign Relations Infographic
    The debate over encryption has been in the headlines a lot over the past few weeks, fueled in part by the clash over the San Bernardino iPhone, questions about whether the attackers in Paris and Belgium used encryption tools to communicate, and WhatsApp’s roll-out of end-to-end encryption across its platform of one billion users. But what is encryption? What does it do, and what’s the fuss all about? The Council on Foreign Relations’ Digital and Cyberspace Policy Program put together an infographic to explain encryption, law enforcement concerns with ubiquitous encryption, and the arguments against mandating that tech firms maintain the capability to decrypt data. You can check it out here [PDF]. Make sure to share it!
  • Cybersecurity
    Crisis Averted, Postponed, or Exacerbated? The Department of Justice Delays the Apple iPhone Case
    On the eve of oral arguments concerning a court order directing Apple to assist the Department of Justice (DOJ) in accessing an iPhone as part of the investigation into the San Bernardino terrorist attack, the DOJ asked the federal court to postpone the hearing. The court granted the request. The DOJ told the court that, on March 20, “an outside party” demonstrated a possible way to unlock the iPhone without Apple’s help. The DOJ informed the court it needed time to test the proposed method, but that, if viable, the method “should eliminate the need for the assistance from Apple.” Such an unexpected development in a case of this importance raises eyebrows. Who or what is this “outside party”? What is this method for unlocking an iPhone the DOJ previously insisted could only be accessed with Apple’s involvement? Why had this party and method only come to DOJ’s attention so late in this unprecedented, contentious, and highly publicized case? Will the method work on other locked iPhones law enforcement agencies seek to access? Will the DOJ share this method of unlocking iPhones with Apple? Does the DOJ’s access to a way of unlocking iPhones without Apple’s assistance or the need for a specific court order create different but still worrying legal and policy concerns? Postponing the oral arguments shifts attention away from law to whether the method the DOJ will test provides access to the iPhone without damaging data on the device. If the method works, the court will, with the DOJ’s agreement, vacate the order directing Apple to provide assistance to unlock the iPhone. This outcome would end the showdown between the DOJ and Apple over this iPhone, but this case, and the legal questions it raised, was always about more than one iPhone. These questions will resurface with Apple if the proposed method fails or when the DOJ asks a court to compel a different company to assist law enforcement agents in accessing encrypted information on digital devices.
The existence of a viable method of unlocking this particular iPhone might provide a way to gain access to other iPhones. In opposing the court order directing it to provide assistance, Apple argued the government was forcing it to create a “backdoor” to software that would render other iPhones vulnerable. Now some “outside party” has handed the DOJ a possible backdoor to iPhones about which Apple apparently knows nothing and, thus, cannot address to protect the privacy of its customers. Given the nightmare Apple conjured in its legal briefs about what the DOJ was trying to force it to do, the appearance of a possible way to hack iPhones might be as alarming as any backdoor it would have created under court order. The positions Apple staked out in its legal briefs mean that it will try to change iPhone software to eliminate vulnerabilities this mysterious method might exploit—and the vulnerabilities will be patched if the DOJ shares the method with Apple. Such counter-measures might make the method effective only for the iPhone connected with the San Bernardino attack. Thus, even if the method proves viable, the legal questions at the heart of the dispute between the DOJ and Apple will remain unanswered and contentious. In addition, use of the method might exacerbate controversies associated with government acquisition, stockpiling, disclosure, and use of software vulnerabilities for law enforcement purposes, such as “lawful hacking.” The encryption conundrum could converge with the software vulnerabilities problem in ways that make effective cybersecurity policy more difficult to achieve. Postponing oral arguments in the Apple litigation might provide Congress with an opportunity to pass legislation settling controversies the iPhone case has stirred up. However, had oral arguments proceeded as scheduled, the court’s decision would, in all likelihood, have been appealed, potentially all the way to the Supreme Court. 
Thus, delaying the oral arguments does not produce significantly more time for Congress to act on the questions the litigation spawned and that will remain unanswered whether or not the method the DOJ is testing works. The world has been watching the Apple case because of its implications for how other governments might handle challenges presented by encrypted devices and data. The legal briefs by Apple and the DOJ laid out the applicable law, the competing interpretations of statutes and constitutional principles, and reinforced the central role of law and an independent judiciary in deciding questions of government power and individual rights. Delaying court proceedings because an unidentified party has provided some unexplained way to hack iPhones seems less transparent, cognizable, and exemplary for people in other countries also struggling with accommodating encryption into their social contract.
  • Cybersecurity
    The Chinese Government Has Its Eye on the FBI-Apple Battle
    Shadowing the standoff between the Federal Bureau of Investigation (FBI) and Apple over access to an encrypted iPhone used by one of the San Bernardino attackers is the question: What will China do? If Apple creates unique software that allows Washington access to the phone, does that open the door for Beijing to make similar demands on the company and all other foreign technology firms operating in China? As Sen. Ron Wyden (D-OR) argued, “This move by the FBI could snowball around the world. Why in the world would our government want to give repressive regimes in Russia and China a blueprint for forcing American companies to create a backdoor?” Certainly, China watches United States government statements and policy very closely. An early draft of China’s counterterrorism law included provisions requiring the installation of backdoors and the reporting of encryption keys. In the face of criticism from the U.S. government and foreign technology companies, Fu Ying, spokeswoman for the National People’s Congress, defended the provisions as in accordance with “international common practices,” adding that it was common for Western countries, such as the United States and Britain, to request tech firms to disclose encryption methods. The final law, passed in December 2015, was much more ambiguous about what type of demands the government would make on technology companies, but it is clear that Chinese leaders are more than happy to exploit what is happening in the United States as rhetorical cover. Yet we should be clear that what happens in the United States will have very little impact on what China ultimately decides to do. Beijing, like governments everywhere, wants to collect and analyze data for law enforcement and national intelligence reasons. The desire for data may only intensify under Xi Jinping’s leadership; the Chinese Communist Party appears increasingly worried about domestic stability and the spread of information within the country’s borders. 
For foreign companies, refusal to cooperate with the Chinese authorities will increasingly lead to a loss of market opportunities. Faced with competing pressures across the many jurisdictions in which they operate, companies have no easy options. Any resolution will be political, not technical. The ideal outcome is a multilateral agreement that embraces privacy and the strongest encryption possible, but also allows government access to data for legitimate purposes. The most workable solution within the United States may in fact involve sidestepping the question about whether governments (or companies) should be able to break encryption. As a recent report from the Berkman Center for Internet & Society at Harvard University argues, there are now massive amounts of data generated through the Internet of Things (cars, thermostats, surveillance cameras, and hundreds of other connected devices) and the metadata (time, location, address, but not content) produced by cell phones and Internet communications. This data can be made available to law enforcement through established legal procedures, while leaving the encryption that protects phones and other devices alone. This approach could be standardized across the Atlantic. Governments would leave encryption alone, but share other measures to collect data. With Privacy Shield, the new agreement that regulates the transfer of data by companies between the U.S. and the EU, and reports that the U.S. and UK are negotiating a new treaty that would allow easier access for law enforcement to data, there are promising signs that it is possible to develop trans-Atlantic agreements about how information might be shared across national borders. China, however, remains the hard case.
There is no indication that Beijing would be willing to forgo access to encrypted data on a phone, and, given cultural and political differences, little hope for rules and standards shared across the European, Chinese, and United States economies. China and Apple seem to have reached a temporary détente. Beijing has so far not made any further public demands on Apple, and the Chinese market is increasingly important to the company’s future, with revenues growing to $12.5 billion in 2015. Yet Beijing has also made it clear that it expects foreign companies to follow its rules if they want to continue selling in the Chinese market. As China’s cyber czar Lu Wei said in December, “As long as you don’t harm China’s national interests or Chinese consumers’ interest, we welcome you and your growth in China.” Apple is likely to be pushed, unwillingly, into forking its products, creating separate, less secure products for Chinese users. While this will be a bitter pill for Tim Cook and Apple to swallow, given their promises to defend the privacy of all users, it is likely to be the price of continuing to do business in China. This post originally appeared on DefenseOne.
  • Digital Policy
    The EU-U.S. Privacy Shield Is a Victory for Common Sense and Transatlantic Good Will
    Alan Charles Raul is a partner in the Privacy, Data Security and Information Law practice of Sidley Austin LLP. You can follow his group at datamatters.sidley.com. When the Court of Justice of the European Union (CJEU) struck down Safe Harbor last year, it did so on the basis that the European Commission had not determined whether European data transferred to the United States enjoyed the same protections as in the European Union. Despite the fact that a recent Sidley Austin report found that many U.S. privacy protections are essentially equivalent to—if not stronger than—the European Union’s in national security matters and comparable in other areas, the Commission clearly needed to replace Safe Harbor with something else to satisfy the CJEU and domestic privacy activists. In early February, the Commission and the U.S. Department of Commerce concluded negotiations on a new framework dubbed the Privacy Shield, and the text of the agreement was released yesterday. The deal constitutes an impressive array of findings, commitments, and obligations to help get EU-U.S. data transfers flowing smoothly again. This is really good news, and should go a long way toward ameliorating the transatlantic digital tension that was exacerbated by the Edward Snowden disclosures in 2013. The Commission has now determined, subject to further review and approval by other EU bodies, that the U.S. legal system for protecting personal information is "adequate." In other words, the Commission believes that the new Privacy Shield will provide EU citizens essentially equivalent protections in the United States to those they enjoy in the European Union. The new principles of the Privacy Shield will require companies that choose to sign up to provide additional redress rights to EU individuals whose data was transferred to the United States, such as mandatory conflict resolution including arbitration at no cost to the complainant.
Companies joining the Privacy Shield will also have to cooperate with EU privacy regulators, known as data protection authorities, with regard to human resources data that is transferred to the United States. U.S. companies will also have to provide expanded "access" rights to EU individuals, and expressly obligate their own data processors and other third-party service providers to which they forward EU data to agree to the Privacy Shield principles by entering into "onward transfer" contracts. The Federal Trade Commission, Department of Commerce, and European data protection authorities all have increased monitoring and enforcement responsibilities under the agreement. Companies that choose not to join the Privacy Shield will still be able to use other EU-approved mechanisms like binding corporate rules or contractual clauses for data transfers, at least unless and until EU privacy regulators assess later this year whether these methods are sufficiently robust. Hopefully they will not strike down these alternatives, because that would represent another setback in digital trade across the Atlantic and raise real issues about whether U.S. companies are being discriminated against. It is also significant that the U.S. intelligence community has provided the Commission with written assurances that data transferred to the United States under the Privacy Shield will not be subject to mass or indiscriminate surveillance. Although this does not actually represent a change in practice by U.S. national security agencies, the fact that they were willing to communicate this in writing to another international jurisdiction demonstrates the importance to the United States of abating Europe’s surveillance concerns, and engaging in a broader and more informed international discussion of surveillance norms. Moreover, the United States agreed to establish an ombudsperson in the State Department to monitor and resolve any EU complaints about the nature and extent of U.S.
surveillance conducted on data transferred under the Privacy Shield or other EU-approved mechanisms. It is also important that President Obama recently signed the Judicial Redress Act into law. This will allow EU citizens to sue federal agencies if they believe their rights have been violated under the Privacy Act, just as U.S. citizens may now. This provision is subject to an important caveat: EU citizens can only bring suit provided the Attorney General determines the European Union is cooperating with the United States on commercial data transfers and is not impeding U.S. data collection for national security purposes, hopefully a manageable bar to clear if the Privacy Shield takes effect and the other transfer mechanisms remain valid. In essence, the Attorney General’s determination is a reciprocal "adequacy" determination, which should help maintain some balance and oversight of Europe’s actions. Of course, it remains to be seen whether EU member states will ever apply to themselves the national security safeguards, checks and balances, and redress mechanisms that are in effect in the United States. In all, the Department of Commerce and the EU Commission have demonstrated that both sides can be reasonable when it comes to something as important as preserving access to the digital information that is necessary to serve the best interests of consumers and businesses on both sides of the Atlantic. And they also showed they can cooperate even where important national security and law enforcement issues and exigencies are at stake. Substantive convergence on data privacy is actually closer than the rhetoric would suggest, and it is good to see mutual investment in working problems out in favor of international trade, political harmony, and citizen rights.
  • Cybersecurity
    Reactions to the Apple-FBI Clash in the San Bernardino Case
    Much has been written in the past forty-eight hours on Apple’s refusal to comply with a federal order to assist the FBI in accessing the encrypted contents of an iPhone 5C owned by Syed Rizwan Farook, one of the deceased perpetrators of the San Bernardino terrorist attack. Here’s a quick recap of the events to bring you up to speed: On February 16, a federal magistrate in California ordered Apple to assist the FBI in unlocking and decrypting Farook’s phone. In siding with the U.S. government, the magistrate accepted the Department of Justice’s interpretation of the All Writs Act, a 200-year-old law that allows courts to compel a person to do anything to comply with an order. Specifically, the FBI is looking for Apple to develop software that will: Disable an iPhone’s ability to automatically wipe its contents if an incorrect password is provided ten times; Allow the FBI to run software that will attempt to guess the iPhone’s password, a technique known as brute force; and Disable software features that would introduce delays after every password attempt. On February 17, Apple published an open letter vowing to oppose the order on two grounds: Complying with the order effectively requires Apple to build malware to defeat the security features of its own products, putting the security and privacy of its users at risk if a third party got its hands on the malware. Complying with the order would set a bad precedent, allowing the government to use similar orders to "demand that Apple build surveillance software to intercept your messages, access your health records or financial data, track your location, or even access your phone’s microphone or camera without your knowledge." Here are some of the reactions in the... Technical community: Dan Guido over at Trail of Bits argues that it is technically feasible for Apple to comply with the order. Chris Williams over at the Register also argues that Apple could probably comply with the order, but that it’s choosing not to for public relations reasons.
Matt Blaze, a cryptographer at the University of Pennsylvania, is skeptical of commentators who argue that it would be easy to develop the new operating system the FBI requires. The Electronic Frontier Foundation, the Center for Democracy and Technology, and the Open Technology Institute all support Apple’s opposition. Ashkan Soltani points out that the FBI already has backups of Farook’s phone as of October 19. The assumption here is that the FBI is looking for more data that would have been saved to the phone between that date and the shooting on December 2. Nicholas Weaver thinks the magistrate’s order is worse than a slippery slope: it’s a cliff. Bruce Schneier explains why the public should side with Apple. The Internet Society, a non-profit and institutional home of the standards body that sets the Internet’s technical protocols, expressed support for Apple. Tech companies: Google CEO Sundar Pichai said in a series of tweets that the order could set "a troubling precedent." WhatsApp CEO Jan Koum shared Cook’s letter on Facebook and gave the company his full support, noting that "our freedom and our liberty are at stake." The Information Technology Industry Council, an industry group that represents Dell, Facebook, Google, and others, expressed "worry" at the broader implications of "requiring governments to disable security features." Reform Government Surveillance, an industry group comprised of AOL, Apple, Google, Facebook, Evernote, Yahoo, LinkedIn, Microsoft, Twitter, and Dropbox, issued a statement saying that "companies should not be required to build in backdoors to the technologies that keep their users’ information secure." Mozilla’s Mark Surman said that asking Apple to override its own security protections is "massive overreach." Think tank community: Max Boot disagrees with Apple’s position, calling it "sanctimonious and misleading." Robert Chesney at Lawfare notes that the encryption and "going dark" battle is now moving from Congress to the courts.
Susan Hennessey and Ben Wittes at Lawfare are saying: "We told you so." Matt Mayer at the American Enterprise Institute argues that absent Congressional action on encryption, Apple is right to fight the magistrate’s order. Julian Sanchez at the Cato Institute argues that the Apple-FBI case is all about the precedent it sets. Andrew Woods wouldn’t be surprised if Apple appealed the order on First Amendment grounds given that code is speech. Political establishment: Congressman Ted Lieu (Democrat, California) issued a press release supporting Apple, arguing that the court is effectively asking a private sector company to be an arm of law enforcement. Congressman Justin Amash (Republican, Michigan) tweeted his support for Apple. Senator Tom Cotton (Republican, Arkansas) said that Apple "chose to protect a dead ISIS terrorist’s privacy over the security of the American people." Senator Ron Wyden (Democrat, Oregon), who has clashed with the government on encryption, said that the FBI’s move could "snowball around the world" and give "Russia and China a blueprint for forcing American companies to create a backdoor." Senator Ron Johnson (Republican, Wisconsin) expressed concern that "using the judiciary to require Apple to build a ’master key’ ... could open a Pandora’s box with unforeseen effects." Senator Richard Burr (Republican, North Carolina) and Senator Dianne Feinstein (Democrat, California), the chair and ranking member of the Senate Intelligence Committee, sided with the FBI. In a separate op-ed, Burr said Apple has “wrongly chosen to prioritize its business model above compliance with a lawfully issued court order.” 2016 campaign: Donald Trump thinks it’s ridiculous that Apple won’t comply and has called for an Apple boycott. John Kasich said that the magistrate’s order wasn’t a case of government overreach, despite acknowledging a month ago at a Council on Foreign Relations event that backdoors in encryption could potentially make people more vulnerable to cybercriminals.
Marco Rubio didn’t take any sides, saying the issue was "tough." Ted Cruz said that although Apple shouldn’t be required to put backdoors in all of its phones, terrorism trumps privacy concerns in the San Bernardino case. Hillary Clinton called it a "hard dilemma" but noted that there has "got to be some way on a very specific basis we could try to help get information around crimes and terrorism." Bernie Sanders said that there has to be a balance and that the United States can fight terrorism without undermining constitutional rights. Newspaper editorials: The Wall Street Journal criticizes the White House’s management of the "going dark" issue and supports the encryption commission proposed by Rep. Michael McCaul (Republican, Texas). The Washington Post argues that Apple shouldn’t be forced to decrypt user data. The New York Times says that Apple is right to challenge the magistrate’s order. We’ll keep this post updated with any additional reactions that we see.
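The brute-force technique described in the court order above, simply trying every possible passcode, is trivial in software once the auto-wipe and delay protections are removed, which is why those protections matter. Here is a minimal illustrative sketch in Python; the bare hash comparison is a hypothetical stand-in, since a real iPhone entangles the passcode with a device-specific hardware key rather than checking a plain hash:

```python
import hashlib
import itertools

# Hypothetical stand-in for a device's stored passcode check; real
# iPhones derive keys in hardware and do not expose a bare hash.
stored_hash = hashlib.sha256(b"4351").hexdigest()

def brute_force_pin(target_hash):
    # Try every 4-digit passcode, 0000 through 9999. Without the
    # erase-after-ten-failures and escalating-delay features the order
    # asked Apple to disable, this search finishes almost instantly.
    for digits in itertools.product("0123456789", repeat=4):
        guess = "".join(digits)
        if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
            return guess
    return None

print(brute_force_pin(stored_hash))  # prints 4351
```

A four-digit space is only 10,000 guesses; the limits on failed attempts, not the passcode itself, are what make the search impractical on the device.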
  • Privacy
    Protecting Data Privacy With User-Friendly Software
    Protecting the privacy of user data from unauthorized access is essential for business executives, policymakers, and users themselves. The pace of targeted attacks and massive data breaches is only increasing. Each new incident hurts organizations' bottom lines, undermines users' trust in the products they use every day, and can have dire consequences for public safety. The problem is multifaceted. Technologists are rushing to fix software vulnerabilities. Regulators are trying to keep pace with the realities of a complex ecosystem. Market-based approaches, such as cybersecurity insurance, remain immature. In addition, consumers are still learning what options they have, and what options they should be asking for. Currently, end users can use software that provides strong privacy protection with a high degree of certainty. Unfortunately, adoption rates for such software are low, largely because of how hard it is for nonexperts to use. This does not have to be the case. Software developers in the open-source community—who are generally the first to build encryption and privacy tools—need to improve the design of their tools to make them more user-friendly and useful. In turn, corporate and government purchasers should begin promoting the value of open-source software, particularly as it offers best-in-class security. These steps would go a long way toward improving privacy online.
Background: Cryptography and User Experience
Tools that provide strong privacy guarantees have historically been niche products, requiring a special understanding of the underlying security mechanisms in order to operate them. However, many of the basic concepts are straightforward. For example, encryption allows the contents of a message to be scrambled so that third parties cannot read it. Users apply an encryption key—similar to a long, complex password—to scramble data, and a decryption key to unscramble it.
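The scramble/unscramble cycle just described can be sketched in a few lines of Python. This toy XOR cipher is for illustration only, not a real algorithm; production tools rely on vetted ciphers such as AES, and the random bytes here stand in for the long, complex keys real software generates:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each message byte with the matching key byte. Applying the
    # same key a second time undoes the operation, so one function
    # both scrambles (encrypts) and unscrambles (decrypts).
    return bytes(b ^ k for b, k in zip(data, key))

message = b"meet me at noon"
key = secrets.token_bytes(len(message))  # random key as long as the message

ciphertext = xor_cipher(message, key)   # unreadable without the key
recovered = xor_cipher(ciphertext, key)  # the same key unscrambles it
assert recovered == message
```

Whoever holds the key can read the message; whoever does not sees only scrambled bytes. Where the keys live, and who controls them, is exactly what distinguishes end-to-end encryption from weaker arrangements.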
If the encryption of messages passing between users Alice and Bob occurs on their respective computers, and the decryption keys are under their sole control, it is called end-to-end encryption. This form of encryption is the gold standard in privacy preservation, because it prevents would-be eavesdroppers from intercepting the conversation. Software tools for end-to-end encryption have been available to users since the early 1990s, when Phil Zimmermann created a program called Pretty Good Privacy (PGP) and released it to the public free of charge. However, nonexpert users have faced a number of challenges with these tools from the beginning. In a 1999 paper, "Why Johnny Can't Encrypt," Alma Whitten and J. D. Tygar documented problems facing users of PGP. The authors found that participants had difficulty performing even basic tasks like encrypting and decrypting messages. Further studies have replicated these results with a variety of software programs. The integration of software into daily life—from workplace desks to phones tucked in pockets—has led to tremendous professionalization of user-experience (UX) design. From big firms to small app developers, many software companies live and die on the quality of their UX and employ a host of designers and researchers to improve it. Not surprisingly, end-to-end encryption tools are derided when they are hard to use, whether these tools are for playing games or talking confidentially about sensitive material. Given this industry focus on UX design, it may seem odd that privacy tools still have a reputation for being hard to understand and use. Several factors contribute to these apparent shortcomings, which limit the development and adoption of user-friendly privacy tools.
Challenges
One fundamental challenge to both the usability and adoption of privacy-preserving tools is that privacy is considered a secondary task, as demonstrated through user-experience research.
A secondary task is always subservient in users' minds to the primary task, which is whatever core activity the software is meant to enable: sending emails in an email client, exchanging instant messages in a chat program, or collaborating on documents in a file-sharing application. Many security features have this second-class status: users describe talking to their friend not as "secure messaging" but simply "messaging," with the need for security as an ancillary requirement. This is problematic for software designers because users will readily abandon success on the secondary task if it becomes too onerous, or if they perceive it to be in conflict with the primary task. Privacy-preserving tools seeking mass appeal face another significant challenge: nonexpert users have a hard time distinguishing strong security properties from their snake-oil alternatives. It is easy to call a tool "secure," but it is hard to communicate the nuances without going into overwhelming details in an app-store listing. There have been efforts over the years to create a third-party seal of approval for secure tools, but these attempts have either foundered in obscurity or lacked credibility by allowing developers to purchase approval after a cursory self-evaluation.  Creating well-designed privacy-preserving tools with mass appeal also faces a variety of ecosystem hurdles. First, most tools in this space are developed as open-source products, which means that the authors publish the source code for anyone to read. This is good for security because the transparency of the open-source development model makes it possible to conduct independent reviews of the software (reducing the probability of critical vulnerabilities). It is bad for sustainability because few open-source projects are profitable, and many derive their funding from unsteady sources of income such as donations or grants. 
Second, the majority of nonexpert users are unlikely to transfer their digital lives to niche security tools, and prefer to prioritize their primary tasks by choosing the convenience of popular cloud platforms. End-to-end encryption is in direct conflict with the business model most platforms have adopted, because it prevents the data mining and ad targeting that these services have monetized. A service provider such as Google cannot serve targeted ads if it cannot read the contents of an email. Finally, a recent resurgence of a decades-old debate around the propriety of encryption technologies—particularly as they relate to law-enforcement efforts to thwart terrorism or investigate crimes—is creating tremendous uncertainty for software developers. Apple and Google have both made upgrades to support user-controlled encryption by default in strategic products (iMessage's encrypted chat, Android's encrypted file system). However, these nascent investments are unlikely to be followed by large-scale integration of privacy-preserving technologies, given that a multitude of conflicting requirements around cryptography loom on the horizon in different jurisdictions. For the United States, it is especially unfortunate that this debate emerges at a time when confidence in technology companies' ability to protect user data is still suffering from the fallout of the Edward Snowden revelations.
Recommendations
First and most urgent, the current debate over the use of encryption undermines the promotion of privacy tools. The U.S. government proposed a technological backdoor with the Clipper Chip in the early 1990s, using the same arguments heard today. It failed spectacularly. Although technology has evolved since, the fundamentals of encryption have not. Policymakers in the United States and other countries should recognize that anything less than intact cryptography puts all users at risk.
Developers cannot build software that allows law enforcement to access encrypted communications but prevents malicious actors from exploiting that access. Cryptography cannot distinguish good people from bad, so a backdoor for one is a backdoor for all. Undermining the encryption used by U.S. companies would place the average consumer at risk of attack by malicious third parties, and merely motivate criminals and terrorists to use one of many alternative options. Powerful cryptography tools can easily be built outside the United States; as the self-declared Islamic State's use of German messaging service Telegram demonstrates, software rarely respects borders.

In addition, technology decision-makers, including chief information security officers and others with purchasing power, need to promote the value of open-source tools throughout their organizations. This can be done by authorizing in-house engineers to contribute to open-source projects during work hours and explicitly seeking technology consultants experienced in the open-source world. Open-source development can span geopolitical barriers to create technologies that offer best-in-class security, but all too often such projects are viewed as half-baked or risky by decision-makers. The volunteer authorship of many projects contributes to this reputation, but the lack of polish in the user experience design does greater damage. Changing a company's culture from one of client ("software is an expense, what do I get for my money?") to one of community ("software is an investment, how can we contribute strategically for long-term benefit?") can help organizations and projects find innovative, secure, and affordable solutions.

Open-source developers, in turn, need to prioritize user-experience research and design, as well as to optimize their tools for large organizations. The focus of too many projects has long been on users who resemble the developers themselves.
It is time to professionalize the practice of open-source development, recruit designers and usability researchers to the cause, and take a human-centered approach to software design. In particular, project leaders should make the development process more accessible to new participants by including explicit instructions for user-experience experts in their documentation. Although this change in focus will require a cultural shift within the open-source community, it will allow projects to attract more users and more donations, and ultimately result in more useful tools.

To support these efforts, technology-focused foundations and software companies' research and development wings should shift funding priorities toward more applied research on crafting and communicating about security-related features. Much of the work in this area examines the reasons a tool is hard to use—not ways to improve it—or focuses on toy refinements (e.g., "this custom interface is better than the standard"). WhatsApp, for example, recently incorporated end-to-end encryption into its mobile messaging platform without changing the user experience of its product. However, it accomplished this by hiding all privacy-specific features and tasks from its users, which in turn leaves them vulnerable to certain kinds of attacks, such as undetected key substitution. Instead, researchers should work to identify factors that make privacy features successful across tools, and examine how such features might be added to popular products without harming user satisfaction.

Taken together, these recommendations would both incentivize and assist organizations and individuals in their efforts to protect user data from unauthorized access. Easier-to-use privacy tools and greater consumer confidence, in turn, will support continued growth, innovation, and financial stability in the digital era.
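To see what hiding privacy features gives up: the standard defense against a provider (or any man in the middle) silently substituting keys is letting users compare a key fingerprint out of band—the "safety number" that Signal-style apps display. A simplified sketch of the idea; the key values and fingerprint format here are illustrative, not the real WhatsApp or Signal computation:

```python
import hashlib

def safety_number(key_a: bytes, key_b: bytes) -> str:
    """An order-independent fingerprint over both parties' public keys,
    so Alice and Bob compute the same value to compare in person."""
    digest = hashlib.sha256(b"".join(sorted([key_a, key_b]))).hexdigest()
    # Render as groups of five digits, loosely echoing a displayed safety number.
    digits = str(int(digest, 16))[:30]
    return " ".join(digits[i:i + 5] for i in range(0, 30, 5))

# Hypothetical public keys, for illustration only.
alice_key = b"alice-public-key"
bob_key = b"bob-public-key"

# Both sides derive the same fingerprint regardless of argument order.
assert safety_number(alice_key, bob_key) == safety_number(bob_key, alice_key)

# If an attacker substitutes a key, the fingerprints no longer match.
mitm_key = b"attacker-public-key"
assert safety_number(alice_key, mitm_key) != safety_number(alice_key, bob_key)
```

Because each party derives the number independently, a mismatch reveals tampering; an interface that never surfaces this check stays simple but leaves key substitution undetectable to the user.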
  • Cybersecurity
    The Top Five Cyber Policy Developments of 2015: Encryption
Over the next few days, Net Politics will count down the top five developments in cyber policy of 2015. Each policy event will have its own post, explaining what happened, what it all means, and its impact on cyber policy in 2016. In this post, the encryption debate.

Lincoln Davidson is a research associate for Asia Studies at the Council on Foreign Relations. You can follow him on Twitter @dvdsndvdsn.

The debate over encryption—whether tech companies should be required to maintain the ability to decrypt communications pursuant to a lawful government request—dragged on throughout 2015. The year started with a bang, as evidence was released suggesting that the National Security Agency had the ability to break certain virtual private network protocols and that it had access to the encryption keys that major telecommunications providers use to encrypt network traffic. It ended with the debate back in the spotlight, as politicians mulled the need to weaken encryption in the wake of terrorist attacks in Paris and San Bernardino, California. (For a solid recap of what’s what in encryption, see this FAQ published by ProPublica.)

The format of this debate has become almost ritualized. Something bad happens. Politicians, law enforcement agents, and intelligence officials claim that encryption helped enable the bad thing or prevented them from stopping the bad thing. Privacy advocates, security researchers, and representatives of the tech industry respond that there is no evidence that was the case, and that weakening encryption would be even worse than the bad thing. The debate then dies down for a bit, nothing having been accomplished. Rinse, repeat.
In 2015 we heard from United Kingdom Prime Minister David Cameron; French Prime Minister Manuel Valls; telecommunications regulators in India and Pakistan; government attorneys and police officials in New York City, Paris, London, and Spain; NSA Director Michael Rogers; Senators Dianne Feinstein (D-CA) and John McCain (R-AZ); and presidential candidates Jeb Bush, Hillary Clinton, John Kasich, and George Pataki, all arguing for some form of government access to encrypted communications. None of them can match FBI Director James Comey, however, who has long been one of the most outspoken U.S. government officials in the encryption debate. Comey is particularly opposed to end-to-end encryption, such as that offered by Apple’s iMessage, saying that “use of encryption is part of terrorist tradecraft now.” Testifying to Congress in early December, Comey for the first time gave a specific example of encryption getting in the way of a federal investigation: a shooter exchanged 109 encrypted messages with an “overseas terrorist” before shooting a security guard at an anti-Islam event in Garland, Texas, earlier this year. Comey said he found it “depressing” that tech industry leaders support encryption and fail to acknowledge that there are “societal costs to universal encryption,” and called for companies to reconsider their “business model.”

This “business model” has a lot of support, however, and not just from the tech industry. In the past year alone, strong encryption has garnered public support not only from the usual suspects in the tech sector, such as the Information Technology Industry Council and Apple CEO Tim Cook (who also spoke to 60 Minutes on the topic), but also from former NSA and CIA Director Michael Hayden, former Secretary of Homeland Security Michael Chertoff, and former NSA Director Mike McConnell.
In May, more than a hundred civil society organizations, tech companies, and security experts signed an open letter urging President Obama to develop “policies that will promote rather than undermine the wide adoption of strong encryption technology.” For a while, it seemed as if these voices had won. A National Security Council memo leaked in September suggested the president was leaning toward advocating strong encryption. In October, administration officials said they had decided not to seek legislation against end-to-end encryption for the time being. But then came the terrorist attacks in Paris and San Bernardino, and a debate that had looked almost settled flared back up.

The debate seems ultimately futile. Both sides keep repeating the same talking points and talking past each other. This isn’t terribly surprising, though. The talking points aren’t just the same as they were a year ago; they haven’t changed much in twenty years. The arguments of today’s advocates of “backdoors” and “golden keys” are the same as those deployed in the 1990s in support of the Clipper chip. In both cases, advocates for increased government access to private communications pick whatever threat seems the scariest and resonates best with the public, and argue that encryption makes that threat even scarier. In the mid-1990s, the threat bandied about the most was drug dealers; today, it’s terrorists.

Looking ahead to 2016, we can expect the encryption debate to continue. In the wake of the San Bernardino attack, as Americans’ fear of terrorism shoots to its highest point in ten years, there will be plentiful opportunities for more anti-encryption proposals. If any evidence comes out that the San Bernardino attackers used encryption, you can be sure that legislators will be quick to act in response. Until then, the encryption debate will drone on.
  • China
    Chinese Cyber Power: Not Learning From the United States’ Mistakes
Lincoln Davidson is a research associate for Asia Studies at the Council on Foreign Relations. You can follow him on Twitter @dvdsndvdsn.

In an essay published last month on the website of the Communist Youth League—an auxiliary of the Chinese Communist Party (CCP) for young people between the ages of fourteen and twenty-eight—journalist Cai Enze drew a parallel between cyber power and the “Chinese Dream,” Chinese President Xi Jinping’s articulation of his vision for a “rejuvenated” China. “The Chinese Dream is the hundred-year dream of many millions of Chinese people,” Cai wrote. “Its source is the broad wisdom of the masses…a crowdfunded top-level design, where being a cyber power is one of the parts of that crowdfunding.”

But what is a cyber power? For the CCP, being a cyber power means having a well-developed Internet and competitive technology and e-commerce companies that aid China’s development. The state must have impressive cyber capabilities to protect the country’s networks from bad actors. And the state must be the sole voice for the interests of the people with regard to the Internet, both domestically and internationally. In other words, cyber power means the comprehensive expression of state sovereignty in cyberspace. This emphasis on sovereignty is rooted in China’s historical experience of a “century of humiliation,” which shapes the perception among China’s leaders that “hostile foreign forces” are attempting to ideologically infiltrate the country and tear down the CCP.

While the CCP’s leadership has been pushing the link between cyber power and state sovereignty in cyberspace for years, the World Internet Conference, held in Wuzhen, China last November, was something of a watershed moment for China’s involvement in global Internet governance. With the CCP’s top cyber official Lu Wei playing host, the party’s conception of the relationship between the Internet, the individual, and the state was on full display.
Chinese officials had been airing their grievances with the existing model of Internet governance for more than a decade, but Wuzhen seemed to signal a shift to a new assertiveness backed by the power and wealth of the Chinese party-state. At the conference, party leaders pushed their vision of a state-controlled Internet, even making a heavy-handed attempt to get conference attendees to sign on to a last-minute declaration of support for the Chinese approach to the Internet. Since then, the CCP has continued to push these norms for the global Internet, norms that directly conflict with the free, open, multistakeholder Internet that the United States promotes.

As the second World Internet Conference approaches—while it was originally scheduled for October 2015, it has been repeatedly delayed due to conflicts with other conferences—China is looking back at the success of its attempts to promote its norm of cyber sovereignty. Pointing to a September 2015 meeting between Xi Jinping and U.S. tech executives in Seattle, the Cyberspace Administration of China said the country was moving “from participant in the global Internet to constructor of the global Internet order.” Echoing that refrain, in August People’s Daily wrote that because the United States government is divesting itself of the IANA functions, “China’s voice in Internet governance is growing stronger.”

Despite what Chinese bureaucrats and state media might say, China’s preferred norms for cyberspace aren’t making much headway on the international stage. For years, China has been pushing at the United Nations a “Code of Conduct for Information Security” signed with its partners in the Shanghai Cooperation Organization, to little avail.
Meanwhile, an agreement between Xi Jinping and Barack Obama during the former’s visit to the United States in September secured Chinese recognition, for the first time, of the United States’ longstanding distinction between economic and political espionage in cyberspace. And the most recent report of the UN Group of Governmental Experts (GGE) affirmed three of the United States’ norms of state behavior in cyberspace. Although the GGE report acknowledges that “state sovereignty and international norms and principles that flow from sovereignty” apply to cyberspace, it doesn’t define exactly what this means, and there’s still disagreement on how sovereignty should play out online.

China’s goals in becoming a cyber power, as mentioned earlier, are threefold: boost economic development, assert China’s role on the international stage, and protect the nation. On the first, the incredible success of American tech companies, and the benefits they’ve brought to the American economy, are a product of the free, open, multistakeholder approach to the Internet the United States government has adopted. On the international front, the biggest hit to the United States’ credibility and respected status as a global leader in recent years has been the revelation that the U.S. government’s surveillance apparatus was invading the privacy of both its citizens and people abroad. China’s leadership would do well to learn from these lessons; they might find that they get more respect as a “cyber power” if they express their sovereignty in cyberspace in a way that’s in line with international norms on the rights of individuals relative to the state.

As for security, that’s a challenge every country is facing right now. China’s leaders should be careful not to give up the first two goals in their rush to achieve the third.