
Technology and Innovation

  • Technology and Innovation
    Five Things I Learned About the Future of Solar Power and the Electricity Grid
    Nestled in the foothills of the Rockies in Golden, Colorado, the Energy Department’s National Renewable Energy Laboratory (NREL) was established in 1977 to help bring new energy technologies to market. Today it is one of seventeen national laboratories overseen by the Energy Department and the only one whose sole focus is renewable energy and energy efficiency research and development. I spent a full day touring the facilities and interviewing researchers working on a range of solar photovoltaic (PV) technologies and on integration of clean energy into the electricity grids of the future. Here’s what I learned:

1. NREL is unique in the solar research ecosystem—that’s a bad thing.

Originally christened the Solar Energy Research Institute, NREL is best known as the gold standard of solar technology. One researcher remarked to me, matter-of-factly, “We are the best in solar PV there is.” It is easy to see why. Cutting-edge research on myriad solar technologies is co-located on one campus, and basic science, economic modeling, manufacturing development, and systems integration are all neighbors. Around the world, just two institutions (Germany’s Fraunhofer Institute for Solar Energy Systems and Japan’s National Institute for Advanced Industrial Science and Technology) come close to NREL’s breadth of solar activities.

Unfortunately, limited resources constrain NREL’s ability to leverage its integrated research capabilities to commercialize promising technologies. Take, for example, NREL’s work on an upstart technology I’ve written about elsewhere—solar perovskites. In contrast to academic researchers’ obsession with making fingernail-sized devices that are highly efficient under perfect lab conditions, the researcher leading NREL’s perovskite work wants to scale up manufacturing of larger areas of solar perovskite coatings and achieve long-term stability in the real world. Those goals are ambitious and sorely needed, but with only three and a half researchers supporting them, they will be tough to achieve. Some major research universities (e.g., MIT) also house integrated research programs that help researchers fill the gaps between basic research and commercial success. But many more such centers are needed to institutionalize energy technology development.

[Photo: Prototype printer heads for inkjet printing perovskite solar coatings in a scalable manufacturing process. The process is contained inside a “glovebox,” into which researchers can reach, through the rubber arms, to interact with the process under a controlled atmosphere (Varun Sivaram).]

2. Rapid solar PV market evolution means moving targets for researchers.

Earlier this year, First Solar (the lone American panel maker in a Chinese-dominated industry) made the stunning projection that utility-scale solar installations will cost just $1 per Watt, fully installed, by 2017 (this accounts for a major step-down in tax credit subsidies to solar power). If it achieves this cost, some of the research projects I saw at NREL may have to adjust their targets even lower. For example, an ingenious reactor to mass-produce NREL’s record-efficiency solar cells has a long-term panel cost target of $0.70 per Watt. Because the panels are highly efficient, the remainder of the costs to complete the installation should be roughly 33 percent lower than with existing panels; still, even if this reactor were scaled up to produce solar panels in 2017, the fully installed panels would cost $1.10/W, already higher than the industry roadmap.
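To make the arithmetic of that comparison explicit, here is a back-of-the-envelope sketch in Python. It uses only the rough figures quoted above; the balance-of-system values it derives are implied by those figures, not NREL data.

```python
# Back-of-the-envelope check of the installed-cost comparison above.
# Inputs are the approximate figures quoted in the text; the balance-of-system
# (BOS, i.e., everything besides the panels) values are implied, not NREL data.

roadmap_installed = 1.00   # $/W, First Solar's projected fully installed cost (2017)
nrel_panel_target = 0.70   # $/W, long-term panel cost target for the NREL reactor
nrel_installed    = 1.10   # $/W, fully installed cost quoted above

implied_bos = nrel_installed - nrel_panel_target   # -> $0.40/W of non-panel costs
baseline_bos = implied_bos / (1 - 0.33)            # -> ~$0.60/W before the 33% savings

print(f"Implied BOS with high-efficiency panels: ${implied_bos:.2f}/W")
print(f"Implied BOS with existing panels:        ${baseline_bos:.2f}/W")
print(f"Shortfall versus the $1/W roadmap:       ${nrel_installed - roadmap_installed:.2f}/W")
```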
Researchers around the world will look to NREL, as the leader in solar innovation, to clearly articulate the value of new solar technologies and to explain why a combination of low cost, superior performance, and new applications is preferable to today’s race-to-the-bottom solar commodity market.

[Photos: The reactor (left panel) in which researchers created the world-record-efficiency solar cell (46 percent efficient under concentrated sunlight) (Varun Sivaram). For four decades, NREL has compiled the record efficiencies of solar cells and published them in a chart (right panel) that is consulted around the world (U.S. Department of Energy).]

3. Reliability in the real world makes and breaks solar PV technology.

At NREL’s Outdoor Test Facility (OTF), rows and rows of solar panels endure the rain, snow, and even hail that Golden, CO hurls at them. Out in the field, all sorts of unexpected things can go wrong, and NREL partners with companies that want to learn about the failure modes that could plague their technology. My tour guide pointed out rooftop shingles coated with a flexible solar panel—although the panels still appear to work, the leaks that followed from poking wires through the roof had doomed the product and the company. Later, I saw panels that had worked fine for the first year but whose sealing material had gradually given way to attacks by moisture, which now eroded the power-producing material itself. The take-home lesson was that exciting technologies in the lab still have a long way to go to demonstrate the twenty-year reliability that the market demands.

As we snaked around the rows of the OTF, two words came to mind: “testbed” and “graveyard.” One First Solar test setup had been in operation for over two decades and still produced 80 percent of its original output—the data from this test had emboldened First Solar’s investors and helped the company achieve its current success. But I also passed dozens of failed relics, sober lessons from the heady days when capital poured into new solar startups that have since expired.

[Photo: NREL’s Outdoor Test Facility (OTF) hosts solar panel test setups from industry partners for multi-decade reliability studies (Varun Sivaram).]

4. Standards, not new technology, are crucial for integrating clean energy into electricity grids.

At NREL’s Energy Systems Integration Facility (ESIF), researchers shared their views about the challenges and opportunities of modernizing the U.S. electricity grid to integrate new energy technologies. Specifically, ESIF is interested in integrating distributed energy resources (DERs), which include solar panels but also other decentralized ways to make or save energy (e.g., fuel cells, batteries, efficient appliances). In theory, DERs can improve the efficiency of the electricity grid, reduce electricity consumption, and save ratepayers money while also maintaining grid reliability. But in practice, this is complicated by the proliferation of DERs made by different vendors to different specifications and operated without much regard for their effects on the grid, positive or negative. The solution could come from emulating the successful IT industry.
There, a robust set of standards has enabled different pieces of hardware to operate seamlessly with software applications, a design feature known as “interoperability.” In much the same way that a computer language employs a concept called “abstraction” to send generalized instructions to diverse hardware, the electricity distribution grid needs a standard language, or “common semantics,” to coordinate the diverse DERs that will connect to the grid in the coming years. ESIF hopes to create a set of standards that enables such a language and ensures DER interoperability; more federal support would be helpful to accelerate this work. (A minimal code sketch of this abstraction idea appears at the end of this post.) Indeed, the gains from system-level innovation, according to several ESIF researchers, dwarf the expected gains from new energy technology—“we have all the technology we need” was a refrain I heard often.

[Photo: Two of the “Smart Homes” that ESIF has set up to simulate the integration of homes packed with internet-connected, energy-efficient appliances into the electricity grid (Varun Sivaram).]

5. A decentralized grid poses serious cybersecurity threats that require immediate attention.

As grids integrate more DERs, shifting from a centralized to a decentralized model, there are two opposing effects on grid resilience. The physical resilience of the grid improves, because the strategic value of central power stations and bulk transmission lines decreases as more power is generated and saved closer to the customer. However, the cybersecurity risk actually increases with decentralization, because access points for malicious hackers proliferate—imagine a hacker accessing a mobile phone to breach a home energy system to attack a utility distribution substation, and so on. The root cause of the opportunity to efficiently coordinate DERs—their increasing connectivity via the Internet—is also the source of increased cybersecurity risk. Fortunately, these are not new problems, and their solutions are well catalogued. Enterprise IT best practices have long incorporated “role-based access” protocols, in which a proliferation of users on a network does not compromise the network’s integrity, because walls isolate decentralized users from the rest of the system. I learned from researchers at ESIF that electricity utilities are far behind other enterprises in their IT practices, and that an immediate culture shift is imperative if grid decentralization is to reduce, rather than enlarge, resilience risk.

The contrast between ESIF and the solar research facility was striking to me. ESIF, younger by decades, was staffed by researchers intent on modernizing the century-old utility business model. The solar researchers I met, on the other hand, brought decades of experience in solar PV and had a long-term research orientation, at odds with the quarterly target obsessions of a solar industry that is rapidly reducing its costs. But to label the two facets of NREL as its future and past would be a mistake. New solar technologies will be essential to displace fossil fuels, and NREL plays a crucial role in advancing solar PV research and methodically preparing new technologies for the field. Coupled with the next-generation grid that ESIF envisions, tomorrow’s energy systems may look fundamentally different from today’s.
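As a coda, here is the minimal sketch of the “common semantics” idea described under point 4. The interface and device classes are hypothetical illustrations of the abstraction concept, not an ESIF or industry standard.

```python
from abc import ABC, abstractmethod

class DERDevice(ABC):
    """A vendor-neutral interface ("common semantics") for distributed energy
    resources. Coordination software codes against this abstraction rather
    than against any one vendor's API."""

    @abstractmethod
    def available_power_kw(self) -> float:
        """Power the device can inject into the grid right now, in kW."""

    @abstractmethod
    def set_power_kw(self, setpoint: float) -> None:
        """Command the device toward a power setpoint, in kW."""

class RooftopSolar(DERDevice):
    def __init__(self, capacity_kw: float):
        self.capacity_kw = capacity_kw
    def available_power_kw(self) -> float:
        return self.capacity_kw * 0.8      # illustrative irradiance-limited output
    def set_power_kw(self, setpoint: float) -> None:
        print(f"Curtailing solar inverter to {setpoint} kW")

class HomeBattery(DERDevice):
    def __init__(self, state_of_charge: float):
        self.state_of_charge = state_of_charge
    def available_power_kw(self) -> float:
        return 5.0 if self.state_of_charge > 0.2 else 0.0
    def set_power_kw(self, setpoint: float) -> None:
        print(f"Dispatching battery at {setpoint} kW")

# The coordinator treats every DER identically, regardless of vendor.
fleet: list[DERDevice] = [RooftopSolar(6.0), HomeBattery(0.9)]
headroom = sum(device.available_power_kw() for device in fleet)
print(f"Aggregate flexible capacity: {headroom:.1f} kW")
```

The point of the abstraction is that the grid operator’s software depends only on the shared interface, so new vendors and device types can join the grid without custom integration.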
I am grateful to the staff of the National Renewable Energy Laboratory for their openness and hospitality, including: Bryan Hannegan, Greg Wilson, Jen Liebold, Tami Reynolds, Jim Cale, Martha Symko-Davies, Erfan Ibrahim, John Geisz, John Simon, Matt Beard, Joe Berry, John Wohlgemuth, Jao van de Lagemaat, and Paul Basore.
  • Noncommunicable Diseases
    New, Cheap, and Improved
    Overview

In recent years, frugal and reverse innovation have gained attention as potential strategies for increasing the quality and accessibility of health care while slowing the growth in its costs. The notion that health technologies, services, and delivery processes developed for low-income customers in low-resource settings (known as "frugal innovations") might also prove useful in other countries and higher-income settings (a process some call "reverse innovation") is not new. The demand for these types of innovation is increasing, however, as developed and developing countries alike strain to cope with the staggering economic and social costs of noncommunicable diseases (NCDs).

Increased attention to innovation is welcome—particularly when it is in service of improving the economic opportunities of the world's poorest and increasing their access to much-needed health-care products and services. The trick will be to ensure that the focus on reverse and frugal innovation goes beyond the latest buzzword and translates into real investments and results. With this goal in mind, this paper seeks to answer three practical questions regarding reverse and frugal innovation and NCDs:

1. Are reverse and frugal innovations likely to be important for addressing the NCD challenges facing the poor in high- and low-income settings?
2. Which pressing NCD challenges are reverse and frugal innovations best suited to help solve?
3. What measures can donors, private companies, and nongovernmental organizations (NGOs) take to facilitate the use of reverse and frugal innovations to solve those problems?

The answers to these questions may contribute to the ongoing efforts of donors, investors, NGOs, and governments to move frugal and reverse innovation out of the realm of promising ideas and anecdotes and into broader practice to tackle the global challenge of NCDs.
  • Technology and Innovation
    Challenges and Benefits of South Korea’s Middle Power Aspirations
    South Koreans have been among the world’s early adopters of globalization over the past two decades, going from outpost to “node” by embracing networks, connectivity, and economic interdependence in startling fashion in a very short period of time. It has long been commonplace for South Koreans to think of themselves as a small country, buffeted by geostrategic factors beyond its control, consigned to its fate as a “shrimp among whales.” This narrative, generally speaking, conforms with the twentieth-century historical experience on the Korean peninsula, which witnessed annexation, colonization, subjugation, and a moment of liberation, followed by division, war, and marginalization as an outpost of the Cold War. Outsider impressions of late-twentieth-century Korea tended to view Koreans as defensive, self-absorbed, xenophobic to varying degrees, and only capable of viewing the outside world through a distinctively “Korean” lens.

Given these circumstances, the early-twenty-first-century story of South Korea’s embrace of globalization on the foundations of its democratization and modernization is striking. The idea that South Korea should offer something to the world in return for the sacrifices made to defend it from communist domination has had real pay-offs as South Korea’s reach and capacities have grown. Korean conglomerates went global long ago. The younger generation is outward-looking: Korea’s “peace corps” ranks second only to the U.S. Peace Corps in size; World Friends Korea (which falls under the Korea International Cooperation Agency, or KOICA) boasts over 3,000 volunteers at a given time, welcoming 1,000 new volunteers into the program each year. Young Koreans are also voracious consumers of, and top performers in, international higher education.

A positive by-product of this shift is that South Korea has sought to make contributions to international leadership, both as a way of paying the world back for decades of international support and as a way of sharing with the world its unique experiences with development and democratization. Between 2010 and 2013, South Korea made a concerted effort to host a series of important multilateral forums, marking a new chapter in South Korea’s experience as a convener and contributor to the international agenda. But how has “hosting diplomacy” changed South Korea’s ability to influence global events, and what lasting impact might South Korea have as a G-20 contributor to international leadership?

To examine twenty-first-century South Korea’s contributions to the global agenda, I asked four authors, Colin Bradford, Toby Dalton, Brendan Howe, and Jill Kosch O’Donnell, to examine Korea’s contributions as a convener, host, and steward of the international agenda for international financial policy, development assistance and donor cooperation, nuclear security, and global climate finance. In addition, Andrew O’Neil evaluates South Korea’s middle power aspirations from an Australian perspective. Each of the papers tells its own story of Korean efforts to harness and provide thought leadership in the international arena disproportionate to Korea’s size and power. This set of papers (each available for download here) provides useful insights regarding what South Korea has done well, especially through its efforts to generate leadership through the mobilization of ideas, personnel, and institutions.
But the papers also reveal some challenges in terms of the sustainability, transferability, and effectiveness of Korean contributions to international leadership. Some of these problems are organizational, but others relate to the challenge of establishing a vision and mobilizing the national-level “brand awareness” necessary to claim a particular slice of the international agenda as “the thing” Koreans are known for doing as well as or better than anyone else. The conclusion: Korea’s middle power efforts to date provide a good start, but they are still a work in progress. In the coming months, President Park will have opportunities to build on this foundation through UN efforts to promote sustainable development and in Paris at the UN conference on climate change.
  • Technology and Innovation
    To Succeed, Solar Perovskites Need to Escape the Ivory Tower
    What will tomorrow’s solar panels look like? This week, along with colleagues from Oxford and MIT, I published a feature in Scientific American making the case for cheap and colorful solar coatings derived from a new class of solar materials: perovskites. In this post, I’ll critically examine the prospects for commercialization of solar perovskites, building on our article’s claim that this technology could represent a significant improvement over current silicon solar panels. We argue:

“Perovskites are tantalizing for several reasons. The ingredients are abundant, and researchers can combine them easily and inexpensively, at low temperature, into thin films that have a highly crystalline structure similar to that achieved in silicon wafers after costly, high-temperature processing. Rolls of perovskite film that are thin and flexible, instead of thick and rigid like silicon wafers, could one day be rapidly spooled from a special printer to make lightweight, bendable, and even colorful solar sheets and coatings. Still, to challenge silicon’s dominance, perovskite cells will have to overcome some significant hurdles. The prototypes today are only as large as a fingernail; researchers have to find ways to make them much bigger if the technology is to compete with silicon panels. They also have to greatly improve the safety and long-term stability of the cells—an uphill battle.”

We wanted to write for a popular science magazine, with a general audience in mind, to share an exciting story of scientific discovery that has largely been confined to specialist journals. Indeed, for solar perovskites to overcome the odds stacked against an upstart clean technology breaking into the market, we believe the academic, private, and public sectors really need to pay more attention to each other.

The clean energy industry’s lack of awareness of solar perovskites, despite the commotion in the scientific community, demonstrates how scientific research can proceed in a bubble. Following the big announcement of a highly efficient solar perovskite from our research group in Oxford, hundreds of laboratories around the world jumped on the perovskite bandwagon, in many cases abandoning their research into other solar technologies. The race among labs to publish record solar efficiencies in the top journals involved international intrigue—the UK banded with Italy, trading records with the Swiss-Chinese coalition, and everyone was eventually upstaged by the South Koreans, who reported a 20 percent efficient solar cell late last year (for reference, silicon solar cells have plateaued at 25 percent efficiency, a target solar perovskites should soon surpass). The excitement and drama reflect the gravity of the perovskite discovery—time will tell, but many of us believe this is the field’s biggest breakthrough since the original invention of the solar cell sixty years ago.

[Figure: Certified solar cell record efficiencies for silicon and perovskite technologies (date axis truncated to better show the perovskite efficiency trajectory—silicon solar cells were invented in 1954; data from National Renewable Energy Laboratory).]

However, when I talk to industry executives at major solar manufacturers and developers, very few have even heard of solar perovskites. This does not bother scientists, many of whom narrowly focus on demonstrating a higher-efficiency solar perovskite, even if it is a fingernail-sized cell that degrades in hours.
Some might argue that a scientist’s value lies in basic inquiry, complementary to industry’s expertise, and they have a point. But aloofness from real markets leads many academics in the ivory tower to naïvely assume that a superior technology will naturally make the leap from prototype to profitability. In fact, broader feedback from professionals outside of research labs is integral to commercializing solar perovskites. Currently, solar perovskites can be worryingly unstable (although we’ve demonstrated longevity if they are properly sealed away from moisture). That’s a red flag for investors familiar with a mature, $50 billion silicon solar panel industry in which every panel comes with a twenty-five-year performance warranty. And because solar perovskites contain lead, a toxic element, any commercial product will need to undergo extensive safety testing, with which private industry veterans have experience. These professionals can guide research into the stability, safety, and real-world performance of solar perovskites, which are every bit as important as efficiency under idealized lab conditions, the paramount academic metric.

Elsewhere in the physical sciences, the transition from basic research to product development is better institutionalized. This is one of the reasons why I have argued that Moore’s Law for computer chips, which predicts rapid deployment of scientific advances, does not apply to the solar panel industry, whose products have improved at a comparatively plodding pace. Whereas in computer chip development there are established conferences at every step of commercialization, from basic device physics to chip integration, that bring together scientists and industry, advanced solar technology development is confined almost exclusively to the realm of academia.

Fortunately, leading researchers in the United States and Europe are making a concerted effort to bridge the gap between academia and industry. For example, one of my co-authors and the leader of the Oxford research group, Henry Snaith, founded a company to tackle real-world deployment and commercialize solar perovskites. His strategy is actually to partner with the silicon solar panel companies, adding a perovskite coating on top of silicon to boost its performance. That approach seems prudent, because allying with powerful incumbents is easier than fighting them for market access. And through a partnership, his company will benefit from access to experienced solar engineers, investors, and developers to guide the design and delivery of a compelling product.

[Photo: Solar perovskites on glass—researchers can vary the color and transparency of the coatings, enabling new applications (Plamen Petkov).]

My co-authors and I do hope our article will bring professionals in the solar industry up to speed on the latest research, but our target audience is even broader. We envision architects reimagining the aesthetics and functionality of windows, roof shingles, and facades; policymakers tweaking green building codes and incentives; and the military investigating the use of solar perovskite coatings to power forward-deployed bases. These applications may seem far-fetched, and they are—solar perovskites are still a risky bet to succeed in a monolithic market. But if scientists continue to broadly communicate our progress, those odds can only improve. Read our feature, “Outshining Silicon,” in Scientific American’s July 2015 issue, here.
  • China
    Guest Post: With the Gaokao, Hacking and Drones Are Just a Way to Get Ahead
    By Lincoln Davidson

Lincoln Davidson is a research associate for Asia studies at the Council on Foreign Relations.

Each year, millions of Chinese high school graduates take a two-day college entrance examination, colloquially known as the gaokao, that determines whether they’ll be able to attend university. The nine-hour test—which covers history, English, calculus, physics, chemistry, political theory, and more—is highly competitive, and students in the past have resorted to stolen questions and even IV drips to help them prepare. But as this year’s exam approaches, the test’s high stakes are pushing some to resort to technological means to give their scores a boost.

On Wednesday, the Chongqing Evening Paper reported (in Chinese) that hackers claiming to have stolen copies of the questions were selling them online at a rate of 6,000 RMB (approximately $970) for two sections. When a reporter from the paper called them up, she was shown evidence that they had previously hacked into a university’s servers to change a student’s grades. However, because the network of the city’s department of education is more trouble to access, the “hackers” asked for payment up front. The city’s testing center denied that it was possible for hackers to gain access to the test, saying that it is not put online prior to exam day.

That won’t stop innovative, tech-savvy Chinese teenagers from finding other ways to get ahead on the gaokao, however. Last year, Chinese police released photos of dozens of creative devices they’d caught students using to cheat on the test. The gadgets, which look like the kinds of tools James Bond might use, include glasses that wirelessly transmit pictures of the test questions to someone at another location and cell phones hidden in a shirt that allow a test-taker to get help from outside.

Officials in the city of Luoyang, Henan Province, are hitting back this year with a drone (in Chinese) they plan to deploy to catch cheaters who use wireless devices to communicate with individuals outside the test area. The drone will monitor radio signals in the vicinity of testing centers and help officials identify where the signals originate. Once they’ve locked onto a signal, it will be easier to spot the offending student. Students who violate the exam rules are barred from taking the test for three years; adults who help them face time in prison if caught.

While Chinese officials have used new technology to try to catch cheaters in the past, the market has been quick to respond with measures to avoid detection. Last year, in an attempt to catch individuals hired by students to sit the exam for them, officials in Henan began using fingerprint-recognition technology to spot the surrogates. Although officials managed to nab a few scofflaws, others avoided detection by wearing film on their fingers that mimicked the prints of the student who hired them. Given the importance of the test to a student’s future, cheaters will continue to find creative ways to avoid detection. Drones, biometrics, and other cutting-edge technologies seem excessive, but they may help catch students who’ve moved beyond scribbling notes on the palms of their hands.
  • Technology and Innovation
    The World Needs Post-Silicon Solar Technologies
    In his 2007 keynote address to the Materials Research Society, Caltech Professor Nate Lewis surveyed global energy consumption and concluded that out of all the renewable options, only solar power could meaningfully displace human consumption of fossil fuels. However, he warned, the cost of solar would need to fall dramatically to make this possible—Lewis targeted less than a penny per kilowatt-hour (kWh) of energy and dismissed any prospect of existing silicon solar technology meeting that goal.[1] Solar, he argued, “would have to cost not much more than painting a house or buying carpet…Do not think ‘silicon chip,’ think ‘potato chip.’ ” Very few seem to remember that distinction.

Since Professor Lewis’ talk, the price of silicon solar photovoltaic (PV) panels, which account for over 90 percent of the market, has fallen by over 80 percent, and the final installed cost of solar power continues to decline with regularity. If one believes commentators captivated by solar’s heady ascent, a recent solar deal in Dubai, which pegged the price of solar at 6 ¢/kWh, heralds a coming-of-age for solar power, set to displace fossil fuels around the world. Indeed, the Department of Energy’s (DOE) goal, through the SunShot program, is to bring U.S. solar costs down to that same 6 ¢/kWh mark by 2020—DOE’s funding strategy therefore concentrates on bringing down the soft costs (permitting, installation, and other non-panel costs) of solar, implicitly assuming that the underlying solar panel technology is a largely solved issue.

But last week, my colleague Michael Levi posted a piece on this blog warning that clean energy cost-competitiveness targets are not static if clean energy is to “take a massive share of the market rather than just nip at its fringes.” And earlier this month, “The Future of Solar” report from the MIT Energy Institute presented an excellent rejoinder to advocacy of deployment at the expense of innovation in solar PV. Its argument, unpacked below, is that solar panels face a moving target for cost-competitiveness with fossil-fuel-based power, one that becomes harder to hit as more solar panels are installed. As a result, even after the expected cost reductions that accompany increased experience with silicon technology, solar PV cannot seriously challenge and replace fossil-fuel generation without advancing beyond the economics of silicon.

Today, unsubsidized silicon solar panels are not cost-competitive with conventional generation in the United States.

To seriously challenge fossil fuels around the world, solar PV must achieve “grid parity,” or a cost that is competitive with other power sources on an unsubsidized basis—the MIT report first seeks to establish that solar in the United States fails this test today. To do so, for each generation option the authors calculate the “levelized cost of electricity” (LCOE), or the cost per kWh of energy produced. By standardizing the costs of different generators, LCOE serves as the traditional method to enable an apples-to-apples comparison of generators with different up-front versus ongoing cash flows—for example, solar panels cost a lot to install initially, but then each kWh of power costs next to nothing to generate, whereas a natural gas plant has some initial costs and also considerable ongoing O&M and fuel costs.
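For readers who want the formula, a generic LCOE calculation looks like the following sketch (a textbook form with placeholder numbers, not the MIT report’s exact model): discounted lifetime costs divided by discounted lifetime energy.

```python
# Generic levelized cost of electricity (LCOE): discounted lifetime costs
# divided by discounted lifetime energy output. A textbook sketch, not the
# MIT report's exact model; all numbers below are placeholders per kW built.

def lcoe(capex, annual_opex, annual_kwh, years, discount_rate):
    """Cost per kWh, with costs and energy both discounted to present value."""
    disc = [(1 + discount_rate) ** -t for t in range(1, years + 1)]
    total_cost = capex + sum(annual_opex * d for d in disc)
    total_energy = sum(annual_kwh * d for d in disc)
    return total_cost / total_energy

# Illustrative only: capital-heavy solar versus fuel-heavy natural gas.
solar = lcoe(capex=2000, annual_opex=20, annual_kwh=1600, years=25, discount_rate=0.07)
gas = lcoe(capex=1000, annual_opex=150, annual_kwh=4000, years=25, discount_rate=0.07)
print(f"Solar: {solar*100:.1f} c/kWh, gas: {gas*100:.1f} c/kWh")
```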
The authors find that a large, utility-scale solar installation is not competitive with a combined-cycle thermal plant—a common natural gas generator—regardless of whether the solar is sited in a sunny location (they use California and Massachusetts as representative sunny and cloudy climates). To stress-test this conclusion, they then tack on a CO2 externality cost to the natural gas plant, and they also discount the solar LCOE to account for a potential 50 percent reduction in solar panel prices, as well as for the fact that solar power tends to be generated at more valuable hours of the day and therefore offsets expensive power, resulting in a lower effective cost to the utility. But even with this generous set of assumptions stacking the calculation in solar’s favor, the LCOE of solar remains higher than that of natural gas in California’s sunny climate (Figure 1).

[Figure 1: 2014 LCOE Estimates for Utility-scale Solar Installations and Natural Gas Plants in California (Source: MIT Energy Institute).]

As the penetration of solar power increases, solar will become less valuable.

Figure 1 may suggest that, under a generous set of assumptions, the cost of silicon solar panels is at least pretty close to that of conventional generation. But the 8 ¢/kWh target is deceptive—as more solar panels are deployed, the cost of solar must drop considerably in order to stay competitive. The reason for this moving target is that the marginal value of one more solar panel on the grid depends on the existing set of generation options already in play. If there are very few solar panels installed, then each new one is actually more valuable than its LCOE might suggest, because solar generates power during periods of high electricity demand—in other words, the utility avoids the cost of dispatching expensive on-demand generators and should therefore discount the LCOE of solar, as displayed in Figure 1.[2] But if there is already a high penetration of solar power on the grid (e.g., greater than 20 percent of generated energy), then there is already a surplus of cheap energy supply at times of high demand, because the variable cost of solar power is zero. Now the situation is reversed—the utility will perceive a higher effective cost of procuring more solar power, and the LCOE will be an underestimate. As a result, in the wholesale market where different generators compete to sell power to the electricity grid, owners of solar panels will face falling prices—revenues to solar owners could fall by over half, because utilities will bid a lower price for solar at double-digit percentage penetration (Figure 2).

[Figure 2: Simulated Wholesale Market Prices for Texas Regional Grid Under Increasing Solar Penetration (Source: MIT Energy Institute).]

This causes a negative feedback loop—as more solar is developed, the price for solar falls, discouraging further deployment. Moreover, the falling market price for solar only partially represents the full decline in value that solar presents to a utility as solar penetration increases. More solar on the grid actually increases the need to cycle thermal power plants, resulting in accelerated degradation of the equipment and more expensive and inefficient operation. In fact, beyond a certain penetration, additional energy from solar power does not displace thermal generation at all, since more thermal capacity must be added to compensate for the fact that solar is an intermittent, unpredictable power source.
This is evident from Figure 3, which illustrates that beyond a 20 percent solar penetration, new solar does not reduce the requirement for new thermal generation. One might argue that solar has successfully achieved very high penetrations in countries like Germany—there, solar can supply 80 percent of peak system demand on a sunny day. However, in Germany, owners of solar panels are guaranteed a fixed rate (a “feed-in tariff”), so they do not experience the moving target of falling wholesale prices. As a result, other generators face plummeting wholesale prices while solar’s price stays fixed, putting severe strain on utilities. In a global context, for solar to really make a dent in electricity production and displace double-digit percentages of fossil fuel power, it cannot rely on one-off government policies that shield it from market forces.

[Figure 3: Projections of New Generating Capacity in the Texas Regional Grid Under Increasing Solar Penetration (Source: MIT Energy Institute).]

In summary, an LCOE comparison may appear to place the cost of solar within striking distance of that of conventional fossil-fuel generation. However, after taking into account market supply and demand dynamics and the inferior generating characteristics of solar, the marginal value of solar will fall with increasing deployment. Even if the cost of silicon solar panels drops by 50 percent in the future and improved experience drives down total installation costs, halved wholesale solar prices and increased strain on other generators will incentivize neither solar developers nor utilities to push penetration further.

Storage is not a magic bullet that will make the economics of silicon solar panels work.

Some commentators point to the rapidly falling costs of batteries as a sign that in the future, cheap energy storage will neutralize the downsides of intermittent solar generation—by enabling solar installations to store and sell power when it is valuable to the grid, storage could stabilize the moving target for solar cost-competitiveness. Indeed, storage does improve the economics of solar at high penetration—but not enough to stabilize the moving target. Evidence for this conclusion comes from a 2013 paper by Hirth simulating the German electricity system—the MIT report drew inspiration from this paper to conclude that the value of solar drops with increasing penetration. Hirth studies the effect of pumped hydro storage on the economics of solar at high penetration and finds that by using storage to shift the time of day at which solar installations sell their energy back to the wholesale market, the market’s discount on the value of solar shrinks. However, it does not shrink very much, and even after doubling the amount of storage capacity in Germany, owners of solar panels would still face declining wholesale prices with increasing penetration (Figure 4).[3]

[Figure 4: Effect of Storage on the Value of Solar Power in the German Electricity System (Source: Hirth, 2013).]

New technologies are necessary for solar to compete—and promising candidates exist.

The MIT report concludes that “beyond modest levels of penetration and absent substantial government support or a carbon policy that favors renewables, contemporary solar technologies remain too expensive for large-scale deployment…[large] cost reductions may be achieved through the development of novel, inherently less costly PV technologies, some of which are now only in the research stage.” There is plenty of material in the report assessing emerging, promising technologies.
But for the impatient, here’s a teaser: the picture at the top of this post is of a PV coating with fundamental economics not too different from carpets or wall paint. Watch this space for a closer look at the technology and why a solar revolution, though difficult, may not be impossible.

Footnotes

[1] The target of <1 ¢/kWh is my conversion of Prof. Lewis’ cost target of “$10/m2” for solar power. Assuming solar efficiency somewhere between 15–50 percent turns that cost target into 2–7 ¢/Wp (between one and two orders of magnitude below current costs) and an LCOE of well below 1 ¢/kWh.

[2] The MIT report leverages the concept of “value factors” (VF), derived from Hirth, to assess the value of a generator from its generation profile. In essence, the value factor weights the power production profile by the wholesale price at the time of production to determine whether the generator on average produces more or less valuable power (VF > 1 and < 1, respectively) than the average wholesale price. Then, by dividing the LCOE by the VF, one can determine an adjusted cost that better represents the utility’s valuation of the generation source (although this still does not take into account dispatchability and dependable capacity). The MIT report finds VF values in the vicinity of 1.1–1.2. The VF concept is closely linked to EIA’s “levelized avoided cost of energy” (LACE).

[3] Note that Figure 4 measures solar penetration differently from Figures 1–3. Figure 4 uses the amount of energy generated by solar (units of kWh) compared to total system consumption to calculate penetration—this enables a sensible comparison with the amount of storage (ten percent of system consumption) considered in the simulation. Figures 1–3 use the power capacity of solar generation (units of kW) compared with peak system demand to measure penetration. In the German example explored in Figure 4, five percent penetration in kWh corresponds to roughly fifty percent penetration by kW—note that the relationship does not scale linearly.
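For the curious, the arithmetic in footnotes [1] and [2] can be reproduced in a few lines. The efficiency range and VF band are the ones quoted above; the 8 ¢/kWh LCOE is an illustrative figure, not a number from the report’s tables.

```python
# Footnote [1]: $10 per square meter of solar, converted to cents per peak Watt.
# Incident sunlight is ~1000 W/m^2, so a module of efficiency e delivers
# 1000*e peak Watts per square meter.
for efficiency in (0.15, 0.50):
    watts_per_m2 = 1000 * efficiency
    cents_per_wp = 10 / watts_per_m2 * 100
    print(f"{efficiency:.0%} efficient: {cents_per_wp:.1f} c/Wp")
# -> 15%: 6.7 c/Wp and 50%: 2.0 c/Wp, the "2-7 c/Wp" range in the footnote.

# Footnote [2]: a value factor (VF) weights output by the wholesale price at
# the time of production; dividing LCOE by VF gives the utility's effective
# cost of that generator's energy.
lcoe_cents = 8.0   # illustrative LCOE, echoing the 8 c/kWh discussed above
vf = 1.15          # within the 1.1-1.2 band the MIT report finds for solar
print(f"Value-adjusted cost: {lcoe_cents / vf:.1f} c/kWh")   # -> ~7.0 c/kWh
```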
  • United States
    Fostering a Government of 'Open Innovation'
    Aneesh Chopra discusses how technology and innovation affect public policy.
  • Technology and Innovation
    Why Moore’s Law Doesn’t Apply to Clean Technologies
    Over the weekend, Moore’s Law—the prediction that the number of transistors (building blocks) on an integrated circuit (computer chip or microchip) would double every two years—turned fifty years old. It so happens that the silicon solar panel, the dominant variety in the market today, is about the same age—roughly fifty-two years old. And over the last half-century, while the computing power of an identically sized microchip increased by a factor of over a billion, the power output of an identically sized silicon solar panel more or less doubled.[1]

The contrast between Moore’s Law for microchips and the plodding progress of clean technology is bittersweet for me. Growing up, I would wait impatiently for my father to bring home a new computer, powered by a faster, next-generation processor—his office was across from Gordon Moore’s at Intel. When he founded a solar panel start-up, he brought the limitless optimism of Moore’s Law with him, but like myriad other cleantech start-ups, his company struggled to stay afloat given the surging tide of cheap, mediocre silicon solar panels from China.[2] My own doctoral research focused on exciting alternatives to silicon solar panels, but those alternatives face a daunting barrier to entry from large silicon firms. So while I celebrate the startling prescience with which Gordon Moore, in 1965, predicted the density of transistors fifty years hence and every year in between, I reject the notion that clean technologies like solar panels and batteries follow a Moore-esque decline in cost.

Unfortunately, a chorus of voices in the mainstream media has echoed the claim that Moore’s Law is guiding the regular decline in clean technology costs as production increases, enabling a massive energy transition away from fossil fuels. In an excellent 2011 piece, Michael Kanellos at Forbes gently corrected this claim, but he was still too charitable in conceding that clean energy advocates were “wrong in the particulars, but right in their outlook.” Rather, that outlook is far too complacent, satisfied with pedestrian cost declines and stagnating performance in lieu of disruptive technology advances more in line with Moore’s Law. To date, there have been three crucial differences between Moore’s Law for microchips and the historical cost declines of solar panels and batteries:

1. Moore’s Law is a consequence of fundamental physics. Clean technology cost declines are not.

2. Moore’s Law is a prediction about innovation as a function of time. Clean technology cost declines are a function of experience, or production.

3. Moore’s Law provided a basis to expect dramatic performance improvements that shrank mainframes to mobile phones. Clean technology cost declines do not imply a similar revolution in energy.

Difference #1: Moore’s Law is a consequence of fundamental physics. Clean technology cost declines are not.

When Gordon Moore made his 1965 prediction (original article in Electronics here) that the most economical number of transistors on a computer chip would double every two years, he based his reasoning on the physics of transistors.
In hindsight, Moore’s physical instinct was confirmed: “As the dimensions of a transistor shrank, the transistor became smaller, lighter, faster, consumed less power, and in most cases was more reliable…it has often been a life without tradeoffs.” In other words, Moore’s primary insight was that shrinking the transistor made it work better—and as a happy corollary to this shrinkage, the cost per unit of computing power kept falling, because the cost of manufacturing the same chip area has remained roughly constant.

By contrast, clean technology cost declines have very little to do with the physics of the actual devices being built. For example, falling costs of silicon solar panels have largely been driven by lower input material costs from scale, lower labor costs through manufacturing automation, and lower waste from efficient processing. All of these cost reductions follow naturally from manufacturing scale and vertical integration, rather than from performance improvements. Now Tesla hopes to achieve similar cost reductions by building a Gigafactory—a massive facility in Nevada—to scale up production of lithium-ion batteries that will perform only incrementally better than the current generation.

Difference #2: Moore made a prediction about innovation as a function of time. The advances of clean technology are a function of experience, or production.

The physical principles that underpin favorable microchip shrinkage enabled Moore to predict that transistor density would double every two years, rather than predict progress based on the quantity of microchip production. Indeed, fulfilling Moore’s prophecy has required scientists and engineers to sit down at the drawing board and redesign each new generation of microchip down to the basic architecture of the transistor—Moore himself predicted that his law would require engineers to employ “cleverness.” Manufacturing advances, for example to enable printing ever more minuscule features on a chip, accompanied the design decisions to change the transistor architecture and chip design. Microchips certainly got cheaper as a result of manufacturing scale, but increased production was never a sufficient condition for meeting Moore’s Law—fundamental research and development (R&D) drove the advances.

By contrast, clean energy devices look basically the same decade after decade. Rather than through R&D advances, solar panels and batteries have come down in cost through the benefits of experience and scale, closely related to “learning by doing.” But this relationship between cost and production quantity—the “experience curve”—is a generic one cutting across many industries, from aerospace to chemicals. “Wright’s Law” and “Henderson’s Law” describe the inevitable cost reductions that accompany an industry’s increased experience through scale production of a good. These purported laws do not specify a numerical estimate for the experience curve’s slope—it happens to vary widely among industries. Although Moore’s Law is also, strictly speaking, not a law but an observation, it was extraordinary because it was a specific, quantitative prediction that the microchip industry proceeded to follow closely. Bloomberg New Energy Finance’s Michael Liebreich pointed out last week that the experience curves for solar and batteries look awfully similar, apparent from the similar slope estimates given on the slide. Perhaps this will enable a reliable prediction for the falling cost of batteries as they scale up.
But whereas Moore could set a forward-looking roadmap with real dates that the industry could target, setting a roadmap based on an experience curve requires perfect forecasting of future production, which is difficult, especially in a rapidly growing market.[3]

Why this all matters

Difference #3: Moore’s Law provided a basis to expect dramatic performance improvements that shrank mainframes to mobile phones. Clean technology cost declines do not imply a similar revolution in energy.

To put this all together: Moore’s Law worked so well for microchips because Moore had a set of physical reasons to believe his prediction that the density of transistors on a chip would inexorably increase as the years passed—transistors just work better when they are smaller. And by making a specific, quantitative prediction, Moore’s Law became a self-fulfilling prophecy—industry leaders like Intel and Qualcomm published technology roadmaps that signaled their intention to match Moore’s Law, so that financial markets and the rest of the computing ecosystem knew what to expect. As a result, electronic devices evolved in lockstep with the evolving, shrinking transistor, and today’s mobile phones have more memory and computing power than supercomputers of the 1980s.

A different and generic phenomenon—the experience curve—characterizes solar and battery cost declines. If these cost declines were accompanied by performance enhancements, then an industry roadmap could spark synergistic innovation—new long-range electric vehicle designs might evolve alongside lighter batteries. But as costs decline while performance stagnates, it is not just a matter of time before an electric vehicle with a thousand-mile range emerges; instead of leading energy innovation, clean technology firms merely peddle commodities. Today, the total cost of putting solar panels on a rooftop is over four times the cost of the panels themselves, so absent efficiency gains, the falling costs of solar panels, however predictable, play only a minor role in the final system cost.

Moore’s Law is so special because the prediction it did make, about the shrinking transistor, resulted in a digital revolution that was completely unpredictable. Although Moore’s Law does not apply to the consistent cost declines of clean technologies, an appreciation for the dynamism that accompanied fifty years of regular progress in electronics could inspire advances on the energy front.

Footnotes

[1] This comparison is not apples to apples, because there is only so much of the sun’s energy that a solar panel can convert into electricity (its “efficiency”). Still, commercially dominant silicon panels are only around 15 percent efficient, just twice as efficient as Sharp’s inaugural 1963 solar module, whereas much higher efficiencies (>40 percent) have been demonstrated in labs and in the field. In other words, commercial solar panels are performing well below their potential today. To derive the computing power density improvement, I compared the 1963 IBM Gemini Digital Computer (~7,000 floating point operations per second (FLOPS)) with the 2013 GeForce GTX 780 r.2 chip (~4,000 GFLOPS) and adjusted for size.

[2] Fittingly, my father went back to a world governed by Moore’s Law. He now leads research and development at SanDisk, and, confronted with the physical limits of shrinking transistors to just a few atoms wide, he is determined to keep pace with Moore’s Law by stacking transistors on top of each other.
[3] One report notes that Moore-esque time-based predictions and production-based experience-curve predictions are actually equivalent under exponentially increasing production. Since clean technology production seems to be following exponential growth, in the near term it may be possible to forecast a roadmap in terms of dates rather than production quantities. However, the case of microchips is instructive—as soon as microchip production stopped growing exponentially and leveled off around the end of the 1990s, transistor shrinkage started to outpace the experience curve, demonstrating that Moore’s Law is fundamentally a function of time, not production.
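Footnote [3]’s equivalence claim is easy to check numerically: if cumulative production grows exponentially, a Wright’s Law experience curve implies a fixed annual rate of cost decline, i.e., a Moore-style function of time. A toy sketch, with made-up parameters:

```python
import math

# Toy check of footnote [3]: under exponentially growing cumulative
# production, an experience-curve (Wright's Law) prediction collapses into
# a clean function of time. All parameters are made up for illustration.

c0, q0 = 1.0, 1.0          # initial cost and initial cumulative production
learning_exponent = 0.32   # ~20% cost drop per doubling, since 2**-0.32 ~ 0.80
growth = 0.35              # 35% annual growth in cumulative production

def wright_cost(cumulative_q):
    """Wright's Law: cost falls as a power law of cumulative production."""
    return c0 * (cumulative_q / q0) ** -learning_exponent

annual_decline = 1 - math.exp(-learning_exponent * growth)   # fixed rate per year
for year in range(0, 21, 5):
    q = q0 * math.exp(growth * year)   # exponential production growth
    print(f"year {year:2d}: cost {wright_cost(q):.3f}")
print(f"Implied steady annual cost decline: {annual_decline:.1%}")
```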
  • Sub-Saharan Africa
    Meet Africa’s Hero Rats
    Today is Earth Day, an appropriate moment to remember Africa’s HeroRats. On April 19, the New York Times columnist Nicholas Kristof called attention to these creatures and their ability to sniff out land mines and unexploded ordnance (UXO), as well as to screen sputum samples for tuberculosis. To date these animals have detected over 48,000 land mines and UXOs, and screened over 290,000 samples for tuberculosis. APOPO, a Belgian non-governmental organization (NGO) with an international staff, trains the HeroRats in Tanzania. Originally focused on mine and UXO detection, APOPO has more recently begun using the HeroRats to detect tuberculosis. According to APOPO’s website, it deploys mine-detecting rats in Mozambique, Thailand, Cambodia, and Angola. They have also worked to clear UXOs in Laos and Vietnam. For tuberculosis screening, APOPO deploys the rats in Dar es Salaam, Tanzania, and Maputo, Mozambique.

The rats are able to clear mine fields and screen sputum for tuberculosis faster and more efficiently than other methods. For example, while a human can screen twenty-five samples of sputum for tuberculosis in a day, a HeroRat can screen one hundred samples in twenty minutes. In clinics using HeroRats, the number of patients identified with tuberculosis has risen by 48 percent. Along with being efficient, HeroRats are an inexpensive answer to the problems they tackle. APOPO estimates that fully training one rat costs approximately $6,400, far cheaper than the alternatives. A theme of Kristof’s column is that HeroRats are an example of innovative non-profit approaches.

HeroRats are Gambian pouched rats. They can be up to three feet long and weigh perhaps forty ounces, too light to set off a mine. Their sense of smell is very strong, compensating for weak eyes. Their life span is about eight years, and they are retired after six. They eat fruits and nuts. Kristof reports that they become close to their handlers. Despite their name, they are rats, not marsupials.

HeroRats are an unabashed good-news story: an NGO has identified how a creature can be used to tackle two different horrors, unexploded munitions and tuberculosis. Kristof writes that his children “gave” him a HeroRat as a Father’s Day present a few years ago. The cost to adopt a rat is $84 per year, most of which goes toward the year-long training that APOPO provides the rats. If you are interested in adopting your own HeroRat, you can visit apopo.org.
  • China
    Rotenberg and Di: Ali Health Offers a Revolutionary Moment for China’s Healthcare Industry
    Ariella Rotenberg is a research associate for U.S. Foreign Policy, and Peng Di is an intern for Global Health Governance at the Council on Foreign Relations.

Private sector actors in China, first and foremost Alibaba, are taking significant steps to do what they believe will earn a handsome profit in the growing Chinese healthcare industry. With a market estimated to reach one trillion dollars by 2020, companies are working fast to secure their slice of the expanding Chinese healthcare pie. Ali Health, Alibaba’s healthcare subsidiary, has pinpointed the prescription drug market as a source of potentially significant financial benefit.

Today in China, hospitals sell almost three-quarters of all medicine prescribed in the country. This de facto monopoly on prescription drug sales held by Chinese hospitals contributes significantly to inflated prescription drug prices for patients as well as to incentives for corruption, as CFR Senior Fellow Yanzhong Huang explains in his book Governing Health in Contemporary China. The hospitals are not the only ones looking for alternate sources of revenue. Chinese doctors are not paid salaries commensurate with the medical services they provide, so many seek “grey income” to supplement their pay. “Grey income” can come in the form of a so-called rebate from pharmaceutical sales companies: it is paid to doctors by a particular pharmaceutical sales company in exchange for a commitment that the doctor will prescribe that company’s products. To protect their own profits, the pharmaceutical sales companies then hide the rebate in the sales price of the pharmaceutical product, which is the price at which they sell to the hospital. As a result, before a drug even enters the hospital, its purchase price is already significantly inflated.

Alibaba and other similar companies are hoping to benefit from the growing Chinese healthcare industry while searching for creative solutions to break the racket. Currently, Alibaba has piloted a mobile application that enables customers to upload a photo of their prescription and receive price bids from nearby retail pharmacies. Once customers choose which retailer they would like to use for a particular transaction, they submit payment through Alibaba’s mobile payment system, AliPay, and the medication is delivered to their door. This application has been piloted in Hebei Province in cooperation with its local governments. The prescription mobile application, called Alijk, has facilitated the purchase of prescription drugs at prices often around 20 percent below average market prices, saving customers up to 50 percent in total spending.

Alijk takes advantage of the widespread use of mobile phones in China and combines it with GPS technology to deliver cost savings to the customer through the existing system of prescription drug sales. However, an Internet prescription drug marketplace—Ali Health’s next goal—would disrupt the supply chain entirely and offer an online prescription drug store, circumventing retail pharmacies and hospitals altogether. If it succeeds, Ali Health would hold a large share of the $149 billion market for prescription drugs, and in doing so it would cut out some of the price distortions that drive up costs for everyday Chinese. Three challenges must be addressed, however, before a significant shift in the prescription drug market can take place in China.
First, doctors and hospitals will most likely oppose any licensing of Internet prescription drug sales, as it threatens their ability to raise revenue. For example, in response to Alijk, some hospital administrators have made it difficult for patients to obtain an “uploadable” version of their prescriptions, creating strong incentives to use the hospital’s pharmacy and preventing patients from using the mobile application. This resistance could be eased by reforming the provider payment system so that doctors’ salaries are commensurate with the services they provide. Of course, this is easier said than done, but the revenue doctors need to support their families should not weigh so heavily on the wallets of average Chinese nationals.

Second, Ali Health could face competition from its counterparts at the intersection of technology and healthcare. For example, Tencent, another major Chinese Internet company, has added doctor appointment systems and payment services to its mobile chat application WeChat, invested in a health portal, and created an online healthcare service. As of now, Tencent has mostly focused on medical service payment, but that is not to say it will not try to enter the Internet drug market should it take off in the near future. Past experience in China indicates that a profitable market attracts a flood of competition. Perhaps a cooperative effort to combine Tencent’s service and payment platforms with Ali Health’s marketplace would benefit both of the pioneering companies as well as China’s healthcare industry.

Third, and finally, is the challenge of implementation. Ali Health and its competitors will need to develop an effective platform for sales and reliable, safe distribution mechanisms. The incentives for doctors to overprescribe, or to prescribe certain brands of medication, can endure in an online marketplace system: pharmaceutical companies will still be interested in offering financial incentives for doctors to prescribe their medication, no matter where the patient makes the purchase, on the ground or online.

Ultimately, the Chinese government’s decision whether or not to grant licenses to sell prescription drugs online will have profound implications for the future direction of China’s healthcare reform. Will Beijing opt to favor the status quo despite its inefficiencies, or will it begin to see that this seemingly smaller issue of Internet pharmacy marketplaces demonstrates the need for a more significant set of reforms to the Chinese healthcare system? We will have to wait and see.
  • Technology and Innovation
    Advanced Industries and North America
    The U.S. economic recovery and current strength reflect in large part the performance of advanced industries. As other sectors faltered, both employment and output in these businesses grew. In 2013, they employed 12.3 million workers (9 percent of the U.S. workforce), who made on average $90,000 (compared to the U.S. mean of $51,500). These industries generated $2.7 trillion in output (17 percent of U.S. GDP) and indirectly supported an additional 14.3 million jobs.

Central to this classification, as developed in a recent Brookings report, is innovation. Qualifying industries stand out on two criteria—over 20 percent of their workers are science, technology, engineering, or math (STEM) professionals, and all spend $450 or more on R&D per worker. The authors classify some fifty different industries—from aerospace to semiconductors, satellite telecommunications to software publishers—across manufacturing, energy, and services as advanced sectors.

These companies—think Boeing, Sirius Satellite Radio, and Google among the thousands of lesser-known names—cluster in cities. Seventy percent of their jobs are in the 100 largest metropolitan areas, where firms can link to local universities, benefit from a skilled pool of labor, and learn from other nearby firms. Spillovers from this sector help support the broader local economy. Advanced industry supply chains purchase on average $236,000 in goods and services per worker from other businesses (compared to $67,000 in other industries), and the higher salaries and profits feed back through greater tax intakes. Nationally, these industries help maintain the U.S. competitive edge—they account for 90 percent of private-sector R&D and 85 percent of all U.S. patents. And they dominate exports, producing 60 cents of every dollar of products sent abroad.

Still, when measured against other countries, U.S. advanced industries are losing ground. Jobs and output as a share of GDP are down. This has potential knock-on effects for innovation, given the dominance of these businesses in R&D spending, and for long-term economic competitiveness and growth.

As with so many other economic challenges today, better education matters. The United States ranks 23rd among developed countries in annual STEM graduates per capita, behind South Korea, Portugal, and Poland. As a result, U.S. advanced industry employers often struggle to find qualified workers. The authors recommend that the United States expand early education, improve the quality of schools, and encourage more students to study STEM fields to diminish the deficit.

Foreign policy issues can make a difference too, especially with regard to U.S. neighbors. According to the CFR-sponsored Independent Task Force report, North America: Time for a New Focus, stronger ties enhance the United States' ability to compete in a dynamic and competitive world economy. Export data shows that of the more than $1 trillion in advanced manufacturing exports (a figure that excludes energy and services), roughly one-third heads to Canada and Mexico. The share rises to more than half for motor vehicle, railroad, and computer parts. This back-and-forth among the three nations in the manufacturing process reflects deepening regional supply chains and the important, if often overlooked, role U.S. neighbors play in supporting "domestic" advanced industries. As Washington debates broader trade, immigration, and security issues, the economic ramifications of these policy choices for our most dynamic industries shouldn't be forgotten.
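As a back-of-the-envelope aid, the sketch below restates the report's two-part screen as a simple check. The sample industry figures are invented for illustration and are not drawn from the Brookings data.

```python
# A toy restatement of the two-part screen described above: an industry
# counts as "advanced" if more than 20 percent of its workers are STEM
# professionals AND it spends $450 or more on R&D per worker.
# The sample figures below are invented, not taken from the report.
def is_advanced(stem_share: float, rd_per_worker: float) -> bool:
    return stem_share > 0.20 and rd_per_worker >= 450.0


industries = {
    # name: (STEM share of workforce, R&D dollars per worker)
    "aerospace":       (0.35, 2500.0),
    "software":        (0.50, 3200.0),
    "food processing": (0.04, 120.0),
}

for name, (stem, rd) in industries.items():
    print(f"{name}: advanced = {is_advanced(stem, rd)}")
```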
  • Cybersecurity
    Guest Post: Barriers to Sharing Cyber Threat Information Within the Critical Infrastructure Community
    Robert M. Lee is an active-duty U.S. Air Force Cyber Warfare Operations Officer and a PhD candidate at King's College London researching cyber conflict and control system cyber security. His views and opinions are his alone and do not represent the U.S. Government, Department of Defense, or U.S. Air Force. You can follow him on Twitter @RobertMLee.

The sharing of cyber threat data has garnered national-level attention, and improved information sharing has been the objective of several pieces of legislation and two executive orders. Threat sharing is an important tool that might help tilt the field away from adversaries, who currently take advantage of the fact that an attack on one organization can be effective against thousands of other organizations over extended periods of time. In the absence of information sharing, critical infrastructure operators find themselves fighting off adversaries individually instead of drawing on the knowledge and experience that already exists in their community. Better threat information sharing is an important goal, but two barriers, one cultural and the other technical, continue to plague well-intentioned policy efforts. Failing to meaningfully address both barriers can lead to unnecessary hype and the misappropriation of resources.

The first barrier is the tight-lipped culture that hinders information sharing within the U.S. critical infrastructure community. Asset owners and operators of the type of critical infrastructure often highlighted in the news, such as the energy and water sectors, live in a culture where reporting incidents seems only to bring trouble. Voluntarily reporting a cyber incident can bring legal repercussions. Likewise, many of the organizations that own, operate, and maintain the United States' critical infrastructure are publicly traded companies; even with legal immunity, financial losses can follow a reported incident when stockholder and investor confidence is lost. Moreover, reporting incidents can trigger follow-on requests from the government that take valuable time and limited resources to satisfy. It is also not uncommon for the critical infrastructure community to look askance at bringing in outside help rather than tackling issues on its own.

Speaking out publicly can be difficult, too, when news media are quick to highlight stories of cyberattacks on infrastructure and chastise the insecurity of the systems running it. Stories about cyberattacks against the power grid, oil pipelines, and hydroelectric dams generate immense attention, especially when nation-state actors such as Russia, China, or Iran can seemingly be linked. It makes for good media, it makes for good readership, and it makes for a good story. Unfortunately, this hype has lasting impacts, diverting attention to perceived threats instead of the real issues.

The difficulty in debunking the hype stems from the second barrier to information sharing—technical shortfalls in critical infrastructure systems that lessen the availability of meaningful data. Proper cyber threat sharing requires meaningful technical data that is often difficult to obtain in critical infrastructure. Industrial control systems use information to affect the physical world: they generate and harness power for the grid, control the flow and purification of water, and operate nuclear reactors. They were built to last for decades, to operate in harsh environments, and to be efficient at their designated tasks.
Information security was seen as detracting from these systems' mission (keeping the power on or the water running, safely and as efficiently as possible), so they were often developed with little to no thought of how to keep attackers out. They were also built without consideration for recording and maintaining the type of data useful for threat sharing. After a cyber incident occurs, incident responders are called in to collect data, contain the incident, and extract lessons learned. This data can help identify the same adversaries or malicious software in other organizations as well. These indicators of compromise are central to threat information sharing. However, incident response for cyber incidents in control systems is a young field. The data is simply not present in most cases, allowing observers to generate wild theories instead of relying on facts.

For example, Bloomberg published an article on the 2008 Baku-Tbilisi-Ceyhan pipeline explosion in Turkey. When the event occurred, the Kurdistan Workers' Party, an internationally recognized terrorist organization, claimed credit for the attack. The Bloomberg article, published seven years later, disputed this claim, asserting that the explosion was the result of cyberattacks, with Russia as a likely culprit. The story presents a number of attack paths that the adversary supposedly leveraged to gain access and cause physical damage to the pipeline. Each of the attack paths presented is plausible, making the story believable, and it has quickly been accepted as a true event. Unfortunately, an understanding of the attack paths presented, together with technical knowledge of the type of systems impacted, reveals a different story. The incident response data that would have been required to validate the story largely does not exist. Instead, the story relies on anonymous intelligence officials and anonymous incident responders. The report is very likely not true, and yet it will remain an incident cited for years to come. The absence of data to confirm the story reflects the same reason that threat sharing in critical infrastructure is difficult—the data required simply does not exist.

Identifying threats to critical infrastructure will continue to be an appropriate motivation for cyber threat sharing discussions. However, facilitating information sharing through legislative change is not a silver bullet. Barriers to threat sharing are more often a mixture of cultural and technical challenges than any one root cause that can be easily overcome. Making information sharing discussions more meaningful will require incentivizing the community, from the companies that manufacture the control systems to the organizations that operate them, to create and run systems capable of recording and storing the technical data needed. It will also require the proper cultural mechanisms for sharing that data without penalty, including penalties that cannot be mitigated by legal immunity alone.
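To give a sense of what shareable technical data looks like, here is a deliberately simplified, hypothetical indicator-of-compromise record and a matching check. The fields and values are invented for this post; real-world exchanges typically rely on richer standardized formats.

```python
# A hypothetical, simplified indicator-of-compromise (IOC) record. The
# fields and values are invented; real sharing formats are far richer.
indicator = {
    "id": "ioc-0001",
    "type": "file-hash",
    "value": "d41d8cd98f00b204e9800998ecf8427e",  # hash of a suspect binary
    "first_seen": "2015-01-12",
    "context": "dropper observed on an engineering workstation",
    "confidence": "medium",
}


def matches(observed_hash: str, ioc: dict) -> bool:
    """Check a locally observed file hash against a shared indicator."""
    return ioc["type"] == "file-hash" and observed_hash == ioc["value"]


# An operator who receives the shared record can sweep local systems:
print(matches("d41d8cd98f00b204e9800998ecf8427e", indicator))  # True
```

Without systems that record such data in the first place, of course, there is nothing to share; that is precisely the technical barrier described above.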
  • Cybersecurity
    Net Politics Podcast: Admiral Mike Rogers
    Podcast
    In this latest episode of the Net Politics podcast, I sit down with Admiral Mike Rogers, Director of the National Security Agency and the Commander of U.S. Cyber Command.
  • Americas
    Latin America’s Middle-Income Trap
    In 2014, GDP growth in the region slowed to less than 1 percent. Expectations for 2015 are only slightly better, with forecasters predicting growth nearer to 2 percent. The downturn reflects external factors, including the European Union's continuing problems, a slower China, and falling commodity prices. But it also results from domestic barriers that hold these nations back.

The vast majority of Latin American countries have transitioned from low- to middle-income status, according to the World Bank. But most now remain mired in what economists call the middle-income trap; only Chile, Uruguay, and a few Caribbean countries have joined the ranks of the world's high-income countries. The Organisation for Economic Co-operation and Development's (OECD) recent 2015 Latin American Economic Outlook makes the case that the main barriers to climbing the economic ladder in Latin America are education, skills, and innovation.

On the plus side, education spending has increased throughout the region, and so too has enrollment: 84 percent of children now complete the primary grades. Still, schools underserve the crucial early years as well as advanced study—pre-K and university enrollment are low relative to other OECD countries. What students get for their extra time in the classroom is also questionable—Latin American students score far behind their OECD peers on the Programme for International Student Assessment (PISA) test. The test also reveals a strong socioeconomic tilt—the wealthier the student, the better the score.

Coupled with weak educational systems is a skills mismatch. More than in other emerging economies, employers can't find workers with the necessary abilities, particularly for more productive knowledge- and technology-intensive sectors. Instead, employers face an unskilled labor surplus, many of whom flood into the informal economy.

Finally, the report highlights the limited expenditure on "knowledge capital," defined as a country's capacity both to innovate and to disseminate those advances. Latin America spends on average 13 percent of GDP on knowledge capital, less than half the OECD rate. The region falls particularly behind in R&D expenditure (as opposed to tertiary education or information and communications technology infrastructure), a recognized driver of innovation.

So what can Latin American nations do? Education reform matters—revamping curricula, improving teaching, and creating opportunities especially for those outside the upper echelons of society. Expanding technical and vocational training can also help develop the skills needed for new industries. And greater innovation—building up "knowledge capital"—will come not just from more foreign direct investment in R&D-intensive sectors, but also from forging links between these multinationals and the rest of the domestic economy. The path out of the middle-income trap is fraught—only a dozen or so newly emerging countries can boast GDP per capita comparable to that of developed economies today.
But only with better policies can more Latin American countries aspire to join them.
  • China
    Artemisinin’s Rocky Road to Globalization: Part II
    In my previous blog post, I described how artemisinin-based drugs were discovered in China in the 1970s and 1980s. Given their potency in treating malaria, one would expect that Chinese-made artemisinin-based drugs quickly became the first-choice medicine in the global fight against malaria. Much to the chagrin of Chinese scientists and pharmaceutical companies, however, the World Health Organization (WHO) did not list a single one of China's antimalarial drugs on its procurement list until 2007. Why have Chinese companies been boxed out of the market for a drug that was invented by Chinese scientists and extracted from a plant native to China?

The issue is not that China lacks interest in exploiting the drug's commercial value overseas. As early as the 1980s, Chinese pharmaceutical firms were seeking to market their antimalarial products globally. The problem, as Dana Dalrymple noted in his book, was that because China had largely withdrawn from the rest of the world, Chinese research institutes and pharmaceutical companies lacked the funding and expertise to independently break into markets traditionally dominated by multinational pharmaceutical companies. Chinese officials and scientists therefore asked for help from the China International Trust and Investment Corporation (CITIC), the only Chinese state enterprise authorized to deal with foreign investors. Through CITIC, Chinese drug developers and manufacturers partnered with Western counterparts to make artemisinin-based antimalarial products available to the rest of the world.

In 1988, Guilin Pharmaceutical forged a partnership under which it supplied the French pharmaceutical firm Sanofi-Synthelabo with artemisinin, facilitating the worldwide marketing of artesunate monotherapy. In 1994, the Academy of Military Medical Sciences (AMMS)—the original patent holder in China—sold its international rights to market artemisinin-based combination therapy (ACT) to Ciba-Geigy, a Swiss company that later became Novartis. In return, Novartis agreed to source the active pharmaceutical ingredients (APIs) of its antimalarial treatments from China and to pay AMMS an annual royalty equivalent to 4 percent of its overseas sales. In 1999, Novartis became the first pharmaceutical company to launch a fixed-dose ACT, Coartem (artemether-lumefantrine). Kunming Pharmaceutical, AMMS' partner in developing the first ACT, eventually became the supplier of APIs to Novartis. By 2001, the WHO had ordered 150,000 treatment courses of monotherapies from Sanofi, with Guilin Pharmaceutical as the supplier.

At almost the same time, however, the WHO launched a series of policy initiatives that seemed only to reinforce Novartis' first-mover advantage in the ACT market. In March 2001, the WHO kicked off the Prequalification of Medicines Programme, under which a new drug could be procured via the WHO only if the quality of the product conformed with WHO criteria for efficacy, safety, and quality. In April 2001, the WHO recommended the use of ACTs in countries where Plasmodium falciparum malaria was resistant to traditional antimalarial drugs. The rigorous prequalification process raised the barriers to market entry for Chinese-made ACTs, as no Chinese pharmaceutical firm was good manufacturing practice (GMP)-approved by the WHO. It came as no surprise that Novartis' Coartem became the first and only fixed-dose ACT to meet the WHO prequalification requirements.
In December 2001, the company formed a formal ten-year alliance with the WHO to provide ACTs, without profit, for use by public health systems in developing countries. In 2002, Coartem was added to the WHO's Essential Medicines List for purchase by UN agencies that distribute medicine in the developing world. By 2011, when the alliance was formally brought to an end, Novartis had delivered more than 700 million treatments of ACT through the arrangement.

With support from the newly established Global Fund to Fight AIDS, Tuberculosis and Malaria (the Global Fund), the WHO was able to significantly expand its procurement of artemisinin-based therapies, from two million treatments in 2003 to thirty million in 2004. In light of growing concerns about artemisinin-resistant strains of malaria, the WHO began to seriously promote ACTs, and in 2006 the UN stopped ordering monotherapies from Sanofi. At that time, Coartem accounted for nearly 80 percent of the ACT drugs purchased by the WHO. Some Chinese pharmaceutical companies complained that even though UN agencies purchased Coartem at cost price ($2.40 per person), similar Chinese products could be marketed at a much lower price ($1 per person). If true, the scale-up of ACT treatments in the developing world may have been constrained by Coartem's relatively high price.

To be fair, partnership with multinational pharmaceutical firms has involved technology transfer that has enabled Chinese companies to meet international quality, health, safety, and environmental standards in the production of APIs, thereby enhancing their research and development capabilities. Admittedly, the ACTs later developed by Guilin Pharmaceutical were closer to generic versions of Sanofi's products. Chinese pharmaceutical firms also benefited financially from being the major suppliers of artemisinin to multinational pharmaceutical firms. Situated at the low end of the value chain, though, China was unable to reap the lion's share of the benefits from the market: the ratio of profits on APIs to profits on final formulations was reportedly 1:20.

Chinese API producers are also vulnerable to fluctuations in the international market. In April 2004, when the Global Fund approved $200 million in funding for the procurement of ACTs, the planting season for Artemisia annua (the source of artemisinin) had already passed, resulting in a de facto shortage of the starting material and a steep rise in API prices. The shortage was widely reported, leading to unrealistically high forecasts of the market's potential. Consumed by the zeal for artemisinin, the number of API producers in China increased from three in 2004 to more than one hundred in 2006, most of which were not even GMP certified. Meanwhile, the total acreage devoted to A. annua plantations increased to 800,000 mu (roughly 53,000 hectares) in 2006, four times the level needed to meet market demand. The surplus of starting material became clear when the WHO lowered its forecasts in 2005, prompting a free fall in the prices of A. annua and APIs. In consequence, most of the new API producers went belly up.

The 2004-2006 artemisinin bubble nevertheless has not deterred Chinese pharmaceutical firms from pursuing their globalization strategies. Supported by the state, Chinese pharmaceutical firms have become even more aggressive since 2007 in promoting their antimalarial products globally. But how successful have their efforts been? That will be the subject of my next blog post.