Story by DAVID BERREBY Illustrations by PETER KUPER
FOR AS LONG AS THERE HAVE BEEN HUMAN BEINGS,
WE’VE SOUGHT TO IDENTIFY OUR FRIENDS AND FOES WITH EVER-INCREASING CLARITY. PEOPLE INVENTED FLAGS AND UNIFORMS, TEAM COLORS AND
CORPORATE LOGOS. THEY’VE TOLD THEIR CHILDREN, “WE DON’T DO THAT.” THEY’VE EXPLAINED WHO THE “GOOD GUYS” AND THE “BAD GUYS” ARE IN THE WORLD, NATION AND NEIGHBORHOOD. AFTER ALL, WHEN CONFLICT COMES BETWEEN OUR VERSION OF THE GOOD GUYS AND SOMEONE ELSE’S GOOD GUYS, EVERYONE MUST CHOOSE, AND SHOW THEIR CHOICE. ARE YOU FLYING THAT RUSSIAN TRICOLOR, OR WRAPPING THE GOLD AND BLUE FLAG OF UKRAINE OVER YOUR SHOULDERS? DO YOU CALL THOSE DISPUTED ISLANDS IN THE EAST CHINA SEA SENKAKU OR DIAOYU? IT’S HUMAN NATURE, AND THE ROOT OF WAR: YOU CHOOSE YOUR SIDE, YOU SHOW YOUR SIDE, YOU KNOW WHO STANDS WITH YOU.
BUT DIGITAL CONFLICT—CYBERWAR—IS DIFFERENT...
WHOSE SIDE IS THE CENTRAL INTELLIGENCE AGENCY ON, when it snoops on the hard drives of al-Qaeda operatives and (as U.S. Sen. Dianne Feinstein has alleged) on Senate staffers? On whose side is a young hacker named Wang, when he writes viruses (for Unit 61398 of the People’s Liberation Army’s General Staff 3rd Department, 2nd Bureau) and writes more than 600 posts for a blog about his lousy job? (As the Los Angeles Times’s Barbara Demick has reported, the themes are poor pay, long hours and lots of ramen noodles.) Whose side is Google on, when it pledges to protect the privacy of its first-rate e-mail product—but then harvests users’ information from that product to make money? Yes, every conflict includes people and organizations who change sides as time passes (as Edward Snowden, who’d once opined that leakers should be shot, had done by 2013, when he began the systematic leaks that have revealed so much about U.S. cyberespionage around the world). But in cyberconflict, and cyberlife, many people and organizations seem
to be on opposing sides at the same time. This is part of what makes it maddeningly difficult to protect oneself from digital hazards. After all, a friend (the government that relentlessly mines data for signs of terrorist plots) may also be a foe (the same government that includes your data in its relentless mining). Familiar faces feel untrustworthy, somehow—and that can even include the one in the mirror. What is it about the digital world that fosters this ambiguity? The newness of the technology is certainly a factor. Eons of evolution have primed us to be afraid of big men with weapons; centuries of human history have taught us the indicators of aggression—angry declarations, troop movements and the like. Accustomed to people locking in their loyalties with symbols and rituals, we are not yet used to the idea that a huge amount of damage can be done by someone who just changed his mind.
LAST YEAR’S INFAMOUS BREACHES of retailer security (such as the holiday-season attack that stole information on more than 110 million accounts from the American discount store chain Target) depended on malware created by a solitary Russian named Rinat Shabayev, who was all of 23 years old. And one reason for the success of the Target attack, as Bloomberg Businessweek reported last March, was that the company apparently ignored a warning from its own security system. If we do not see our enemies in cyberspace, part of the reason is that we aren’t used to looking for them. Another consequence of the newness of digital tools is that—contrary to what you might hear from cyberevangelists—they aren’t yet associated with any particular moral or political commitments in the non-digital world. For all the rhetoric about the Internet as a driver of freedom and empowerment, the
fact remains that its resources are useful to all sides in the world’s political struggles. Yes, activists have used Facebook and Twitter to fight oppression. But it’s also true, as the tech critic Evgeny Morozov points out, that dictatorships have used them effectively. For example, he notes, during massive street protests in Iran in 2009, government agents used Facebook to check on the political affiliations of people entering the country. Then there’s the Milan-based firm called Hacking Team. It sells a powerful spyware tool called Remote Control System (RCS)—which can capture e-mails and Skype activity, as well as other data—to governments. That’s an asset for democratic governments protecting their citizens against cybercrime and terrorism. But last winter researchers at the University of Toronto’s Citizen Lab said they had found traces of RCS on computers in Azerbaijan, Colombia, Egypt, Ethiopia, Hungary, Italy, Kazakhstan, Malaysia, Mexico, Morocco, Nigeria, Oman, Panama, Poland, Saudi Arabia, South Korea, Sudan, Thailand, Turkey, United Arab Emirates and Uzbekistan. “Nine of these
countries receive the lowest ranking, ‘authoritarian,’ in The Economist’s 2012 Democracy Index,” the Citizen Lab post noted. (Hacking Team denied that it sells its tools to repressive regimes; Citizen Lab stood by its claims.) Of course, if a tool is morally neutral, we can’t blame it when it’s used for bad ends any more than we can praise it when it’s used for good. That, ultimately, is the most important fact about the ambiguity we sense in the cyberworld. It comes from us, not from the technology. It is a consequence of the fact that the cybertools we use both benefit and trouble us, often in the same instant. We who are Googled by prospective mates, prospective employers, enemies from summer camp, and on and on, also Google. Who wouldn’t want to know if a potential hire had been arrested or made bizarre statements on Twitter? We who are monitored by those who seek to predict our behavior, we also monitor others (with apps, with nanny cams). For example, Verizon Communications now offers its customers a “new tool to help parents set boundaries for
children,” called FamilyBase. For $5 a month, it gives parents a complete report on all activity on their children’s phones—calls, texts, apps downloaded, time spent talking and the times of conversations. Few are the parents who high-mindedly say they don’t want, and shouldn’t have, such information. We who resent being spied upon by the state also endorse the state spying on other people. (The rule seems to be: “I, in my glorious individuality, am unpredictable but righteous, but please do use Big Data analytics on those other people to predict who will try to blow up a plane next year.”) As David Simon, the creator of the television program The Wire, put it: “I know it’s big and scary that the government wants a database of all phone calls. And it’s scary that they’re paying attention to the Internet. And it’s scary that your cell phones have GPS installed. And it’s scary, too, that the little box that lets you go through the short toll lane on I-95 lets someone, somewhere know that you are on the move ... But be honest, most of us are grudging participants in this dynamic. We want the cell phones. We like the Internet. We don’t want to sit in the slow lane at the Harbor Tunnel toll plaza.”
WE NEED TO RECOGNIZE THAT this ambivalence is part of what makes it hard to defend ourselves against digital dangers. Our policies are as divided as we are, as Bruce Schneier, the chief technology officer of the computer security firm Co3 Systems, has noted. The U.S. military, he wrote recently, distinguishes its efforts at CNE (computer network exploitation, which is the business of bypassing security features on a network so as to spy on it) from CNA (computer network attack, which is sabotage). But the distinction is meaningless, Schneier writes. The only way to do CNE is to use tools that could also be used for CNA. If a piece of malware can eavesdrop without being detected, there is no way to be certain it won’t switch to doing something more harmful once it is installed. “As long as cyberespionage equals cyberattack, we would be much safer if we focused the NSA’s efforts on securing the Internet from these attacks,”
Schneier wrote this year. “True, we wouldn’t get the same level of access to information flows around the world. But we would be protecting the world’s information flows—including our own—from both eavesdropping and more damaging attacks.” To do that, though, we would have to decide that we were on the side of the targets of cyberweapons, not the side of the users of such devices. And that’s a commitment no government seems prepared to make. Late last February the journalist Quinn Norton attended a workshop on identity at the Office of the Director of National Intelligence (ODNI). That was, as she wrote later, an unexpected decision. As a writer on hackers and hacker culture, with plenty of contacts in that world, she is no friend of the intelligence establishment. A close friend and former lover of Aaron Swartz, the Internet activist who committed suicide last year in the face of an aggressive federal prosecution for data theft, she stands against everything that ODNI stands for. Why did she go? Several times during the meeting, she wrote, she’d heard others say that there are bad people and good people in the world. “I realized when I heard this,” she wrote, “that I went to the ODNI because I don’t believe in bad or good people.”
THE WORLD IS NEW AGAIN.