IEEE SECURITY & PRIVACY
May/June 2015 | Vol. 13, No. 3
Verifying Votes ■ A Better Cybersecurity Bill ■ Negotiating Privacy
Rock Stars of Cybersecurity: Win the New Cybersecurity War

Cybercrime is no longer a matter of credit card breaches. Cybercriminals are now trying to take down countries as well as top companies. Keep your organization safe. Come to the premier one-day, high-level event designed to give real, actionable solutions to cybersecurity threats. Learn from and collaborate with the experts, the new rock stars of cybersecurity.

Speakers include:
■ Chris Calvert, Director, HP Global Enterprise Solutions Products
■ Marcus H. Sachs, VP, National Security Policy, Verizon
■ Dr. Spencer SoHoo, CSO/Director, Scientific Computing, Cedars-Sinai Medical Center

27 October 2015 | The Fourth Street Summit Center, San Jose, CA
Register now. Early discount pricing now available: computer.org/cyber2015
May/June 2015 Vol. 13, No. 3
Contents

Security on Tap
This issue of IEEE Security & Privacy features articles on improving security with diversity, better cryptographic security protocols, and a new identity management model for pervasive computing environments. It also highlights the challenges of establishing cybersecurity in India and examines attacks specific to manufacturing machine parts and intelligent transport systems.

9 Information Security Compliance over Intelligent Transport Systems: Is IT Possible?
Kleanthis Dellios, Dimitrios Papanikas, and Despina Polemi

16 India's Cybersecurity Landscape: The Roles of the Private Sector and Public–Private Partnership
Nir Kshetri

24 Improving the Security of Cryptographic Protocol Standards
David Basin, Cas Cremers, Kunihiko Miyazaki, Saša Radomirović, and Dai Watanabe

32 Blended Identity: Pervasive IdM for Continuous Authentication
Patricia Arias-Cabarcos, Florina Almenárez, Rubén Trapero, Daniel Díaz-Sánchez, and Andrés Marín

40 Bad Parts: Are Our Manufacturing Systems at Risk of Silent Cyberattacks?
Hamilton Turner, Jules White, Jaime A. Camelio, Christopher Williams, Brandon Amos, and Robert Parker

48 Diversity Reduces the Impact of Malware
Kjell Jørgen Hole

Cover art by Peter Bollinger, www.shannonassociates.com
Columns

3 From the Editors: What Was Samsung Thinking?
Susan Landau

92 Last Word: What a Real Cybersecurity Bill Should Address
Steven M. Bellovin

Departments

5 Interview: Silver Bullet Talks with L. Jean Camp
Gary McGraw

55 Spotlight: Weakness in Depth: A Voting Machine's Demise
Jeremy Epstein

59 It All Depends: End-to-End Verifiability in Voting Systems, from Theory to Practice
Peter Y.A. Ryan, Steve Schneider, and Vanessa Teague

63 Education: Evaluating Cybersecurity Education Interventions: Three Case Studies
Jelena Mirkovic, Melissa Dark, Wenliang Du, Giovanni Vigna, and Tamara Denning

70 Basic Training: Biometric Authentication on Mobile Devices
Liam M. Mayron

74 Systems Security: Bolt-On Security Extensions for Industrial Control System Protocols: A Case Study of DNP3 SAv5
J. Adam Crain and Sergey Bratus

80 Security & Privacy Economics: Scaring and Bullying People into Security Won't Work
Angela Sasse

84 Building Security In: A Developer's Guide to Audit Logging
Jonathan Margulies

88 In Our Orbit: Effortless Privacy Negotiations
Kat Krol and Sören Preibusch

Also in This Issue
62 IEEE Computer Society Information
83 IEEE Reliability Society Information
91 Advertiser Index

On the Web
2012 Annual Index: www.computer.org/security/12index
Postmaster: Send undelivered copies and address changes to IEEE Security & Privacy, Membership Processing Dept., IEEE Service Center, 445 Hoes Lane, Piscataway, NJ 08854-4141. Periodicals postage rate paid at New York, NY, and at additional mailing offices. Canadian GST #125634188. Canada Post Publications Mail Agreement Number 40013885. Return undeliverable Canadian addresses to PO Box 122, Niagara Falls, ON L2E 6S8. Printed in the USA. Circulation: IEEE Security & Privacy (ISSN 1540-7993) is published bimonthly by the IEEE Computer Society. IEEE Headquarters, Three Park Ave., 17th Floor, New York, NY 10016-5997; IEEE Computer Society Publications Office, 10662 Los Vaqueros Circle, Los Alamitos, CA 90720, phone +1 714 821 8380; IEEE Computer Society Headquarters, 2001 L St., Ste. 700, Washington, D.C. 20036. Subscribe to IEEE Security & Privacy by visiting www.computer.org/security. IEEE Security & Privacy is copublished by the IEEE Computer and Reliability Societies. For more information on computing topics, visit the IEEE Computer Society Digital Library at www.computer.org/csdl.
FROM THE EDITORS
What Was Samsung Thinking?
Susan Landau | Associate Editor in Chief
In February 2015, the press discovered that if Samsung's Smart TV voice recognition system is activated, the television sends voice commands to Samsung and then to a third-party provider for processing. Any other conversations that are overheard are also sent. The company's user manual explains:1

If you enable Voice Recognition, you can interact with your Smart TV using your voice. To provide you the Voice Recognition feature, some voice commands may be transmitted (along with information about your device, including device identifiers) to a third-party service that converts speech to text or to the extent necessary to provide the Voice Recognition features to you.
This is no different from how Siri works. Google Glass is slightly different; it transmits your voice commands to Google but apparently no further. Yet, years after those products launched, Samsung's Smart TV raised quite the brouhaha. The story was all over CNN, ABC, BBC, and the Washington Post, and there was a Congressional hearing. The Electronic Privacy Information Center filed a complaint with the Federal Trade Commission, seeking a prohibition on the product's recording and transmitting voice communications and a determination of whether Samsung violated the Electronic Communications Privacy Act, which prohibits "interception and disclosure of wire, oral, or electronic communications."2

One problem with the product is that Samsung misled its customers. It claimed the voice communications were encrypted, but researchers at Pen Test Partners found otherwise.3 Transmissions from the TV's early models were sent in the clear. Furthermore, Samsung doesn't appear to have addressed data collection security: how data would be transmitted and stored, who would have access to the data, how long it would be kept, and what third-party security practices would be. These security concerns are complicated, but they are ones that engineers have been
trained to tackle. The real issue is how we employ devices. This is different from the concern about the voice channel being hacked.
Privacy Expectations and Social Context

People use smartphones to make a call, check their email, or look for directions. Although smartphones transmit users' location to the network, the assumption is that if a phone isn't actively being used, only location information is transmitted. (Yes, we've all seen the television show in which a cell phone is surreptitiously used to tape a conversation. That's considered a misuse of the phone and is a violation of social protocol.) The same is true of Google Glass. In fact, when users ask Glass to perform an action such as taking a photo, they must say, "Okay, Glass," and a small red light comes on to indicate transmission.

When users give Samsung's Smart TV a voice command, there's no transmission unless users click an activation button on the remote control or TV screen and speak into the microphone on the remote control.4 The voice is transmitted only while the control system is activated. In this way, Samsung's voice recognition technology works much the same as Siri and Glass.

Asking Siri a question or saying "Okay, Glass" might be unnoticeable in a crowded room—many activities are. People assume they will be observed in busy, noisy spaces. But privacy is taken for granted at home. TVs are literally pieces of furniture that fade into the background (many times they are, in fact, on in the background). This mismatch is what caused the vehement reaction to Samsung's product. When we're busy chopping vegetables for dinner, helping a child with homework, negotiating tomorrow's chores, or getting ready for bed, we don't want—or anticipate—these discussions to be picked up along with "Recommend a good sci-fi movie" and sent to Samsung and its processing partner, Nuance Communications. The conversations might not be particularly confidential, but sharing
them makes us uncomfortable. This is exacerbated by the fact that, at home, we’re often not paying enough attention to notice when the Smart TV voice recognition system is in use. Televisions are in sensitive locations: living rooms, bedrooms, hotel rooms. They’re inevitably plugged in and often left on—sometimes just to tell time. That’s the concern. There’s a design disconnect going on. Engineers see Samsung’s voice recognition as an electronic system and consider efficiency and bandwidth. Anthropologists see Samsung’s Smart TV and recognize that Samsung’s voice recognition system is an artifact that transmits conversations from one social space to another. That shift of social space is surprising. Put this way, the problem is clear.
Designing Privacy into IoT Systems

It's obvious that Samsung didn't carefully think through the use case of Smart TVs. But there's a deeper issue. The Samsung Smart TV example points out how the Internet of Things (IoT) must transform the way we design Internet-mediated systems. The knotty issues concern the devices' social context. How do people perceive the space in which a smart device is being used? What are users' expectations for privacy in a room with a television? What is the expectation of the other people in range of a microphone—for instance, those oblivious to a television being on? These are the kinds of questions social scientists ask, and they're important in smart TV system design. They don't arise in the design of every IoT system—for example, sensors collecting soil humidity data don't raise large concerns about privacy. But they're important for IoT systems that potentially monitor people,
including their use and interaction with the system. This includes smart thermostats, smart lightbulbs, smart refrigerators, and smart meters. Samsung doesn't seem to have raised questions about how televisions function inside homes. Asking these types of questions is a crucial first step in building more privacy-protective IoT systems.

Learning that many, if not most, people don't want to transmit their background conversations to Samsung and third parties, the company must make tradeoffs when redesigning its Smart TV product. Can it build a system that computes the information locally? If such a solution is technically infeasible at present, does Samsung need to create a more in-your-face notice every time a conversation is being transmitted from the room?

When designing a communications device, it's natural to ask about communication expectations. What are the expectations about delivery, privacy, security, and accuracy? But IoT is different—we're talking about taking everyday devices and having them communicate on the network.

Consider another IoT concern—this one involving phones. Smartphone sensors can collect ambient noise that can be used to figure out where users are. This isn't GPS data—whether a user is at 17 South Maple Street—but information that locates users in a very different way: in a bar, at a meeting, in a car.5 Does this information have value? Sometimes. When someone is in a crowded bar, perhaps the phone volume should increase automatically. When someone is in a meeting, perhaps the phone should automatically switch to vibrate mode. When a person is in a car, perhaps texting capability should be shut off—though not when the user is a passenger. One could imagine good reasons for information determining locale
to be transmitted, but the privacy issues are quite serious. What mental model do people have about information their phones transmit about them? And what about the people nearby—for instance, those in the same meeting? How would they feel about the fact that someone else’s phone is transmitting information about them, even though it’s ostensibly not being used?
If we hope to provide an IoT world that's useful, rather than one in which people must shut off their devices to achieve a desired level of privacy, we must understand users' expectations of their devices. Samsung's Smart TV situation throws that into clear relief. This situation could be a wake-up call, causing companies to realize that they must delve into how devices actually function in peoples' lives, and then build accordingly. If so, Samsung's Smart TV will have taught a very useful lesson.
References
1. "Samsung Privacy Policy—SmartTV Supplement," Samsung, 10 Feb. 2015; www.samsung.com/us/common/privacy.html#smart.
2. 18 US Code § 2511—Interception and Disclosure of Wire, Oral, or Electronic Communications Prohibited, US Code Title 18, Part I, Chapter 119, 3 Jan. 2012.
3. D. Lodge, "Is Your Samsung TV Listening to You?," Pen Test Partners blog, 16 Feb. 2015; www.pentestpartners.com/blog/is-your-samsung-tv-listening-to-you.
4. "Samsung Smart TVs Do Not Monitor Living Room Conversations," Samsung Tomorrow, 10 Feb. 2015; http://global.samsungtomorrow.com/samsung-smart-tvs-do-not-monitor-living-room-conversations.
5. V. Narayanan, S. Nanda, and F. Shih, "Learning Situations via Pattern Matching," US patent application 20120265717 A1, US Patent Office, 18 Oct. 2012.
INTERVIEW
Editor: Gary McGraw, gem@cigital.com
Silver Bullet Talks with L. Jean Camp
Gary McGraw | Cigital
Hear the full podcast at www.computer.org/silverbullet. Show links, notes, and an online discussion can be found at www.cigital.com/silverbullet.
L. Jean Camp, professor in Indiana University's (IU) School of Informatics and Computing, discusses topics such as translucent security; risk management; and security-related issues affecting usability, economics, and senior citizens. (Full disclosure: Gary McGraw advises IU's School of Informatics and Computing as part of the Dean's Advisory Council.)

You've worked in diverse places, including Sandia, the Kennedy School, and Indiana University. What do those places have in common, and how do they fit in with your work focus?

The places I've worked deal with the same policy and technical issues that my research covers. For example, at Sandia, during the early days of cloud computing, we looked at the mechanics of trust and how, using the technology, you could safely distribute code across multiple machines while avoiding the spread of problems such as malware. The Kennedy School deals with public policy. I have researched the trustworthiness of computing infrastructures, which is an organizational and policy problem as well as a technical one. Indiana University has given me the opportunity to work with students from different disciplines and perspectives, such as social informatics and policy as well as computer science.

Let's talk about usability. It's a huge problem in technology and an even bigger one in security, as security configuration is an art in itself. So, what are the main research areas in security and usability, and are we making any progress?
One of the areas in which we've made a lot of progress is how you communicate security information to users once you have an infrastructure in place. It's not just about making warnings that people can understand; it's about making warnings that they care about.

That's a very nice distinction. Your work addresses how normal people interact with security. Can you talk about the translucent-security concept?
There’s this idea in usability that everything should be transparent. The idea is that there should be a direct response to your action. With real-world usability, you turn the steering wheel and the car turns in the direction that you turn the wheel. You move the mouse, and it draws a line. You type in two numbers, they appear. But you cannot do that in security because it’s inherently about risk and acceptable degrees of risk. It’s quite probabilistic. And people have a bad natural conception of risk, according to the latest results. So is that what the concept of translucent security is about?
We can’t make security completely transparent, because of the uncertainty and risk involved. But we can’t make it completely opaque, either, because then the system will respond to every activity that has any degree of risk with “permission denied.” For example, if you think about the Vista operating system, which deserved the abuse it got, Microsoft said, “We’re going to make this totally May/June 2015
Previous Page | Contents | Zoom in | Zoom out | Front Cover | Search Issue | Next Page
5 M q M q
M q
MqM q THE WORLD’S NEWSSTAND®
SECURITY& PRIVACY
IEEE
Previous Page | Contents | Zoom in | Zoom out | Front Cover | Search Issue | Next Page
M q M q
M q
MqM q THE WORLD’S NEWSSTAND®
INTERVIEW
About L. Jean Camp
L.
Jean Camp is a professor in Indiana University’s School of Informatics and Computing. Camp’s core interest is technical trust mechanisms in an economic and social context. She has worked at the US Department of Energy’s Sandia National Laboratories and Harvard University’s Kennedy School of Government. She has written three books, including Trust and Risk in Internet Commerce and Economics of Identity Theft, and has published 120 peer-reviewed articles.
transparent. You’re going to know every time you have a risk.” And obviously, that is completely overwhelming. So you need something in between—translucence—to clearly inform users of the risks involved in various activities as well as the choices and potential consequences. Software isn’t perfect. So software security is an exercise in technical risk management.
Yes, exactly. If you think about computing as being about trust, you start thinking about it as probabilistic and not certain. I try to understand the actual risk underlying an action or choice. I try to align the communication about this risk with users' mental model of the risk, because if you want people to respond to a warning, it must align with the stories they tell themselves. And it must align with their understanding of what the risk might be. You must clearly say what the issue is, what it means, and what actions people must take to remain safe.

In my experience, many users of consumer products have what you might call an implicit expectation of security but never really express explicit requirements. For example, people think their mobile phone keeps their conversations private, even though phones don't do that. So the user's implicit expectation is that they will have private
conversations, but they never say privacy is one of their important requirements. Are implicit expectations enough to move the market?
That is a huge question. I do not think they're enough to move the market. But I think security is a mature enough market for us to set some best practices and standards and to say, "If you are in this industry, these are the practices that you should follow."

Let's talk about another major branch of your work, which we already alluded to and which we're sort of talking about now: economics and computer security. What is this field all about, and what progress was made at the first few workshops?
The basic idea is if you want the security market to work, it has to have the elements of a working market. So, for example, you have to be able to say, "As a consumer, here's some clear information about my choices." This is clearly aligned with risk communication. What we can do is try to help people choose their own risk. And you need to design incentives from the bottom up for producers and industry to adopt security technologies, or it's just not going to work.

One of the huge challenges that we face in software security is not having any externalities. Measuring security is difficult in software
because all the things we do to make software secure while we’re building it represents only a shadow of what happens in the real world, postrelease. You can’t look at vulnerabilities found in the wild or at zero days and have a real sense of how secure a piece of software is. Are there answers on the horizon for situations where the externality measurement is so far removed from all the engineering that goes in front?
Security is a function of emergent use. When we got our first cell phones, they were designed to talk to people. Then, we had SMS. But now we have banking applications on phones that were never built for this kind of operation. I mean, we now have electronic wallets on the two major operating systems. And so we end up chasing our own success. But generally, that's how advances work. So, we have to learn to adjust to risk over time, as we've done in every other domain.

Maybe it's just a function of getting the market to be smarter about what it means to have secure or insecure software.
Yes, part of it is just the maturity of the industry.

Let's talk a little bit about senior citizens and security. Tell us about your work with [IU associate professor of computer science] Kay Connelly and seniors, and whether they care about security and privacy.
Seniors do care about security and privacy, and they are one of our most vulnerable populations. If you commit identity theft against one of the undergraduates here, they have a lifetime to recover. If you commit identity theft against an older person and access their retirement funds, they’re in a dire situation. Most of our usability research is done on undergraduates who have grown up with and are very comfortable with computers, not on May/June 2015
seniors, who are not necessarily as comfortable with them. Seniors often fall prey to scams. Making complex decisions gets hard, and making security decisions—which is already a subtle process—is difficult even for younger people.

Tell us about your field research with seniors regarding security and privacy.
I learned that you have to have a completely different approach talking about risk to older people. For example, I find the undergraduates I teach to be very optimistic about the risk their activities pose when on the Internet. That’s not the case with older people. It’s scary for them. And you have to make it very simple for them to avoid risk. We built a toolbar for students to put on their laptops, share comments about various websites, and provide valuable feedback about one another’s online activities. For example, one student could ask another, “This is not a website you’ve been to before—why are you entering a password?” Students liked it. But we showed it to older people, and they hated it.
We ended up building a cube with up and down arrows that users could press to show whether they liked or hated a website. If they visited what we thought was a risky site, the cube would turn red and display a very simple block image that asks, "Do you want to continue?" They loved it. But could you imagine students carrying that around? We've also done some work on what older people understand about phishing and certificates, and it is almost nothing. I mean, some people think certificates are amazingly all-powerful and some just don't know what they are. To help them, you don't explain what phishing is and what certificates are; you give them immediately actionable information such as "don't enter your password."

Let's change gears for one last question. What is your approach to teaching university students—from highly technical kids in computer science to those in other fields who are not so technical—about security and privacy?
For the younger undergraduates, I go through examples that they understand. I use very simple models that leverage their life experience, such as M&M's or Reese's Peanut Butter Cups. I talk about sharing Reese's Cups, pricing them, and how much you're willing to invest in this particular Reese's Cup right now. And then I relate this to the economics of security and privacy.
The Silver Bullet Podcast with Gary McGraw is cosponsored by Cigital and this magazine and is syndicated by SearchSecurity.
Gary McGraw is Cigital’s chief tech-
nology officer. He’s the author of Software Security: Building Security In (Addison-Wesley 2006) and eight other books. McGraw received a BA in philosophy from the University of Virginia and a dual PhD in computer science and cognitive science from Indiana University. Contact him at gem@cigital.com. ___________
Selected CS articles and columns are also available for free at http://ComputingNow.computer.org.
Subscribe today! IEEE Computer Society's newest magazine tackles the emerging technology of cloud computing: computer.org/cloudcomputing
www.computer.org/security
PUBLISHING COSPONSORS
EDITOR IN CHIEF
Shari Lawrence Pfleeger | Dartmouth College, shari.l.pfleeger@dartmouth.edu

ASSOCIATE EDITORS IN CHIEF
Terry Benzel | USC Information Sciences Institute
Robin Bloomfield | City University London
Jeremy Epstein | SRI International
Bill Horne | HP Labs
Susan Landau | Worcester Polytechnic Institute
Ahmad-Reza Sadeghi | TU Darmstadt

EDITORIAL BOARD
Iván Arce | Fundación Dr. Manuel Sadosky
George Cybenko* | Dartmouth College
Dorothy E. Denning | Naval Postgraduate School
Marc Donner | MSCI
Dieter Gollmann | Technical University Hamburg-Harburg
Feng Hao | Newcastle University
Carl E. Landwehr* | George Washington University and LeMoyne College
Roy Maxion | Carnegie Mellon University
Nasir Memon | Polytechnic University
Rolf Oppliger | eSECURITY Technologies
Sean Peisert | Lawrence Berkeley National Laboratory and University of California, Davis
Yan Sun | University of Rhode Island
Neeraj Suri | Darmstadt Technical University
John Viega | SilverSky
Taieb Znati | University of Pittsburgh
*EIC Emeritus

DEPARTMENT EDITORS
Attack Trends | Thorsten Holz, Ruhr-University Bochum
Basic Training | Richard Ford, Florida Institute of Technology; and Deborah A. Frincke, National Security Agency
Building Security In | Jonathan Margulies, Qmulos
Crypto Corner | Peter Gutmann, University of Auckland; David Naccache, École normale supérieure; and Charles C. Palmer, IBM
Education | Melissa Dark, Purdue University
Interview/Silver Bullet | Gary McGraw, Cigital
In Our Orbit | Alessandro Acquisti, Carnegie Mellon University; and Angela Sasse, University College London
It All Depends | Mohamed Kaâniche, French National Center for Scientific Research; and Aad van Moorsel, Newcastle University
Privacy Interests | Katrine Evans, Office of the Privacy Commissioner, New Zealand; and Deven McGraw, Manatt, Phelps & Phillips
Security & Privacy Economics | Michael Lesk, Rutgers University; and Jeffrey MacKie-Mason, University of Michigan
Systems Security | Patrick McDaniel, Pennsylvania State University; and Sean W. Smith, Dartmouth College
TECHNICAL COSPONSORS
IEEE Engineering in Medicine & Biology Society

COLUMNISTS
Last Word | Bruce Schneier, Resilient Systems; Steven M. Bellovin, Columbia University; and Daniel E. Geer Jr., In-Q-Tel

STAFF
Content Editor | Christine Anthony
Lead Editor | Brian Kirk
Staff Editors | Lee Garber and Meghan O'Dell
Publications Coordinator | security@computer.org
Production Editor/Webmaster | Monette Velasco
Production Staff | Mark Bartosik, Erica Hardison, and Jennie Zhu-Mai
Graphic Design | Alex Torres
Original Illustrations | Robert Stack
Director, Products & Services | Evan Butterfield
Sr. Manager, Editorial Services | Robin Baldwin
Content Development Manager | Richard Park
Sr. Business Development Manager | Sandy Brown
Membership Development Manager | Erik Berkowitz
Sr. Advertising Coordinator | Marian Anderson, manderson@computer.org

CS MAGAZINE OPERATIONS COMMITTEE
Forrest Shull (chair), Nathan Ensmenger, Mazin Yousif, Miguel Encarnacao, George K. Thiruvathukal, Sumi Helal, Brian Blake, Daniel Zeng, San Murugesan, Lieven Eeckhout, Yong Rui, Maria Ebling, Shari Lawrence Pfleeger, Diomidis Spinellis

CS PUBLICATIONS BOARD
Jean-Luc Gaudiot (VP for Publications), Forrest Shull, Ming C. Lin, Alfredo Benso, David S. Ebert, Alain April, Laxmi Bhuyan, Greg Byrd, Robert Dupuis, Linda I. Shafer, H.J. Siegel

EDITORIAL OFFICE
IEEE Security & Privacy
c/o IEEE Computer Society Publications Office
10662 Los Vaqueros Circle, Los Alamitos, CA 90720 USA
Phone | +1 714 821-8380; Fax | +1 714 821-4010
Editorial | Unless otherwise stated, bylined articles, as well as product and service descriptions, reflect the author's or firm's opinion. Inclusion in IEEE Security & Privacy does not necessarily constitute endorsement by the IEEE or the IEEE Computer Society. All submissions are subject to editing for style, clarity, and length.

Submissions | We welcome submissions about security and privacy topics. For detailed instructions, see the author guidelines (www.computer.org/security/author.htm) or log onto IEEE Security & Privacy's author center at ScholarOne (https://mc.manuscriptcentral.com/cs-ieee).

Reuse Rights and Reprint Permissions | Educational or personal use of this material is permitted without fee, provided such use: 1) is not made for profit; 2) includes this notice and a full citation to the original work on the first page of the copy; and 3) does not imply IEEE endorsement of any third-party products or services. Authors and their companies are permitted to post the accepted version of IEEE-copyrighted material on their own Web servers without permission, provided that the IEEE copyright notice and a full citation to the original work appear on the first screen of the posted copy. An accepted manuscript is a version which has been revised by the author to incorporate review suggestions, but not the published version with copyediting, proofreading, and formatting added by IEEE. For more information, please go to www.ieee.org/publications_standards/publications/rights/paperversionpolicy.html. Permission to reprint/republish this material for commercial, advertising, or promotional purposes or for creating new collective works for resale or redistribution must be obtained from IEEE by writing to the IEEE Intellectual Property Rights Office, 445 Hoes Lane, Piscataway, NJ, USA 08854-4141 or pubs-permissions@ieee.org. Copyright © 2015 IEEE. All rights reserved.

Abstracting and Library Use | Abstracting is permitted with credit to the source. Libraries are permitted to photocopy for private use of patrons, provided the per-copy fee indicated in the code at the bottom of the first page is paid through the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923.

IEEE prohibits discrimination, harassment, and bullying. For more information, visit www.ieee.org/web/aboutus/whatis/policies/p9-26.html.
SECURITY ON TAP
Information Security Compliance over Intelligent Transport Systems: Is IT Possible?
Kleanthis Dellios, Dimitrios Papanikas, and Despina Polemi | University of Piraeus
The automotive industry has used emerging technological concepts to transform automobiles into sophisticated cyber platforms. However, this extended technological evolution hides multiple security risks capable of affecting both the intelligent transportation domain and human life itself.
Every government takes advantage of cooperation between industrial and technological domains to achieve social and economic prosperity. The automotive industry's use of information and communications technology (ICT), for example, has triggered the evolution of the global phenomenon known today as intelligent transport systems (ITSs; http://sites.ieee.org/itss). Any disruption or disconnection in modern transport operations makes the entire ITS structure vulnerable at every level—architectural, national, regional, global, physical, cyber, and human. Unfortunately, cyberthreats and attacks are a spreading menace for all critical infrastructure (CI), including the ITS domain.1 Thus, ITS safety and security are major concerns when selecting security management (SM) frameworks to apply in the delicate yet complex cyber environment, in which robustness against attacks must be ensured.

However, SM and critical infrastructure protection (CIP) standards and frameworks don't adequately address the various cascading effects associated with modern ITS security incidents. More specifically, existing security standards, methodologies, and tools don't explicitly cover the rapidly developed ITS environment or address its specific large-scale requirements. Consequently, cyberthreats and attacks can't be fully identified, which means malicious actions can't be prevented, thereby creating critical ITS security gaps. We seek to raise awareness of cybersecurity issues and concerns in
the ITS arena by presenting an extensive threat analysis of the ITS environment.
Vulnerabilities and the Efforts to Counter Them

The ITS environment consists of multiple heterogeneous and complex layers of physical infrastructures (highways, roads, platforms, vehicles, and buildings), IT infrastructures (hardware equipment, ITS stations, onboard/roadside units, and nomadic devices), communication systems (networks and transmissions), ITS services and applications (cooperative and safety), and users (drivers, passengers, administrators, and security officers or managers). To provide intelligent and real-time services in such a highly mobile environment, the ITS domain collects, processes, and returns data from all participating entities, forcing the in-vehicle network architecture used for data exchange among the electronic control units (ECUs) to evolve accordingly.

Until recently, there was no need to secure automotive buses: they weren't connected to the user domain or the vehicle's exterior, so the ITS domain wasn't fully connected (and therefore not vulnerable). But as technology evolves, so do threats and vulnerabilities, in spite of the countermeasures applied by ITS architects, security experts, manufacturers, and vendors. Consequently, in less than a decade, numerous security breaches have specifically targeted the in-vehicle network architecture (http://spectrum.ieee.org/tech-talk/computing/embedded-systems/cars-the-next-victims-of-cyberattacks).
Glossary of Abbreviations
AUTOSAR: Automotive Open System Architecture
CCTA: Central Computing and Telecommunications Agency
CI: Critical infrastructure
CIA: Confidentiality, integrity, availability
CIP: Critical infrastructure protection
CIPMA: CIP Modeling and Analysis
CISA: Certified information systems auditor
CISM: Certified information security manager
CISSP: Certified information systems security professional
CRAMM: CCTA Risk Analysis and Management Method
DoS: Denial of service
EBIOS: Expression des Besoins et Identification des Objectifs de Sécurité
ECU: Electronic control unit
EPCIP: European Programme for Critical Infrastructure Protection
EVITA: E-safety Vehicle Intrusion Protected Applications
ICM: Immobilizer control module
ICS: Information and communications systems
ICT: Information and communications technology
ISM: Information security management
ISMS: Information security management system
ISO: International Organization for Standardization
ITS: Intelligent transport system
NIST: National Institute of Standards and Technology
NoW: Network on Wheels
NSRAM: Network Security Risk Assessment Modeling
OCTAVE: Operationally Critical Threat Asset and Vulnerability Evaluation
OBD-II: On-board diagnostic type II
OVERSEE: Open Vehicular Secure
PRESERVE: Preparing Secure V2X Communication Systems
PRECIOSA: Privacy Enabled Capability in Cooperative Systems and Safety Applications
SeVeCOM: Secure Vehicular Communication
SM: Security management
V2X: Vehicle to anything
The first wave of cyberattacks showed how severe the impact could be on automotive bus systems.2,3 One study considered in-vehicle communication, introducing the concept of authenticating components that communicate over the controller area network (CAN) via message authentication codes (MACs), as sketched below.4 Even though this research suggested the use of encryption algorithms, mutual authentication of ECUs wasn't covered until recently.5 McAfee (www.mcafee.com/us/resources/reports/rp-caution-malware-ahead.pdf) and AUTOSAR (www.eetimes.com/document.asp?doc_id=1278566) also indicated vital automotive security issues (www.youtube.com/watch?v=BA2J7O6cqzQ), highlighting the necessity of securely programming ECUs and the immobilizer control module (ICM) so that they're controlled only by authorized entities via specific sets of cryptographic techniques.

Automotive units and components accumulate multiple unstructured and unused data functions or variants due to heterogeneous cryptographic routines.6 One example of the gap created by component incompatibility and diversity is the KeeLoq algorithm.7 The inefficient implementation and exploitation of cryptographic primitives in KeeLoq has led to key recovery using methods such as man-in-the-middle attacks. Another vital security issue related to authorized vehicle ignition is immobilizer jamming, a primitive form of electronic warfare that entails interfering with a device's signals or blocking its receiver. RFID-Zappers can permanently deactivate passive RFID chips, making it possible for hackers to launch a denial-of-service (DoS) attack. Pending security concerns also include insufficient authentication between users and the ICM, preventing valid users from being authenticated to both their key fob and their vehicle.

In addition, researchers recently proved that vehicles can be remotely exploited and taken over via short- and long-range attacks thanks to poorly designed and implemented automotive security.2 Short-range wireless attacks happen through the vehicle's OBD-II diagnostic port, remote keyless entry, RFIDs, Wi-Fi, Bluetooth, and infotainment system, whereas long-range attacks exploit broadcast and addressable channels.8 Researchers have also successfully tracked vehicles and misled their drivers into believing the car suddenly had a mechanical problem by exploiting the wireless tire pressure–monitoring system.9

Beyond the vehicles themselves, the overall ITS environment is a ripe research area, with the roadside infrastructure responsible for traffic management and tolling systems serving as a target of numerous successful attack experiments. Recent works based their attacks on reverse-engineering techniques and found that no cryptographic security was present in the ITS.10,11

In response, the European Commission has funded several framework programs for ITS security, particularly in the communication layer. The lack of active safety applications within the infrastructure and between vehicles prompted the Network on Wheels (NoW) project (2004–2008). Researchers developed communication protocols and security algorithms for intervehicle ad hoc communication systems and provided solutions for position-based routing and forwarding protocols, data security in vehicular ad hoc networks, and secure and fast communication between vehicles.
Due to the numerous issues involved in identifying the ITS threats, rectifying the contradictions between liability and privacy, determining the mechanisms that offered the right level of protection, and defining the cryptographic primitives that take into account a specific operational environment, the Secure Vehicular Communication (SeVeCOM) project (2006–2010) was launched. The project's major objectives were to define, design, and implement the security and privacy requirements needed for vehicular communications—specifically the need to overcome challenges such as the sporadic secure connectivity created by moving vehicles and the resulting real-time constraints.

The lack of a privacy-aware architecture for cooperative systems that involve suitable trust models, along with the lack of a vehicle-to-anything (V2X) privacy verifiable architecture, motivated the Privacy Enabled Capability in Cooperative Systems and Safety Applications (PRECIOSA) project (2008–2010), which aimed to address questions about whether cooperative systems can comply with future privacy regulations. It also defined cooperative systems' evaluation and guidelines in terms of communication and data storage privacy.

The E-safety Vehicle Intrusion Protected Applications (EVITA) project (2008–2011) is regarded as state of the art in the automotive security and ITS domain because it presented the novel implementation of a traffic encryption architecture via secure and efficient encryption algorithms. But even though the developed architecture was designed to be compliant with the AUTOSAR framework, the infrastructure doesn't provide stream ciphers, and the hardware security modules focus primarily on V2X communications rather than the overall system. However, EVITA provided both automotive specialists and the research community with an innovative approach to safety and security.12

A year before EVITA ended, researchers launched the Open Vehicular Secure (OVERSEE) platform project (2010–2012) to develop a platform that could provide secure, standardized communication for vehicles and a secure single point of access to internal and external communication channels.

The need to design and implement a secure and scalable V2X security system for realistic deployment scenarios also ignited the Preparing Secure V2X Communication Systems (PRESERVE) project (2011–2014), which led to the design of an integrated V2X security architecture that provides a ready-to-use V2X security subsystem implementation.

These projects addressed many cooperative, safety, and even security-related issues. Unfortunately, none accurately predicted the security needs of today or can prevent cyberattacks before they occur.
Threat Analysis

ITS research projects have produced numerous publications about secure communication, secure cooperative services, and automotive safety concerns. Summarizing and assessing major recorded attacks and vulnerabilities and categorizing threats furthers productive analysis. We can categorize most attacks described in the literature to date into the following broad categories:

■ DoS. Cooperative transport systems deployed for safety rely on information gathered from other ITS entities. DoS attacks can therefore have large-scale effects, especially if data utilization (maps, navigation data, multimedia data, software updates, and so on) is denied or inconveniences the user.
■ Data alteration. In an ITS, data alteration can include both modified and false communication, even though there's a thin line between the two. When drivers become accustomed to a cooperative system, they consider it trustworthy and reliable. If the system receives faulty data, it can detect the modification and either reject or accept it, which could result in drivers trusting data that isn't trustworthy. Novel solutions such as digital signatures or biometrics might not yet be completely optimal for the automotive industry, but the research in this area is promising.
■ Falsified entities. With rogue ECUs, there's a high probability of easily detecting anything out of order (for example, through automotive logs or diagnostics). Unfortunately, detecting fake or falsified ITS entities isn't that simple because they can reside anywhere in the world, connected to any ITS network, affecting stable functionality.
■ Injected authentication data. Combined with the threat of falsified ITS entities, injected authentication data can lead to multiple unwanted events that could endanger human life.
■ Intercepted communication data. The interception of ITS communication data, including personal and transportation data and information, is a threat to the transportation domain that needs to be addressed. Interception of location-related data, for instance, via GPS, is also a concern.

Keeping security threats at bay has been and will continue to be one of security experts' most critical responsibilities. Threat classes are triggering in-depth investigations into well-known information security standards, methodologies, and tools to help these experts recognize, identify, and foresee advanced ITS security gaps.
Evaluation

An ITS can be considered a CI due to its large-scale ICT infrastructures. Any disruption or unavailability
of its information and communications systems (ICSs) capabilities might have disastrous consequences on the whole domain, including human life. ICS robustness against modernized cyberattacks is therefore the key challenge that an ITS must face.

Information SM (ISM) is essential for the day-to-day operations of any modern ICT-based domain, and an ITS is no exception. Any ITS ISM approach must ensure the confidentiality, integrity, and availability (CIA) of incoming information for each ITS entity. If one of these three tenets is violated, the whole ITS structure can be compromised:

■ Confidentiality. Information isn't made available or disclosed to unauthorized ITS individuals, entities, or processes.
■ Integrity. The information asset's accuracy and completeness is safeguarded.
■ Availability. The information is accessible and usable on demand by an authorized ITS entity.

Working from this golden security triangle, researchers have developed multiple security standards, methodologies, and tools to identify and assess security vulnerabilities of micro-, small-, medium-, large-, or very large–scale ICSs and CI. The main question is whether the same security standards, methodologies, and tools can be fully, partially, or not implemented in an ITS.

Security Standards and Best Practices

Both ISO 27001:2013 and ISO 27001:2005 specify the requirements for establishing, implementing, monitoring, reviewing, maintaining, and improving an information security management system (ISMS). The ISMS is the overall management and control framework for managing the environment's information security risks. The ISO/IEC 27002:2005 standard comprises ISO/IEC 17799:2005 and establishes guidelines for implementing the ISMS (initiating, maintaining, and improving) in an environment; the risk management process is described by ISO 27005:2008. In addition, the NIST SP 800-30 risk management standard is an open guide that provides the fundamentals for developing an effective risk management sequence. It was developed to help large-scale facilities, with the applied risk management methodology encompassing three baseline processes—risk assessment, risk mitigation, and evaluation and assessment.

The ISO family is designed to cover large-scale organizations and environments, but it isn't considered suitable for smaller scales due to cost. Furthermore, it's a compliance standard, reporting a list of security controls without providing any specific technique or method to analyze. Nevertheless, it proposes the use of both quantitative and qualitative methods for calculating risk.

The NIST standard, on the other hand, is compliant with the ISO family and addresses all the requirements for establishing and implementing the ISMS. The standard adopts extensive technical and operational questionnaires that require a variety of users to participate in the process, but the concept to create the treatment plan is limited in support. In addition, the methods used for calculating risk aren't well-defined, reducing the standard's applicability for an integrated and automated approach.

Methodologies and Approaches

The Operationally Critical Threat Asset and Vulnerability Evaluation (OCTAVE) method is a systematic approach to information security risk evaluation that embeds a large set of criteria. It was designed with larger environments in mind, but a targeted method for small-scale environments has been developed, too. Unfortunately, OCTAVE uses a primitive approach for risk analysis that's based on a qualitative scale, so it isn't possible to integrate an advanced technique for analyzing and combining cooperative knowledge-based ITS environments.

The Expression des Besoins et Identification des Objectifs de Sécurité (EBIOS) is another risk management methodology for assessing and treating risks. EBIOS is easy to grasp, adapt, deploy, and apply to the subject or environment studied because it's based on a qualitative approach and incorporates collaborative abilities. However, the lack of an advanced computational method for correlating and determining results is its big disadvantage. On the plus side, it provides an open source tool that integrates all risk analysis and management steps defined in EBIOS phases.

The Central Computing and Telecommunications Agency's (CCTA's) Risk Analysis and Management Method (CRAMM) was developed to assist large environments similar to ITSs. This methodology uses the ICS's risk analysis to identify security requirements and detect possible solutions. CRAMM is based on a qualitative approach but with limited collaborative capabilities
due to the low involvement of users at the time of the actual assessment. Analysts should have a high level of skills and experience in gathering and analyzing information to identify threats and vulnerabilities. Nevertheless, CRAMM complies with the rules and obligations imposed in the ISO family of standards.
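To make the risk calculations these standards and methods reference concrete, here is a minimal sketch of the kind of likelihood-impact scoring they formalize. The scales and thresholds are illustrative assumptions only; none of ISO 27005, NIST SP 800-30, OCTAVE, EBIOS, or CRAMM prescribes these exact values.

```python
# Illustrative 5x5 risk matrix; scales and thresholds are assumptions,
# not values prescribed by any of the standards discussed above.
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "certain": 5}
IMPACT = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}

def risk_score(likelihood: str, impact: str) -> int:
    """Classic quantitative shortcut: risk = likelihood x impact."""
    return LIKELIHOOD[likelihood] * IMPACT[impact]

def risk_level(score: int) -> str:
    if score >= 15:
        return "high"     # treat immediately
    if score >= 8:
        return "medium"   # plan mitigation
    return "low"          # accept or monitor

# Example: spoofed roadside messages judged "likely" with "major" impact.
score = risk_score("likely", "major")
print(score, risk_level(score))  # 16 high
```

The static table is exactly the limitation highlighted here: in a highly mobile ITS, likelihood estimates would need continuous, automated updating rather than a one-off, interview-driven assessment.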
CIP Frameworks

The European Programme for Critical Infrastructure Protection (EPCIP) sets the overall framework for activities aimed at improving CIP in Europe with the CIP/Decision Support System. The CIP Modeling and Analysis (CIPMA) framework evaluates the effects on CI service operation and disruption within CI sectors. The Network Security Risk Assessment Modeling (NSRAM) application, deployed under the CIP program, enables the assessment of both cyber and physical infrastructure security risks and analyzes interconnected multi-infrastructure network events. However, the majority of CIP guidelines, standards, methodologies, and tools originate from the energy sector and address electrical energy infrastructure protection issues. Unfortunately, there isn't a standardized way to examine the criticality of the ITS infrastructure or address possible cyberthreats.

Generally speaking, the security standards presented here don't address the auditing of complex and heterogeneous IT facilities and systems. Moreover, their current forms don't support ITS security management. They all require a continuous and systematic effort for identifying and handling security risks and for applying appropriate security measures and controls—which is in direct conflict with the ITS environment's continuous, real-time, and highly mobile nature. Even if we transform and combine widely adopted security best practices, methodologies, and risk and vulnerability assessment tools, these can be used only for static rather than dynamic environments like the ITS.
Information Security Compliance

Due to ICT's dynamic nature, ITSs have evolved into complex entities with extensive influence on their surroundings and multiple Security on Tap. IT security awareness over the ITS domain must be raised before the industrialized world fully implements emerging and novel ICT concepts—without proper guidance for safety and security compliance, many lives could be lost. Security is the main technical challenge; if not properly implemented, or neglected altogether, it could severely impact ITS network deployment. Trust and privacy among ITS entities, whether they're users or stations, should be guaranteed—only authorized entities should be able to participate in communication transactions, for example.
Privacy and safety must always be under consideration as well. Safety for users includes vehicles, drivers, and passengers, whereas privacy is about the secure transfer of data through communication channels and provided services. Because ITS and communication networks are prone to a wide range of cyberattacks, a trusted infrastructure handling secure data transfers is crucial, with authentication, authorization, and access control being the key countermeasures. Allowing only certified and authorized mobile nodes to connect can prevent network attacks and service disruptions. A first step in achieving such an infrastructure is to run a detailed CIA and authenticity threat analysis. ITS confidentiality is related to ■ transaction data threatened by eavesdropping attacks, ■ the collection of location-based information through traffic analysis of the messages, and ■ masquerade attacks via traffic analysis. ITS integrity is related to ■ unauthorized access to restricted information (masquerade or malware injection attacks), ■ unauthorized access to restricted information and loss of information, and ■ corruption or manipulation of information exchanged among ITS entities and communication layers. ITS availability is related to DoS attacks as a result of ■ injection of malware to artificially generate a high volume of false messages, ■ high volume of messages through spamming, thereby affecting the exchanging capabilities of an ITS, and ■ accidentally generating a high volume of false I/O messages for multihop broadcast messaging. Finally, ITS authenticity is related to ■ protection of legitimate ITSs from masquerade attacks, ■ replay of messages at a similar location but a different time, ■ replay of messages at a different location and a different time (wormhole attack), and ■ protection against fraudulent broadcast messages. Summarizing these security issues, the ITS challenges to be resolved include ■ verifying ITS entities’ authentication and the integrity of messages exchanged due to the high mobility of nodes, 13
■ ensuring message confidentiality using well-known and strong cryptographic algorithms,
■ ensuring that the ITS applications are providing secure location-based information,
■ securing data access to protect stored data against unknown and unauthorized access (security and privacy), and
■ scalability and interoperability (in spite of a dynamic topology and high mobility, reliable message exchange should be satisfied).
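To make the preceding taxonomy concrete, here is a minimal Python sketch of how an auditor might encode it. The severity scores and the 1-to-5 scale are illustrative assumptions, not values taken from any standard or from this article.

from dataclasses import dataclass

@dataclass
class Threat:
    goal: str      # security property at risk
    attack: str    # attack class from the taxonomy above
    severity: int  # placeholder score on an assumed 1-to-5 scale

ITS_THREATS = [
    Threat("confidentiality", "eavesdropping on transaction data", 3),
    Threat("confidentiality", "location tracking via traffic analysis", 4),
    Threat("integrity", "masquerade or malware-injection access", 5),
    Threat("integrity", "corruption of exchanged information", 4),
    Threat("availability", "DoS via injected or spammed false messages", 4),
    Threat("authenticity", "message replay or wormhole attack", 5),
]

def audit_first(threats, min_severity=4):
    """Return the threats a first audit pass should prioritize."""
    return [t for t in threats if t.severity >= min_severity]

for t in audit_first(ITS_THREATS):
    print(f"{t.goal}: {t.attack} (severity {t.severity})")

Even this toy encoding shows why the static tools mentioned earlier fall short: in an ITS, the severity values would have to be recomputed continuously as nodes move, which is precisely the real-time requirement the existing methodologies don’t support.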
However, the ITS security challenges we face aren’t limited to the ones listed here. As we enter the modernized digital era of smart grids, ubiquitous mobility, and green technology, today’s approaches don’t promote efficient collaboration among all involved parties. Specifically, they fail to address sector-specific requirements or interdependencies across physical, cybernetic, and communication layers. The necessity of managing ITS environment heterogeneity should be considered in the flexible development and deployment of anything related to mobility, including secure protocol architectures and networking.

Dynamic, robust, and sustainable architectures must be engineered to deal with the stress created by ITS mobility. Web services and virtualization techniques can efficiently face this upcoming challenge but, if not properly implemented, they are vulnerable to existing and new families of cyberattacks. Real-time information monitoring can be integrated as information security standards, methodologies, and tools are upgraded, overriding the “instance issue” mentioned earlier. Existing IT security methodologies fail to handle this level of complexity because they rely on primitive methods to evaluate risks and use ineffective qualitative risk analysis techniques, owing to the lack of reliable quantitative data.13 Current risk analysis methodologies rely on interviews to aggregate assessment information, so an audit can’t be conducted without the presence of experts, which translates into large financial costs and lost time.

Consequently, threats caused by the relationships among ITS entities and layers not only cause severe cascading effects but also aren’t foreseen (and thus planned for) in any existing directive. Until now, the US Homeland Security Presidential Directive was the only existing plan addressing interoperability and compatibility issues among involved ITS entities. Analysis of automotive safety and security appears in other efforts,5,12,14 including the issue of how and whether to include safety and security considerations both in development processes and in standards currently handled by the TC22/SC3/WG16 committee under the development of ISO 26262.

In addition, we suggest using a four-axis strategy to address the issue of ITS safety and security. First, the “construction” of a novel security methodology, including a risk analysis and vulnerabilities assessment for the ITS’s ISM, must implement

■ threat modeling and design reviews as the two major resilience processes during the design phase;
■ real-time and systematic analysis of the whole spectrum of cyberthreats to determine direct, interconnected, and interdependent threats targeted at the ITS and its entities;
■ certifications such as CISA, CISM, and CISSP, which can function as broad surveys of information security from academic and practical perspectives when designing ITS security and policies; and
■ alignment of asset, threat, vulnerability, and risk severity scales, which will provide the advantage of continuous updates and sophisticated upgrades (the resulting tool will not only be user friendly but will also meet the needs of real-time monitoring); a sketch of such scale alignment follows this strategy.

Second, penetration testing must be implemented in the ITS to ensure its robustness against modernized cyberattacks and to properly complete the life cycle of information security actions. Third, fault detection and isolation techniques can be applied not only in the aerospace domain in which they originated but also in the automotive industry and in ITS designs, owing to the extensive use of sensors that can form an intrusion detection system (IDS). An IDS can annihilate a large set of incoming cyberattacks on unprotected ITS communications and networks. Finally, security protocols for in-vehicle communications5 and components12,14 must be redefined and redesigned. This step can lead to a vehicle’s secure entry into an ITS environment without being compromised by cyberthreats or attacks. Security engineers and officers must cooperate to develop various levels of sensitivity to easily formalize the security policy complexity and to protect data and metadata.15
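As a rough illustration of the scale-alignment point in the first axis, the following Python sketch normalizes differently sized asset, threat, and vulnerability scales onto one shared severity scale. The native scale sizes, the 1-to-5 target scale, and the geometric-mean combining rule are assumptions made for the example, not prescriptions from any methodology cited here.

def normalize(value, scale_max, target_max=5):
    """Map a score from its native scale onto a shared 1..target_max scale."""
    return max(1, round(value * target_max / scale_max))

def risk_severity(asset_value, threat_likelihood, vulnerability,
                  scales=(10, 3, 4)):
    """Combine three differently scaled inputs into one 1-to-5 severity."""
    a = normalize(asset_value, scales[0])        # assumed 1-10 asset scale
    t = normalize(threat_likelihood, scales[1])  # assumed 1-3 threat scale
    v = normalize(vulnerability, scales[2])      # assumed 1-4 vulnerability scale
    # A geometric-mean-style combination keeps the result on the shared scale.
    return round((a * t * v) ** (1 / 3))

# Example: a high-value roadside unit, medium threat, serious vulnerability.
print(risk_severity(asset_value=9, threat_likelihood=2, vulnerability=3))

Once the scales agree, updating any one of them (for example, when new threat intelligence arrives) leaves the rest of the tooling untouched, which is what makes the continuous updates mentioned above practical.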
Unlike pure data security in cyberspace, security solutions proposed for ITSs must consider several layers, and any gaps in those layers must be addressed properly. Without security, false alerts can be created and attackers could exploit safety applications, thus endangering the driver. The ITS security issues we presented here are attracting considerable attention from both the research community and the automotive industry. Perhaps one of the more enduring challenges will be how to apply a holistic security solution to the ITS backbone. Future publications of research projects will hopefully address this challenge. Until then, the security of in-vehicle systems remains an open question, too.

References
1. E. Fok, “An Introduction to Cybersecurity Issues in Modern Transportation Systems,” Inst. Transportation Engineers J., vol. 83, no. 7, 2013, pp. 18–21.
2. K. Koscher et al., “Experimental Security Analysis of a Modern Automobile,” Proc. IEEE Symp. Security and Privacy, 2010, pp. 447–462.
3. T. Hoppe, S. Kiltz, and J. Dittmann, “Security Threats to Automotive CAN Networks: Practical Examples and Selected Short-Term Countermeasures,” Proc. 27th Int’l Conf. Computer Safety, Reliability, and Security, 2008, pp. 235–248.
4. D.K. Nilsson and U.E. Larson, “Efficient In-Vehicle Delayed Data Authentication Based on Compound Message Authentication Codes,” Proc. 68th IEEE Vehicular Technology Conf., 2008, pp. 1–5.
5. C. Patsakis, K. Dellios, and M. Bouroche, “Towards a Distributed Secure In-Vehicle Communication Architecture for Modern Vehicles,” Computers & Security, vol. 40, 2013, pp. 60–74.
6. R. Verdult, F. Garcia, and J. Balash, “Gone in 360 Seconds: Hijacking with Hitag2,” Proc. 21st USENIX Security Symp., 2012; www.usenix.org/system/files/conference/usenixsecurity12/sec12-final95.pdf.
7. T. Eisenbarth et al., “Physical Cryptanalysis of KeeLoq Code Hopping Applications,” Int’l Assoc. Cryptologic Research (IACR), 2008; http://eprint.iacr.org/2008/058.
8. C. Szilagyi and P. Koopman, “Flexible Multicast Authentication for Time-Triggered Embedded Control Network Applications,” Proc. IEEE/IFIP Int’l Conf. Dependable Systems and Networks, 2009, pp. 165–174.
9. I. Rouf et al., “Security and Privacy Vulnerabilities of In-Car Wireless Networks: A Tire Pressure Monitoring System Case Study,” Proc. 19th USENIX Conf. Security, 2010; www.usenix.org/legacy/events/sec10/tech/full_papers/Rouf.pdf.
10. A. Barisani and D. Bianco, “Hijacking RDS-TMC Traffic Information Signals,” Phrack Magazine, vol. 64, 2007; http://phrack.org/issues/64/5.html#article.
11. N. Lawson, “Highway to Hell: Hacking Toll Systems,” Black Hat USA, 2008; https://archive.org/details/blackhat2008usaaudio.
12. B. Czerny, “System Security and System Safety Engineering: Differences and Similarities and a System Security Engineering Process Based on the ISO 26262 Process Framework,” SAE Int’l J. Passenger Cars—Electronic and Electrical Systems, vol. 6, no. 1, 2013, pp. 349–359; http://papers.sae.org/2013-01-1419.
13. H.-R. Nielson and F. Nielson, “Safety versus Security in the Quality Calculus,” Theories of Programming and Formal Methods, LNCS 8051, Springer, 2013, pp. 285–303.
14. C. Patsakis and A. Solanas, “Privacy-Aware Event Data Recorders: Cryptography Meets the Automotive Industry Again,” IEEE Comm., vol. 51, no. 12, 2013, pp. 122–128.
15. L. Pietre-Cambacedes and M. Bouissou, “Cross-Fertilizations between Safety and Security Engineering,” Reliability Engineering & System Safety, vol. 110, 2013, pp. 110–126.

Kleanthis Dellios is a certified information security management system (ISMS) auditor whose main areas of interest are information security management and critical infrastructure protection (automotive and maritime information systems), biometrics, and cloud computing technology. Dellios received a PhD in information security from the University of Piraeus. Contact him at kdellios@unipi.gr or kledellios@gmail.com.
Dimitrios Papanikas is a PhD candidate in the Department of Informatics at the University of Piraeus. He’s also a certified ISMS auditor whose main areas of interest are software and network engineering, information security, and penetration testing technologies. Papanikas received an MSc in advanced information systems from the University of Piraeus. Contact him at papanik@unipi.gr.
Despina Polemi is an associate professor in the Department of Informatics at the University of Piraeus. Her research interests include coding theory, cryptology, and information systems, network, and application security. Polemi received a PhD in applied mathematics (coding theory) from The City University of New York. Contact her at dpolemi@unipi.gr.
Letters for the editor? Please email your comments or feedback to editor Brian Kirk (bkirk@computer.org). All letters will be edited for brevity, clarity, and language.
SECURITY ON TAP
India’s Cybersecurity Landscape: The Roles of the Private Sector and Public–Private Partnership

Nir Kshetri | University of North Carolina at Greensboro
Are public–private partnerships an appropriate means of dealing with underdeveloped cybersecurity-related institutions in India? Whereas the government lacks the resources and expertise to develop new templates, monitor industry behaviors, and enforce laws, trade associations are likely to have more experience and well-focused priorities in these areas.
In July 2013, in response to domestic and international pressure to enhance cybersecurity measures, the government of India released the National Cyber Security Policy (NCSP; http://deity.gov.in/content/national-cyber-security-policy-2013-1), which set forth 14 objectives that included enhancing the protection of critical infrastructure and developing 500,000 skilled cybersecurity professionals in the next five years. A key component of NCSP is the development of public–private partnership (PPP) efforts to enhance the cybersecurity landscape. PPPs are especially well-suited for areas that require diverse types of expertise and knowledge to address complex problems, including cybersecurity.1 In this article, I provide insight into various constraints the Indian government faces in strengthening cybersecurity and examine the private sector’s role in this area.
Background
India’s economy and the government’s limited resources have given rise to self-regulatory bodies in the private sector.
Economic Issues
Two key features of the Indian economy affect its cybersecurity posture. First, owing to the rapidly growing IT and business process management (IT&BPM) sector and its various data breaches, the country is facing
unprecedented pressure from foreign offshoring clients and Western governments to strengthen cybersecurity. In 2011, the US and India signed a memorandum of understanding to promote cybersecurity-related cooperation and information exchange. In bilateral talks, the US emphasized India’s need for capacity building in cybersecurity, especially in cybercrime detection and investigation. Because India is a major offshoring destination for back offices and other high-value business functions, the cybersecurity orientation of Indian businesses has been an issue of pressing concern to US and other Western businesses.

Second, the Indian government severely lacks the resources to develop and enforce criminal and cybersecurity-related regulations, standards, and guidelines. For instance, in 2011, the police cybercrime cell of Delhi had only two inspectors. In 2012, the Delhi High Court noted the Delhi police website’s lack of functionality, calling it “completely useless” and “obsolete.”2 Until 2010, there wasn’t a single cybercrime-related conviction in Bangalore, the country’s biggest offshoring hub. One law enforcement officer attributed the low conviction rates to the police’s lack of technical skills, knowledge, and training in collecting evidence.3 For instance, when a police officer was asked to seize a hacker’s computer, he brought in the monitor. In another case, the police seized the CD-ROM drive from a hacker’s computer instead of the hard disk.
Government Constraints
Nascent and formative areas such as cybersecurity are often characterized by underdeveloped regulatory structures. There’s no template for policy development, assessment, and analysis. Developing templates, monitoring the behaviors of individuals and organizations, and enforcing regulations require extensive resources and expertise in such areas. However, most governments in developing countries are characterized by weak public administration, inadequate technical competence, and a lack of political will in the implementation of economic and social policies.4

Another factor is perhaps more important. The way the Indian government is positioned doesn’t allow it to spend state resources to support a new area at the cost of competing sectors. If policymakers allocate disproportionately more resources to develop modern sectors such as IT&BPM, they face stiff opposition from the mass of the population that depends on the traditional economy. For instance, in India’s Andhra Pradesh state in the late 1990s and early 2000s, political opponents attacked then–Chief Minister Chandrababu Naidu’s decision to raise rice and electricity prices by cutting subsidies, which would worsen the welfare of most people. They also labeled his promotion of offshoring-related sectors and foreign capital as elitist. Naidu was voted out of office in 2004. For the majority of the Indian population, data privacy and security are largely irrelevant.
Self-Regulatory Bodies
Because of these factors, India’s IT&BPM sector manages cybersecurity risk through effective industry self-regulation. A highly visible private-sector actor is the National Association of Software and Services Companies (NASSCOM), established in 1988 as an industry-funded not-for-profit organization to contribute to the software industry’s development. NASSCOM aims to help the IT&BPM sector to be a “trustworthy, respected, innovative and society friendly industry in the world” and to “[e]stablish India as a hub for innovation and professional services” (www.nasscom.in/vision-and-mission).

Owing primarily to the uptick in data breach incidents, addressing data security and privacy issues has become increasingly important for the Indian IT&BPM sector’s success and vitality. In 2008, realizing the importance of an organization with an exclusive focus on data protection, NASSCOM established the Data Security Council of India (DSCI), a self-regulatory member organization. DSCI’s mission is to create trust in Indian companies as global outsourcing service providers. Its focus on cybersecurity is to “[h]arness data protection as a lever for economic development of India through global
integration of practices and standards conforming to various legal regimes” (https://www.dsci.in/taxonomypage/1). DSCI took over most of NASSCOM’s data protection–related activities.

NASSCOM and DSCI have been exemplary self-regulatory bodies, playing key roles in strengthening the IT&BPM sector’s cybersecurity orientation. They’ve played an equally important role in the PPP cybersecurity initiatives and worked with government and law enforcement agencies to formulate and enforce cybersecurity-related legislation. Table 1 shows major events associated with NASSCOM and DSCI’s evolution and their roles in enhancing cybersecurity.
The Roles of NASSCOM and DSCI
As of 2015, NASSCOM has more than 1,800 members, compared to 485 corporate members of DSCI. Although any company operating in India’s IT&BPM sector might have incentive to join NASSCOM, DSCI membership is especially important for companies for which cybersecurity is a key priority. Annual NASSCOM membership fees vary from approximately US$450 to $100,000, depending on organization size. Many of NASSCOM’s members are global firms from the US, Europe, Japan, China, and other countries. NASSCOM thus has a fairly high level of expertise and the financial resources to take various cybersecurity measures.

DSCI monitors member companies to ensure they adhere to cybersecurity standards. For instance, it requires members to self-police and provide additional layers of security at infrastructure, applications, and other levels. The maximum fine for companies that fail to secure data is $1 million. Noncompliant companies might also lose their NASSCOM and DSCI memberships.

Trade associations influence industry behaviors directly as well as through causal chains. Indirect effects entail mimicking behaviors of other actors that are perceived to be exemplary and have a higher degree of effectiveness.5 Exemplary firms serve as models for smaller firms to imitate. In such cases, knowledge flow takes place by externalities mainly due to interactions among firms or their employees. Trade associations are likely to accelerate this process by stimulating interaction among member companies. A trade association’s enforcement strategy becomes efficient and powerful if a large number of firms joins the association. Former NASSCOM president Kiran Karnik addressed the importance of DSCI membership: “While it would be voluntary for the members to be part of the body, it would ensure at the same time that market forces make it mandatory for companies to register themselves.”6

NASSCOM collaborates with other entities.
Table 1. NASSCOM and DSCI’s evolution and roles in enhancing India’s cybersecurity profile.

Date | Milestones and major events
1988 | The National Association of Software and Services Companies (NASSCOM) was established as a not-for-profit organization with 38 members, which accounted for 65 percent of India’s software industry’s revenue.
1990 | NASSCOM began a public-awareness campaign to educate software users and encourage lawful use.
Early 1990s | NASSCOM teamed up with the Manufacturers Association for Information Technology to launch the Indian Federation against Software Theft.
1994 | NASSCOM and the Business Software Alliance set up the toll-free Anti-Piracy Hotline in New Delhi.
2003 | NASSCOM started working with Mumbai police on cybercrime-related matters.
2004 | NASSCOM announced a plan to have its members’ security practices audited by international accounting firms.
2004 | NASSCOM started the Cyber Labs program with support from the government’s Department of Electronics and Information Technology.
2005 | NASSCOM announced a training initiative for Pune’s cybercrime unit.
April 2005 | Three former employees of Mphasis were arrested for allegedly stealing more than US$350,000 from Citibank customers.
2006 | NASSCOM drafted plans for new legal measures to safeguard intellectual property and prevent data theft.
January 2006 | The National Skill Registry (NSR) launched, allowing employers to perform background checks on existing or prospective employees.
April 2007 | The number of individuals registered in the NSR database reached 100,000, and the number of participating companies reached 36.
2008 | NASSCOM announced the establishment of the Data Security Council of India (DSCI) as a self-regulatory body.
February 2008 | The number of technology employees signed up for the NSR database reached 220,000.
2009 | Cloud computing security was reviewed at the NASSCOM–DSCI Information Security Summit. This topic has been a focus in every annual summit since.
2011 | DSCI announced a plan to set up a cloud security advisory group to develop a policy framework. The group advises the government on security and privacy issues in a cloud environment.
June 2011 | At the DSCI Best Practices meeting, issues related to data protection in cloud computing and compliance were discussed.
December 2012 | A seminar organized by DSCI focused on preventing data theft and cyberattacks and securing critical infrastructure.
March 2013 | DSCI had 654 organizations as corporate members and more than 1,350 security and privacy professionals and practitioners as chapter members.
August 2013 | The number of individuals registered in the NSR database reached 1.3 million, and the number of participating companies reached 118. The NSR database is supported by 17 employee background-checking companies and 126 point-of-service vendors in various locations.
December 2013 | NASSCOM had more than 1,504 members, representing 95 percent of industry revenue.
2015 | NASSCOM has more than 1,800 members, and DSCI membership is at 485.
For instance, in the 1990s, it teamed up with the Manufacturers Association for Information Technology to launch the Indian Federation against Software Theft. Similarly, it announced a plan to have its members’ security practices audited by international accounting firms. Industry leaders also advocated the adoption of certification under the British Standards Institution’s information security management systems, which covers network security, data sanctity, and data utilization terms.

Partnerships between the government and the private sector are viewed as a promising way of generating new opportunities to leverage financial, human, and technological resources that aren’t likely to be available
if the government attempts to do it alone.7 This is especially pertinent for cybersecurity in developing economies owing to their resource-poor environments.
The Need for PPP
Prior research suggests that the public and private sectors’ different strengths, expertise, and experience could lead to complementary roles in meeting developmental and social needs.8 A unique strength of the state government is its ability to impose harsh sanctions and penalties on violators of laws and regulations. Trade associations such as NASSCOM often have a high level of technical expertise and resources and don’t face some
of the constraints that limit the state’s ability to monitor and control cybercrime activities. Private and public sectors engaged in PPPs have different objectives, agendas, and interests. For example, one goal of the public sector is to employ the private sector’s capital and technology and share risks with the latter to deliver public services or goods. By winning the public sector’s support, the private sector can increase profitability. The Indian government’s and the private-sector actors’ motivation and objectives partly overlap in strengthening cybersecurity. The IT&BPM sector plays a strategic role in the national economy, and most highprofile and widely publicized cybercrimes occur in this sector. In one case, call center workers at outsourcing services provider Mphasis transferred more than $350,000 from four Citibank customers’ accounts to their personal accounts.9 In major Indian cities, “data brokers” obtained data illegally from people working in offshoring companies. For instance, two people who claimed to be workers in Indian offshoring firms met Sunday Times undercover reporters with a laptop full of data and bragged that they had 45 different sets of personal information on approximately 500,000 UK consumers.10 The information included credit card holders’ names, addresses, phone numbers; cards’ start and expiry dates and security verification codes; and information about mortgages, loans, insurance, phone contracts, and television subscriptions. NASSCOM initiated its crime-fighting efforts in response to these events in the Indian IT&BPM sector. In the early 2000s, NASSCOM partnered with the Ministry of Information Technology to draft data protection and privacy laws in response to offshore clients’ privacy concerns. The goal was to bring Indian data protection laws to the same level as European and US standards. In 2011, DSCI announced a plan to set up a cloud security advisory group that would develop a policy framework. The group would also advise the government on cloud security and privacy issues.
PPP Achievements
PPPs involve arrangements and cooperative relationships between public and private sectors, under which the latter undertakes actions that have been traditionally performed by the former.11 NASSCOM has played a lead role in developing and implementing vital cybercrime-fighting programs that are normally initiated and led by government agencies in the US and other industrialized countries. Consider the Cyber Labs program (www.dsci.in/cyber-labs), which is modeled after the National Cyber-Forensics & Training Alliance (NCFTA) in the US. Whereas NCFTA is a US federal government effort established by the Federal Bureau of Investigation, India’s Cyber Labs program is a private-sector initiative started by NASSCOM in 2004 with support from the government’s Department of Electronics and Information Technology. Cyber Labs provide training and other support to police officers, prosecutors, bank officials, and others. As of April 2015, there were eight Cyber Labs in various Indian cities, which had provided cybercrime training to more than 28,000 police officers. The Bangalore Cyber Lab alone has resources to train more than 1,000 law enforcement personnel annually. To educate legal communities, NASSCOM and DSCI also meet with bar councils in different cities.

DSCI presents public- and private-sector employees and organizations with special awards and recognitions. For instance, the DSCI Excellence Awards began in 2011 in two areas: corporate (based on preparedness level and cybersecurity response) and law enforcement (given to police and investigation agencies for capacity building in investigating and solving cybercrime cases).

NASSCOM and DSCI have helped increase consumers’ cybersecurity awareness. In the early 1990s, NASSCOM began a public-awareness campaign to educate software users and encourage lawful use. Other efforts include dissemination measures, such as CyberSafety Week, organized by NASSCOM and government agencies in major cities. For instance, in 2010, NASSCOM, DSCI, and Mumbai police, with support from the Ministry of Information Technology, organized CyberSafety Week—Mumbai to educate users on cyber safety and IT security.

In recent years, NASSCOM has realized the need to focus on security issues associated with new technologies such as cloud computing and social media. NASSCOM–DSCI Information Security Summits address cloud security every year. In the 2011 DSCI Best Practices meeting, issues related to data protection and compliance in cloud computing were discussed. Likewise, a December 2012 DSCI seminar focused on preventing data theft and cyberattacks and securing critical infrastructure.
Various State Roles and PPP Conditions
It’s important to understand the enabling and constraining conditions that influence the success of PPP projects. Among the most important is the conduciveness of institutional environments to PPP. A government that is friendly with the private sector, willing to involve private players in key national economic policies, and interested in seeing this sector flourish is likely to be supportive of PPP initiatives. Broadly speaking, these conditions exist in India’s IT&BPM sector, which has facilitated cybersecurity-related PPP in the country. Major emphasis must be
placed on enactment and enforcement of necessary laws. These conditions can be captured by the state’s regulatory, participatory, and supportive roles. The regulatory roles entail establishing and enforcing the rule of law. The participatory roles are about ensuring that businesses and citizens contribute to national policymaking. The supportive roles involve creating conditions that foster the growth of businesses in certain sectors.
Regulatory Role
In a regulatory state, a set of factors influences the enforcement of contracts: sound political institutions and the rule of law, a government free from corruption, bureaucratic quality, a strong and effective court system, and citizens’ willingness to accept the established institutions.12 Again, the Indian government faces several challenges in performing regulatory state functions, which is its most glaring shortcoming. Indian states have faced budget problems and failed to comply with federal directives to hire judges and upgrade legal infrastructures and court facilities.

Factors such as ineffective national legal systems, ambiguous laws on the books, a lack of resources, or a state’s unwillingness to allocate resources often severely hinder a state’s ability to control criminal activities. This is especially relevant for new types of crimes such as cybercrimes. India’s greatest barrier to cybersecurity is the unavailability and ineffectiveness of law enforcement, owing primarily to the state’s lack of resources and unwillingness to invest in such resources.

A related problem is the low reporting rate of cybercrimes. Approximately 10 percent of cybercrimes are reported, and of those reported, about 2 percent are registered.13 The conviction rate is estimated at 2 percent. The barriers, hurdles, and hassles that victims confront contribute to the low registration rates. Police often don’t support victims who want to file a cybercrime case and show unwillingness to investigate such crimes. For instance, a survey conducted by research firm BPO News indicated that although most Gurgaon business process outsourcing firms had been cybercrime victims, approximately 70 percent didn’t report to the police; many expressed doubt about the competence, professionalism, and integrity of the police handling cybercrime cases.14 Approximately 50 percent of these respondents believed cases aren’t dealt with professionally, and 30 percent noted that they had “no faith” in Gurgaon police. Cybercrime victims have also complained that the police’s process to build a case is long and inefficient. Thus, there is a vicious cycle: law enforcement agencies lack the skills, orientation, and capability to address cybercrime-related offenses; there are low cybercrime reporting rates because of victims’ lack of confidence in law enforcement agencies; and cybercriminals become more resourceful and powerful because their offenses aren’t reported and law enforcement agencies lack motivation or justification to improve their skills.

Although NASSCOM’s and DSCI’s measures have been quite successful in boosting firms’ cybersecurity in the IT&BPM sector, many critical factors are beyond their control. The state’s weak regulatory role has negatively impacted key ingredients of cybersecurity. For instance, one estimate suggested that approximately 20 percent of resumes submitted for IT&BPM positions in India are fake.15 The maximum punishment for faking a resume is termination of employment. Due to India’s highly inefficient legal system, fraudsters are rarely caught and punished. The rule of law is weakly developed and often ignored with impunity. Getting an outsourcing job using a fake resume is a high-reward, low-risk activity because such jobs pay better than those in other economic sectors.15

Many of NASSCOM’s and DSCI’s responses are the result of a hollow state and institutions that are highly ineffective in dealing with India’s cybersecurity challenges. For instance, India lacks standard identifiers like the US Social Security number, making it difficult to check potential employees’ backgrounds. It costs up to $1,000 per employee to check backgrounds thoroughly. In 2005, in response to the lack of such databases, NASSCOM announced a plan to launch a pilot employee-screening program called Fortress India, which would allow employers to screen potential workers who have criminal records. This became the National Skill Registry (NSR), which allows employers to perform background checks on existing or prospective employees. It’s a voluntary registry for call center employees. Although the NSR doesn’t include the profiles of most potential job seekers, it’s a step in the right direction.

Participatory Role
A participatory state captures the extent to which policies and institutions represent the wishes of the members of society.4 To protect their independence and
autonomy, businesses might participate in national policymaking and work closely with state agencies. India’s PPP cybersecurity initiatives are largely a product of a participatory state. The country’s 1991 economic liberalization was a major driving force behind the increased importance of groups such as trade associations; the state-dominated economic policy framework shifted to a decentralized one. Religious, social, economic, and political associations have offered a viable set of examples encouraging the development of many new trade and professional associations. A strong mutual interdependence between the state and the private sector—particularly organized business groups—has developed quickly. The liberalization thus resulted in more room for associations to flourish and have a strong voice as well as increased their participation in national policy development and planning processes.16

The Indian government’s relationship with the private sector has involved a high level of trust and partnership in cybersecurity-related matters. In the early 2000s, NASSCOM established a CyberCop Committee to provide cybersecurity services to the government and the private sector in an advisory capacity. In 2006, NASSCOM drafted plans for new legal measures to safeguard intellectual property and prevent data theft.

In recent years, the government has made efforts to create a favorable climate for higher participatory involvement in cybersecurity. For instance, a cybersecurity joint working group (JWG) was established with representatives from government agencies and the private sector and mandated to come up with PPP recommendations in capacity building and policymaking for government consideration. The JWG released its “Engagement with Private Sector on Cyber Security” report in October 2012 (https://www.dsci.in/node/1211). NCSP incorporated many of the recommendations of this report as well as those of the NASSCOM–DSCI report “Securing Our Cyber Frontiers” (https://www.dsci.in/node/1092). Both reports placed a high level of emphasis on the formulation of PPPs to address cybersecurity issues—a key element of NCSP.

Another sign of the improving climate for participatory involvement of the private sector occurred in October 2012, when India’s National Security Advisor announced a plan to establish a permanent working group on cybersecurity, with representatives from the government and the private sector, that would implement the country’s cyberdefense framework. This marked the first time the Indian government allowed the private sector to participate in national security matters.
Supportive Role
A government can support cybersecurity development via legal and nonlegal influence. One way to do so is
to address barriers related to skills, information, market, technology, and infrastructures. Nations that have achieved innovation-led growth also directly support these innovations. For instance, the US government invested heavily in several mission-oriented innovations, such as the microchip, the Internet, biotechnology, and nanotechnology. In general, the Indian government offers a low level of support to private businesses. The state’s supportive role is found to be less favorable to private businesses in India than in China. Nonetheless, the Indian government has shown a higher level of support and commitment to cybersecurity. For instance, NASSCOM asked the government to create a special court to try people who were accused of cybercrimes. In response, the first cyber-regulation court was established in Delhi in 2009. Likewise, in view of the country’s lack of indigenous technology and patents in this area, the Indian government announced the possibility of providing financial assistance to Indian firms for acquiring foreign firms with high-end cybersecurity technology. The Ministry of External Affairs explored possible targets worldwide through Indian embassies and missions.17 The Indian company that owns the technology gained through the acquisitions is required to give government agencies access to the intellectual property rights.
Discussion and Implications
Although sectoral business organizations such as trade associations are generally numerous and exist in almost every country, their level of development in and influence on national policymaking and implementation vary greatly. NASSCOM is probably among the most influential and effective trade associations and has been successful in strategically solving collective cybersecurity problems in India’s IT&BPM sector.

NASSCOM’s measures have paid off brilliantly. In regard to the Indian IT&BPM sector’s data security measures, a UK Banking Code Standards Board (BCSB) report noted: “Customer data is subject to the same level of security as in the UK. High risk and more complex processes are subject to higher levels of scrutiny than similar activities onshore” (www.rediff.com/money/2006/oct/07bpo.htm). Citing the findings of the BCSB and Forrester Research, NASSCOM’s then-president Karnik asserted that security standards in Indian call centers were among the best in the world, and there were more security breaches in the UK and the US in 2005 than in India.18 DSCI’s principal consultant Rahul Jain attributed the Indian IT&BPM sector’s rapid growth to the adoption of best practices and global standards related to cybersecurity, investments in the latest cybersecurity technologies and processes, staff training, creation
of high levels of employee awareness of cybersecurity, focus on IT governance, and internal cybersecurity auditing mechanisms.19

Some have rightly labeled India’s cybersecurity policy as incomplete and “all words and no action,” owing to the lack of a national cybersecurity action plan document or any guidelines regarding how the policy will be implemented.20 Likewise, no clear action plan explains how NCSP’s various goals can be achieved. Nonetheless, if we look at the track record of the collaborations between public and private sectors, which have been mainly initiated by NASSCOM, we have a wealth of detailed evidence about PPP’s role in strengthening cybersecurity. PPP has resulted in the enactment of regulations and rules related to cybersecurity. However, there are also major weaknesses and shortcomings in the enforcement of the existing laws. In this regard, NASSCOM’s efforts represent a limited but important part of India’s overall cybersecurity posture.

Regarding the role of domestic spillover of cybersecurity-related knowledge and technology, it’s important to look at learning processes. Researchers have suggested that such processes generally take place through intra-IT&BPM and interindustry externalities. The diffusion of information and expertise, interfirm labor mobility, and development of specialized services would facilitate such externalities. Research has also suggested that interindustry spillover effects associated with export activities are positively related to industrial linkages. In this regard, to increase the effects associated with spillover and externalities, policy measures are needed to strengthen the linkages between the IT&BPM industry and other economic sectors.

Especially when the state’s regulatory roles are weak, trade associations can fill the regulatory vacuum. Interfirm linkages, such as trade associations in emerging economies, can establish the industry’s moral legitimacy in Western economies. For instance, developed world–based offshoring clients might rely more on trade associations such as NASSCOM than on a weak, ineffective state. Trade associations can influence industry behaviors in several ways. These associations’ norms, informal rules, and codes of behavior can create order—without the law’s coercive power—by relying on a decentralized enforcement process in which noncompliance is penalized through social and economic sanctions.21 In some situations, the state finds it beneficial to collaborate with such associations to rationalize an arena of activity. Associations can provide the state with expertise in developing new regulatory frameworks and strengthening enforcement.

Although DSCI’s measures in strengthening data protection in the IT&BPM sector have been largely
successful and can serve as a model for other developing economies, their effects aren’t noticed outside this sector. For instance, DSCI increases its members’ cybersecurity compliance by monitoring their security practices and providing training and education. Although DSCI’s codes of behavior are irrelevant outside the IT&BPM sector, training and educating law enforcement personnel is key to strengthening the national cybersecurity profile. One reason behind the extremely low conviction rate in India could be that DSCI’s training programs are insufficient to develop measurable competence in cybercrime investigation among law enforcement officers. A majority of its initiatives are special lectures or three- to five-day programs. More comprehensive training programs would allow trainees to master cybercrime investigation techniques and feel confident in their ability to deal with cybercrimes. Although most current programs focus mainly on police officers, DSCI and the government also need to educate prosecutors, judges, and lawyers using practical, layman’s language.

PPPs are probably the most notable feature of the Indian cybersecurity landscape and an appropriate institutional means of dealing with underdeveloped cybersecurity-related institutions. Although the government has expressed a high degree of willingness to participate in PPP, resource constraints are a significant barrier to effective enforcement of the legislation. And it’s fair to say that the government’s initiatives to enhance IT&BPM cybersecurity are more symbolic than substantive. In 2015, cybersecurity experts pointed out a number of challenges facing India’s cybersecurity initiatives, such as an inadequate budget, a lack of coordination among different states’ cybersecurity strategies, and a lack of audits of software used in government agencies for security loopholes.22 For instance, the Department of IT’s cybersecurity budget for the 2015 fiscal year was less than $20 million. In addition, attacks on Indian websites increased by approximately 500 percent between 2010 and 2014.
India’s digital economy has benefited greatly from NASSCOM’s and DSCI’s expertise in the interpretation, implementation, and application of data protection principles and from their role as a repository of experience and a source of cybersecurity best practices and cutting-edge knowledge. In this way, these organizations have been a driving force with a major effect on India’s cybersecurity posture. In sum, whereas the government lacks the resources, expertise, and legitimacy to develop new templates, monitor the behaviors of industries, and enforce laws, trade associations’ influences are likely to be more readily apparent. With well-focused priorities, trade associations will likely be better, more
effective, and more efficient institutions to effect change in this area.

References
1. J.X. Yu and Z.Y. Qu, “PPPs: Inter-Actor Relationships. Two Cases of Home-Based Care Services in China,” Public Administration Q., vol. 36, no. 2, 2012, pp. 238–264.
2. S. Nolen, “India’s IT Revolution Doesn’t Touch a Government That Runs on Paper,” The Globe and Mail (Canada), 13 June 2012, p. A1.
3. “Cyber Crime: 1,600 Arrested, Only 7 Convicted,” 11 Dec. 2012; www.rediff.com/business/report/tech-cyber-crime-1600-arrested-only-7-convicted/20121211.htm.
4. C. Pugh, “Getting Good Government: Capacity Building in the Public Sectors of Developing Countries,” Urban Studies, vol. 36, no. 2, 1999, pp. 400–402.
5. M. Dickson, R. BeShers, and V. Gupta, “The Impact of Societal Culture and Industry on Organizational Culture: Theoretical Explanations,” Culture, Leadership, and Organizations: The GLOBE Study of 62 Societies, R.J. House et al., eds., Sage, 2004.
6. “Regulator Soon for Monitoring Data Security Standards,” 24 Apr. 2007; www.thehindubusinessline.com/todays-paper/regulator-soon-for-monitoring-data-security-standards/article1656182.ece.
7. P.V. Rosenau, Public-Private Policy Partnerships, MIT Press, 2000.
8. S.H. Linder, “Coming to Terms with the Public–Private Partnership: A Grammar of Multiple Meanings,” Am. Behavioral Scientist, vol. 43, no. 1, 1999, pp. 35–51.
9. N. Kshetri, Cybercrime and Cybersecurity in the Global South, Palgrave Macmillan, 2013.
10. T. Gardner, “Indian Call Centres Selling Your Credit Card Details and Medical Records for Just 2p,” 18 Mar. 2012; www.dailymail.co.uk/news/article-2116649/Indian-centres-selling-YOUR-credit-card-details-medical-records-just-2p.html.
11. E.S. Savas, Privatization and Public-Private Partnerships, University Press, 2002.
12. A.C. Sobel, State Institutions, Private Incentives, Global Capital, Univ. Michigan Press, 1999.
13. “Securing the Web,” Hindustan Times, 22 Oct. 2006.
14. “Most Gurgaon IT, BPO Companies Victims of Cybercrime: Survey,” 6 Nov. 2011; http://timesofindia.indiatimes.com/city/gurgaon/Most-Gurgaon-IT-BPO-companies-victims-of-cybercrime-Survey/articleshow/10626059.cms.
15. S. Rai, “How Bogus Resumes Raise Questions about Indian Outsourcing Skills,” TechRepublic, 10 Sept. 2012; www.techrepublic.com/blog/cio-insights/how-bogus-resumes-raise-questions-about-indian-outsourcing-skills.
16. R. Frankel, “Associations in China and India: An Overview,” European Society of Assoc. Executives, 15 June 2006, pp. 32–33.
17. T.K. Thomas, “Govt Will Help Fund Buys of Foreign Firms with High-End Cyber Security Tech,” BusinessLine, 2012; www.thehindubusinessline.com/industry-and-economy/info-tech/article3273658.ece?homepage=true&ref=wl_home.
18. “India Could Process 30 Pct of US Bank Transactions by 2010—Report,” AFX News, 27 Sept. 2006; www.finanznachrichten.de/nachrichten-2006-09/7050839-india-could-process-30-pct-of-us-bank-transactions-by-2010-report-020.htm.
19. R. Jain, “Cyber Security: Imperatives for India,” Information Week, 7 Aug. 2012.
20. V.V. Desai, “Is India’s Cyber Policy All Words and No Action?,” TechTarget, 14 Oct. 2013; http://searchsecurity.techtarget.in/news/2240207148/Is-Indias-cyber-policy-all-words-and-no-action.
21. D.C. North, Institutions, Institutional Change and Economic Performance, Cambridge Univ. Press, 1990.
22. P.K. Jayadevan and N. Alawadhi, “India’s Cyber-Security Budget ‘Woefully Inadequate’: Experts,” 28 Jan. 2015; http://articles.economictimes.indiatimes.com/2015-01-28/news/58546771_1_cyber-security-cert-in-national-cyber-coordination-centre.

Nir Kshetri is a professor at the University of North Carolina at Greensboro and a research fellow at the Research Institute for Economics and Business Administration at Kobe University, Japan. His research focuses on global cybersecurity. Contact him at nbkshetr@uncg.edu.

Selected CS articles and columns are also available for free at http://ComputingNow.computer.org.
SECURITY ON TAP
Improving the Security of Cryptographic Protocol Standards

David Basin | ETH Zurich
Cas Cremers | University of Oxford
Kunihiko Miyazaki | Hitachi
Saša Radomirović | ETH Zurich
Dai Watanabe | Hitachi
Despite being carefully designed, cryptographic protocol standards often turn out to be flawed. Integrating unambiguous security properties, clear threat models, and formal methods into the standardization process can improve protocol security.
Security protocols are distributed algorithms that use cryptography to achieve security objectives. In practice, these protocols regulate how computing devices carry out security-critical tasks. For example, Transport Layer Security (TLS) is used to establish secure communication channels between clients and servers, Kerberos is used for distributed authentication and authorization, and IPsec can be used to set up virtual private networks. These protocols are omnipresent and let us access and protect numerous applications, ranging from banking to social media. Many lesser-known protocols are also in use, such as WiMAX for secure communication in wireless networks, ISO/IEC 9798 for entity authentication, and the Extensible Authentication Protocol (EAP) for network access authentication.

A protocol such as TLS lets any client potentially communicate with any server, independent of the operating system they run on or the programming language used for their implementation. This generality is enabled by standards and technical documents such as RFCs, which describe a protocol’s operation in sufficient detail to guide the construction of interoperable implementations. All the protocols we have mentioned are described by standards or RFCs approved by standardization bodies, or are undergoing standardization.

A closer look at modern protocol standards indicates that although standardization bodies are doing excellent work, the resulting protocols’ security varies considerably. Over the past decade, we have conducted
numerous case studies with model-checking tools for security protocols, some of which we have developed ourselves.1–4 Our analysis shows that many standards suffer from security weaknesses, including basic mistakes and well-known flaws. In some cases, these weaknesses have been quite serious. Even minor problems, however, are best avoided from the start, prior to standardization. Amending standards is time-consuming, and after amendment, companies with products implementing the standard must decide between costly upgrades or the risk of damaging their reputation and undergoing litigation for distributing products with known defects.

Because experts design standards carefully, we might expect them to meet strong, well-understood, and well-specified security guarantees. Unfortunately, standards do not always meet this expectation. Although they often contain detailed functional descriptions, many do not include much information about security guarantees. Instead of unambiguous security properties and clear threat models, many cryptographic protocol standards specify, at best, high-level security properties and a handful of threat scenarios. This lack of clear threat models and specified properties makes it impossible to objectively assess a protocol’s merits: without them, there is nothing to objectively verify or falsify.

During the past few decades, researchers have successfully used formal methods to analyze small academic protocols with well-defined threat models (also
called adversary models) and clear security goals. More recently, researchers from the formal methods community have analyzed several protocol standards. This process has typically involved proposing threat models and security properties as well as analyzing the standard with respect to properties not explicitly stated in the standard and therefore conjectured by the researchers. Here, we illustrate the problems that arise when security properties and threat models are neglected in standards and present several case studies to demonstrate how formal methods can make a difference. We then examine how we might better integrate formal methods and associated tools into the standardization process given present obstacles and limitations. We base our case studies on three protocols: WiMAX, EAP, and ISO/IEC 9798.
WiMAX
Our first case study is the wireless communication standard IEEE 802.16, also known as WiMAX, which aims to enable the delivery of last-mile wireless broadband access (www.ieee802.org/16/published.html). The WiMAX standard includes several mechanisms that deal with keys or involve cryptographic operations. The core mechanism is the authorization phase, which establishes a shared secret on which all subsequent security is based. This authorization can be performed using EAP protocols or, alternatively, the privacy key management (PKM) protocols the standard describes.

IEEE originally proposed the WiMAX standard in 2001 and has updated it several times since then. The first version includes only the PKMv1-RSA protocol. This protocol is executed between a subscriber station (SS)—typically an end user’s WiMAX modem—and a service provider’s base station (BS). At a high level, the protocol proceeds as follows. The subscriber station initiates communication with the base station by sending its certificate, the list of algorithms that it supports, and a unique connection identifier (CID). The base station generates an authorization key, AK, and sends this back encrypted with the subscriber station’s public key. It also sends the key’s sequence number and lifetime as well as a security association identifier, which we denote by SAID in the following message exchanges for PKMv1-RSA:

SS → BS: SS_Certificate, SS_Algo_Suites, CID
BS → SS: EncPK(SS)(AK), SAID

After the standard’s initial release, David Johnston and Jesse Walker identified several weaknesses in 2004.5 In particular, they argued that PKMv1-RSA essentially provides no security guarantees because, in the context of wireless transmissions, we should assume that attackers can spoof arbitrary messages (that is, send messages impersonating another party). The subscriber station thus has no idea who encrypted or even generated the key it receives. Johnston and Walker argued that the protocol should at least provide mutual authentication under the (realistic) assumption that attackers can eavesdrop on and inject wireless network traffic. Their arguments were necessarily informal, given that the standard specifies neither a threat model nor any details about the security properties it aims to achieve. Furthermore, whereas Johnston and Walker were specific about PKMv1-RSA’s weaknesses, “mutual authentication” is not a uniquely defined concept; authentication has many possible variations that differ in strength, as “The Ambiguity of Authentication” sidebar illustrates.
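To make this weakness tangible, here is a minimal Python sketch of the PKMv1-RSA exchange, using the third-party cryptography package. The 2,048-bit key, OAEP padding, and 160-bit authorization key are illustrative assumptions; the standard defines its own key sizes and message encodings.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Subscriber station (SS) key pair; its certificate binds the SS to this key.
ss_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Message 1, SS -> BS: certificate, supported algorithm suites, connection ID.
msg1 = {"cert": b"SS_Certificate", "algos": ["suite-1"], "cid": 42}

# Message 2, BS -> SS: a fresh authorization key AK encrypted under the SS's
# public key, plus the security association identifier (SAID).
ak = os.urandom(20)  # assumed 160-bit AK, for illustration only
msg2 = {"enc_ak": ss_key.public_key().encrypt(ak, OAEP), "said": 7}

# The SS recovers AK, but nothing in msg2 authenticates its sender: anyone
# able to spoof wireless frames could have produced it, which is exactly the
# weakness Johnston and Walker identified.
assert ss_key.decrypt(msg2["enc_ak"], OAEP) == ak

Because the second message carries neither a signature nor any freshness value tied to the base station, the final comment is the whole security argument: the subscriber station obtains a key but learns nothing about who chose it.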
After the standard's initial release, David Johnston and Jesse Walker identified several weaknesses in 2004.5 In particular, they argued that PKMv1-RSA essentially provides no security guarantees because, in the context of wireless transmissions, we should assume that attackers can spoof arbitrary messages (that is, send messages impersonating another party). The subscriber station thus has no idea who encrypted or even generated the key it receives. Johnston and Walker argued that the protocol should at least provide mutual authentication under the (realistic) assumption that attackers can eavesdrop and inject wireless network traffic. Their arguments were necessarily informal, given that the standard specifies neither a threat model nor any details about the security properties it aims to achieve. Furthermore, whereas Johnston and Walker were specific about PKMv1-RSA's weaknesses, "mutual authentication" is not a uniquely defined concept; authentication has many possible variations that differ in strength, as "The Ambiguity of Authentication" sidebar illustrates.

In 2005, IEEE released a new version of the standard that introduced the PKMv2-RSA protocol. This new version is a three-message protocol in which all messages are digitally signed. The subscriber station initiates communication with the base station by sending a random number (SS_Random), its certificate, and a unique connection identifier. The message is signed with the subscriber station's private RSA key (SigSS). The base station generates a key (pre-PAK), concatenates it with the subscriber station's MAC address (SS_MAC), and encrypts the result with the subscriber station's public key. It sends this encrypted message back to the subscriber station together with the subscriber station's random number, its own random number, and its certificate. The message is signed with the base station's private key (SigBS). In the third message, the subscriber station confirms the receipt of the previous message by sending back the base station's random number and signing the message (SigSSc). We can see this in the following message exchanges for PKMv2-RSA:

SS → BS: SS_Random, SS_Certificate, CID, SigSS
BS → SS: SS_Random, EncPK(SS)(pre-PAK||SS_MAC), BS_Random, SAID, BS_Certificate, SigBS
SS → BS: BS_Random, SigSSc

It appears that this new protocol aimed to address the weaknesses in PKMv1-RSA. But again, the standard specified neither a threat model nor security properties. Consequently, even though the numbering might suggest that PKMv2-RSA provides properties in addition to those PKMv1-RSA provides, the standard offers no concrete statements to this effect.

Both academic and industrial experts were involved in a manual security review of drafts of the 2005 version of the standard.6 These reviews led to changes that found their way into the revised standard.
The Ambiguity of Authentication
Authentication is a common security goal. However, the notion of authentication has numerous, substantially different interpretations, each with several variants. Table A presents three typical interpretations of "a client C authenticated by a server S," each with a weaker and a stronger variant.
Each of these interpretations has many more variants. The critical observation is that no one “right” definition of authentication exists: you cannot specify an appropriate authentication property without a fundamental understanding of the application scenario.
Table A. Typical interpretations of "a client C authenticated by a server S."

Entity authentication
■ Weaker. Aliveness of C: C has performed an action.
■ Stronger. Recent aliveness of C: C has performed an action (causally) after a specific action of S.

Data agreement
■ Weaker. Noninjective agreement on message m: S has received the message m from C, and C has sent m to S.
■ Stronger. Agreement on message m: noninjective agreement on m, and S will not accept m if it is replayed by the adversary.

Authenticated session key
■ Weaker. Authenticated session key k: session key k is a fresh session key, known only to C and S and possibly a trusted third party.
■ Stronger. Authenticated session key k with compromise resilience: k is an authenticated session key, and compromise of an old session key does not lead to compromise of k.
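To make the weaker/stronger distinction concrete, the following sketch (our own illustration, not part of the standard or the table) checks the two data agreement variants over a trace of protocol events; a replayed message satisfies the weaker variant but not the stronger one.

def noninjective_agreement(trace, m, c, s):
    # S accepted m as coming from C, and C really sent m to S.
    return (c, s, m, "send") in trace and (c, s, m, "accept") in trace

def agreement(trace, m, c, s):
    # Additionally, S must not accept m more often than C sent it;
    # an excess acceptance means the adversary replayed the message.
    sends = trace.count((c, s, m, "send"))
    accepts = trace.count((c, s, m, "accept"))
    return noninjective_agreement(trace, m, c, s) and accepts <= sends

# C sent m1 once, but S accepted it twice (the second accept is a replay).
trace = [("C", "S", "m1", "send"),
         ("C", "S", "m1", "accept"),
         ("C", "S", "m1", "accept")]
print(noninjective_agreement(trace, "m1", "C", "S"))   # True
print(agreement(trace, "m1", "C", "S"))                # False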
However, soon after the new version's release, researchers pointed out that an "interleaving attack" was possible on the PKMv2-RSA protocol.7 This is a commonplace man-in-the-middle (MITM) attack in which the attacker forwards and selectively modifies messages between two parties. In 2008, Suzana Andova and her colleagues used the formal protocol analysis tool Scyther to analyze several subprotocols from the standard.8 (The protocol models used in the analysis are available at https://github.com/cascremers/scyther/tree/master/gui/Protocols/IEEE-WIMAX.) We independently rediscovered the MITM attack, proposed a fix, and verified its correctness.

Figure 1 shows the attack, which proceeds as follows. The adversary controls a rogue base station, which we will call Charlie. When a subscriber Bob tries to establish a connection with Charlie, the adversary reroutes the message to the legitimate base station Alice instead. Alice replies with a cryptographically signed message, thinking that Bob is trying to start a session with her. Her message contains an encrypted key for Bob. The adversary re-signs Alice's reply with Charlie's private key and sends it on to Bob. Bob responds as expected, and the adversary reroutes the message again to Alice. In the end, Alice correctly thinks that she is communicating with Bob, but Bob thinks he is talking to Charlie. Thus, authentication of the session's participants fails. The cause of this problem is that the first and third messages do not include any information on the
subscriber's assumptions about who the base station is. Adding the base station's identity to the third message prevents the attack.8

Interestingly, despite this attack, the adversary can neither eavesdrop on Bob's subsequent messages nor send messages impersonating Bob. The reason is twofold. First, the adversary cannot decrypt the key Alice sends to Bob. Second, the protocol immediately following PKMv2-RSA cryptographically binds the communication partners' identities to all exchanged messages. Thus, the adversary cannot continue the attack. So, is this "attack" really a security threat? The surprisingly simple answer is that, because the standard specifies neither the intended security properties nor the threat model, we cannot know for sure. If we play it safe and assume that PKMv2-RSA does not provide strong security guarantees, then the cryptographic operations performed in it and the subsequent protocol are simply redundant overhead. In fact, we can discard PKMv2-RSA's third message without sacrificing the security properties that it achieves in composition with the subsequent protocol.8 We could thus simplify the protocol and reduce its communication complexity. Alternatively, we can accept that PKMv2-RSA is intended to be a three-message authentication protocol and ignore the MITM problem. However, this could lead to real problems if PKMv2-RSA is combined with a different subsequent protocol whose engineers rely on the statement in IEEE 802.16e that PKMv2-RSA achieves mutual authentication.
[Figure 1. Man-in-the-middle attack on PKMv2-RSA. Subscriber station (SS) Bob is talking to attacker Charlie. Base station (BS) Alice thinks Bob is talking to her. The chart shows three threads: Bob in role SS, the attacker Charlie acting as a rogue BS, and Alice in role BS responding to Bob.]
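The attack's essential step, re-signing an honest party's reply, is easy to express. The following Python sketch replays the trace from Figure 1 with toy signing helpers of our own invention; the real attack manipulates the standard's concrete bit strings, but the structure is the same.

def sign(signer, payload):
    # Toy signature: a real run signs with the party's private RSA key.
    return {"payload": payload, "signed_by": signer}

# Message 1: subscriber Bob initiates with the rogue BS Charlie. Nothing
# in the message names the base station Bob intends to reach, so Charlie
# can forward it unmodified to the honest BS Alice.
msg1 = sign("Bob", {"random": "N_Bob", "cert": "cert_Bob", "cid": "cid1"})

# Alice replies as if Bob had contacted her directly.
msg2 = sign("Alice", {"ss_random": "N_Bob",
                      "enc_prepak": ("for_Bob", "pre-PAK||MAC_Bob"),
                      "bs_random": "N_Alice",
                      "said": "said1",
                      "cert": "cert_Alice"})

# The crucial step: Charlie strips Alice's signature, swaps in his own
# certificate, and re-signs the otherwise unchanged payload.
msg2_spoofed = sign("Charlie", {**msg2["payload"], "cert": "cert_Charlie"})

# Bob verifies Charlie's signature and confirms; Charlie forwards the
# confirmation to Alice. Alice now believes Bob started a session with
# her, while Bob believes he is talking to Charlie.
msg3 = sign("Bob", {"bs_random": "N_Alice"})
print(msg2_spoofed["signed_by"], "impersonates", msg2["signed_by"])

The verified repair mentioned above adds the base station's identity to the third message, so Alice would see that Bob addressed Charlie rather than her.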
Unfortunately, this failure to specify the threat model and security properties is not an isolated case.
Extensible Authentication Protocol

Our second case study is EAP, developed by the Internet Engineering Task Force (IETF). Unlike many other standardization bodies, the IETF uses a completely public process for developing standards. There is no formal membership; the standardization process is open to all parties; and its publications, including the RFCs and Internet drafts we refer to next, are freely available online (www.ietf.org). This lets us study the evolution of EAP, which is currently an IETF proposed standard (http://tools.ietf.org/html/rfc3748).

EAP is a framework for network access authentication. It supports multiple authentication protocols, known as methods. Some of the better-known EAP authentication methods are EAP-TLS, EAP-SIM (Subscriber Identity Module), and EAP-AKA (Authentication and Key Agreement), used for authentication and session key distribution in Wi-Fi Protected Access (WPA/WPA2), Global System for Mobile Communications (GSM), and Universal Mobile Telecommunications System (UMTS) networks, respectively.

EAP began in 1995 as an Internet draft for the Point-to-Point Protocol (PPP) Extensible Authentication Protocol. PPP was first published as RFC 1134 in 1989. In April 2004, an Internet draft document was published reviewing 48 EAP authentication methods (http://tools.ietf.org/html/draft-bersani-eap-synthesis-sharedkeymethods-00).
It concluded that some methods were no longer under active development, and many did not comply with the then-evolving EAP reference document, which became RFC 3748. Of the remaining methods, the Internet draft identified several interesting candidates but left their comparison for future work. A comparison at the time would have been difficult because an EAP threat model and specific security claims were only introduced in RFC 3748. In fact, even with RFC 3748's threat model and security claims, we still consider it a challenge to compare EAP authentication methods because the threat model is too vague.

This threat model is defined by the assumption that an attacker could compromise links over which EAP packets are transmitted, and by a list of 10 attacks. This is, of course, a source of ambiguity: any attack that is not explicitly mentioned could be considered out of the threat model's scope. Examining the 10 attacks more closely, we see that they mix generic attacker capabilities with specific scenarios. For instance, the first states that the attacker can eavesdrop on the communication link, but narrows this ability down to discovering user identities. The second affords the attacker two generic capabilities—namely, spoofing and packet modification—but is restricted to EAP packets. One way to obtain a more precise threat model is to focus on what we consider the essential attacker capabilities. From the first two items on the list, we infer that an attacker can eavesdrop on, spoof, and modify EAP packets. Several of the subsequent items consider specific attack scenarios that could result from these
three capabilities. One concerns denial-of-service attacks by spoofing messages, and three others concern specific MITM attacks. The last item considers a particular scenario in which an attacker might spoof lower-layer protocol messages. The attacker's capability in this case is not defined by the particular scenario, but by the fact that lower-layer messages are also considered to be under the attacker's control. Thus, we can infer that an attacker is assumed to be able to eavesdrop on, spoof, and modify EAP and all lower-layer packets. The remaining items state that an attacker can perform offline computations, such as dictionary attacks on passwords and attacks on weak cryptographic schemes.

We now turn to EAP's security properties. An EAP authentication method specification must state which security properties it claims to satisfy by referring to a nonexhaustive list given in section 7.2.1 of RFC 3748. RFC 3748 recommends that the claims be supported with evidence in the form of a proof or reference. We examine a selection of properties relevant for making precise statements about a protocol's behavior. The property descriptions are lightly edited quotes from section 7.2.1:

■ Integrity protection refers to data origin authentication and protection against unauthorized modification of information for EAP packets (including EAP requests and responses). When making this claim, a method specification must describe the EAP packets and their protected fields.
■ Replay protection refers to protection against the replay of an EAP method or its messages, including status messages.
■ Session independence demonstrates that passive attacks (such as capturing the EAP conversation) or active attacks (including compromising the master session keys) do not enable the compromise of subsequent or prior keys.

Even though the standard gives no clear threat model, these descriptions match well with established concepts from the verification community. Integrity protection is related to data agreement, replay protection to injectivity, and session independence to backward and forward secrecy. Surprisingly, the confidentiality claim is based on a definition that unnecessarily complicates protocol analysis and comparison (see RFC 3748, section 7.3):

■ Confidentiality refers to the encryption of EAP messages, including status indications and EAP requests and responses. A method making this claim must support identity protection.
There are two problems with this property. First, in an adversarially controlled network, encryption is necessary to ensure message confidentiality, but it is not sufficient in general. Danny Dolev and Andrew Yao constructed an artificial but striking example.9 It demonstrates how a secure communication protocol, employing public-key cryptography, can be turned into an insecure protocol simply by encrypting every protocol message an additional time. Second, to satisfy this property, an authentication method must provide not only message confidentiality but also "identity protection," a privacy feature that is an arguably unrelated property. The consequence of having these two distinct properties combined into one is that authentication methods that provide message confidentiality but not identity protection, such as EAP-PSK (Pre-Shared Key; RFC 4764), cannot be easily distinguished from authentication methods that provide neither of the two properties, such as EAP-MD5-Challenge (RFC 2284).

RFC 3748 has been updated with RFC 5247, in which the threat model is clearer, but the newer version does not update the security claims. Still, there is clear movement toward a more precise security model. Moreover, RFC 4962, an IETF best current practices document published in 2007, advocates using formal methods in addition to expert review in the standardization process of key management protocols. In the next section, we illustrate the feasibility and benefits of employing formal verification methods in the context of a cryptographic protocol standard.
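Before turning to that case study, note that a structured encoding of claims makes the EAP comparison problem easy to see. In the hypothetical sketch below, the claim values follow the comparison drawn above from the cited RFCs; under the combined claim, the two methods are indistinguishable even though only one of them encrypts messages.

from dataclasses import dataclass

@dataclass(frozen=True)
class EapMethodClaims:
    name: str
    message_confidentiality: bool
    identity_protection: bool

    def combined_confidentiality_claim(self) -> bool:
        # RFC 3748's definition requires both properties at once.
        return self.message_confidentiality and self.identity_protection

eap_psk = EapMethodClaims("EAP-PSK (RFC 4764)", True, False)
eap_md5 = EapMethodClaims("EAP-MD5-Challenge (RFC 2284)", False, False)

# Under the combined claim, both methods "fail" identically...
print(eap_psk.combined_confidentiality_claim(),
      eap_md5.combined_confidentiality_claim())        # False False
# ...although the separated properties tell them apart.
print(eap_psk.message_confidentiality,
      eap_md5.message_confidentiality)                 # True False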
ISO/IEC 9798

The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) jointly develop IT standards. In 1991, they published the first part of our final case study, ISO/IEC 9798, which specifies a family of entity authentication protocols. This standard is mandated by numerous other standards that require entity authentication as a building block. Examples include the Guidelines on Algorithms Usage and Key Management by the European Committee for Banking Standards and the ITU-T multimedia standard H.235.

Since 1991, ISO/IEC has revised parts of the standard several times to address weaknesses and ambiguities. We might thus expect that such a mature and pervasive standard is "bulletproof" and that the protocols satisfy strong, practically relevant authentication properties. However, it is not entirely clear which security properties the standard's protocols provide. The standard claims to provide "entity authentication," alternatively phrased as "authentication of the claimant identity."
[Figure 2. Role-mixup attack. This attack occurs on the 2009 version of the two-pass mutual authentication protocol using a cryptographic check function. When Alice finishes thread 3, she wrongly assumes that Bob was performing the A role.]
As the sidebar explains, we can interpret the notion of authentication in different ways, making it extremely difficult for users to judge if a particular protocol provides a sufficient form of authentication. Similarly, as is common in many standards, the threat model is defined only in terms of specific (informal) attack types, such as "replay attack."

We became involved in evaluating ISO/IEC 9798 in 2010 with the Cryptography Research and Evaluation Committee set up by the Japanese government. We formally analyzed the 2010 versions of the protocols specified in parts 1–4 of ISO/IEC 9798, using the Scyther tool.2 For the threat model, we used the established Dolev-Yao model, in which the attacker has full control over the network but cannot break the cryptographic primitives. We evaluated the protocols with respect to a subset of previously defined authentication properties.10 To our surprise, we found that the standard still contained several weaknesses that had been previously reported in academic literature. Moreover, we found new weaknesses.

We provide one illustrative attack, called a role-mixup attack, in which an agent's assumptions on another agent's role are wrong. The two data agreement properties (see the sidebar) require that when Alice finishes her role with (apparently) Bob, Alice and Bob not only agree on the exchanged data, but Alice can also be sure that Bob was performing the intended role. Role-mixup attacks violate agreement properties. Figure 2 shows an example of a role-mixup attack on the following 2009 version of the two-pass mutual authentication protocol using a cryptographic check function:

A → B: TNA || Text2 || fKAB(TNA || IB || Text1)
B → A: TNB || Text4 || fKAB(TNB || IA || Text3)
Agents perform actions such as sending and receiving messages, resulting in message transmissions (horizontal arrows). Actions are executed in threads (vertical
lines). The box at the top of each thread denotes the parameters involved in the thread's creation. The crossed-out hexagon denotes that the claimed security property is violated.

In this attack, the adversary uses a message from Bob in role B (thread 2) to trick Alice in role B (thread 3) into thinking that Bob is executing role A and is trying to initiate a session with her. However, Bob (thread 2) is replying to a message from Alice in role A (thread 1) and is executing role B. The adversary thereby tricks Alice into thinking that Bob is in a different state than he actually is. In addition, when a protocol implementation uses the optional text fields Text1 and Text3, the role-mixup attack also violates the agreement property with respect to these fields: Alice will end the protocol believing that the optional field data she receives from Bob was intended as Text1, whereas Bob actually sent this data in the Text3 field. Depending on how these fields are used, this could be a serious security problem. For example, consider a deployment scenario in which the optional text fields represent numbers. Let the first message be used for a transaction request, where Text1 represents the amount of money to be transferred. Assume the second message is used for confirmation, where Text3 corresponds to the transaction number. In this case, the adversary can reuse a response message, which contains a transaction number N, to insert a seemingly valid transaction request for the amount N.
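The structural similarity the adversary exploits can be demonstrated directly. In the following sketch, HMAC stands in for the check function fK, and the byte encodings are our own simplification; both protocol messages reduce to the same shape, so Bob's response to Alice also parses as an initiation addressed to Alice.

import hmac, hashlib

K_AB = b"shared-key-Alice-Bob"          # hypothetical shared key

def f(key, tn, identity, text):
    # HMAC as a stand-in for the standard's cryptographic check function.
    return hmac.new(key, b"||".join([tn, identity, text]),
                    hashlib.sha256).digest()

def protocol_message(tn, outer_text, peer_id, inner_text):
    # Both messages of the two-pass protocol share this single shape:
    #   TN || Text || f_K(TN || I_peer || Text')
    return (tn, outer_text, f(K_AB, tn, peer_id, inner_text))

# Bob's response to Alice, where b"4711" is meant as Text3
# (a transaction number):
response = protocol_message(b"TN_B", b"Text4", b"I_Alice", b"4711")

# A first message from Bob to Alice carries an identical check value,
# with b"4711" now read as Text1 (an amount to transfer):
initiation = protocol_message(b"TN_B", b"Text2", b"I_Alice", b"4711")

# Nothing cryptographically binds the message to a role or to the meaning
# of its text field, so the replayed response verifies as a request.
print(response[2] == initiation[2])     # True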
Note that exploiting these attacks, as well as the other attacks we found, does not require "breaking" cryptography. Rather, the adversary exploits similarities among messages as well as agents' willingness to engage in the protocol. We analyzed the shortcomings in the protocols' design and proposed and formally verified repairs.11 Our repairs address all the known problems. Based on our analysis, the ISO/IEC working group responsible for the ISO/IEC 9798 standard released an updated version incorporating our proposed protocol fixes in 2012.

We believe that the approach we have taken to analyze and provably repair the ISO/IEC 9798 standard can play an important role in future standardization efforts. Our approach supports standardization committees with both falsification (finding errors in the early phases) and verification (providing objective and verifiable security guarantees during end phases).
Discussion and Recommendations

Our three case studies suggest a trend toward an improved standardization process. The WiMAX study provides a cautionary tale on what happens when threat models and security goals are not included in the standard. In this case, the lack of these models created a situation in which some protocols could be declared neither secure nor insecure, and simple security flaws were not caught until late in the standardization process, requiring time-consuming, expensive amendments. The EAP case study indicates that security protocols are increasingly considered in the context of a threat model and are designed to satisfy specific security claims. However, the threat models and security claims tend to be specified informally, making it hard to compare protocol proposals and decide whether a protocol is suitable for a given purpose. The ISO/IEC 9798 case study demonstrates that a standard can provide systematic threat models and precise security properties and that we can perform formal verification. It also shows that formal methods are slowly starting to affect standardization bodies.11–13

We expect this trend to continue as governments and other organizations increasingly push for the use of formal methods in developing and evaluating critical standards. For example, in 2007, ISO/IEC JTC 1/SC 27 (IT Security Techniques) started the "verification of cryptographic protocols" project, which involves developing a standard (ISO/IEC 29128) for certifying cryptographic protocol designs in which the highest evaluation levels require using formal, machine-checked correctness proofs.14 The four cornerstones of the ISO/IEC 29128 certification process are the requirements that a security protocol document must contain a protocol specification, a threat model or adversary model, security properties, and self-assessment evidence.15 The specifics for these requirements depend on what protocol assurance level (PAL) is sought. At the lowest assurance level, PAL1, informal descriptions might be given for the protocol specification, adversary model, and security properties. The self-assessment can be conducted with informal arguments (PAL1) or mathematical "paper and pencil" proofs (PAL2) demonstrating that the security
properties hold with respect to the adversary model. The higher levels require formal descriptions, specific to the automated tools employed to obtain the self-assessment evidence. Unsurprisingly, we think that security protocol designs that satisfy these requirements would make for much-improved security protocol standards.

The current security protocol standardization process is still far from ideal, and it will not change overnight. We must bridge several gaps before formal verification becomes a standard procedure. Some of these gaps are due to cultural and technical language differences between network engineers, cryptographers, and other security researchers. Others are in our own backyard and concern the automated tools employed in the verification process. For widespread industrial use, these tools must be robust and well documented, and they must go beyond current research prototypes. Moreover, the tools themselves must ultimately be certified to be correct.
Much work remains before engineers are as comfortable specifying security properties and threat models as they are specifying functional requirements. Across domains, both security properties and threat models tend to be formulated at different abstraction levels and from different perspectives. Addressing this requires research leading to a common framework. Ideally, we need a standardized set of unambiguous security properties and threat models that other standards can refer to. Once standards add these to their functional specifications, we will have the foundation for evaluating standards' security merits and, subsequently, for comparing different proposals, possibly using tool support.2,16,17

References
1. D. Basin, S. Mödersheim, and L. Viganò, "OFMC: A Symbolic Model Checker for Security Protocols," Int'l J. Information Security, vol. 4, no. 3, 2005, pp. 181–208.
2. C. Cremers, "The Scyther Tool: Verification, Falsification, and Analysis of Security Protocols," Proc. 20th Int'l Conf. Computer Aided Verification, LNCS 5123, 2008, pp. 414–418; www.cs.ox.ac.uk/people/cas.cremers/scyther.
3. S. Meier, C. Cremers, and D. Basin, "Efficient Construction of Machine-Checked Symbolic Protocol Security Proofs," J. Computer Security, vol. 21, no. 1, 2013, pp. 41–87.
4. B. Schmidt et al., "Automated Analysis of Diffie-Hellman Protocols and Advanced Security Properties," Proc. 25th IEEE Computer Security Foundations Symp. (CSF 12), 2012, pp. 78–94.
5. D. Johnston and J. Walker, "Overview of IEEE 802.16 Security," IEEE Security & Privacy, vol. 2, no. 3, 2004, pp. 40–48.
Previous Page | Contents | Zoom in | Zoom out | Front Cover | Search Issue | Next Page
M q M q
M q
MqM q THE WORLD’S NEWSSTAND®
SECURITY& PRIVACY
IEEE
Previous Page | Contents | Zoom in | Zoom out | Front Cover | Search Issue | Next Page
M q M q
M q
MqM q THE WORLD’S NEWSSTAND®
6. B. Aboba, "Summary of the IEEE 802.16e D8 Security Review," Sept. 2005; www.ieee802.org/16/tge/contrib/C80216e-05_373.pdf.
7. S. Xu and C.-T. Huang, "Attacks on PKM Protocols of IEEE 802.16 and Its Later Versions," Proc. 3rd Int'l Symp. Wireless Communication Systems, 2006, pp. 185–189.
8. S. Andova et al., "A Framework for Compositional Verification of Security Protocols," Information and Computation, Feb. 2008, pp. 425–459.
9. D. Dolev and A. Yao, "On the Security of Public Key Protocols," IEEE Trans. Information Theory, vol. 29, no. 2, 1983, pp. 198–208.
10. G. Lowe, "A Hierarchy of Authentication Specifications," Proc. 10th IEEE Computer Security Foundations Workshop (CSFW 97), 1997, pp. 31–44.
11. D. Basin, C. Cremers, and S. Meier, "Provably Repairing the ISO/IEC 9798 Standard for Entity Authentication," Proc. 1st Int'l Conf. Principles of Security and Trust (POST 12), LNCS 7215, P. Degano and J.D. Guttman, eds., 2012, pp. 129–148.
12. C. Meadows, "Analysis of the Internet Key Exchange Protocol Using the NRL Protocol Analyzer," Proc. IEEE Symp. Security and Privacy, 1999, pp. 216–231.
13. C. Meadows, P.F. Syverson, and I. Cervesato, "Formal Specification and Analysis of the Group Domain of Interpretation Protocol Using NPATRL and the NRL Protocol Analyzer," J. Computer Security, vol. 12, no. 6, 2004, pp. 893–931.
14. S. Matsuo et al., "How to Evaluate the Security of Real-Life Cryptographic Protocols? The Cases of ISO/IEC 29128 and CRYPTREC," Proc. 14th Int'l Conf. Financial Cryptography and Data Security, LNCS 6054, 2010, pp. 182–194.
15. ISO/IEC 29128: Information Technology—Security Techniques—Verification of Cryptographic Protocols, Int'l Organization for Standardization, 2011.
16. B. Blanchet, "An Efficient Cryptographic Protocol Verifier Based on Prolog Rules," Proc. 14th IEEE Computer Security Foundations Workshop (CSFW 01), 2001, pp. 82–96.
17. S. Meier et al., "The TAMARIN Prover for the Symbolic Analysis of Security Protocols," Proc. 25th Int'l Conf. Computer Aided Verification (CAV 13), LNCS 8044, 2013, pp. 696–701.

David Basin is a full professor and the Information Security Chair in the Department of Computer Science at ETH Zurich. His research focuses on information security, in particular, methods and tools for modeling, building, and validating secure and reliable systems. Basin received a Habilitation in computer science from the University of Saarbrücken. Contact him at basin@inf.ethz.ch.

Cas Cremers is an associate professor in the Department of Computer Science at the University of Oxford. His research focuses on information security and applied cryptography, including the development of automated analysis tools. Cremers received a PhD in computer science from Eindhoven University of Technology. Contact him at cas.cremers@cs.ox.ac.uk.

Kunihiko Miyazaki is a senior researcher in the Research and Development Group at Hitachi. His research interests include information security, cryptography, and formal methods. Miyazaki received a PhD in information and communication engineering from the University of Tokyo. He's a member of the Information Processing Society of Japan and the Institute of Electronics, Information, and Communication Engineers. Contact him at kunihiko.miyazaki.zt@hitachi.com.
Saša Radomirović is a senior scientist in the Department of Computer Science at ETH Zurich. His research focuses on information security, in particular, modeling, analysis, and verification of security and privacy properties with algebraic and combinatorial methods. Radomirović received a PhD in mathematics from Rutgers University. Contact him at sasa.radomirovic@inf.ethz.ch.
Dai Watanabe is a senior researcher in the Research and Development Group at Hitachi. His interests include information security, cryptography, and cryptographic protocols. Watanabe received a doctorate in engineering from the Tokyo University of Science. He's a member of the Information Processing Society of Japan and the Institute of Electronics, Information, and Communication Engineers. Contact him at dai.watanabe.td@hitachi.com.
Blended Identity: Pervasive IdM for Continuous Authentication

Patricia Arias-Cabarcos and Florina Almenárez | University Carlos III of Madrid
Rubén Trapero | Technische Universität Darmstadt
Daniel Díaz-Sánchez and Andrés Marín | University Carlos III of Madrid
A proper identity management approach is necessary for pervasive computing to be invisible to users. Federated identity management is key to achieving efficient identity blending and natural integration in the physical and online layers where users, devices, and services are present.
Adoption of new computing paradigms is usually hindered by the security challenges they bring. In the pervasive computing field, an element of paramount importance still requires deeper research—identity management (IdM)—the management of individual principals and their authentication, authorization, and privileges within or across system boundaries. Mark Weiser's vision of a world in which technology becomes invisible to support people in their everyday lives is currently unrealizable without a continuous authentication system.1 The goal is to create ambient intelligence where network devices embedded in the environment, from clothing to cars, homes, and the human body, provide unobtrusive connectivity and services all the time, thus improving the human experience and quality of life without explicit awareness of the underlying communications and computing technologies.

In this ubiquitous computing world, a change of context—for instance, users shifting location or new people appearing in the proximity—might involve new devices, services, and interaction possibilities. To gain access to pervasive services, users must often authenticate and expose different forms of their identity to the various services, which worsens user experience and conflicts with the goal of invisibility. IdM technologies have evolved to cope with the increasing number of services that users might access; federated identity management (FIM) is the latest approach, wherein a common set of policies, practices, and
protocols links users' electronic identity and attributes stored across multiple IdM systems. FIM's ultimate goal is to enable users of one domain to securely access data or systems of another domain seamlessly—single sign-on (SSO) being the most popular functionality. However, current federation technologies rely on preconfigured static agreements, which aren't well-suited for the open environments in pervasive computing scenarios. These limitations negatively impact scalability and flexibility. A new identity model for open environments is necessary. Thus, our contribution includes:

■ a definition of blended identity, which is the basis for applying FIM in open environments;
■ a prototype risk-based architecture that extends and improves FIM to allow the creation of dynamic federations; and
■ design and validation of the risk assessment methodology that constitutes the main pillar of our proposed architecture.

Our model enables continuous authentication so users can securely access services anytime and anywhere, with minimal interaction. The model is thus aligned with pervasive computing's basic goals: invisibility, flexibility, scalability, and personalization.
The Challenges of Pervasive IdM

When the first computers appeared, password-based
authentication was the core mechanism for IdM. This mechanism worked fairly well at that time, owing largely to how little data it actually needed to protect. However, with the advent of the Internet, the explosion of personal devices and online applications, and the increase in transactions, IdM became far more complex. Today, we're asked to prove our identities every time we board a plane; check in to a hotel; make a credit card purchase; and log on to a computer, smartphone, smart TV, or website. Therefore, users face a mental burden, known as password fatigue, which frequently leads them to devise strategies that degrade the security of their protected information. For instance, users might employ the "poor man's SSO" strategy, reusing the same passwords.

In the past decade, FIM frameworks and protocols, such as Security Assertion Markup Language (SAML), WS-Federation, OAuth, and OpenID, came onto the scene to ameliorate the problems related to password-based authentication and allow identity portability across disparate domains.2 Successful implementations have been deployed in the Web domain, especially in the education and research fields and the social Internet arena. Despite this advance in IdM, important open issues remain. Two influential works analyze IdM problems and formulate identity's seven laws3 and flaws.4 Both studies point out that two factors are indispensable: security aspects, such as privacy, minimal disclosure, and mutual authentication, and effective human integration, such as natural interaction and easy interfaces. Furthermore, they highlight trust establishment as key for scalability. Although FIM protocols can cover security aspects, usability and trust challenges remain unsolved.

Whereas the research community has addressed IdM in pervasive computing, there's still little work on applying FIM to it. Some proposals introduce mechanisms for SSO and seamless access control, but they're usually limited to a particular scenario or set of devices.5,6 We need to evolve IdM one step further; the merging and usage of well-known FIM protocols seems a natural approach.
Blended Identity

Identity must be reformulated for FIM's application in pervasive computing. The seven laws and flaws fail to address the notion of convergence between the physical and online planes. This concern, coupled with proper handling of human interaction and trust management, leads to the concept of blended identity.

Identity has both a digital and a physical component. Some entities might have only an online or physical representation, whereas others might have a presence in both planes. IdM requires relationships
not only between entities in the same planes but also across them. Users move around the pervasive world carrying various personal devices that comprise a personal network (PN). This dynamic network changes when users are in motion, for instance, going from a smart home to a smart office. Devices join and leave, services appear and disappear, and access control must adapt to maintain the user perception of being continuously and automatically authenticated. To accomplish this, federations must be established to create trust relationships between devices and services to securely exchange identity data. For example, when users log in to their smartphone, authentication is seamlessly transferred to the rest of their PN devices. When they move to the office, the smartphone's authentication isn't enough to access office devices, such as a printer or corporate Web services. Thus, another identity source—in this case, the online corporate database—must provide the users' job identity and extend and establish a federation with their PN for both physical and online access. All this should happen in the background, beyond users' explicit awareness.

Hence, there are several coexisting identity sources, called identity providers (IdPs), and several services requiring identity data, which service providers (SPs) offer. Roles can shift, and both physical devices and online providers can offer services. A universal IdP can't be assumed because SPs require different identity assurances and attributes in different contexts. Furthermore, in pervasive scenarios, it's unrealistic to assume that interactions always take place between known entities or that an administrator has preconfigured the required trust relationships among every party to guarantee secure operations. Pervasive environments are dynamic, multiprovider, and multiservice. Preconfiguration isn't feasible because it simply doesn't scale.

Current FIM protocols suffer from limitations that make the described level of identity blending unattainable.2 Nowadays, it's possible to achieve SSO only across online services in closed domains with a previously established trust relationship. In addition to these FIM protocols' lack of flexibility, they neglect the remaining possible relationships: SSO across devices in a PN, PN federations with other PNs or smart environments, SSO from physical devices to online services, and SSO from online services to physical devices.

To address these concerns, blended identity should efficiently combine the physical and digital planes to achieve IdM for pervasive computing. Users should authenticate automatically and continuously to the smart services and devices, whether online or in the digital or physical plane, and the environment should adapt and personalize accordingly.
Related Work in Pervasive Computing Authentication
The most salient work in authentication in pervasive computing environments is Frank Stajano's Pico.1 This proposed design is based on a hardware token that relieves users from having to remember passwords and PINs. Unlike other works, it applies not only to Web authentication but to all other contexts in which users must remember passwords, providing continuous authentication. However, users must carry a new dedicated device, which unlocks only when their other devices are within a certain proximity to guarantee that it hasn't been stolen. But what if users forget one of their devices? Single sign-on (SSO) also uses a new token-based protocol instead of leveraging existing working standards.

Similar to this work is Enrique Soriano and his colleagues' security architecture, which allows SSO between user devices and services using a set of software agents to implement token-based continuous authentication.2 In this case, a general-purpose device acts as the center of authentication, avoiding the need to carry yet another device. However, like Pico, it doesn't build on standard protocols.

Other researchers have explored related concepts, such as progressive and implicit authentication.3 Progressive authentication constantly collects cues about users to determine a level of confidence in their authenticity. Based on this confidence level and each application's degree of protection, the system determines whether authentication is required. Implicit authentication analyzes behavioral evidence to determine if authenticated users are still using a device or if the session should be closed. These approaches aim to reduce the number of times users must authenticate to a particular device, but they don't address SSO. Thus, they're complementary to our work.

References
1. F. Stajano, "Pico: No More Passwords!," Security Protocols XIX, LNCS 7114, Springer, 2011, pp. 49–81.
2. E. Soriano, F.J. Ballesteros, and G. Guardiola, "SHAD: A Human-Centered Security Architecture for the Plan B Operating System," Proc. 5th IEEE Int'l Conf. Pervasive Computing and Comm., 2007, pp. 272–282.
3. O. Riva et al., "Progressive Authentication: Deciding When to Authenticate on Mobile Phones," Proc. 21st USENIX Security Symp., 2012; https://www.usenix.org/conference/usenixsecurity12/technical-sessions/presentation/riva.
Blended identity requires a natural interface and dynamic trust relationships. An easy-to-use interface should choose the best IdP automatically and authenticate users anytime and anywhere in continuously changing contexts. In addition, relationships between SPs and IdPs shouldn’t be based only on preconfiguration. Establishing new trust connections based on risk assessment should be possible.
A Continuous Authentication System

Alternative proposals to achieve more streamlined authentication processes in pervasive computing environments are flawed (see the "Related Work in Pervasive Computing Authentication" sidebar). Our proposed solution has three big advantages. First, unlike several proposed models, it doesn't require users to carry a new device. Second, it leverages current FIM protocols for SSO, which are properly integrated and extended. Thus, it's easier to deploy than other solutions defining new protocols, and it's compatible with existing providers. Finally, when interacting parties are unknown to each other, a new trust relationship can be established based on risk assessment, providing greater flexibility and scalability.

Our model integrates different authentication sources and identity data naturally. Unlike other work, it dynamically establishes federations between previously unknown IdPs and SPs. This powerful feature
has a potential positive impact on business ecosystems, because instant virtual enterprises could be created at any moment and share user data to offer personalized services. Users will be constantly authenticated across these services, enjoying a real ubiquitous experience.
Architecture

Figure 1 shows the architecture for implementing continuous authentication. Because it's based on FIM standards, it provides security services—that is, authorization, integrity, and confidentiality—and enhanced services and privacy mechanisms, such as SSO, single logout, account linkage, and transient and persistent pseudonym identifiers.2 Furthermore, it meets the additional interface design and dynamic trust establishment requirements to realize the blended identity vision.

The architecture's main element is the users' primary device—any device that includes the modules that act as IdP or IdP proxy. When operating as an IdP, the device directly provides user identity data that doesn't require third-party attestation, for example, to authenticate against other devices. When operating as IdP proxy, it selects and reroutes authentication requests to the most suitable IdP and performs continuous SSO according to the operation flow we describe later. These requests can be processed in any FIM protocol through the FIM connectors module in Figure 1.
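As a rough illustration of this proxy-plus-connectors design, consider the following sketch; the class and method names are our own invention, not the prototype's API.

from abc import ABC, abstractmethod

class FimConnector(ABC):
    """Speaks one concrete federation protocol (SAML, OAuth, OpenID, ...)."""
    @abstractmethod
    def handles(self, request) -> bool: ...
    @abstractmethod
    def answer(self, request, idp): ...

class SamlConnector(FimConnector):
    def handles(self, request):
        return request.get("protocol") == "SAML"
    def answer(self, request, idp):
        # A real connector would build and sign a SAML assertion here.
        return {"assertion": f"issued-by:{idp}", "audience": request["sp"]}

class PrimaryDevice:
    def __init__(self, connectors, idps):
        self.connectors, self.idps = connectors, idps

    def on_auth_request(self, request):
        # IdP-proxy role: pick a connector for the protocol in use and
        # reroute the request to the most suitable IdP (the selection
        # policy is described in the operation flow below).
        for connector in self.connectors:
            if connector.handles(request):
                return connector.answer(request, self.idps[0])
        raise ValueError("no connector for this protocol")

device = PrimaryDevice([SamlConnector()], idps=["corporate-idp"])
print(device.on_auth_request({"protocol": "SAML", "sp": "printer.office"}))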
Figure 1. Architecture for blended identity implementation. Users carry a primary device with software modules that detect authentication requests coming from any kind of device and service, determine the best identity source to authenticate users, and create new trust relationships based on risk assessment.
To achieve this level of intelligence and automation, the primary device stores users' IdPs and the passwords, credentials, or tokens in a tamper-proof place (see the identity data layer in Figure 1). To unlock this knowledge and let the device authenticate users on their behalf, a biometric proof, such as fingerprint recognition, is given. This mechanism constitutes a simple interface that is always the same, is natural and easy, and requires only light user interaction. Thus, the proposed design is based on the common three-factor identity paradigm, which defines identity as being composed of something you have, something you are, and something you know. Our prototype blends these three features to construct a natural interaction process wherein the primary device represents something you have; biometrics provide something you are; and something you know, such as a password, is transferred to something your smart device knows to improve usability and reduce your mental overhead. This part of the architecture meets the first requirement for blended identity—a natural interface.

Another key software module is the dynamic FIM manager, which includes the trust manager and the risk manager. The trust manager gathers external information and reputation data (details of operation can be found in "fedTV: Personal Networks Federation for IdM in Mobile DTV"7). The risk manager computes the
risk of collaborating with an unknown provider. Both trust and risk values are considered to decide whether to establish a relationship. This part of the architecture meets the second requirement for blended identity—dynamic trust relationships.
Operation Flow

Based on this architecture, our continuous authentication operation flow has four steps. First, because an SP's pervasive service requires user identity data for access control, it sends an authentication request to the users' primary device. Second, the primary device executes an identity-matching algorithm to determine the most suitable IdP to answer the authentication request based on local policies, and then reroutes it. Third, the selected IdP or the device decides whether to authenticate users against the SP. If the SP is known and a trust relationship exists, SSO messages are exchanged following the FIM protocol in use, and users are authenticated. If the SP is unknown, the IdP gathers publicly available information about it, including metadata, policies, and reputation; assesses risk; and decides on the fly whether to federate and share identity data. The reputation protocol is designed to avoid attacks from malicious nodes8; this has been investigated in related work.9
[Figure 2. Quantification process for the security and privacy metric. (a) Part of the taxonomy that identifies security and privacy risk criteria: confidentiality, integrity, authentication, and nonrepudiation (each at transport level, TL, and message level, ML), plus accountability, availability, and privacy. (b) Example metric definition for the authentication-at-message-level (AUTHML) dimension: the metric identifies the mechanisms used for authentication and assigns an assurance level depending on the algorithm features. High: RSA/DSA with key size ≥ 3072 bits, or ECDSA with key size ≥ 256. Medium: RSA/DSA with key size ≥ 2048 and < 3072 bits, or ECDSA with key size ≥ 224 and < 256. Low: RSA/DSA with key size < 2048 bits, or ECDSA with key size < 224. None: no algorithms are provided for authentication. The qualitative scale {High, Medium, Low, None} maps to the quantitative scale {3, 2, 1, 0}. (c) A provider's SAML metadata containing the information used to obtain the AUTHML metric.]
Reputation and risk data are combined using fuzzy logic based on "E-commerce Trust Metrics and Models."10 The complexity of the relationship between these two factors is tackled by linguistic labels that assign quantitative values from thresholds, allowing decision making about cooperation using conditional rules. We call this process of establishing a new trust relationship dynamic federation. If the threshold is met, SSO messages are exchanged following the FIM protocol in use, and users are authenticated. Finally, authenticated users are granted seamless access to the pervasive service.

According to SSO standard protocols, an active IdP session is required to transparently notify the requesting SPs of the authentication state; otherwise, users are first queried for their credentials. This proposed architecture requires an active session only on the primary
device, for instance, unlocked with a fingerprint or biometric proof. Whenever required, the device authenticates to the rest of the IdPs on behalf of users by sending their credentials.
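The following sketch condenses the four steps into code. The threshold, helper names, and decision rule are invented placeholders for the prototype's local policies, reputation protocol, and fuzzy-logic combination described above.

def continuous_sso(request, federations, select_idp, fetch_public_info,
                   assess, threshold=0.6):
    sp = request["sp"]
    idp = select_idp(request)                  # step 2: identity matching
    if (idp, sp) not in federations:           # step 3: unknown SP
        info = fetch_public_info(sp)           # metadata, policies, reputation
        if assess(info) < threshold:           # combined trust/risk decision
            return {"granted": False, "reason": "risk too high"}
        federations.add((idp, sp))             # dynamic federation established
    # Known or newly federated SP: exchange SSO messages following the
    # FIM protocol in use, and grant seamless access (step 4).
    return {"granted": True, "idp": idp}

federations = {("device-idp", "printer.office")}
print(continuous_sso({"sp": "coffee.kiosk"}, federations,
                     select_idp=lambda req: "device-idp",
                     fetch_public_info=lambda sp: {"reputation": 0.8},
                     assess=lambda info: info["reputation"]))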
Risk Assessment Methodology

Deciding whether to federate an SP with an IdP isn't a trivial task. Risk assessment entails identifying, evaluating, and estimating quantitative or qualitative risk levels related to a concrete situation; comparing these levels against benchmarks; and determining an acceptable risk level. Decision-making techniques assist in this procedure; we propose a methodology that provides a meaningful numerical model based on multicriteria decision making, which uses multidimensional risk-based inputs to evaluate the federation's suitability.
We use a methodology based on multiattribute utility theory (MAUT), which compiles a list of n aspects relevant for risk evaluation (i = 1, …, n); a partial score gi that indicates how good a provider A under evaluation is for each aspect i, according to a measurement scale Si in the set of real numbers; and each criterion's specific importance in the context of the provider (Wi).11 Index i numbers scores, weights, and scales ranging from risk aspect i = 1 to risk aspect i = n.

We derived the list of aspects in our risk assessment methodology directly from a taxonomy tailored for FIM that was created by analyzing FIM specifications and the public survey of Research and Education Federations (https://refeds.terena.org/index.php/Federations).12 This taxonomy follows a hierarchical approach with five high-level categories—security and privacy, knowledge, interoperability, service-specific risks, and historical interactions—each with subcriteria in the lower levels of the taxonomy.

To assign the partial scores gi(A) for a provider A, we defined a set of metrics related to every taxonomic category. In this article, we focus on assurance metrics that are the inverse of the probability of incurring risk—that is, the higher the assurance, the lesser the risk, and vice versa. The process of defining the applicable metrics depends on the MAUT theory, which requires numerical values between 0 and 1. However, the assurance scale format is mostly qualitative: no, low, medium, and high assurance. To solve this issue, we mapped each qualitative value to a quantitative one (0, 1, 2, or 3), which we then normalized. The final result is a vector that represents the partial normalized scores for each subcriterion ([g1(A), …, gn(A)]), which we call the score vector (SV). Figure 2 exemplifies the quantification process for the security and privacy metric, including the mapping from the qualitative to quantitative scale. We obtained metric values based on the strength of the cryptographic algorithms in place, according to the National Institute of Standards and Technology's recommendations.13 Figure 2c represents the source used to quantify the security and privacy metric that, in this example, is taken from a provider's SAML metadata.

The next step is to determine each criterion's importance with respect to the others. For this purpose, we used weights assigned by each provider according to its preference, expressed as a weight vector (WV). With this information, we obtained each provider's acceptable global risk score (Agg(A)) by aggregating all the weighted quantified criteria. Box C in Figure 3a shows this process. However, the problem isn't totally solved yet. Because the obtained result is a balanced combination of the criteria, meaningful differences in partial scores might lead to erroneous assessments owing to
compensation effects, which might hide relevant information in the final value. There's no guarantee that minimum requirements are satisfied with this initial approach. To solve these issues, we designed a weighting mechanism based on reference vectors (RVs) that contain each criterion's minimum required values. An RV's specific content varies by provider and depends on local risk policies. Following this vector notation, box A in Figure 3a shows how we obtain the WV using the RV as input and capturing each criterion's relative importance. For the sake of completeness, we also defined the assurance compliance index (ACI), depicted by box B in Figure 3a. The ACI indicates the degree of compliance with the minimum requirements; an ACI of 1 indicates that all requirements are fulfilled. Any other value indicates the degree to which the requirements are fulfilled; in this case, the ACI is computed as the number of metrics in a provider's SV that are greater than or equal to the minimum required value in the RV (denoted as |∪SV|) over the total number of metrics n. Taking the ACI into consideration, we also provide the constrained aggregated assurance value CAgg(A) (see box D in Figure 3a), which discards providers that don't cover all the minimum requirements by assigning them a CAgg(A) equal to 0. However, if requirements are fulfilled, CAgg(A) is equal to the global risk computed after applying the weighted summation. This final CAgg(A) is the value that determines whether to accept a dynamic federation establishment.
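To make the four steps concrete, here is a minimal Python sketch of the model as defined above; it is our illustration, not the authors' implementation, and the example vectors anticipate the SP A and SP B values used later in the validation.

```python
def federation_risk(sv, rv):
    """Aggregate a provider's score vector (SV) against a reference vector (RV).

    Follows boxes A-D of Figure 3a: weight calculation, assurance compliance
    index (ACI), weighted aggregation, and constrained aggregation.
    """
    n = len(sv)
    # Box A: weights derived from the reference vector, so criteria with
    # stricter minimum requirements weigh more (WV(i) = RV(i) / sum(RV)).
    total = sum(rv)
    wv = [r / total for r in rv]
    # Box B: fraction of metrics meeting their minimum; 1 means compliant.
    aci = sum(1 for s, r in zip(sv, rv) if s >= r) / n
    # Box C: weighted aggregation, Agg = WV . SV^T.
    agg = sum(w * s for w, s in zip(wv, sv))
    # Box D: constrained aggregation discards noncompliant providers.
    cagg = agg if aci == 1 else 0.0
    return agg, aci, cagg

# Security and privacy subcriteria, normalized to [0, 1]: confidentiality,
# integrity, authentication, nonrepudiation, availability, accountability,
# and privacy (the SP A and SP B vectors from the validation use case).
rv = [1/3, 1/3, 1/3, 2/3, 0, 2/3, 2/3]      # IdP's minimum requirements
sp_a = [3/3, 0, 0, 2/3, 1/3, 3/3, 3/3]
sp_b = [1/3, 2/3, 1/3, 3/3, 1/3, 2/3, 2/3]

print(federation_risk(sp_a, rv))  # ~(0.70, 0.71, 0.0): rejected
print(federation_risk(sp_b, rv))  # ~(0.67, 1.00, 0.67): accepted
```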
Validation
To validate the presented ideas, we used a modular approach, implementing and testing the different parts of the architecture separately, and then integrated them into a fully working prototype. In our use case, we show the validation of the mathematical risk model underlying dynamic identity federation.
Risk Evaluation
To test the risk model, we used SAML metadata documents in public repositories. We selected two providers, SP A and SP B, and inferred relevant risk-related features from their metadata. For simplicity, we present only the results for the security and privacy category's risk aggregation, but other criteria would be aggregated the same way. Figure 3 shows security and privacy's risk aggregation for SP A and SP B both graphically and mathematically. Each subcriterion was evaluated against a four-level assurance scale ranging from 0 to 3 and normalized; for example, a value of 2/3 corresponds to medium assurance. Figure 3c shows each SP's score levels, and Figure 3d shows important differences between each SP's security dimension values.
Figure 3. Risk-driven provider selection. (a) The risk-driven provider selection procedure: box A computes the weight vector from local risk policies, WV(i) = RV(i)/∑j RV(j); box B computes the assurance compliance index, ACI = 1 if SV(i) ≥ RV(i) for all i, and |∪SV|/n otherwise; box C aggregates the weighted criteria, Agg = WV ∙ SV^T; and box D applies the constrained aggregation, CAgg = Agg if ACI = 1 and CAgg = 0 if ACI ≠ 1. In this example, RV = [1/3, 1/3, 1/3, 2/3, 0, 2/3, 2/3] and WV = [1/9, 1/9, 1/9, 2/9, 0, 2/9, 2/9]. (b) The two pervasive service providers under selection, SP A and SP B, with their metadata documents. (c) The quantitative values for the score vectors associated with SP A and SP B over the security and privacy subcriteria (confidentiality, integrity, authentication, nonrepudiation, availability, accountability, and privacy): SP A = [3/3, 0, 0, 2/3, 1/3, 3/3, 3/3] and SP B = [1/3, 2/3, 1/3, 3/3, 1/3, 2/3, 2/3], both with mean 0.57, giving AggA = 0.7, AggB = 0.67, ACIA = 0.71, ACIB = 1, CAggA = 0, and CAggB = 0.67. (d) A graphic representation of the score vectors.
Assuming the evaluating entity—that is, the users' primary device acting as IdP—has the RV shown in Figure 3a, which leads to the associated WV, we can see that some dimensions have higher minimum assurance requirements than others. If the arithmetic mean is applied to aggregate the risk, the two providers would have the same final security assurance value, even though they have different profiles and SP A clearly doesn't fulfill the minimum requirements. This fact is easier to understand by comparing the RV in Figure 3a with the depiction of the providers' score vectors in the bar graph of Figure 3d. If we apply the proposed aggregation formula with the weights derived from the RV, SP A still has better assurance than SP B. The selection of the best SP is performed correctly only after using the ACI. Thus, this use case shows that the risk model fulfills its initial goal, providing a meaningful unique value that assists in automatic decision making.
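Spelled out with the Figure 3 vectors (our arithmetic, applying the formulas above):

\[
\mathrm{Agg}(A) = \tfrac{1}{9}(1) + \tfrac{1}{9}(0) + \tfrac{1}{9}(0) + \tfrac{2}{9}\big(\tfrac{2}{3}\big) + 0\big(\tfrac{1}{3}\big) + \tfrac{2}{9}(1) + \tfrac{2}{9}(1) = \tfrac{19}{27} \approx 0.70,
\]
\[
\mathrm{Agg}(B) = \tfrac{1}{9}\big(\tfrac{1}{3} + \tfrac{2}{3} + \tfrac{1}{3}\big) + \tfrac{2}{9}\big(1 + \tfrac{2}{3} + \tfrac{2}{3}\big) = \tfrac{18}{27} \approx 0.67.
\]

Both providers share the unweighted mean 0.57, and even the weighted scores favor SP A; but ACI(A) = 5/7 ≈ 0.71 because integrity and authentication fall below their minima, whereas ACI(B) = 1, so CAgg(A) = 0, CAgg(B) = 0.67, and SP B is correctly selected.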
Implementation Details
We developed a proof-of-concept IdM infrastructure
based on open source software and worked with a SAML-based SSO scenario containing users and several providers. This infrastructure has been extended to implement the logic for dynamic identity federation. This logic modifies the original SAML flow, which directly rejects requests from unknown providers, to allow real-time evaluation and decision making. The users' primary device, an Android smartphone, was developed to comply with the SAML profile for mobile clients. For a richer set of IdPs, we programmed plugins for well-known online providers, such as Facebook. Thus, the primary device acts as both IdP and IdP proxy, letting users reuse their accounts.
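A conceptual sketch of where this decision logic hooks into the flow; the function names are hypothetical, extract_scores stands in for the SAML-metadata parsing described earlier, and federation_risk is the function from the earlier sketch.

```python
def extract_scores(metadata):
    """Stub: derive the normalized score vector from a provider's SAML
    metadata (for example, from the strength of its crypto algorithms)."""
    return metadata["sv"]  # hypothetical pre-extracted scores

def on_authn_request(metadata, rv, trusted_providers):
    """Instead of rejecting an unknown SP outright, evaluate it on the fly."""
    entity_id = metadata["entityID"]
    if entity_id in trusted_providers:
        return True                           # static federation in place
    sv = extract_scores(metadata)
    _, _, cagg = federation_risk(sv, rv)      # constrained aggregated value
    if cagg > 0:                              # all minimum requirements met
        trusted_providers.add(entity_id)      # establish dynamic federation
        return True
    return False                              # reject, as stock SAML would
```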
Pervasive computing requires a proper IdM approach so technology can actually transcend human consciousness. In this sense, FIM has great potential to achieve this goal and has been identified as a catalyst for the next Internet marketplace revolution.14 If realized, improved IdM can lower barriers for plug-and-play business-to-business,
business-to-consumer, and consumer-to-consumer integration, leading to highly dynamic online business ecosystems in which users have a seamless and personalized experience. Our proposal constitutes a new step toward better IdM in pervasive environments. So far, we've successfully evaluated the risk aggregation model and tested the feasibility of establishing federations based on one risk dimension, and we plan to implement the whole model including all the risk criteria. We also aim to conduct usability studies that involve real users as well as performance tests for measuring overhead.

Acknowledgments
We thank the anonymous reviewers for their help in improving this article. Financial support from the Spanish Ministry of Education through the program "Estancias de movilidad en el extranjero 'José Castillejo' para jóvenes doctores" (CAS14/00395) is gratefully acknowledged. The Spanish Ministry of Science and Innovation under projects CONSEQUENCE (TEC2010-20572-C02-01) and EMRISCO (TEC2013-47665-C4-4-R) partially supported this work.
References
1. M. Weiser, "The Computer for the 21st Century," Scientific Am., vol. 265, no. 3, 1991, pp. 94–104.
2. E. Maler and D. Reed, "The Venn of Identity: Options and Issues in Federated Identity Management," IEEE Security & Privacy, vol. 6, no. 2, 2008, pp. 16–23.
3. K. Cameron, "The Laws of Identity," 13 May 2005; www.identityblog.com/stories/2005/05/13/TheLawsOfIdentity.pdf.
4. R. Dhamija and L. Dusseault, "The Seven Flaws of Identity Management: Usability and Security Challenges," IEEE Security & Privacy, vol. 6, no. 2, 2008, pp. 24–29.
5. E. Soriano, F.J. Ballesteros, and G. Guardiola, "SHAD: A Human-Centered Security Architecture for the Plan B Operating System," Proc. 5th IEEE Int'l Conf. Pervasive Computing and Comm., 2007, pp. 272–282.
6. F. Stajano, "Pico: No More Passwords!," Security Protocols XIX, LNCS 7114, Springer, 2011, pp. 49–81.
7. F. Almenárez et al., "fedTV: Personal Networks Federation for IdM in Mobile DTV," IEEE Trans. Consumer Electronics, vol. 57, no. 2, 2011, pp. 499–506.
8. F. Almenárez et al., "Trust Management for Multimedia P2P Applications in Autonomic Networking," Ad Hoc Networks, vol. 9, no. 4, 2011, pp. 687–697.
9. Y. Sun, Z. Han, and K.J.R. Liu, "Defense of Trust Management Vulnerabilities in Distributed Networks," IEEE Communications Magazine, vol. 46, no. 2, 2008, pp. 112–119.
10. D.W. Manchala, "E-commerce Trust Metrics and Models," IEEE Internet Computing, vol. 4, no. 2, 2000, pp. 36–44.
11. R.L. Keeney and H. Raiffa, Decisions with Multiple Objectives: Preferences and Value Trade-Offs, Cambridge Univ. Press, 1993.
12. P. Arias et al., "A Metric-Based Approach to Assess Risk for 'On Cloud' Federated Identity Management," J. Network and Systems Management, vol. 20, no. 4, 2012, pp. 513–533.
13. T. Polk, K. McKay, and S. Chokhani, "Guidelines for the Selection, Configuration, and Use of Transport Layer Security (TLS) Implementations," special publication 800-52, revision 1, Nat'l Inst. Standards and Technology, 2014; http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-52r1.pdf.
14. W. Steigerwald, P. Scholta, and J. Abendroth, "Identity and Access Management for Networks and Services; Dynamic Federation Negotiation and Trust Management in IdM Systems," ETSI, 2011; www.etsi.org/deliver/etsi_gs/INS/001_099/004/01.01.01_60/gs_ins004v010101p.pdf.

Patricia Arias-Cabarcos is a researcher at the University
Carlos III of Madrid (UC3M). Her research interests include identity management, trust models, and risk assessment. Arias-Cabarcos received a PhD in telematics engineering from UC3M. Contact her at ariasp@it.uc3m.es.
Florina Almenárez is an associate professor at UC3M. Her research interests include trust and reputation management models, identity management, and security architectures in ubiquitous computing. Almenárez received a PhD in telematics engineering from UC3M. Contact her at florina@it.uc3m.es.
Rubén Trapero is a postdoctoral researcher at Technische Universität Darmstadt. His research interests include privacy, identity management, and service engineering. Trapero received a PhD in telecommunication engineering from Universidad Politécnica of Madrid. Contact him at rtrapero@cs.tu-darmstadt.de.
Daniel Díaz-Sánchez is an associate professor at UC3M. His research interests include distributed authentication, authorization, and content protection. Díaz-Sánchez received a PhD in telematics engineering from UC3M. Contact him at dds@it.uc3m.es.
Andrés Marín is an associate professor at UC3M. His research interests include ubiquitous computing: limited devices, trust, and security in next-generation networks. Marín received a PhD in telecommunication engineering from Universidad Politécnica of Madrid. Contact him at amarin@it.uc3m.es.
Selected CS articles and columns are also available for free at http://ComputingNow.computer.org.
Bad Parts: Are Our Manufacturing Systems at Risk of Silent Cyberattacks?
Hamilton Turner | Virginia Tech
Jules White | Vanderbilt University
Jaime A. Camelio and Christopher Williams | Virginia Tech
Brandon Amos | Carnegie Mellon University
Robert Parker | Virginia Tech Applied Research Corporation
Recent cyberattacks have highlighted the risk of catastrophic failures resulting from physical equipment operating outside designed tolerances. A related threat is cyberattacks that change the design and manufacturing of a machine’s part, such as an automobile brake component, so it no longer functions properly.
Every day, we rely on complex, safety-critical mechanical systems, such as next-generation composite aircraft, artificial heart valves, automated drug dispensary equipment, high-speed rail systems, and gas turbines. These systems use physical parts that are precisely engineered for balance, strength, weight, and safety. Malicious attacks on the manufacturing process can modify this precise engineering to change mechanical tolerances, causing problems ranging from excess shearing forces to increased humidity and creating significant safety or monetary issues. Recent high-profile announcements by the US federal government and companies such as Apple indicate huge interest in moving manufacturing processes back to the US.1,2 A key concern in current manufacturing approaches is the critical lack of manufacturing security, both in trusting components from untrusted sources3 and in securing the manufacturing cyberinfrastructure (www.manufacturing.gov/docs/DMDI_overview.pdf). In this article, we investigate a key cyber-physical security question: How vulnerable is our manufacturing infrastructure to undetected cyberattacks that purposely change the design and manufacturing of parts so that the finished products fail in the field? For instance, can attackers inject a design or manufacturing process change that goes undetected and causes a jet engine's turbine blade to fail under a load within its designed tolerances?
Manufacturing Errors
Achieving the precision needed to manufacture parts for safety-critical systems depends nearly completely on computer-driven design tools and control systems. Each part's performance expectations are carefully modeled and controlled during the manufacturing process. However, despite the level of scrutiny with which these parts are analyzed for quality, errors can and do happen. For example, the recent Boeing 787 Dreamliner had a battery design with numerous thermal runaway events, and although a solution was implemented, the root cause still hasn't been established.4 The Toyota Prius had a highly publicized acceleration issue that was never completely explained: a National Highway Traffic Safety Administration and NASA investigation concluded that the primary causes weren't electrical and were likely either user error or mechanical malfunction.5 Large, mission-critical manufacturing operations are challenging to complete correctly even without a hostile or malicious environment. Significant research has examined cybervulnerabilities in industrial control systems, such as SCADA controllers, that can be used to force physical systems to operate outside their intended safety tolerances. But, rather than forcing equipment outside designed tolerances, what if the safety-critical parts that equipment relies on were silently attacked and manufactured with physical characteristics that no longer matched their
designs? What if these modifications were undetectable to state-of-the-practice testing and quality control procedures? Many researchers have looked at potential flaws injected into computer hardware and software logic, but few have examined the potential for flaws in physical parts with no computational logic, such as a bolt in a braking assembly. We conducted a series of studies to answer the question about the cybersecurity of manufactured parts. Our research shows both cause for alarm and a variety of avenues to greatly enhance manufacturing system security. A significant threat to manufacturing processes is manufacturing personnel's lack of awareness that cyberattacks could affect them.

Figure 1. High-level overview of manufacturing showing the process from CAD to manufacturing quality control checks (initial CAD file → generate toolpaths → manufacture → quality control, with coworker feedback).
An Overview of Modern Manufacturing Processes
Our studies aimed to determine the following:

■ How large are the attack surfaces for both additive and subtractive manufacturing processes?
■ How likely are physical quality control processes to detect design defects injected by cyberattacks?
■ How secure are the cyber components of part design chains from engineers to manufacturing equipment?
■ What are the research problems most in need of attention to remedy manufacturing vulnerabilities?

Subtractive Manufacturing
Subtractive manufacturing processes, such as machining of metal, begin with a solid unit of material and iteratively remove material until the desired shape is reached. Common subtractive manufacturing equipment includes lathes, mills, drill presses, and machine centers. These machines are commonly controlled by computer numerical control (CNC) systems that manage tool movement and operation. Figure 1 shows a high-level overview of the manufacturing process.
In this study, we were interested specifically in parts designed using CAD tools, such as 3D modeling software. Designers use these tools to create 3D models of parts and then convert them to numerical control (NC) programming languages, such as G-Code, that describe how to produce a part step by step. G-Code is provided to the CNC machine to guide tools as they subtract material and produce the final part, for example, moving cutting tools from one 3D coordinate to another at a particular speed. Typically, a move results in the tool intersecting the part and removing a segment of material. G-Code doesn't directly describe the 3D structure of the part being manufactured but instead encodes the sequence of tool operations needed to manufacture the final geometry of the part from raw material. Clearly, a concern in subtractive manufacturing is ensuring the integrity of the G-Code provided to CNC machines. Because many G-Code operation sequences can yield the same final part, it can be difficult to detect well-placed modifications to G-Code. This is especially true if malicious modifications are hidden among a set of benign modifications, as would likely be the case in an attack. In addition, G-Code modifications could target the CNC tools, altering their speed or other usage parameters to cause failures such as overheating and collisions.
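To illustrate how small such a modification's footprint can be, consider the following sketch (our illustration with a generic toolpath, not code or G-Code from any surveyed machine): a single depth value changes, and a digest comparison, of the kind we later found missing from the surveyed tool chains, is the only cheap way to notice.

```python
import hashlib

# A benign fragment: metric units, absolute positioning, linear cuts.
original = """G21 G90
G01 X10.00 Y10.00 Z-1.00 F100
G01 X40.00 Y10.00
G01 X40.00 Y25.00"""

# A well-placed tamper: cutting depth changed from -1.00 mm to -1.40 mm.
# The toolpath still looks plausible, but a wall is silently thinned.
tampered = original.replace("Z-1.00", "Z-1.40")

def digest(gcode: str) -> str:
    """SHA-256 digest of a toolpath, to compare against a trusted reference."""
    return hashlib.sha256(gcode.encode()).hexdigest()

# Detectable only if a trusted reference digest travels with the file:
assert digest(original) != digest(tampered)
```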
Additive Manufacturing
Additive manufacturing (AM), also known as 3D printing, incrementally builds parts by selectively adding material layer by layer to form the part's shape. Although AM technologies use varying techniques for forming each layer, they all create objects by "printing" each cross-sectional layer of a part, starting with the bottom layer and working toward the top. In essence, 3D printing is similar to printing images of cross-sections onto a 2D page, and then stacking these pages to form the 3D
part, as Figure 2 shows. Although AM equipment works predominantly with plastics, it can also work with metals, ceramics, and biocompatible materials. Regardless of the underlying technology or vendor, most AM processes use the same input file format. The Standard Tessellation Language (STL) standardized file format is a simplified representation of a part's geometry. Whereas the original CAD file might contain a history of how the part was made, material information, and a parameterized set of dimensions and constraints, the STL file contains only a listing of the triangular facets that form the part's surface. This file format is inherently "open" to enable any CAD program to export it and input it into AM technology. Once input into the AM machine, the STL file is transformed into a secondary NC language for the AM technology. This code drives the path of the deposition tool, such as an extrusion nozzle, laser, or inkjet head, and is typically proprietary and specific to each machine's manufacturer. As with subtractive manufacturing, interpreting the NC language is substantially more difficult. In fact, the NC language alone isn't sufficient to predict the final part quality because AM technologies involve physical phenomena such as motion, heat transfer, and fluid dynamics. For example, heating and cooling cycles in laser sintering and polymer extrusion substantially impact the final part's shape and strength.6,7 A fundamental advantage of AM technologies' layer-by-layer fabrication approach is the capability to access the entire volume of the workpiece throughout the build process.8 This lets designers create part features that are internally facing and not directly observable from the exterior. For example, a printed cube can have an interior cavity.
Because parts are printed in layers, users can observe each layer as it's printed to detect defects. However, no commercially available technology visually ensures that the printed layers match what the designer intended. Although some nascent research in implementing machine vision to monitor a build has emerged,9,10 it currently only checks the printed part against the STL file. If attackers modify the STL file or the NC code for printing the part, the printer will report that the layer is correct even if it differs from the designer's intentions because it's checking against the modified file. Many current detection methods focus on identifying hardware failures in the manufacturing equipment and aren't sufficiently cryptographically rigorous to detect data modification.

Figure 2. 3D printing involves the successive creation of 2D cross-sectional layers (outlined in red).
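As a minimal illustration of that openness (our own, with invented geometry): an ASCII STL file is plain text with no integrity protection, so shifting one vertex, and thereby thinning a wall or seeding an internal void, is a one-line edit that downstream layer checks would accept once the modified file becomes the reference.

```python
# One triangular facet of an ASCII STL file (invented example geometry).
stl = """solid plate
facet normal 0 0 1
  outer loop
    vertex 0.0 0.0 1.0
    vertex 1.0 0.0 1.0
    vertex 1.0 1.0 1.0
  endloop
endfacet
endsolid plate"""

# Pull one vertex inward; the file still parses as valid STL.
tampered = stl.replace("vertex 1.0 1.0 1.0", "vertex 1.0 1.0 0.6")
print(tampered)
```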
Quality Control Processes
Quality control is a major concern in manufacturing. In general, manufacturing equipment can become miscalibrated or faulty and produce defective parts. Several types of sensors and process-monitoring methodologies are used to determine the processes' integrity. Some common quality control operations are based on direct measurements of the physical parts being produced. For example, parts might be measured using calipers, mapped with laser scanners, or destructively tested to measure strength. Statistical sampling methods are typically used to identify when parts should be inspected. In addition, manufacturers can't break every part they produce or measure every surface of each part. A key issue is finding a good subset of a part's properties to inspect to reliably detect defects or process changes. Knowledge of either the part or surface selection process could enable attacks aimed at circumventing detection.
Manufacturing Processes’ Cyberattack Surfaces Given the importance of protecting the integrity of the manufactured parts, we conducted a study of additive and subtractive manufacturing processes, from design to quality control. Our goal was to identify the attack surfaces in these manufacturing processes, the current cybersecurity controls in place, and areas for improvement. We analyzed the manufacturing tool chain in two labs at Virginia Tech—the Design, Research, and Education for Additive Manufacturing Systems (DREAMS) Lab for additive manufacturing and the Manufacturing Process Lab for subtractive operations. The surveyed equipment ranged from US$12,000 to $250,000; industrial manufacturing equipment can easily cost upwards of one million dollars. We don’t list the equipment due to security and sensitivity concerns; however, all the attacks targeted standard equipment that was operated and secured using manufacturer guidelines. May/June 2015
Structured quantitative risk analysis, the challenging subproblem of attack classification, and the economic value of cybersecurity are active research topics.11–13 Current practitioners might be interested in established guidelines, such as those on intrusion detection and prevention published by the National Institute of Standards and Technology (NIST).14 NIST details major threat categories, including viruses, keystroke loggers, and rootkits, and describes prevention and response standards for these cybervulnerabilities. NIST also provides an outline for incorporating cyber–risk analysis into existing business practices.
Design Tool Chain Attack Surface Analysis
A sequence of engineers uses CAD tools to produce design documents that are eventually delivered to the toolpath-generating software. Engineers typically share these documents using standard file-transfer mechanisms. Common nonsecure mechanisms, such as email and USB drives, allow these files to be compromised in transit using well-known mechanisms. For example, we were able to modify design files by detecting inserted USB drives and altering file contents. We didn't find consistent use of hashing or signature verification techniques to ensure the integrity or origin of design files as they were exchanged among manufacturing personnel or between personnel and equipment.
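A minimal sketch of the missing control, using only the Python standard library: a keyed MAC that travels with the design file and is checked at the machine-side terminal. The key provisioning and file name are hypothetical, and a production design would more likely use public-key signatures with proper key management.

```python
import hashlib
import hmac

SHARED_KEY = b"provisioned-out-of-band"       # hypothetical key provisioning

def sign_design(data: bytes) -> str:
    """Keyed MAC over a CAD or G-Code file, transmitted alongside it."""
    return hmac.new(SHARED_KEY, data, hashlib.sha256).hexdigest()

def verify_design(data: bytes, tag: str) -> bool:
    """Constant-time check before the file reaches the equipment."""
    return hmac.compare_digest(sign_design(data), tag)

with open("bracket.gcode", "rb") as f:        # hypothetical design file
    blob = f.read()
tag = sign_design(blob)                       # computed at the design side
assert verify_design(blob, tag)               # checked at the machine side
```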
Control Attack Surface Analysis
AM lab equipment opens nonsecure ports by default. These ports are used for remote printing, machine debugging, and remote control and frequently don't require authentication; some accept commands from any source. An unsophisticated hacker could "port map" to discover accessible printers and conduct attacks such as denial of service or command injection. The resident machine specialist commonly has no formal network training and is unaware of this vulnerability. In addition, there is a lack of data and command encryption. We intercepted many printer commands and data using Wireshark. By manually analyzing the captured data, we identified printer configuration commands, such as fan speed and temperature settings. We also identified 2D layer information sent to the printer. Attackers could easily launch a man-in-the-middle attack, such as Address Resolution Protocol (ARP) spoofing, to capture the data en route to the printer and change it, creating a slightly altered part. By analyzing the part's physical characteristics, attackers could place alterations to minimize observability and maximize impact, for instance, at a crack propagation site. Another problem we identified was the age of the OS versions and services on the computers controlling the equipment. In some cases, it wasn't possible to
upgrade, as the equipment software required a specific OS. We documented versions from Windows NT to Windows 7, with the unsupported Windows XP being the most common.
Direct Equipment Attack Surface Analysis
Lack of physical security on machining tools and directly connected terminals is a source of concern. Our survey of manufacturing equipment and connected computers included trusted components of the manufacturing cycle. Connected terminals often monitor equipment for faults and manage jobs. USB ports were used to transfer CAD files to computers connected to the equipment. A USB-based attack on the command terminal could remove its ability to accurately relay information to equipment and monitor equipment operation.
Network Attack Surface Analysis
When machining equipment is available via a network, traditional network attacks can impact its operation. Most machines use proprietary protocols, but this is no replacement for true security. A traditional man-in-the-middle attack allows injecting custom manufacturing jobs and bypassing the control computer. This not only lets attackers modify parts but also bypasses safety systems in place on the control computer.
Quality Control Process Attack Surface Analysis
Quality control is used to detect deviations in the manufactured part's properties from its designed specifications. Although quality control processes aren't designed to detect cyberattacks, they can potentially detect attacks that change a physical part. However, quality control balances time and cost with thoroughness; measuring the exact dimensions of every facet of a part in a high-speed manufacturing process isn't cost effective. Moreover, it's unclear how to differentiate among manufacturing quality issues, quality control process failures, and malicious attacks. Unlike in cybersecurity, no roots of trust are in place to protect quality control processes. For example, attackers could change settings on a coordinate-measuring machine (CMM) to make users believe that all their parts are manufactured correctly.
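As a toy illustration of why the selection process itself needs protecting (our sketch, with invented dimensions and tolerances): if the inspected subset of properties, or the randomness behind it, is predictable, an attacker can confine defects to whatever goes unmeasured.

```python
import random

NOMINAL = {"length": 60.0, "width": 10.0, "thickness": 3.0, "hole_d": 4.0}
TOLERANCE = 0.05  # +/- mm, invented

def inspect(part, k=2, rng=None):
    """Measure k randomly chosen dimensions; pass if all are in tolerance."""
    rng = rng or random.SystemRandom()  # unpredictable sampling matters
    for dim in rng.sample(list(NOMINAL), k):
        if abs(part[dim] - NOMINAL[dim]) > TOLERANCE:
            return False
    return True

# A tampered part whose defect hides in a rarely measured dimension:
part = {"length": 60.01, "width": 9.99, "thickness": 2.60, "hole_d": 4.00}
print(inspect(part))  # passes whenever "thickness" isn't in the sample
```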
Social Attack Surface Analysis
From discussions in our cross-discipline team, we found that manufacturing experts were largely unaware of the potential social cybersecurity risks in their discipline. For example, in the AM community, experts often email each other interesting 3D CAD files to print. Attackers impersonating an expert might email a malicious structure to a team of industry and academia
professionals. Although many computer users are skeptical of unsolicited email attachments, manufacturing teams didn’t see an immediate risk in opening manufacturing-specific files. There’s an inherent belief that manufacturing design tools and machinery are too obscure or niche for targeted exploits.
Cyberattack Studies with Human Participants
Our study focused on identifying and documenting potential manufacturing systems' cybervulnerabilities. We disrupted a manufacturing process to negatively impact the final part's quality, and then documented participant awareness of this interference. After receiving Institutional Review Board approval, we conducted experiments at Virginia Tech's subtractive manufacturing lab under the guidance of several graduate research assistants.15
Manufacturing Cyberattack Design
We conducted this experiment under the guise of an extra-credit opportunity. Students were asked to manufacture a standard dog bone with a 3D printer under a graduate research assistant's guidance. Students were provided design requirements and tasked to construct a 3D model and then manufacture and test their design. The initial design requirements were correct. The experiment required that students transfer their design from their computer to the computer connected to the manufacturing equipment. However, the terminal was infected with our virus, which detected USB thumb drives and rewrote the G-Code to alter the part's geometry.
Experimental Subjects
Trial subjects were second-year Virginia Tech industrial and systems engineering (ISE) students. The Virginia Tech ISE program is accredited by the Accreditation Board for Engineering and Technology's Engineering Accreditation Commission, and students are expected to use their degrees in fields such as manufacturing and healthcare. We used the ISE curriculum to evaluate skillset. At this point of schooling, students had completed six engineering courses and approximately 13 other courses spanning mathematics, English, chemistry, physics, and electives. They had completed ENGE 2344 on CAD and might have completed ENGE 1104, which focuses on digital aspects of engineering. Students were currently enrolled in both ISE 2204 and ISE 2214, which teach manufacturing processes, limitations, and automation. Students had yet to take some courses relevant to our experiment, such as ISE 4214, which focuses on lean manufacturing concepts such as assembly lines and process quality control; STAT 4404
on statistical quality control methods; and ISE 4414, which focuses on the application of statistical quality control in an industrial setting.
Methodology
We randomly organized subjects into seven groups. After modeling the component, the groups were asked to report to a lab and manufacture, measure, and report on their component. We chose a simple and familiar manufacturing challenge to ensure students could confidently measure performance loss. More complicated parts would be easier to attack, but the dog bone's simple geometry could be tested easily to measure the performance reduction that the cyberattack produced (designed to be 19 percent). A graduate researcher monitored progress on key items:

■ Did subjects visually identify the flaw?
■ Did subjects trust or question the manufacturing process components?
■ If provided, did subjects measure the part using calipers?
■ If a defect was identified, how did subjects proceed in tracing the defect's root cause?
■ For those subjects who identified the defect, what was the final verdict of the root cause?

Each group completed a short report, which looked like a laboratory questionnaire and asked subjects to describe the expected outcome, identify possible sources of error, and suggest process improvements to increase the manufacture success rate. To gather a wider range of data on how subjects detected anomalies, the experiment monitor aided some teams. Aid included

■ providing calipers for part measurement;
■ providing calipers and enabling monitoring software; and
■ providing calipers, enabling monitoring software, and verbally instructing subjects to measure the part.

Monitoring software showed an online readout, and subjects who compared their original CAD to this readout would detect differences. Group A was provided no aid; groups B and C were provided calipers and monitoring software; and groups D, E, F, and G were provided calipers, monitoring software, and instructions to measure the component.
Empirical Results and Analysis
Table 1 provides a summary of the experiment results and common subject responses. All groups performed the stress test that showed their part was underperforming. Of the seven groups, four noted the anomaly.
Table 1. Overview of experimental results.

Aid provided to subjects (groups A–G):
■ Supplied calipers: B, C, D, E, F, G
■ Enabled monitoring software: B, C, D, E, F, G
■ Verbally guided to measure part: D, E, F, G

Subject response:
■ Performed stress test: A, B, C, D, E, F, G
■ Used calipers to measure part: D, E, F, G
■ Verified part geometry in CAD tool: D, E
■ Verified toolpath generator settings: D, E, G
■ Verified generated toolpath: D
■ Compared files received and transferred: D, E, F
■ Compared final products: F

Result:
■ Anomaly detected: D, E, F, G
■ Isolated to computer system: D, E, F
■ Identified as malicious attack: none
All four of these groups were verbally guided to measure the part, potentially raising their awareness. Three groups isolated the anomaly to the computer system or software used to control the equipment, although they used diverse methods to do so. None of the groups suspected the computer system was under attack. Groups A, B, and C stated that the part “looked correct,” and groups B and C didn’t use the provided calipers to measure the part. They seemed to assume inconsistencies in machining would result in easily noticeable flaws in the part. Groups D, E, F, and G were instructed to measure the part, and all detected the anomaly. Groups D and E performed the most diagnoses, and both identified a “weird computer issue” causing a difference between the flash drive file and the file the software used. This was interpreted as either an error in the manufacturing software or an error during file copy. Group F didn’t use a systematic procedure or verify any individual sections. They eventually concluded the anomaly was an error in the manufacturing software but didn’t use a process to validate this conclusion. Group F unexpectedly compared their CAD file to files in the terminal’s recycle bin, and determined that other groups had anomalies. This appeared to provide reassurance that their component was the desired outcome, and they offered no hypothesis explaining the performance loss. Group G suspected human error and checked their inputs carefully. They didn’t consider
computer processes for faults. They reported the error as unidentified. Of the seven groups, none identified a malicious entity corrupting the file. The groups that detected the error did so after being verbally reminded to check the part, not after noticing that performance was outside the normal range. Of the three groups that isolated the anomaly as a computer issue, only two identified the faulty USB file transfer. Pitfalls included checking only inputs, thereby implicitly trusting the operation, and blaming the manufacturing software instead of the underlying computer.
Experiment Generalization
A major challenge in performing a comprehensive survey of manufacturing security is the need to enter professional manufacturing facilities and perform studies. The necessary deception could result in users recalibrating expensive equipment or wasting substantial employee time. Our work identified key variables for future study. The primary testable topic is manufacturers' trust in their computer systems. This includes how and when computer processes are verified upon error detection and what response is performed when a problem is isolated to the computer systems. Control variables include each facility's existing policy: Do procedures for isolating errors include verifying computer systems as well as responses to computer system faults?
Future generalization of this experiment could occur on multiple axes, such as varying attack discoverability (for instance, easily detectable, moderately detectable, and undetectable), part complexity, technician experience levels, and manufacturing environments. The primary challenge is the number of correlated variables. For example, a critical component likely undergoes extensive quality control, so it’s difficult to categorize attack discoverability independent of component complexity.
Recommendations
Through our experience working on manufacturing cybersecurity, we identified several avenues for improving the security of manufacturing processes.
Increase Awareness
Subjects assumed that the computing infrastructure used to manufacture the part couldn't possibly be the source of the design discrepancy, and that the error must be in CAD or computer-aided manufacturing files. If manufacturing students were exposed to the risk of manufacturing cyberattacks as part of the normal curriculum, they'd be much more adept at identifying these vulnerabilities.
Research on Cyber-Physical Quality Control Processes
A unique aspect of cybersecurity in manufacturing is that the output of the manufacturing process is always a physical part that can be measured to determine if it meets expectations. By correlating both cyber and physical manufacturing process quality control events, we can potentially identify when a cyberattack is in process. For example, an increase in malicious email attachments could be linked to variations in physical monitoring to detect cyberattacks.
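A minimal sketch of such correlation (ours, with invented thresholds): raise an alert only when a cyber indicator and a physical quality signal misbehave in the same time window.

```python
def correlated_alerts(cyber_counts, measured_means, nominal,
                      event_threshold=3, drift_threshold=0.03):
    """Flag time windows where cyber and physical anomalies coincide.

    cyber_counts: {window: number of suspicious events, such as malicious
    email attachments}; measured_means: {window: mean measured dimension}.
    """
    alerts = []
    for window, count in cyber_counts.items():
        drift = abs(measured_means.get(window, nominal) - nominal)
        if count >= event_threshold and drift > drift_threshold:
            alerts.append(window)
    return alerts

# Window 2 pairs an attachment spike with dimensional drift -> [2].
print(correlated_alerts({1: 0, 2: 5}, {1: 10.00, 2: 10.08}, nominal=10.0))
```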
Research on Risk Assessment
Silent attackers must devise attacks that aren't discovered through normal physical quality control processes. A key concern is determining the risk of manufacturing various parts. For example, certain parts' properties can be measured only through destructive testing; therefore, testing of these properties isn't possible on all parts. Cyber-physical models that determine a specific part's risk would enable additional security and quality control measures.
Cyber Diagnostics and Attack Mitigation
Most current diagnostic procedures address physical problems, such as misconfiguration. New procedures are needed to diagnose the cyber elements of the manufacturing process. If a cyberattack causes a defect, these procedures should
outline how employees should react to prevent spread and disinfect components.
There is significant risk in manufacturing processes used to create the physical parts that our safety-critical systems depend on. These risks stem from the lack of cyber-physical models to identify ongoing attacks as well as the lack of rigorous application of known cybersecurity best practices. To protect manufacturing processes in the future, research will be needed on a number of critical cyber-physical manufacturing security topics. For example, cyber–situational awareness and quality control processes might be combined to detect ongoing cyberattacks. In addition, research is needed to create models of risk for manufacturing processes and parts. Because parts can be measured once they're produced, different parts and manufacturing processes have different risk profiles as a result of the varying measurability of the produced parts. Armed with a better understanding of the cybersecurity risks of different physical parts, we can ramp up cybersecurity and quality control procedures for parts with the greatest risk of attack. Finally, best practices and certifications are necessary to help ensure that all manufacturing processes are protected with state-of-the-art quality control and cyber–situational awareness techniques.
References
1. "Obama Administration Launches Competition for Three New Manufacturing Innovation Institutes," Office of the Press Secretary, White House, 9 May 2013; www.whitehouse.gov/the-press-office/2013/05/09/obama-administration-launches-competition-three-new-manufacturing-innova.
2. B. Williams, "Apple CEO Announces Entire Line of Macs to Be Made in America," NBC News, 6 Dec. 2012; www.nbcnews.com/video/rock-center/50095631.
3. J. Rizzo, "Industry Experts: Less 'Made in USA' Puts Security at Risk," CNN, 21 Sept. 2010; http://edition.cnn.com/2010/US/09/21/manufacturing.security/index.html.
4. C. Drew and J. Mouawad, "Boeing Fix for Battery Is Approved by FAA," New York Times, 19 Apr. 2013; www.nytimes.com/2013/04/20/business/faa-endorses-boeing-remedy-for-787-battery.html.
5. J. Rhee, "DOT: No Electronic Sudden Acceleration in Toyotas," ABC News, 8 Feb. 2011; http://abcnews.go.com/Blotter/toyota-electronic-sudden-acceleration-toyotas-dot/story?id=12866204.
6. J.P. Kruth et al., "Binding Mechanisms in Selective Laser Sintering and Selective Laser Melting," Rapid Prototyping J., vol. 11, no. 1, 2005, pp. 26–36.
7. S.H. Ahn et al., "Anisotropic Material Properties of Fused Deposition Modeling ABS," Rapid Prototyping J., vol. 8, no. 4, 2002, pp. 248–257.
8. V. Kumar et al., "Representation and Processing of Heterogeneous Objects for Solid Freeform Fabrication," Geometric Modeling Workshop, 1998, pp. 7–9.
9. J. Mireles et al., "Automatic Feedback Control in Electron Beam Melting Using Infrared Thermography," Proc. Int'l Solid Freeform Fabrication Symp., 2013, pp. 708–717.
10. A.L. Cooke and S.P. Moylan, "Process Intermittent Measurement for Powder-Bed Based Additive Manufacturing," Proc. Int'l Solid Freeform Fabrication Symp., 2011, pp. 8–10.
11. K.J.S. Hoo, "How Much Is Enough? A Risk Management Approach to Computer Security," doctoral dissertation, Dept. Management Science and Eng., Stanford Univ., Aug. 2000.
12. M. Bishop and D. Bailey, A Critical Analysis of Vulnerability Taxonomies, no. CSE-96-11, Dept. Computer Science, Univ. California, Davis, 1996.
13. L.A. Gordon and M.P. Loeb, "The Economics of Information Security Investment," ACM Trans. Information and System Security, vol. 5, no. 4, 2002, pp. 438–457.
14. R. Bace and P. Mell, "NIST Special Publication on Intrusion Detection Systems," Nat'l Inst. Standards and Technology, 2001; www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA393326.
15. L. Wells et al., "Cyber-Physical Security Challenges in Manufacturing Systems," Manufacturing Letters, vol. 2, no. 2, 2014, pp. 74–77.
Hamilton Turner is a director of malware research at Optio Labs. His research interests include wireless sensor networks, mobile security, and large-scale cloud emulation. Turner received a PhD in computer engineering from Virginia Tech and was a student at the time of this writing. Contact him at hamiltont@gmail.com.

Jules White is an assistant professor of electrical engineering and computer science at Vanderbilt University. His interests include creating next-generation mobile and cloud applications that sense the world around us and are secure, efficient, and scalable. White received a PhD in computer science from Vanderbilt University. Contact him at jules@dre.vanderbilt.edu.

Jaime A. Camelio is an associate professor in Virginia Tech's Department of Industrial and Systems Engineering. His research focuses on modeling, analysis, and control of manufacturing systems with an emphasis on assembly, remanufacturing, and microforming. Camelio received a PhD in mechanical engineering from the University of Michigan. Contact him at jcamelio@vt.edu.

Christopher Williams is an associate professor in the Department of Mechanical Engineering at Virginia Tech. He's also the director of the Design, Research, and Education for Additive Manufacturing Systems (DREAMS) Laboratory and the codirector of Virginia Tech's Center for Innovation-Based Manufacturing. His research interests include design tools to guide the use of additive manufacturing to create functional, end-use artifacts. Williams received a PhD in mechanical engineering from the University of Florida. Contact him at cbwill@vt.edu.

Brandon Amos is a PhD student in Carnegie Mellon University's School of Computer Science. Amos received a BS in computer science from Virginia Tech. Contact him at bamos@cs.cmu.edu.

Robert Parker is a senior program manager at Virginia Tech Applied Research Corporation. His research interests include cyber-physical system security. Parker received an ME in electrical engineering from Cornell University. Contact him at rparker@vt-arc.org.

Selected CS articles and columns are also available for free at http://ComputingNow.computer.org.
Diversity Reduces the Impact of Malware
Kjell Jørgen Hole | University of Bergen
Many Internet networks have limited internal diversity, making them vulnerable to serious malware spreading. A proposed malware-halting technique that uses software diversity can halt infectious outbreaks on these networks.
Although many networked computing systems are vulnerable to self-propagating malware, large enterprises use automated patching and hardening to make their systems highly immune to malware infections. However, persistent human attackers compromise enterprise networks using advanced tools, customized malware, and 0-day exploits that antimalware technology and patching can't detect or mitigate.1,2 In this article, I investigate software diversity's ability to halt infectious malware and argue that diversity increases the time needed for attackers to compromise enterprise systems, increasing the likelihood of early detection and mitigation of infectious outbreaks. A computing system is viewed as a collection of interconnected computing platforms, and the platforms are considered at the OS and application levels. Compilers with diversity engines generate the platforms' binary images, producing many different executable images from a much smaller set of OS and application source codes.3 Conceptually, a program's binary images are divided into classes where members of the same class share at least one exploitable vulnerability, and members of different classes have no common exploitable vulnerability. Assuming the compilers generate equally large classes, the number of classes measures the program's diversity.4 (For more information on research on diversity, see the "Related Work" sidebar.) Previous research used well-established network models from network science5 to show how to combine
software diversity and computer “immunization” to halt multiple simultaneous outbreaks of infectious malware with sparse and inhomogeneous spreading patterns.6 This article considers alternative synthetic and empirical networks.
Explanatory Epidemiological Model
Malware exploits vulnerabilities in OSs and application software to infect computing devices. An exploitable vulnerability is a mistake in the software that enables malware to gain access to a device. Examples of exploitable vulnerabilities are buffer overflows and malformed URLs.3,7,8 Infectious malware can spread to new vulnerable devices via network shares, removable media, IP attacks, email messages, instant messaging, and peer-to-peer networks.
Epidemiological Model
I model the spreading of infectious malware over networked computing devices using a simple graph with N nodes of L (≥1) types, as depicted in Figure 1. There are roughly N/L nodes of each type uniformly distributed over the graph. The node types represent different binary codes on the OS or application level of the computing platforms. Nodes of the same type share an exploitable vulnerability, whereas nodes of different types have no common exploitable vulnerability. The edges represent communications between nodes. A good measure of a model's diversity is the number of
node types L. (See Scott E. Page’s Diversity and Complexity for a thorough discussion on diversity.4) Two nodes are neighbors if there is an edge between them. A node’s degree k is the number of neighbors, and 〈k〉 denotes the nodes’ average degree. A network is homogeneous when all nodes have degrees k ≈ 〈k〉 and inhomogeneous when a small fraction of nodes, called hubs, have k ≫ 〈k〉. The malware’s different spreading mechanisms determine the topologies of the spreading networks. Malware using IP-based random scanning spreads over connected homogeneous networks, whereas email malware using topological scanning travels over inhomogeneous networks.9 These virtual spreading networks are different from the physical networks of wired and wireless communication links. I study multiple malware or multimalware outbreaks because the deployment of several malware types is an obvious strategy to counter software diversity. All malware types are assumed to have the same spreading mechanism. The discrete-time model contains L types of infectious malware, or one malware type per node type. Each malware type exploits a particular vulnerability to infect a single node type. Initially, S nodes of each type are infected. These L ∙ S nodes are called seeds (see Figure 1). The infection probability determines the rate at which a “sick” node infects a susceptible neighbor of the same type during a time step. To study worst-case spreading, I set the infection probability to 1 to ensure that all nodes reachable from the seeds are infected. No infected node recovers.
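Because the infection probability is 1 and no node recovers, worst-case spreading reduces to finding the same-type connected components around the seeds. The sketch below is our illustration of the model, not the author's simulation code; it also previews the effect of immunizing hubs, discussed next.

```python
import random
from collections import deque

def worst_case_infected(adj, node_type, seeds, immune=frozenset()):
    """Spread L malware types with infection probability 1.

    adj: adjacency sets {node: set(neighbors)}; node_type: {node: 0..L-1};
    seeds: initially infected nodes. A sick node infects every susceptible
    neighbor of its own type, so each outbreak fills the same-type
    connected component around its seeds.
    """
    infected = {s for s in seeds if s not in immune}
    queue = deque(infected)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if (v not in infected and v not in immune
                    and node_type[v] == node_type[u]):
                infected.add(v)
                queue.append(v)
    return infected

# Toy use: sparse random graph, L node types, S = 1 seed per type.
N, L, S = 300, 2, 1
rng = random.Random(1)
adj = {i: set() for i in range(N)}
for _ in range(2 * N):
    u, v = rng.randrange(N), rng.randrange(N)
    if u != v:
        adj[u].add(v)
        adj[v].add(u)
node_type = {i: rng.randrange(L) for i in range(N)}
seeds = [rng.choice([n for n in range(N) if node_type[n] == t])
         for t in range(L) for _ in range(S)]
hubs = frozenset(sorted(adj, key=lambda n: len(adj[n]), reverse=True)[:8])
print(len(worst_case_infected(adj, node_type, seeds)))        # no defense
print(len(worst_case_infected(adj, node_type, seeds, hubs)))  # hubs immune
```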
Nonpredictive Model
It's hard to estimate the actual spreading of malware in a networked computing system because the spreading is influenced by many factors, including router policies, choice of communication protocols, available bandwidth, traffic load, firewall rules, antimalware signature sets, intrusion detection, and the level of software patching. Rather than trying to incorporate all these factors, the epidemiological model displays very fast, worst-case spreading where an infectious node always infects all of its neighbors of the same type. Although this model can't predict actual spreading in a network, it can explain the usefulness of software diversity. Because actual malware isn't likely to spread quite so much, it's reasonable to believe that the model's malware halting translates to malware halting in real systems. This view is supported by independent research discussed in the "Related Work" sidebar.
Figure 1. Network with N = 8 nodes, L = 4 node types with different colors, and average degree 〈k〉 = 2.5. Stars represent the infected seeds. There is S = 1 seed per node type. Only the orange seed will infect a neighbor.

Malware-Halting Technique
The proposed malware-halting technique immunizes hubs if they exist and increases the diversity L to limit the fraction of infected nodes.6 Table 1 outlines how to halt multimalware outbreaks on sparse (small 〈k〉) or dense (large 〈k〉) networks with homogeneous or inhomogeneous topology.

Table 1. Malware halting on spreading networks with different topologies.
■ Sparse and homogeneous: utilize small true or artificial diversity.
■ Sparse and inhomogeneous: use hub immunization and small true or artificial diversity.
■ Dense and homogeneous: deploy large artificial diversity.

Limited true diversity (small L) is obtained by deploying instances of different OSs and applications with similar functionality. Michael Franz argues that much larger artificial diversity (large L) is available when users download software from application stores that use compiler-generated diversity to produce many classes of executable binary images.3 Although true diversity is costly because installation of different software forces users to learn new functionality, the cost of artificial diversity is reasonable as the functionality isn't changed.
Figure 2 illustrates the halting technique on a sparse and inhomogeneous network with hubs. Figure 2a shows a synthetic network with 300 nodes. The nodes are circles with areas proportional to their degrees, highlighting the hubs. The spreading network is a software monoculture with one node type (L = 1). The nodes are red to illustrate that a single seed (S = 1) infects all nodes. Figure 2b shows the same network, but with randomly distributed orange and yellow node types (L = 2). Eight white hubs are made immune to two malware types attacking the nodes. There is little malware spreading in this immunized polyculture. For a particular selection of two seeds of different types, Figure 2b shows that the malware spreading is reduced from 300 to only three red nodes: the halting technique decreases the percentage of infected nodes from 100 percent to 1 percent.
The simple spreading network in Figure 2 doesn't have loops, and the hubs are connected in a small subnetwork. In the following analysis, I consider networks with loops and make no assumptions about how hubs are connected.
Related Work
Miguel Garcia and his colleagues have studied true diversity at the OS level by considering exploitable OS vulnerabilities published over a period of about 15 years.1 The authors carefully analyzed vulnerabilities in 11 different OSs to determine how many of these vulnerabilities occur in more than one OS. More than 50 percent of the 55 studied OS pairs had at most one remotely exploitable common vulnerability. The low numbers of shared vulnerabilities for different OS combinations strongly indicate that true diversity is obtainable with off-the-shelf OSs. The authors also provide a good overview of related research on software diversity.

Jin Han and his colleagues have shown that true diversity is achievable at the application level with off-the-shelf software.2 The authors analyzed more than 6,000 application vulnerabilities published in 2007. Approximately 98.6 percent of the studied applications had substitutes, or applications that offer similar functionality, and the majority of the applications either didn't have the same vulnerability or couldn't be compromised with the same exploit code. Nearly half of the applications were officially supported to run on multiple OSs. Although the different OS distributions of the same application are likely to suffer from the same vulnerability, the attack code is different in most cases.

The work by Konstantinos Kravvaritis and his colleagues supports the need for more diversity in real networked systems.3 The authors made the reasonable assumption that binary files with the same name are realizations of a single program, so the files might be different at the binary level but their functionality is identical. A client–server application collected executable program and library files from individuals who installed the client application on their computers. The client calculated the MD5 hash of each collected file and sent the hash to the server. Because the hash is unique for each different input file, the server could determine whether binary files with the same name were identical.

Kravvaritis and his colleagues defined three metrics to measure the diversity of binary files with the same name.3 One metric, which estimates the probability of a successful targeted attack, is given by m/n, where m is the number of instances of the most frequent binary variant of a program and n is the total number of instances (a small code sketch of this metric follows the sidebar). The server collected 1,309,834 binary instances of 205,221 files with different names. For more than half of the files analyzed, the estimated chance of a successful attack was more than 50 percent. The values of all three metrics support the notion that the diversity of current software platforms is too low to prevent significant malware spreading. Hence, there is a real need for the large compiler-generated diversity discussed in this article.

Research by Pu Wang and colleagues confirms that the number of giant components with nodes of the same type determines the extent to which malware of different types spread over diverse networks.4 The authors studied the calling patterns of 6.2 million mobile phone subscribers to determine possible spreading patterns of malware attacking smartphone OSs. When a smartphone OS's market share is small, there's no giant component of the call network connecting most phones with the OS. Although the call network is connected, a subgraph of smartphones sharing the same OS is fragmented into many small, disjointed components.4 The lack of large components on which different types of malware can spread explains the observed low saturation of malware in real mobile phone networks. Nevertheless, future malware epidemics are possible because two OS families currently dominate the smartphone market—Android and Apple—and because more and more people are buying smartphones.

Juan Caballero and his colleagues have shown that judicious use of true diversity improves the resilience of the Internet-routing infrastructure against software vulnerabilities facilitating denial-of-service attacks, remote execution of system-level commands without authentication, and unauthorized privileged access.5 Although the use of different software implementations from different code bases on different routers increases overall network resiliency, it also increases the complexity and cost of network deployment and management. As Michael Franz suggested, artificial diversity is an interesting alternative to true diversity because the complexity and cost are much reduced.6

Caballero and his colleagues developed coloring algorithms to determine how to best distribute the different software implementations over routing-level networks.5 Not surprisingly, a good coloring algorithm needs fewer colors to obtain adequate diversity on a network compared to just distributing colors uniformly over the network devices. Because the use of coloring algorithms necessitates central coordination to install the correct software on the different devices, these algorithms are best suited to slow-changing infrastructures managed by skilled personnel. Coloring algorithms are less useful when general users manage computing devices. The main advantage of deploying application stores that incorporate compilers with diversity engines is that adequate diversity can be achieved with very little involvement from device owners.

References
1. M. Garcia et al., "OS Diversity for Intrusion Tolerance: Myth or Reality?," Proc. 2011 IEEE/IFIP 41st Int'l Conf. Dependable Systems & Networks (DSN 11), 2011, pp. 383–394.
2. J. Han, D. Gao, and R.H. Deng, "On the Effectiveness of Software Diversity: A Systematic Study on Real-World Vulnerabilities," Proc. 6th Int'l Conf. Detection of Intrusions and Malware, and Vulnerability Assessment (DIMVA 09), 2009, pp. 127–146.
3. K. Kravvaritis, D. Mitropoulos, and D. Spinellis, "Cyberdiversity: Measures and Initial Results," Proc. 14th Panhellenic Conf. Informatics (PCI 10), 2010, pp. 135–140.
4. P. Wang et al., "Understanding the Spreading Patterns of Mobile Phone Viruses," Science, vol. 324, no. 5930, 2009, pp. 1071–1076.
5. J. Caballero et al., Would Diversity Really Increase the Robustness of the Routing Infrastructure against Software Defects?, tech. report CMU-CyLab-07-002, Dept. Electrical and Computer Eng., Carnegie Mellon Univ., 2008; repository.cmu.edu/ece/40.
6. M. Franz, "E Unibus Pluram: Massive-Scale Software Diversity as a Defense Mechanism," Proc. Workshop New Security Paradigms (NSPW 10), 2010, pp. 7–16.
There is little malware spreading in this immunized polyculture. For a particular selection of two seeds of different types, Figure 2b shows that the malware spreading is reduced from 300 to only three red nodes—the halting technique decreases the percentage of infected nodes from 100 percent to 1 percent. The simple spreading network in Figure 2 doesn't have loops, and the hubs are connected in a small subnetwork. In the following analysis, I consider networks with loops and make no assumptions about how hubs are connected.
Halting Technique Analysis

The epidemiological model represents the spreading phase of multimalware outbreaks. The following approximate analysis of this phase establishes a lower bound on the diversity L needed by the halting technique summarized in Table 1.

First, I clarify why hubs should be immunized in an inhomogeneous spreading network. When the infection probability is small and the malware spreading originates from a single randomly selected seed, a
strategically placed node in the "core" of a monoculture contributes more to the spreading than a hub on the network's periphery does.10 However, I study polycultures with maximum infection probability equal to 1 and multiple widespread seeds per node type. Consider a hub with large degree D on the periphery of a network. Because the S seeds with the same type as the hub are uniformly distributed over the network, one of the hub's neighbors might be a seed. When this seed infects the hub, the hub will infect roughly D/L of its neighbors with the same type. Preventing this peripheral hub infection is important because D tends to be much larger than L and because any of the D/L infectious nodes can cause extensive malware spreading when the infection probability is 1.

The many possible spreading patterns make it essential to analyze malware outbreaks on spreading networks with arbitrary degree distributions. Let the nodes in a network be numbered from 1 to N and let node i have degree kᵢ, i = 1, …, N. Consider the ensemble of random networks with an arbitrary but fixed degree sequence {kᵢ} generated by the so-called configuration
OSs’ market share is small, there’s no giant component of the call network connecting most phones with the OS. Although the call network is connected, a subgraph of smartphones sharing the same OS is fragmented into many small, disjointed components.4 The lack of large components on which different types of malware can spread explains the observed low saturation of malware in real mobile phone networks. Nevertheless, future malware epidemics are possible because two OS families currently dominate the smartphone market—Android and Apple—and because more and more people are buying smartphones. Juan Caballero and his colleagues have shown that judicious use of true diversity improves the resilience of the Internet-routing infrastructure against software vulnerabilities facilitating denial-ofservice attacks, remote execution of system-level commands without authentication, and unauthorized privileged access.5 Although the use of different software implementations from different code bases on different routers increases overall network resiliency, it also increases the complexity and cost of network deployment and management. As Michael Franz suggested, artificial diversity is an interesting alternative to true diversity because the complexity and cost are much reduced.6 Caballero and his colleagues developed color algorithms to determine how to best distribute the different software implementations over routing-level networks.5 Not surprisingly, a good coloring algorithm needs fewer colors to obtain adequate diversity on a network compared to just distributing colors uniformly over the network devices. Because the use of coloring algorithms necessitates central coordination to install the correct
model.5 These networks have mean degree 〈k〉 = (1/N)Σᵢkᵢ and mean square degree 〈k²〉 = (1/N)Σᵢkᵢ². Any network has L node types with N/L nodes of each type. A single-type component is a subset of nodes with the same type such that there's a path between any pair of nodes in the set, and such that it isn't possible to add another node of the same type to the set while preserving this property. The two orange nodes in Figure 1 constitute the largest single-type component. A giant component of same-type nodes is proportional in size to N/L. If a single-type component contains a seed, all of its nodes will be infected.

I study single-type components in a network to limit the overall fraction of infected nodes. Let this fraction be averaged over many model runs, where each run has L ∙ S randomly selected seeds. The underlying network topology is the same for all malware types because they're assumed to have the same spreading mechanism and because the nodes of different types are uniformly distributed over the network. A particular malware type infects only a single type of node. Hence, malware of different types infect distinct subsets of nodes. Because each subset has N/L
nodes, all subsets have the same fraction of infected nodes when averaged over many model runs. Consequently, the average fraction of infected nodes over all types can be analyzed by considering a monoculture subgraph, defined by all nodes of an arbitrary but fixed type and the edges connecting these nodes. All other nodes and their adjacent edges can be ignored.

To limit the average fraction of infected nodes, choose the diversity L so that the monoculture subgraph doesn't have a giant component. This subgraph has mean degree 〈k〉/L and mean square degree 〈k²〉/L² for large N. Because the subgraph is contained in a random network generated by the configuration model, the subgraph has a giant component if and only if 〈k²〉/L² > 2〈k〉/L in the limit for large N.5 To prevent the formation of a giant component, require 〈k²〉/L ≤ 2〈k〉; that is, choose the diversity L so that

L ≥ 〈k²〉/(2〈k〉). (1)

The right side of Equation 1 is large for inhomogeneous networks because kᵢ² is much larger than kᵢ for hubs.
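The following Python sketch, under the stated random-network assumptions, evaluates the lower bound in Equation 1 for a given degree sequence and shows how immunizing hubs reduces it. The toy network and helper names are illustrative, not taken from the article's experiments.

```python
# A small sketch of Equation 1: the diversity L must be at least
# <k^2>/(2<k>) to prevent a giant single-type component. immunize_hubs
# removes the h highest-degree nodes and their edges, mirroring the
# network "pruning" described in the text.
from math import ceil

def degrees_of(edges):
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return list(deg.values())

def diversity_bound(degrees):
    mean_k = sum(degrees) / len(degrees)
    mean_k2 = sum(k * k for k in degrees) / len(degrees)
    return ceil(mean_k2 / (2 * mean_k))

def immunize_hubs(edges, h):
    """Return the edge list with the h largest-degree nodes removed."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    hubs = set(sorted(deg, key=deg.get, reverse=True)[:h])
    return [(u, v) for u, v in edges if u not in hubs and v not in hubs]

# Hypothetical network: node 0 is a hub joined to a ring of eight nodes.
edges = [(0, i) for i in range(1, 9)] + [(i, i + 1) for i in range(1, 8)]
print(diversity_bound(degrees_of(edges)))                    # 3 with the hub
print(diversity_bound(degrees_of(immunize_hubs(edges, 1))))  # 1 after immunizing it
```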
However, hub immunization reduces the lower bound. When the nodes with the largest degrees in the original network are immunized, a new network with Nc < N susceptible nodes and smaller node degrees dⱼ, j = 1, …, Nc, is obtained. The new network is obtained by ignoring all immunized hubs and their adjacent edges because they no longer contribute to malware spreading. This network "pruning" affects the previously discussed monoculture subgraph. The new mean square degree and mean degree should be substituted for 〈k²〉 and 〈k〉 in Equation 1 to determine the minimum needed diversity L.

Regardless of whether hubs in the original network are immunized to obtain a new network, the S seeds on a monoculture subgraph can spread over at most S components of this subgraph. These components are small in graphs without a giant component, leading to a small fraction of infected nodes.5 Equation 1 shows that there's a tradeoff between the required number of node types L and the number of immunized hubs. If it's possible to generate many node types, then the fraction of immunized hubs can be reduced, making it possible to halt malware outbreaks on very large inhomogeneous networks.

Figure 2. Halting technique. (a) Monoculture with 300 infected nodes whose areas are proportional to their edge degrees. (b) The same network, but with white immunized hubs and orange and yellow node types. Two malware types, each with a single randomly selected seed, only manage to infect one additional node.

Halting Technique Performance

The hubs in a spreading network with inhomogeneous topology can be immunized to obtain a homogeneous network. If the hubs aren't known, then acquaintance immunization can be used to protect most hubs.11 I apply the malware-halting technique to synthetic and empirical spreading networks with homogeneous topologies. Each network represents the worst-case spreading of S malware outbreaks per node type. Although Equation 1 is valid strictly for random networks in the limit of large N, NetLogo (ccl.northwestern.edu/netlogo) simulations show that the lower bound determines the needed diversity.

Sparse and Homogeneous Networks
Computing devices, particularly smartphones, can communicate via short-range wireless links such as Wi-Fi and Bluetooth. In my first epidemiological simulations, different malware types copy themselves to new devices by opening wireless connections. Sparse and homogeneous proximity networks represent the spreading patterns. The NetLogo model generates a proximity network with average node degree 〈k〉 by first placing N nodes uniformly at random on a square. An edge is then added between a randomly chosen node and its closest neighbor in Euclidean distance. More edges are added the same way until the network has the desired average degree 〈k〉. Self-loops and multiple edges between nodes aren't allowed. Note that although handheld devices move over time, I model only short-term malware spreading assuming fixed networks. Wireless sensor networks stay fixed for a long time.

Table 2 lists lower bounds on the needed diversity L, obtained from Equation 1, for proximity networks with 5,000 nodes and increasing average degree 〈k〉. Each fraction of infected nodes is averaged over 500 networks with the same average degree and uniform distribution of node types, including S = 10 seeds per type. I evaluated only connected networks, ignoring networks with isolated subgraphs. The lower bound on the diversity L was the same for all evaluated networks with a given average degree. Although the deterministic epidemiological model causes all nodes to become infected in a monoculture (L = 1), less than 5 percent of the nodes were infected in the diverse proximity networks, according to Table 2.
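As a rough illustration (not the author's NetLogo code), the Python sketch below builds a proximity network along the lines just described, assigns node types uniformly, and runs the deterministic spreading model with S seeds per type. All parameters are illustrative rather than those behind Table 2.

```python
# A rough re-creation of the proximity-network experiment: N nodes on a
# unit square, edges added from random nodes to their nearest non-neighbor
# until the mean degree is reached, then worst-case (probability-1)
# spreading of one malware type per node type from S seeds per type.
import random
from collections import deque

def proximity_network(n, mean_degree, rng):
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    adj = [set() for _ in range(n)]
    edges = 0
    while edges < int(mean_degree * n / 2):
        u = rng.randrange(n)
        cands = [v for v in range(n) if v != u and v not in adj[u]]
        if not cands:
            continue  # no self-loops or multiple edges
        v = min(cands, key=lambda w: (pos[u][0] - pos[w][0]) ** 2
                                     + (pos[u][1] - pos[w][1]) ** 2)
        adj[u].add(v); adj[v].add(u)
        edges += 1
    return adj

def infected_fraction(adj, n_types, seeds_per_type, rng):
    n = len(adj)
    node_type = [rng.randrange(n_types) for _ in range(n)]
    infected = [False] * n
    for t in range(n_types):
        members = [v for v in range(n) if node_type[v] == t]
        queue = deque(rng.sample(members, min(seeds_per_type, len(members))))
        for s in queue:
            infected[s] = True
        while queue:  # malware of type t spreads only to type-t neighbors
            u = queue.popleft()
            for v in adj[u]:
                if node_type[v] == t and not infected[v]:
                    infected[v] = True
                    queue.append(v)
    return sum(infected) / n

rng = random.Random(1)
net = proximity_network(500, 6, rng)
print(infected_fraction(net, n_types=4, seeds_per_type=10, rng=rng))
```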
Previously published simulation results and mathematical analyses of other network models confirm that small true or artificial diversity is sufficient to halt multimalware outbreaks on homogeneous and sparse networks.6

I also analyzed malware halting on a sparse network where the nodes represent email addresses and the links represent email exchanges between the addresses. The network has 1,133 nodes and 5,451 edges. The largest node has degree 71, and the average degree is 9.62. Although the network is slightly inhomogeneous, I forgo immunization of large-degree nodes. The lower bound on the diversity is L ≥ 10. Because the network is small, I assumed only S = 1 seed per node type. Ignoring the fact that email malware needs help from unknowing users to propagate, the simulations determined the fraction of infected nodes averaged over 5,000 random configurations of node types and seeds for increasing diversity L = 10, 11, …, 16. The fraction of infected nodes decreases from 8 percent to 4 percent when the diversity increases from 10 to 16. The additional decrease in the fraction of infected nodes is relatively small for diversity above the lower bound in Equation 1. Previously reported simulation results for other networks show similar modest reductions in the fraction of infected nodes for diversities beyond the minimum required value.6
Table 2. Proximity networks.

Average node degree 〈k〉   Minimum needed node types L   Infected nodes (%)
5                          3                             3.4
6                          4                             3.6
7                          4                             4.6
8                          5                             4.8

Dense and Homogeneous Networks

Consider the case where L types of random scanning malware spread over a complete network with N nodes of degree k = N – 1. There are L node types and N/L uniformly distributed nodes per type. Assume that there's one seed per node type. Each seed has edges to the other N/L – 1 nodes of the same type. Together, the N/L single-type nodes form a star graph with the seed in the center. Because the seed will always infect all the peripheral nodes in the star graph, increasing the number of node types L doesn't help as long as there's one seed per node type. All N nodes will still be infected.

The only way to halt multimalware outbreaks is to use many more node types than there are malware types. If there are M malware types, then M ∙ N/L nodes will be infected. Hence, the diversity L needs to be proportional to N, and the number of malware types M must be much smaller than N to prevent a large infection. This observation is in accordance with the diversity bound in Equation 1, which is equal to L ≥ (N – 1)/2 for k = N – 1. More generally, consider an arbitrary path consisting of m edges in a dense network. The path's nodes are all of the same type with probability L⁻ᵐ for m ≤ N/L. The diversity L must be approximately N to ensure that this probability is very small, even for very short paths.

Large artificial diversity is needed to halt malware spreading over homogeneous dense networks with many nodes. Because it isn't completely clear how much artificial diversity is obtainable with compilers utilizing diversification techniques,3 the halting technique might not always be applicable to multimalware outbreaks with dense spreading patterns. However, Todd Jackson and his colleagues argue that application stores can produce massive-scale software diversity.12 Furthermore, as we transition from IPv4 to IPv6, topological scanning might become more popular than random scanning due to the huge number of unused IPv6 addresses.
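The complete-network argument above reduces to simple arithmetic; the hedged sketch below restates it in code with illustrative numbers.

```python
# On a complete graph, each seed reaches every same-type node directly,
# so M malware types with one seed each infect M * (N/L) nodes; the
# infected fraction is M/L regardless of N. Numbers are illustrative.
def dense_infected_fraction(n_nodes, n_types, n_malware_types):
    per_type = n_nodes / n_types              # nodes of each type
    infected = n_malware_types * per_type     # every attacked type falls
    return infected / n_nodes                 # equals n_malware_types / n_types

print(dense_infected_fraction(10_000, n_types=100, n_malware_types=5))  # 0.05
print((1 / 100) ** 3)  # chance a 3-edge path is single-type when L = 100
```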
Persistent Targeted Attacks

The term advanced persistent threat refers to attacks employing advanced techniques to learn about and compromise computer systems without being detected, at least not for a long time.1,2 Examples of persistent threats are state-sponsored attacks on foreign, commercial, and government enterprises to steal industrial and military secrets. These attacks are often initiated by well-timed, socially engineered spear-phishing emails that deliver Trojans to individuals with access to sensitive information. Malicious email is leveraged because most enterprises allow email to enter their networks.

Persistent attackers frequently exploit OS or application vulnerabilities in the targeted systems. An attacker first develops a payload to exploit one or more vulnerabilities. Next, an automated tool such as a PDF or a Microsoft document delivers the payload to a few system users. The payload installs a back door or provides remote system access, allowing the attacker to penetrate the trusted system boundary. Finally, the attacker violates the system's confidentiality, integrity, or availability to achieve his or her goals.

Large software diversity increases the time persistent attackers need to compromise systems, providing defenders with more time to detect the probing of their system defenses, collect information about the attackers, and deploy countermeasures to prevent major system breaches. As before, I divide the binary files realizing the functionality of a particular program into L equally large classes so that all members of the same class share at least one exploitable vulnerability, whereas members of different classes have no common exploitable vulnerability. If a user and an attacker download the same
program from an application store, then the two downloaded files share an exploitable vulnerability with probability 1/L.3,12 When the diversity L is large, the probability of a common vulnerability is small and attackers can no longer reliably analyze their own downloaded program files to exploit vulnerabilities in users' program files. (Note that the diversity L must be large even if the lower bound in Equation 1 is small.) Directed attacks against specific computers running known programs become more difficult, as long as the attacker has no way of determining which specific binary is running on which computer. Because security patches must be tailored to the different binary versions of the same program, an attacker can't reverse-engineer a patch by comparing it to the corresponding code on a user's computer, because both the patch and the code are unknown to the attacker.3,12
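A quick Monte Carlo sketch of the 1/L argument follows. The store model is a simplification I assume here: each download independently lands in one of L equally likely vulnerability classes.

```python
# If an application store hands out one of L equally likely binary
# classes, the chance that an attacker's copy shares an exploitable class
# with a given user's copy is 1/L; with a independent attacker downloads
# it grows to 1 - (1 - 1/L)**a. Parameters are illustrative.
import random

def shared_class_rate(L, attacker_downloads, trials=50_000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        user = rng.randrange(L)
        if any(rng.randrange(L) == user for _ in range(attacker_downloads)):
            hits += 1
    return hits / trials

print(shared_class_rate(L=1000, attacker_downloads=1))   # ~0.001
print(shared_class_rate(L=1000, attacker_downloads=50))  # ~0.049
```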
Although the Internet's numerous networks are diverse due to distinct configurations, firewall rules, antimalware signature sets, intrusion detection, and router policies, many networks still have limited internal diversity, making them vulnerable to serious malware spreading. The proposed malware-halting technique can halt outbreaks on these networks. Advanced persistent threats present a serious challenge to defenders of networked systems with very
sensitive information. My analysis shows that software diversity makes it harder to infect computing devices in these systems. Eventually, large-scale experiments will be needed to determine how to best deploy software diversity to make systems more robust to malware.

References
1. B. Potter, "Necessary but Not Sufficient," IEEE Security & Privacy, vol. 8, no. 5, 2010, pp. 57–58.
2. E.M. Hutchins, M.J. Cloppert, and R.M. Amin, "Intelligence-Driven Computer Network Defense Informed by Analysis of Adversary Campaigns and Intrusion Kill Chains," Proc. 6th Ann. Int'l Conf. Information Warfare and Security (ICIW 11), 2011, pp. 113–125.
3. M. Franz, "E Unibus Pluram: Massive-Scale Software Diversity as a Defense Mechanism," Proc. Workshop New Security Paradigms (NSPW 10), 2010, pp. 7–16.
4. S.E. Page, Diversity and Complexity, Princeton Univ. Press, 2010.
5. M.E.J. Newman, Networks: An Introduction, Oxford Univ. Press, 2010.
6. K.J. Hole, "Toward a Practical Technique to Halt Multiple Virus Outbreaks on Computer Networks," J. Computer Networks and Communications, vol. 2012, 2012, article 462747; www.hindawi.com/journals/jcnc/aip/462747.
7. M. Garcia et al., "OS Diversity for Intrusion Tolerance: Myth or Reality?," Proc. IEEE/IFIP 41st Int'l Conf. Dependable Systems & Networks (DSN 11), 2011, pp. 383–394.
8. J. Han, D. Gao, and R.H. Deng, "On the Effectiveness of Software Diversity: A Systematic Study on Real-World Vulnerabilities," Proc. 6th Int'l Conf. Detection of Intrusions and Malware, and Vulnerability Assessment (DIMVA 09), 2009, pp. 127–146.
9. J. Balthrop et al., "Technological Networks and the Spread of Computer Viruses," Science, vol. 304, no. 5670, 2004, pp. 527–529.
10. M. Kitsak et al., "Identification of Influential Spreaders in Complex Networks," Nature Physics, vol. 6, no. 11, 2010, pp. 888–893.
11. R. Cohen, S. Havlin, and D. Ben-Avraham, "Efficient Immunization Strategies for Computer Networks and Populations," Physical Rev. Letters, vol. 91, no. 24, 2003; doi: 10.1103/PhysRevLett.91.247901.
12. T. Jackson et al., "Compiler-Generated Software Diversity," Moving Target Defense: Creating Asymmetric Uncertainty for Cyber Threats, S. Jajodia et al., eds., Springer, 2011, pp. 77–98.

Kjell Jørgen Hole is a professor in the Department of
Informatics at the University of Bergen, Norway. His research interests include risk management of complex adaptive systems. Hole received a PhD in computer science from the University of Bergen. Contact him at kjell.hole@ii.uib.no.
SPOTLIGHT
Weakness in Depth: A Voting Machine’s Demise
Jeremy Epstein | SRI International
After the contentious debate about election results in the 2000 US presidential race, the US Congress passed the Help America Vote Act (HAVA) in 2002 to encourage replacement of older voting machines with newer technology known as direct-recording electronic (DRE) systems, which typically have touchscreens and record votes in a software-controlled database. The effects of this legislation and the hasty decisions made as a result of funding continue to echo more than a decade later. In this article, I explore one of those decisions and consider lessons we can apply to other types of emerging technologies.

Although I focus on what happened in the US, similar patterns have occurred around the world. The Netherlands switched to DREs in the late 1990s and abandoned them in 2007 after their security flaws became known. Germany and Ireland similarly had failed experiments with DREs. Brazil converted to DREs over the past 15 years, despite concerns about accuracy and security. India has adopted a homegrown DRE-like system that has no software, but studies have shown vulnerabilities in the hardware's architecture.1
Understanding the history of US voting systems, and of the particular model described in this article, is instructive because the same technology development patterns are currently happening with the Internet of Things and cyber-physical systems (CPSs). Lack of strong standards, whether from government or the private sector, combined with a poor understanding of the risks and a large demand for technology in any form can result in risky elections, risky refrigerators, risky cars, or risky manufacturing plants.
The Role of Legislation and Standards

The HAVA legislation authorized several billion dollars for US local governments to upgrade voting systems and fund other election-related activities. States and local governments purchased billions of dollars of equipment from a handful of small vendors, including Global Election Systems (subsequently purchased by Diebold), Election Systems & Software (ES&S), Hart InterCivic, Sequoia, UniLect, and Advanced Voting Solutions (AVS). Each product met the Federal Elections Commission's (FEC) Voting System Standards (VSS).2 (For more information on VSS's evolution, see the "Voting System Standards" sidebar.) In turn, states certified the voting systems, generally relying on the FEC's certification as the basis for their confidence.

The Role of Computer Scientists

In the early 2000s, computer scientists played a small role in the evolving DRE marketplace, although pioneers such as Rebecca Mercuri, Roy Saltman, and Peter Neumann pointed out the risks of software that recorded votes.3–5 A key debate was, and continues to be, the threat model. DREs were intended in part to make it harder for insiders to attack a voting system, because the presumed effort to attack a computerized system was greater than that of stuffing ballots in a paper ballot box or tampering with gears in a lever voting machine. However, with DREs, threats by more sophisticated insiders as well as outsiders increased.

Most voting system designers assumed that attackers wouldn't be able to obtain voting systems for close inspection; this assumption turned out to be false. For instance, I acquired several voting machines from local governments disposing of used equipment; others have had similar success in legitimate purchases. Poll workers in some states have custody of a voting machine for a week or more prior to elections and could presumably study vulnerabilities with minimal chance of detection. By the mid-2000s, computer scientists—and security experts in particular—began to pay attention, and the certification process's
weaknesses became more apparent. In 2004, the widely reported study of a Diebold voting system's leaked source code increased attention,6 and in 2007, the California Secretary of State commissioned the "Top-to-Bottom Review," which examined the security of the Diebold, Hart, and Sequoia voting systems.7 This was followed by the Ohio EVEREST report, which examined ES&S, Hart, and Diebold systems.8 Several other reports in Florida, Maryland, and other states found additional problems and noted that all DREs examined had serious security flaws.

The AVS WinVote, however, was not examined except by the FEC VSS process, as it was used in only Pennsylvania, Mississippi, and Virginia. None of these states performed a comprehensive study of their voting systems. For example, the Pennsylvania certification didn't include any penetration testing or examination of the cryptographic protocols used.9,10 In 2008, WinVote was decertified in Pennsylvania in response to lawsuits and ceased being used in Mississippi, leaving Virginia as the only state to use this system.

Voting System Standards

The Federal Elections Commission's Voting System Standards (VSS) was replaced by the Voluntary Voting Systems Guidelines (VVSG) 1.0 in 2005, and VVSG 1.1 in 2015. These new guidelines were promulgated by the Election Assistance Commission created by the Help America Vote Act. Both VVSG versions are significantly stronger than the VSS. However, security experts continue to be concerned about their laxness.
The WinVote System

The WinVote system is a Windows XP Embedded device with a touchscreen, USB ports, and a small thermal-paper printer behind a locked door. It's arguably different
from any of the competitive products in one key area: it alone uses Wi-Fi, which allows for easier "programming" of the election—for instance, setting up the voting machine with the contests, names of contests, and other configuration information. More important, from the poll workers' perspective, the Wi-Fi capability allows easier polling place closing by synchronizing a set of WinVote machines and printing a single summary result, instead of each machine printing its own totals that must then be combined manually—an error-prone process after a 12- to 16-hour day of operating the polling place.

In 2005 and 2006, legislative committees (of which I was a member) questioned the security of voting systems, including WinVote, and questioned the use of Wired Equivalent Privacy (WEP) as wireless protection, noting that it had been "broken" years earlier. Despite the advantages of Wi-Fi, security experts advocated for prohibiting Wi-Fi in polling places as a result of security risks—a ban that the Virginia legislature adopted in 2007.11 However, WinVote systems wouldn't operate if the Wi-Fi was completely disabled, so the prohibition was repealed in 2008.12
Weakness in Depth Exposed

The WinVote machines continued to be used until November 2014, when they were reported to be crashing in a polling place whenever a poll worker attempted to download music using an iPhone. The Virginia Information Technology Agency (VITA), Virginia's centralized IT department for state government, was asked to assess WinVote's security. Although the report revealed that VITA was unable to reproduce the iPhone crashing syndrome, the results were damning:13

■ WinVote used a version of Microsoft Windows XP Embedded, and no patches had been installed since 2004.
■ WinVote used the WEP wireless encryption scheme, which was declared obsolete in 2004 due to security flaws.
■ The WEP key was hardwired to "abcde."
■ Even if Wi-Fi was disabled, the "disable" feature applied only to the WinVote application and not to the Windows XP operating system, which continued to expose file services, among others.
■ The Windows administrator password was set to "admin," and no interface was provided to change the password.
■ The database was encrypted using a weak protection with a hardwired key.
■ There were no logs or cryptographic checksums to detect whether the database had been replaced (a sketch of the kind of check that was missing follows this list).
■ The physical controls on the USB and other ports were weak, and physical access could allow installation of software (for instance, through the autorun mechanism).
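To make the missing-integrity-check bullet concrete, here is a minimal sketch of the kind of keyed checksum that would have flagged a swapped database. Nothing like this shipped with WinVote; the key handling shown is purely hypothetical.

```python
# A minimal sketch of a keyed integrity check over a votes database:
# compute an HMAC when the polls open and verify it before results are
# read. The key must live outside the attacker's reach for this to help.
import hmac, hashlib

def db_tag(db_bytes, key):
    return hmac.new(key, db_bytes, hashlib.sha256).hexdigest()

key = b"per-election secret key"          # hypothetical key management
original = b"...votes database bytes..."  # stand-in for the database file
tag = db_tag(original, key)

tampered = original.replace(b"votes", b"VOTES")
print(hmac.compare_digest(tag, db_tag(original, key)))  # True: untouched
print(hmac.compare_digest(tag, db_tag(tampered, key)))  # False: replaced
```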
This collection of security flaws is so severe that if an election was held using the AVS WinVote, and it wasn't hacked, it was only because no one tried. Attackers could successfully modify an election using a simple set of steps requiring no software development:

1. Using a network sniffer, capture the traffic between WinVote units and break the WEP encryption to determine the key (which VITA did).
2. Connect to the voting machine via Wi-Fi and access file services.
3. If an administrator password is needed, the default "admin" suffices.
4. Download the Microsoft Access database containing the votes.
5. Use a free or commercial tool to determine the hardwired key.
6. Modify the database using Microsoft Access to add, remove, or change votes.
7. Upload the revised database to the voting machine.
Each of these flaws individually would be cause for significant alarm, as a secure system relies on a combination of security measures. This combination of flaws is the reverse of defense in depth, or "weakness in depth."

Other potential attack methods that VITA didn't explore include taking advantage of unpatched Windows XP vulnerabilities or compromising the systems via physical access, such as installing software through a USB device. But given the ease of manipulating the election directly using the above steps, such options weren't considered further.

After these discoveries, the State Board of Elections immediately decertified WinVote machines, prohibiting their future use—a wise decision as the problems uncovered indicated that any attack would be both successful and undetectable. Once it became obvious just how vulnerable the systems were, it was inevitable that tampering would occur (if it hadn't already). However, the cost is significant—decertification approximately two months before a primary election didn't leave local governments much time to obtain new equipment, train their staff and poll workers, and inform voters of the changes. Many local governments are borrowing equipment from other local governments for the primary election, with the plan to purchase new equipment before the general election in November 2015, when the entire state legislature and many local offices will be on the ballot. And in many cases, local governments hadn't budgeted for the costs of purchasing new equipment, further adding to the difficulty.

How Did We Get Here?

In retrospect, we can identify several problems that led to the initial certification and continued use of the WinVote machines, despite their now-obvious unsuitability.

■ Computer scientists didn't adequately participate in the development of the initial standards, allowing a weak standard to be instituted.
■ A voting system's life cycle is typically 10 to 15 years, so any decisions made on certification must consider that lifetime.
■ "Grandfather" clauses in approvals at the federal and state levels ensured that once a voting system was approved, it could continue to be used essentially forever. Grandfather clauses are appropriate when the threat model and risk environment are unchanging; however, in a field such as information security, both are morphing constantly.
■ Voting systems are a niche area and generally built by boutique vendors that might not be in business for the duration of the life cycle, as was the case for WinVote.
■ Different groups' driving factors weren't adequately considered. In many states, voting systems are acquired by local governments, which tend to have relatively small IT staffs and frequently have no security staff. Election offices run on very tight budgets, and capital budgets (for example, for replacing voting equipment) are difficult to obtain.
■ Election officials and those responsible for certifying election systems commonly didn't understand the inner workings of the voting systems or the principles of computer security.
■ Voting system software was considered proprietary and closed, so even election officials weren't generally permitted to examine it and were forced to deploy the voting systems as black boxes and simply trust the vendors' exaggerated security claims.

Many of these issues apply to CPS technologies. The certification standards—to the extent that they exist—are weak, and devices aren't subject to periodic reexamination; computer scientists, particularly security experts, haven't been sufficiently involved; and a CPS device's lifetime is measured in decades. There's an increasing understanding of threat models, largely as a result of the Stuxnet attacks, but every day brings new reports of vulnerabilities in traffic lights, aircraft, automobiles, medical devices, smartphones, and other embedded systems.

So what next? The WinVote experience will presumably help bring to a close the era of DRE voting systems in the US, although some other countries seem to be moving in the opposite direction. Many states and countries are experimenting with Internet voting; for instance, Estonia has used Internet voting routinely for more than a decade. Perhaps the lessons learned from WinVote and other DREs will help the community learn the risks of immature voting technology before it becomes widespread and the risks to democracy are irreversible.

Finally, the lessons learned should serve as a warning to those
rushing headlong toward the Internet of Things and CPSs. The siren song of improved productivity might have unanticipated consequences, as the threat environment changes but devices with long life cycles are unchangeable. Weakness in depth is a disaster that must be averted.

References
1. S. Wolchok, "Security Analysis of India's Electronic Voting Machines," Proc. ACM Conf. Computer and Communications Security (CCS 10), 2010; https://jhalderm.com/pub/papers/evm-ccs10.pdf.
2. "Voting System Standards," Federal Election Commission, Apr. 2002; www.eac.gov/testing_and_certification/voluntary_voting_system_guidelines.aspx.
3. R. Mercuri, "Electronic Vote Tabulation Checks & Balances," PhD dissertation, Dept. Computer and Information Systems, Univ. Pennsylvania, 2000; www.notablesoftware.com/Papers/thesdefabs.html.
4. R.G. Saltman, Effective Use of Computing Technology in Vote-Tallying, Nat'l Bureau of Standards, Mar. 1975; http://csrc.nist.gov/publications/nistpubs/NBS_SP_500-30.pdf.
5. R.G. Saltman, Accuracy, Integrity, and Security in Computerized Vote-Tallying, NBS special publication 500-158, Nat'l Bureau of Standards, 1988; www.itl.nist.gov/lab/specpubs/500-158.htm.
6. T. Kohno et al., "Analysis of an Electronic Voting System," IEEE Symp. Security and Privacy, May 2004, pp. 27–40.
7. D. Bowen, "Top-to-Bottom Review," California Secretary of State, 2007; www.sos.ca.gov/voting-systems/oversight/top-to-bottom-review.htm.
8. "Ohio EVEREST Voting Study," Systems and Internet Infrastructure Security, Dec. 2007; http://siis.cse.psu.edu/everest.html.
9. "Examination Results of the Advanced Voting Solutions WinVote Direct Recording Electronic Voting System with WINware Election Management Software," Commonwealth of Pennsylvania, Department of State, Feb. 2006.
10. "Reexamination Results of the Advanced Voting Solutions WinVote Direct Recording Electronic Voting System with WINware Election Management Software, Version 2.03," Commonwealth of Pennsylvania, Department of State, Apr. 2007.
11. "SB 840 Electronic Voting Equipment; Requirements and Recount Procedures," Virginia's Legislative Information System, 2007; http://lis.virginia.gov/cgi-bin/legp604.exe?071+sum+SB840S.
12. Senate Bill 52, Virginia's Legislative Information System, 2 Mar. 2008; http://lis.virginia.gov/cgi-bin/legp604.exe?081+ful+CHAP0087.
13. "Security Assessment of WinVote Voting Equipment for Department of Elections," Virginia Information Technologies Agency Commonwealth Security and Risk Management, 14 Apr. 2015; http://elections.virginia.gov/WebDocs/VotingEquipReport/WINVote-final.pdf.

Jeremy Epstein is a senior computer scientist at SRI International. Contact him at jeremy.epstein@sri.com.
IT ALL DEPENDS
Editors: Aad van Moorsel, aad.vanmoorsel@ncl.ac.uk | Mohamed Kaâniche, mohamed.kaaniche@laas.fr
End-to-End Verifiability in Voting Systems, from Theory to Practice

Peter Y.A. Ryan | University of Luxembourg
Steve Schneider | University of Surrey
Vanessa Teague | University of Melbourne
Since the dawn of democracy, societies have been experimenting with technological means to tackle corruption and avoid the need to trust officials. Excavations of Ancient Greece have revealed mechanisms that were clearly designed to ensure allotment: the randomness of the selection of people for office. In response to a rash of corrupted elections in the US in the late 19th century, countless devices were created that promised to provide incorruptible vote recording and counting. Thomas Edison even patented an electronic vote-recording device, and monstrous Metropolis-style lever machines persisted in some US states until very recently.

Throughout the history of democracy, there's been a battle between those trying to ensure the integrity of elections and those seeking to undermine them. The human ingenuity that has been poured into this war is truly impressive; see Andrew Gumbel's Steal this Vote: Dirty Elections and the Rotten History of Democracy in America for a highly entertaining—and somewhat terrifying—account.1 The combat continues unabated, but now with new technology available to both sides. Cryptographers and those in information security have attempted to address the problem since the turn of the 21st century. Modern cryptography opens up a realm of new possibilities, but like all technology, cryptography and
digital innovations are double-edged swords, opening up new threats. Some argue that voting is a human activity that should remain in the traditional, even ceremonial realm: casting paper votes into ballot boxes and counting the resulting pile of ballots by hand. Others worry that any move to digital voting technology will enable systematic corruption. This position does hold some merit: it's true that any hasty, ill-thought-out innovation could result in disaster. Indeed, this has been demonstrated many times, such as with the California Top-to-Bottom Review of voting (https://www.sos.ca.gov/voting-systems/oversight/top-to-bottom-review.htm),
where the team analyzing commercial voting systems in California declared that "virtually every important software security mechanism is vulnerable to circumvention." It's clear, then, that innovations must be developed with extreme care. But the argument that moving away from the traditional voting system will be disastrous is misguided.
End-to-End Verifiability

The promise of end-to-end verifiability (E2EV) gives us hope that digital technologies can provide benefits in terms of security, and not just in terms of convenience and usability. E2EV uses some of the novel
properties of modern cryptography to offer something completely new and quite remarkable: the means for voters to confirm that their vote is accurately included in the tally while preventing any third party from determining how they voted, even with their cooperation. In essence, voters can privately create an encryption of their vote. All encrypted votes are posted to a public website, where voters can confirm that their vote is correctly recorded. The batch of encrypted votes is anonymized and decrypted in a universally verifiable fashion and can then be tabulated.

The fundamental challenge in public voting is how to reconcile the conflict between demonstrable integrity and ballot privacy. The E2EV solution is the classic computer science way of introducing an indirection: the encryption and decryption of votes. A short, gentle introduction to E2EV can be found at http://arxiv.org/abs/1504.03778.

Although E2EV sounds simple, it's really quite complex. The implementation of E2EV has to be sufficiently simple and usable for voters, election officials, and candidates to feel comfortable. A particularly delicate step is encrypting the ballot in such a way that voters are confident that their vote has been correctly encoded without involving a third party. The most common approach to achieving ballot assurance is the Benaloh challenge: voters tell the device how they wish to vote, and this commits the device to an encryption. Voters can now challenge this—requiring that the encryption be opened—or cast their ballot. Voters are free to repeat this as many times as they wish until they feel confident that the device is behaving correctly. Of course, it's essential that the device not know in advance how voters will choose.
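The cast-or-challenge loop can be sketched in a few lines. Real systems commit with randomized public-key encryption (ElGamal, for example); the hash commitment below is a stand-in I use so the audit step stays self-contained and checkable.

```python
# A toy sketch of the Benaloh cast-or-challenge flow described above.
# A hash commitment stands in for randomized public-key encryption.
import os, hashlib, random

def commit(vote, randomness):
    return hashlib.sha256(randomness + vote.encode()).hexdigest()

def device_commit(vote):
    r = os.urandom(16)  # the device's secret encryption randomness
    return commit(vote, r), r

def audit(commitment, vote, revealed_r):
    """Challenge: the device must open the commitment; the voter recomputes it."""
    return commit(vote, revealed_r) == commitment

vote = "candidate-3"
commitment, r = device_commit(vote)
# The voter challenges as often as they like; only an unchallenged
# commitment is finally cast. The device can't know which will happen.
if random.random() < 0.5:
    print("challenged, opens correctly:", audit(commitment, vote, r))
else:
    print("cast ballot:", commitment)
```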
In recent years, we've seen such systems start to move from academic articles into the real world. In 2009, the Scantegrity II system, which uses the E2EV approach, was successfully used in municipal elections in Takoma Park, Maryland.2
vVote

Last November in Victoria, Australia, a system called vVote, based on the Prêt à Voter approach,3 was successfully used by a section of the electorate. The system allowed for E2EV electronic voting in supervised polling places—the first time this was done in a politically binding statewide election—for voters with disabilities, such as vision impairment, and for Australian citizens voting remotely from London, England. Votes were cast privately in a voting booth and then transferred electronically to a central count. Because the electronic system ran in parallel with the traditional paper voting system, the final step in which the electronic votes were merged with the physical ones could be observed only by poll watchers who were present. Apart from that, all other steps could be verified by voters.

The key idea behind the Prêt à Voter approach, which vVote inherits, is to encode votes using a randomized candidate list, which ensures the secrecy of each vote and removes any bias (a schematic sketch of this encoding appears at the end of this section). Once a ballot is marked by a voter, the candidate list is detached and destroyed. An encryption of the candidate order is preserved and used to extract the vote during tabulation. This gives voters four steps of verification:

1. Before casting a vote, voters can confirm that the printed ballot with the randomized candidate list is properly constructed. When given a ballot, voters can choose to challenge it by demanding cryptographic proof of its correctness, which they can take home and verify. Voters can challenge as many ballots as they like before accepting one.
2. When the voting computer prints out their marked ballot, voters can check that the marks align properly with the randomized candidate list.
3. Once the candidate list is destroyed, voters leave the polling place with a receipt that includes their printed ballot and the encrypted candidate order. Voters can see that their ballot appears on a public list of accepted votes without revealing how they voted.
4. Anyone can verify that all the votes on the public list are properly shuffled and decrypted.

All of these steps—aside from the second—can be performed by or with the help of proxies of the voters' choice. Every aspect of the system is available for scrutiny: every check that voters perform with a computer can be independently recompiled, reimplemented, or performed by a completely independent party. The source code for vVote is available at https://bitbucket.org/vvote. A nontechnical guide is available at http://electionwatch.edu.au/victoria-2014/click-here-democracy-e-vote-explained, and the complete system description and security analysis can be found in Chris Culnane and his colleagues' "vVote: A Verifiable Voting System."4

The vVote system was designed to handle up to hundreds of thousands of votes, though for this particular election, access to the system in the State of Victoria was restricted to 24 early voting centers and to voters with disabilities. In addition, voters in London, England, were able to use the system to cast their vote in a supervised polling place at the Australian High Commission. For these groups, 1,121 votes were cast using the system, more than the number of remote electronic votes cast in
2010, and with a quarter of the number of polling places available. A survey of the voters in London found that more than 75 percent agreed or strongly agreed with the statement that the system was easy to use.
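The sketch promised above: a schematic of the randomized candidate list at the heart of Prêt à Voter. A real ballot carries the permutation only in encrypted form (the "onion"), and tabulation uses a verifiable mix; the authority-held table here is a deliberately naive stand-in.

```python
# A schematic sketch of the Prêt à Voter ballot idea vVote inherits: each
# ballot carries a randomly permuted candidate list; the receipt keeps only
# the marked position plus an encrypted record of the permutation. Here a
# table held by the election authority stands in for that encryption.
import random

CANDIDATES = ["Alice", "Bob", "Carol"]
AUTHORITY_ONIONS = {}   # ballot_id -> permutation (stand-in for ciphertext)

def print_ballot(ballot_id, rng):
    order = CANDIDATES[:]
    rng.shuffle(order)                     # randomized candidate list
    AUTHORITY_ONIONS[ballot_id] = order    # in reality: encrypted on the ballot
    return order

def cast(order, choice):
    position = order.index(choice)         # the voter marks a position...
    return position                        # ...and destroys the candidate list

def tally(ballot_id, position):
    return AUTHORITY_ONIONS[ballot_id][position]  # decrypt-and-decode step

rng = random.Random(42)
order = print_ballot("ballot-001", rng)
receipt = cast(order, "Bob")                  # receipt reveals only a position
print(receipt, tally("ballot-001", receipt))  # position varies; vote is Bob
```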
Issues and Challenges

Although voter feedback seems to be fairly positive, there are some issues regarding existing E2EV techniques. The very concept of being able to verify a vote rather than blindly trusting a system is novel for voters and requires an effort by the authorities to educate and motivate the electorate. Usability remains a challenge for E2EV systems, as discussed in Fatih Karayumak and his colleagues' "User Study of the Improved Helios Voting System Interfaces."5 Verification needs to be simple enough so voters can understand its purpose and feel motivated to perform the checks in significant numbers. It's not sufficient for voters to simply follow the system's instructions—without performing any checks—as attackers could manipulate the code issuing the instructions.

Another challenge is that a system can't simply be verifiable—it's essential that the system is actually verified randomly many times to ensure confidence in the result. In the case of the November 2014 election in Victoria, observation of the remote voters in London suggested that the majority did perform some check of the printed receipt against the candidate list, and around 13 percent of those using vVote checked receipts on the public website.6

There are a number of alternative commercial systems that claim to be verifiable but don't actually allow voters to perform their own checks. Of course, this can result in a more appealing "vote and go" user interface. With the iVote system, used in the 2015 state elections in Victoria's neighboring state of New South Wales, only a small number of chosen auditors could verify the system's output. Voters can check their own votes only by querying a database, instead of seeing the evidence themselves and checking it with their own machine as they can with E2EV voting. One of the authors of this article co-discovered a serious security vulnerability in the 2015 New South Wales election. It was easily patched, but only after 66,000 votes had been cast.7 Given that iVote's "verification" mechanism is unavailable for external review, there's a risk that it contains errors or security holes. This is important because trust in a small number of computers represents a potential avenue for undetectable, large-scale electoral manipulation if attackers can compromise that small set.

System Verification versus E2EV

It's important to note that the philosophy behind E2EV systems is quite different from what's usually meant by "system verification." In the latter, the idea is to perform a detailed analysis of a system's design and implementation against a set of required properties. Thus, as long as the verified code is running at execution time and the verification is complete and correct, the system should uphold the required properties. In practice, it's extremely difficult to achieve all this, especially due to the rather open, distributed nature of voting systems.

By contrast, E2EV seeks to ensure that the system execution is fully auditable. This idea is nicely captured in Josh Benaloh's maxim: "Verify the election, not the system." A related concept is Ronald L. Rivest and John P. Wack's notion of "software independence," which says that any error in the code that could result in a change in the outcome must be detectable at execution time (http://people.csail.mit.edu/rivest/RivestWack-OnTheNotionOfSoftwareIndependenceInVotingSystems.pdf). Of course, this doesn't mean that verification of the design and code should be neglected—it just means that the integrity of the outcome should not be dependent on assumptions about the correctness of the running code.

Another project is the End-to-End Verifiable Internet Voting Project (www.overseasvotefoundation.org/E2E-Verifiable-Internet-Voting-Project/News), which is examining E2EV in an attempt to define the real requirements of verifiability, so vendor systems that are not truly E2EV—but claim to be—can be differentiated from systems that are.

End-to-end verifiability represents a paradigm shift in electronic voting, providing a way to verify the integrity of elections by allowing voters to audit the information published by the system, rather than trusting that the system has behaved correctly. Recent deployments of E2EV systems in real elections demonstrate its practical applicability, and we hope to one day see E2EV as the normal expectation for electronic voting systems.

References
1. A. Gumbel, Steal this Vote: Dirty Elections and the Rotten History of Democracy in America, Nation Books, 2005.
2. R. Carback et al., "Scantegrity II Municipal Election at Takoma Park: The First E2E Binding Governmental Election with Ballot Privacy," Proc. 19th USENIX Conf. Security (USENIX Security 10), 2010, p. 19.
3. P.Y.A. Ryan et al., "Prêt à Voter: A Voter-Verifiable Voting System," IEEE Trans. Information Forensics and Security, vol. 4, no. 4, 2009, pp. 662–673.
4. C. Culnane et al., "vVote: A Verifiable Voting System," arXiv:1404.6822, 2014; http://arxiv.org/abs/1404.6822.
5. F. Karayumak et al., "User Study of the Improved Helios Voting System Interfaces," 1st Workshop Socio-Technical Aspects in Security and Trust (STAST 11), 2011, pp. 37–44.
6. C. Burton, C. Culnane, and S. Schneider, "Secure and Verifiable Electronic Voting in Practice: The Use of vVote in the Victorian State Election," arXiv:1504.07098, 2015; http://arxiv.org/abs/1504.07098.
7. V. Teague and A. Halderman, "Security Flaw in New South Wales Puts Thousands of Online Votes at Risk," Freedom to Tinker, 22 Mar. 2015; https://freedom-to-tinker.com/blog/teaguehalderman/ivote-vulnerability.
Peter Y.A. Ryan is a professor of applied security at the University of Luxembourg. Contact him at peter.ryan@uni.lu.

Steve Schneider is a professor of computing and Director of the Surrey Centre for Cyber Security at the University of Surrey. Contact him at s.schneider@surrey.ac.uk.

Vanessa Teague is a research fellow in the Department of Computing and Information Systems at the University of Melbourne. Contact her at vjteague@unimelb.edu.au.
EDUCATION Editor: Melissa Dark, dark@purdue.edu
Evaluating Cybersecurity Education Interventions: Three Case Studies
Jelena Mirkovic | University of Southern California
Melissa Dark | Purdue University
Wenliang Du | Syracuse University
Giovanni Vigna | University of California at Santa Barbara
Tamara Denning | University of Utah
The March/April 2015 installment of this department outlined a five-step process for designing an evaluation for an education intervention:1

■ Determine the purpose of the evaluation.
■ Frame the evaluation.
■ Determine the evaluation questions.
■ Determine the information needed to answer the evaluation questions.
■ Establish a systematic method for collecting the information, including timing, target population, and instruments.

We applied this five-step approach in designing an evaluation for three education interventions. The goals of this exercise were to show how to design an evaluation for a real intervention from beginning to end; to highlight the common intervention goals and propose suitable evaluation instruments; and to discuss the expected investment of time and effort in preparing and performing the education evaluations, which can be significant.

Education Interventions
We selected three successful and widely used, yet very different, education interventions. The University of California at Santa Barbara's (UCSB) International Capture the Flag (iCTF) exercise is a distributed, wide-area security exercise in which teams compete against each other to perform security-related tasks. iCTF is the world's largest and longest-running educational hacking competition that integrates both attack and defense aspects in a live setting. Organized by coauthor Vigna, the competition is held once a year in different locations, and participants come from all over the world (https://ictf.cs.ucsb.edu). For this intervention, we consider preparation for the competition as part of the intervention.

SEED labs are a collection of more than 30 hands-on exercises that cover a variety of cybersecurity topics (www.cis.syr.edu/~wedu/seed). These exercises are distributed as prebuilt virtual machine images that participants download and run on their computers, making the exercises highly portable and easily adopted. SEED labs are developed under the leadership of coauthor Du, and have been adopted by hundreds of educators worldwide.

Control-Alt-Hack is a tabletop card game about ethical hacking, designed by coauthor Denning, Tadayoshi Kohno of the University of Washington Computer Security and Privacy Research Lab, and Adam Shostack, an honorary member of the lab. The game introduces many cybersecurity concepts in a fun setting and is widely adopted by educators (www.controlalthack.com).
Determine the Purpose of the Evaluation
We designed summative evaluations for all three interventions that ask whether the intervention achieved the education goals. Summative evaluation can also determine why an intervention succeeded or failed to achieve its desired goals, but this is beyond the scope of this article.
Frame the Evaluation
For each evaluation, we attempted to identify its antecedents (the state of education that the intervention aims to change), transactions (activities that change the state), and desired outcomes (educational goals).2 We also identified the underlying assumptions, beliefs, constraints, and theories that relate to each intervention. Table 1 summarizes this data.
Table 1. Antecedents, transactions, beliefs, and outcomes for each intervention.

International Capture the Flag (iCTF) exercise
Antecedents: Students have only superficial knowledge of cybersecurity. Students lack practical skills in cybersecurity. Students have low interest in cybersecurity. Students engage in only passive learning.
Transactions: Force participants to work in teams, plan ahead and prepare before the competition, strategize and think like an adversary, and gain practical skills. Popularize iCTF broadly.
Underlying assumptions, beliefs, constraints, and theories: Competition-based learning generates interest among a variety of students because it appeals to students' desire to compete. Competitions are a form of active learning, which is effective for developing deeper knowledge and practical skills because it requires that students engage in problem-solving activities that promote analysis, synthesis, and evaluation of content. Cooperative learning, problem-based learning, and the use of case methods and simulations are often coupled with active learning and have been shown to improve students' ability to work in teams.
Desired outcomes: Motivate students to go beyond the call of duty. Improve students' practical skills. Increase the number of students interested in cybersecurity. Teach students how to apply the adversarial mindset under substantial pressure.

SEED labs
Antecedents: Students and workforce members lack practical skills. Teachers who lack practical skills can't teach these skills to students. Schools with underrepresented populations can't teach practical skills due to lack of resources. There are low student interest and retention rates in the cybersecurity field. There is low student retention of cybersecurity knowledge.
Transactions: Allow students to gain practical skills needed for their career and increase student and knowledge retention. Produce materials for teachers that let them assign practical exercises to students even if they aren't experts in a given area of cybersecurity. Exercises are portable and easily adoptable (not tied to infrastructure). Advertise and disseminate materials broadly.
Underlying assumptions, beliefs, constraints, and theories: Practical exercises increase knowledge retention. Exercises not tied to infrastructure will be more readily adopted and infused into the curriculum of teachers who lack practical skills, especially at schools with underrepresented populations. Practical learning exercises will increase student interest and help retain them in cybersecurity programs of study.
Desired outcomes: Improve students' practical skills. Enable teachers lacking practical skills to expose students to practical work in security. Improve retention of material taught in class by reinforcing it in exercises. Improve retention of students in the cybersecurity field. Increase students' interest in cybersecurity. Enable schools to provide students with practical security skills without spending too much money.

Control-Alt-Hack
Antecedents: People understand little about the importance of cybersecurity and their risk of being attacked. People don't understand the diversity of cybersecurity threats. Students have low interest in cybersecurity. Underrepresented populations have even lower interest in cybersecurity. Cybersecurity is perceived as a complex field for a narrow technical population.
Transactions: Introduce diverse fictional characters that players can identify with and present diverse threats in diverse environments. Advertise and disseminate materials broadly.
Underlying assumptions, beliefs, constraints, and theories: Diverse fictional characters help a broader cross-section of people identify with the field of cybersecurity, showing that the field and its professions welcome people with diverse backgrounds. When many people play a computer security game in different settings, it helps increase their understanding of the importance of and interest in cybersecurity. A game as a nonstandard awareness tool might reach different audiences than organized lectures and encourage casual and voluntary play.
Desired outcomes: Increase understanding of the importance of cybersecurity and the potential risks posed by inadequate security safeguards. Convey the breadth of technologies for which cybersecurity is relevant, including conventional computing platforms and emerging platforms like pervasive technologies and cyber-physical systems. Improve understanding of the diversity of potential threats that security designers must consider and the creativity of attackers. Increase interest and enthusiasm in cybersecurity. Show that the information technology community and its professions welcome people with diverse backgrounds.
Each intervention has numerous educational goals, and some overlap. All three interventions aim to increase student interest in cybersecurity, and iCTF and SEED labs both aim to teach practical
skills. This overlap suggests that evaluation approaches for these common outcomes can be reused by many cybersecurity educators. Other outcomes were specific to each intervention. iCTF emphasizes defensive skills, how to think like an adversary, and teamwork—all of which are needed to prepare
students for a career in cybersecurity. SEED labs aim to help underrepresented teachers and institutions and to increase the retention of students and knowledge in cybersecurity—goals that look at improving the current quality of classroom education. Control-Alt-Hack's objectives are
to improve the general population’s understanding of cybersecurity and to promote diversity in the field. These goals are specific to raising awareness rather than improving skills or acquiring knowledge. We can group these interventions’ outcomes into the following broad categories:
■ skills—acquiring or improving a skill or set of skills,
■ interest and awareness—raising awareness of and interest in cybersecurity,
■ learning—acquiring or retaining knowledge, and
■ impact—helping underrepresented populations.
Determine the Evaluation Questions
We developed evaluation questions that ask whether the desired outcomes were met. Table 2 summarizes the questions for all three interventions by category. We believe that many other cybersecurity education interventions will share desired outcomes with these three interventions, so we focus on designing evaluation instruments for each question, independent of the intervention.
Table 2. Evaluation questions by category.

Skills: After the intervention, are participants better able to devise strategic, in-depth offensive and defensive tactics? Do participants have more practical skills in cybersecurity? Are participants better able to use the adversarial mindset under pressure? Does the intervention teach participants to plan and work in teams?

Interest and awareness: Are participants more interested in cybersecurity after the intervention? Does the intervention increase student retention in the cybersecurity field? Do participants have a better understanding of the importance of cybersecurity and the risks posed by attacks? Do participants have a better understanding of the breadth of technologies that can be affected by cyberattacks? Do participants have a better understanding of the diversity of cyberthreats?

Learning: Do participants better retain cybersecurity knowledge?

Impact: Can institutions that serve underrepresented populations easily adopt the intervention? Can teachers who lack practical skills easily adopt the intervention? Do participants see themselves studying or practicing cybersecurity after engaging in the intervention? Were diverse populations impacted by the intervention?
Determine the Information Needed to Answer the Evaluation Questions
This step includes identifying and defining the quality or attribute to be measured, determining a set of operations by which the attribute might be perceived, and establishing a set of procedures or definitions for translating observations into quantitative statements of degree or amount. Here, we discuss ways to evaluate whether the desired outcomes were met using the categories shown in Table 2. We considered a plethora of possible evaluation instruments that could be used to measure given learning goals, regardless of the complexity or cost needed to implement each instrument. In practice, researchers must choose appropriate instruments based on their suitability to the evaluation task at hand as well as the time and effort needed to implement them.
Skills
To determine whether the intervention helps participants acquire or improve a skill, evaluators could apply the following evaluation instruments.
Quiz on hypothetical scenarios. Describe hypothetical scenarios to participants before and after the intervention. Ask them to reply to questions that reveal their mastery of targeted skills. Compare these measures to see if the mastery of skills improved after the intervention. For example, ask students to provide essay answers to a hypothetical security scenario (in either the defensive or offensive role) and have someone evaluate the depth and quality of their solutions before and after participating in iCTF. This would measure whether they learned to go beyond the call of duty (see the first iCTF outcome in Table 1).
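As a concrete illustration of this pre/post comparison, the following sketch aggregates hypothetical rubric grades (0 to 5, assigned by a human grader) before and after an intervention; the names and numbers are invented for illustration.

pre_scores  = {"alice": 2, "bob": 1, "carol": 3}   # rubric grades before the intervention
post_scores = {"alice": 4, "bob": 3, "carol": 3}   # grades on a comparable scenario afterward

gains = {s: post_scores[s] - pre_scores[s] for s in pre_scores}
improved = sum(1 for g in gains.values() if g > 0)

print(f"mean gain: {sum(gains.values()) / len(gains):.2f}")
print(f"{improved}/{len(gains)} participants improved")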
Simulate scenarios. Place students into simulated scenarios that force them to use targeted skills and either automate measurements or pair someone with each participant to evaluate their skills. For example, tabulate the practical skills students learn in each SEED lab and create computer-based scenarios in which students must employ these skills to achieve some goal. Measure the mastery of targeted skills by recording students' command-line input and then analyzing the input's sophistication and speed of achieving the goal.

Measure as students learn. Apply the same automated measurement approach from the simulated scenarios to the intervention. This would enable analysis of not only whether students learn with the intervention but also how they learn. For example, track command-line inputs for each participant and system state during an iCTF exercise and have someone analyze the sophistication of each participant's action, the evolution of this sophistication during the exercise, and the level of teamwork. For many interventions, however, learning happens in the preparatory phase as well as the execution phase, so this evaluation approach would have to span both phases.
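One plausible way to instrument the command-line measurement used by these two instruments is a thin wrapper that logs every command with a timestamp for later analysis. This is a sketch under our own assumptions, not an existing iCTF or SEED harness, and the log format is arbitrary.

import json
import subprocess
import time

LOG_FILE = "session_log.jsonl"  # hypothetical per-student log

def run_and_log(command: str) -> int:
    # Run the student's command, recording it, its duration, and exit status.
    start = time.time()
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    entry = {"t": start, "cmd": command, "exit": result.returncode,
             "duration": time.time() - start}
    with open(LOG_FILE, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return result.returncode

# Later analysis can mine the log for tool sophistication, time to goal,
# and (with one log per team member) the division of labor.
run_and_log("echo scanning target")  # stand-in for a real command such as a port scan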
Self-assessment. Survey students about their perception of what they know before and after the intervention using multiple-choice questions (for instance, rate your level of proficiency in network administration) or essay questions (for instance, list the tasks you know how to perform in network administration).

Quiz and self-assessment approaches require the evaluator to precisely identify skills learned in an intervention. This might be challenging for loosely specified interventions like iCTF, in which students learn independently in an informal setting, and in which students might employ many skills to achieve the iCTF goal (such as hacking into the system). Furthermore, self-assessment might not align well with actual skill level because it's subjective. Simulated scenarios and measuring while learning offer more objective evidence of learning. But, if automated measurement is deployed, it requires instrumentation of the intervention and aggregation of low-level measurements into high-level goals. Pairing an evaluator with participants to measure skill attainment requires a large time investment and limits the evaluation scale.
Interest and Awareness
Measuring interest and awareness is well-understood and often applied in the education community. These qualities can be assessed using the following evaluation instruments.

Self-assessment. Participants are surveyed about their interest in the field or their intent to continue to be involved in cybersecurity.
Follow-up. Participants are tracked for some time after the intervention, and data is gathered to evaluate their engagement with the field—did they take a follow-up course, enroll in graduate school,
join a professional group, or continue self-learning?

Of these two options, self-assessment is easier to design and conduct but might overestimate the intervention's impact, as it's likely that some students will say they intend to stay engaged but won't follow up on those intentions with actions.

Control group. A control group—
students who weren’t affected by the intervention—is needed to properly measure an increase in interest and awareness. One way to achieve this would be to draw the control group from the same population as the intervention group. This works in the case of voluntary interventions such as iCTF and Control-Alt-Hack, in which a student can choose to participate or not. It doesn’t work well in the case of class-based interventions, as a teacher usually wants all students to benefit from an intervention. Another way to evaluate the impact of class-based interventions would be to draw a control group from a similar population—for example, students in cybersecurity classes at comparable institutions with regard to student demographics and institution strength and resources. In this case, care must be taken to control for variables such as different teachers, class syllabi, and so on. We’d like to draw attention to a well-known fact in survey-based evaluations: how you ask questions matters.3 Care must be taken to not lead participants to any specific answer. For example, asking “did your interest increase?” biases the participant to answer “yes.” Instead, ask for a rating of the level of interest before and after the intervention and compare these numerical values to see if there was an increase. It’s also possible that students learning about security and privacy will become more sensitive to sharing personal information. This makes
research consent even more important to protect participants’ wellbeing while ensuring that the results are ecologically valid.
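A neutral-wording analysis can then be purely numerical, as in this sketch: paired before/after interest ratings (invented values on a 1-to-5 scale) are compared by their differences rather than by asking a leading question.

from statistics import mean

before = [2, 3, 1, 4, 2, 3, 2]  # hypothetical paired ratings from the same students
after  = [3, 3, 2, 5, 2, 4, 3]

diffs = [a - b for a, b in zip(after, before)]
print(f"mean shift in interest: {mean(diffs):+.2f} points")

# If SciPy is available, a paired significance test is one more line:
# from scipy.stats import wilcoxon; print(wilcoxon(after, before))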
Knowledge
The following evaluation instruments could be applied to measure knowledge acquisition and retention.

Quiz on concepts. Students take a quiz with either multiple-choice or essay questions that evaluates their understanding of the concepts.

Simulated scenarios. Students are placed in simulated scenarios that require them to apply the knowledge they gained in the intervention. Automatically collect low-level measurements and convert them into higher-level concepts that students learned, or pair an evaluator with each student to determine which concepts they understand based on their actions and verbal output.

Of these two approaches, a quiz is much easier to design and conduct. To evaluate learning based on the intervention, the same evaluation would be conducted before and after the intervention. To evaluate knowledge retention, the same evaluation would be repeated after some time has passed since the intervention. This requires evaluators to collect student contact information and follow up with students after a specific interval.
Impact
Evaluate the impact of an intervention on specific populations by surveying the participants. A survey can consist of multiple-choice questions (for instance, rate any difficulties you had in adopting the materials on a scale of 0 to 5, where 0 is none and 5 is many difficulties) or essay questions (for instance, list any difficulties you had in adopting the material). Along with the collection of the level of impact, the demographics of participants or
institutional characteristics must often be collected as well. This is necessary so the impact can be reported separately for each group of interest (such as Hispanic Americans).
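Reporting impact per group then amounts to grouping survey rows by the collected demographic field, as in this sketch with invented responses and column names.

from collections import defaultdict

responses = [  # hypothetical survey rows: demographic group, difficulty rating (0-5)
    {"group": "Hispanic American", "difficulty": 1},
    {"group": "Hispanic American", "difficulty": 3},
    {"group": "other", "difficulty": 2},
]

by_group = defaultdict(list)
for row in responses:
    by_group[row["group"]].append(row["difficulty"])

for group, ratings in by_group.items():
    print(f"{group}: mean adoption difficulty {sum(ratings) / len(ratings):.1f} (n={len(ratings)})")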
Establish a Systematic Method for Collecting the Information
For many of the above-listed evaluation instruments, evaluators could collect the required information from each participant in the intervention or from a randomly sampled subset. The biggest challenge we faced in evaluating the three interventions was due to their wide adoption by other educators. This removed the participant pool from the intervention authors and made data collection difficult. Although it's possible to ask teachers who adopt an intervention to collect some data for its evaluation, the logistics are complex.

As a community of cybersecurity educators, we want successful and far-reaching interventions—those that are widely adopted and affect broad, international populations. Our efforts, then, should be directed to designing standard evaluation instruments that can be adopted by educators who use the intervention and whose results could be easily reported back to the intervention authors. Ideally, these evaluation instruments and result reports would be automated. For example, quizzes could be created by intervention authors and conducted using an online service like SurveyMonkey. Teachers who use an intervention would announce these quizzes in their classes at a specific time. As another example, student email addresses could be collected by requiring students to sign up before participating in an intervention. The intervention author would use the email addresses after the intervention, for example, to evaluate retention in cybersecurity.

Reality Check
So, what was actually done to evaluate our chosen interventions? The intervention authors considered many of our approaches but implemented only some of them, owing to time and cost constraints, some ill-defined goal metrics, or a disagreement. In addition, it should be noted that most faculty positions are judged by research publications much more than teaching innovations as criteria for promotions and tenure. When innovative teaching approaches are designed and implemented—often as a sideline from primary research directions—the time and effort needed to properly evaluate their impact on learning can be prohibitively expensive (in terms of time or monetary investment), and venues likely to publish the work might not be considered top-tier in the faculty member's area.

The iCTF authors considered many of our evaluation approaches and implemented a postexercise survey in 2009 to measure team composition, skill development, and learning.4 Of 56 teams, 35 responded. The survey asked simple yes or no questions and thus provided only coarse evidence of iCTF's positive effects. iCTF was further studied by education researchers to evaluate the impact of teamwork on cyberdefense effectiveness.5 Researchers followed two teams from the 2011 iCTF and found that the better-organized team fared much better in the competition.

SEED employed pre- and postlab questionnaires to evaluate student learning and gathered a great deal of valuable data. Evaluation adoption information and teacher feedback were collected voluntarily and were thus incomplete—if a teacher didn't report back with information, the author had no way to collect it. This highlights the necessity of designing automated information gathering and possibly integrating the measurements with the materials. For example, users who want to download materials could be required to first input their affiliation and email address. The system could then periodically email teachers with a survey link asking for feedback (see the sketch after this section). Although teachers might not reply to these emails, an automated system such as this has a higher chance of gathering useful information than an approach that relies on teachers volunteering feedback.

The Control-Alt-Hack creators opted to send open-ended surveys to educators who adopted the intervention and to conduct small user studies with pre- and postintervention questionnaires, the results of which were published.6 Only 22 out of 150 polled educators returned the surveys. Those who replied represented more than 450 students at the high school, undergraduate, and graduate level. The authors elected to use open-ended questions to avoid leading questions. Although this method requires more manual analysis, it helps provide answers that could be missed in a more structured survey. Authors reported the number of educators who described positive or negative results for engagement and awareness and recounted specific critiques from the surveys.
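The automated gathering idea raised in the SEED discussion, in which adopters register an email address at download time and later receive a survey link, could be as simple as the sketch below. The SMTP host, addresses, and survey URL are placeholders, not any project's real infrastructure.

import smtplib
from email.message import EmailMessage

ADOPTERS = ["teacher@example.edu"]         # collected when materials are downloaded
SURVEY_URL = "https://example.org/survey"  # hypothetical feedback survey

def send_survey_requests() -> None:
    # Mail every registered adopter a link to the feedback survey.
    with smtplib.SMTP("smtp.example.org") as smtp:
        for addr in ADOPTERS:
            msg = EmailMessage()
            msg["From"] = "evaluation@example.org"
            msg["To"] = addr
            msg["Subject"] = "Feedback on the lab materials you downloaded"
            msg.set_content(f"Please tell us how adoption went: {SURVEY_URL}")
            smtp.send_message(msg)

# A scheduler (such as a cron job) would call send_survey_requests() each term.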
example, quizzes could be created team composition, skill develop- The authors elected to use openby intervention authors and con- ment, and learning.4 Of 56 teams, ended questions to avoid leading ducted using an online service like 35 responded. The survey asked questions. Although this method SurveyMonkey. Teachers who use simple yes or no questions and requires more manual analysis, it an intervention would announce thus provided only coarse evi- helps provide answers that could these quizzes in their classes at a dence of iCTF’s positive effects. be missed in a more structured specific time. As another example, iCTF was further studied by edu- survey. Authors reported the numstudent email addresses could be cation researchers to evaluate the ber of educators who described collected by requiring students impact of teamwork on cyber- positive or negative results for to sign up before participating defense effectiveness.5 Research- engagement and awareness and in an intervention. The interven- ers followed two teams from the recounted specific critiques from tion author would use the email 2011 iCTF and found that the the surveys. 68
IEEE Security & Privacy
SECURITY& PRIVACY
IEEE
May/June 2015
Previous Page | Contents | Zoom in | Zoom out | Front Cover | Search Issue | Next Page
M q M q
M q
MqM q THE WORLD’S NEWSSTAND®
SECURITY& PRIVACY
IEEE
Previous Page | Contents | Zoom in | Zoom out | Front Cover | Search Issue | Next Page
THE WORLD’S NEWSSTAND®
Overall, we see the following trends in these evaluations: simple evaluation instruments are favored because of low time and effort cost, even though they might not be best-suited to the evaluation task; evaluation is often done as an afterthought and in an ad hoc manner, resulting in a small sample size and impacting validity; evaluation is often done by cybersecurity experts and not education experts; and intervention authors understand how to design better evaluations but lack the time and resources for this task. Furthermore, some intervention goals might be difficult to measure quantitatively, for example, measuring whether a student has acquired an adversarial mindset.
A Viable Path Forward
The complexity of designing and delivering an evaluation with strong validity is understated. Many conflating factors can make it very difficult to attribute outcomes to specific interventions. As the cybersecurity field evolves, we need to investigate and report on the effects of educational interventions on cybersecurity awareness, learning, retention, motivation, interest, and so on. We then need to be circumspect in how advances in these areas contribute to cybersecurity in knowing which interventions to stop, start, and continue. Despite the numerous challenges in doing this well, this is important work.

Both the complexity and cost of sound evaluation approaches indicate the need for us to address these issues as a community. As we discussed, many interventions share the same goals of increasing awareness, teaching practical skills, improving learning and knowledge retention, and so on. This makes it possible for a community of researchers to each invest a fraction of their time and develop sound evaluation instruments that quantitatively measure the attainment
of the chosen goal and can be coupled with automated data gathering. Others could then easily locate the appropriate instruments, integrate them with their interventions, and invest time only in processing and reporting on the automatically gathered data.
A viable path forward could be to increase interaction and collaboration between the cybersecurity and education communities. Education experts understand how to design sound evaluation instruments and how to gather data—this is what they do for a living. Cybersecurity experts are often enthusiastic about teaching and designing new interventions but don't have evaluation cycles. Teaming up might just be the winning combination to producing sound evaluations of cybersecurity education interventions without jeopardizing careers. We welcome feedback from readers on educational evaluation needs to help us discern priority areas.
References
1. M. Dark and J. Mirkovic, "Evaluation Theory and Practice Applied to Cybersecurity Education," IEEE Security & Privacy, vol. 13, no. 2, 2015, pp. 75–80.
2. P. Rossi, M. Lipsey, and H.E. Freeman, Evaluation: A Systematic Approach, 7th ed., Sage Publications, 2003.
3. D. Dillman and P. Salant, How to Conduct Your Own Survey, Wiley, 1994.
4. N. Childers et al., "Organizing Large Scale Hacking Competitions," Proc. 7th Int'l Conf. Detection of Intrusions and Malware & Vulnerability Assessment (DIMVA 10), 2010, pp. 132–152.
5. S. Jariwala et al., "Influence of Team Communication and Coordination on the Performance of Teams at the iCTF Competition," Proc. Human Factors and Ergonomics Society Ann. Meeting, vol. 56, no. 1, 2014, pp. 458–462.
6. T. Denning et al., "Control-Alt-Hack: The Design and Evaluation of a Card Game for Computer Security Awareness and Education," Proc. 2013 ACM SIGSAC Conf. Computer & Communications Security (CCS 13), 2013, pp. 915–928.
member and project leader at the University of Southern California’s Information Sciences Institute. Contact her at ___________ mirkovic@isi.edu.
Melissa Dark is the W.C. Furnas
Professor of Technology in the College of Technology at Purdue University and the assistant director of Educational Programs at CERIAS (the Center for Education and Research in Information Assurance and Security). Contact her at dark@purdue.edu. ___________
Wenliang Du is a professor in the
Department of Electrical Engineering and Computer Science at Syracuse University. Contact him at wedu@syr.edu. _________
Giovanni Vigna is a professor in the Department of Computer Science at the University of California at Santa Barbara and the CTO at Lastline, Inc. Contact him at vigna@cs.ucsb.edu.

Tamara Denning is an assistant professor in the School of Computing at the University of Utah. Contact her at tdenning@cs.utah.edu.
BASIC TRAINING Editors: Richard Ford, rford@se.fit.edu | Deborah A. Frincke, debfrincke@gmail.com
Biometric Authentication on Mobile Devices
Liam M. Mayron | Arizona State University
Mobile devices provide a compelling platform for computing. They are small, portable, and increasingly powerful, and many have multicore processors and high-resolution displays. More and more users are performing tasks on smartphones that they might have previously executed on personal computers. These tasks range from composing and sending emails to playing games and everything in between. Smartphones have become the gatekeepers to our most personal information, including our heart rate and bank account information. It's critical to protect the sensitive data available through these devices. One way to do this is through the use of biometrics. Smartphones contain many useful sensors, such as GPS, cameras, touchscreens, gyroscopes, and microphones; some even have fingerprint sensors. Such sensors allow the use of biometrics as an authentication mechanism on mobile devices.
Authentication on Mobile Devices
Users often decline to use passwords or other methods of securing their mobile devices.1 Passwords have historically been used to authenticate users; they're convenient to implement, requiring minimal hardware. Passwords require an exact match, which allows them to be protected using one-way transforms, or hashes. They can be authenticated very quickly and
provide no ambiguity—a password is either an exact match and is authenticated, or it isn't an exact match and is denied.

Strong passwords are difficult to guess. Generally, longer passwords that don't contain dictionary words but instead are made up of numbers, special characters, and a combination of uppercase and lowercase letters will be more secure. On desktop systems, complex passwords can be difficult for users to remember but are only slightly less convenient to input than simple passwords. Mobile devices, however, have challenged the usability of text passwords. Many smartphones rely on a finger-operated touchscreen as their main input method. Touchscreens are versatile, but they impede the precise input that a physical keyboard provides. Algorithms that effectively autocorrect text can't be applied to passwords, slowing down the authentication process.

Mobile devices are intended for quick, frequent access, which can lead to compromised security. Secure passwords might not be appropriate for use on mobile devices due to the length of time required for their input. The more individuals use their mobile device (and the more personal data is stored), the faster they'll want to be able to access that device. Many mobile devices provide PINs—sequences of numbers—as alternatives to passwords. PINs have the benefit of being able to be entered quickly, but they provide far less security than passwords and might be guessed more quickly.

Pattern locks provide a similar level of security to PIN codes; they allow users to choose a sequence of points during enrollment and repeat it during authentication. These are convenient and easy to remember—in some cases, users can authenticate without even looking at their phone's screen—but provide only a limited level of protection. Furthermore, pattern locks could be in full view of others with access to the phone's screen, and fingertip oils could leave a distinctive trace, indicating the pattern used to access the device.

We need a quick but secure authentication method for mobile devices. Biometrics could be the answer: they're present on the user at all times, they can't be forgotten, and they can be input as quickly as a glance or a touch.

Biometrics have the potential for additional capabilities on mobile devices. For example, certain biometric traits can indicate a person's stress level, so the system can use this information to respond in an appropriate manner. Mobile devices are personal and typically dedicated to an individual, making biometrics a natural choice for study and implementation.

Adopting biometrics on mobile devices could influence their use on desktop and other computing platforms. A challenge with using biometrics on traditional systems is providing better security than existing nonbiometric methods. However, as users become accustomed to the convenience of biometric authentication, expectations might be raised toward the adoption of biometrics elsewhere. Although nonmobile computing systems often provide additional resources, they're less portable. This might make the use of certain biometrics easier. For example, face recognition is more practical in a controlled, consistent environment. In other cases, some biometrics won't be possible or won't perform as well if the system can't move—for instance, tracking gait requires motion.

Although biometrics provide many benefits, the security of the system's resources and the user's own biometric patterns must be considered.

Biometrics
Biometrics can be physical or behavioral. Popular physical biometrics include fingerprint and face recognition. Physical biometrics are based on our intrinsic characteristics, which are often determined at birth or through random muscle movements over the course of our lives. Behavioral biometrics range from our typing pattern to our movements when we walk. Behavioral biometrics are the result of learned patterns of activity, although they're still shaped by our physical characteristics (for example, our voice or gait).

Unlike passwords, for which only an exact match is acceptable, biometrics typically seek a good (but not perfect) match for authentication. For example, in the case of fingerprint recognition, we would expect some variation due to perspiration, pressure (on the sensor), obstructions, and other factors. For this reason, a perfect match would be hard to attain. Instead, we seek a match that's close enough to provide a reasonable level of certainty that a user is who he or she claims to be. The system administrator can select the degree of similarity needed for a match. A high-throughput system that doesn't protect sensitive data could use a lower matching threshold to reduce the risk of incorrectly rejecting a user. A system protecting more valuable information might require a more precise match, even
if legitimate users are accidentally denied access.
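The two decision rules contrast sharply in code. This sketch (toy feature vectors and thresholds, not a real matcher) shows exact-match password verification next to threshold-based biometric matching, where the administrator's threshold trades false accepts against false rejects.

import hashlib

def password_ok(entered: str, stored_hash: str) -> bool:
    # Passwords: exact match on a one-way hash; no ambiguity.
    return hashlib.sha256(entered.encode()).hexdigest() == stored_hash

def biometric_ok(sample, template, threshold: float) -> bool:
    # Biometrics: accept when similarity clears an administrator-chosen threshold.
    matches = sum(1 for s, t in zip(sample, template) if s == t)
    return matches / len(template) >= threshold

template = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]  # enrolled toy feature vector
noisy    = [1, 0, 1, 1, 0, 1, 0, 1, 0, 0]  # one feature flipped by sensor noise

print(biometric_ok(noisy, template, threshold=0.85))  # True: lenient, high-throughput system
print(biometric_ok(noisy, template, threshold=0.95))  # False: stricter, high-value system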
Fingerprint Recognition
Our fingertips contain patterns of ridges and valleys. These patterns form during gestation and are considered to be unique to individuals. Fingerprints increase the fingertip's surface area, making it more sensitive and providing better grip. Points at which ridges terminate or split (ridge endings and bifurcations, respectively) are known as minutiae. These minutiae, along with the associated ridge directions, make up the data most commonly used for fingerprint recognition: the template. More advanced sensors with higher resolutions can account for the width of the ridges and even the locations of pores on our fingertips.2

For authentication, individuals requesting access to a resource will present their fingertip to the system. The system must then compare the template generated from the new fingerprint to the template in its records. Depending on the degree to which the new fingerprint template matches that in the database and the acceptance parameters established by the system administrator, a user will be rejected or accepted. In certain instances, when the fingerprint quality appears to be poor, the system could immediately prompt the user to try again in hopes of obtaining a better sample.

The primary benefit of fingerprints is that they're closely tied to the individual, but this is also the main risk of using fingerprints for authentication. The consequences of users having their fingerprint copied by an attacker are severe. A fingerprint can't change: once a fingerprint is deemed invalid, a user can no longer use it as a form of authentication. Fingerprints don't have the same flexibility as text passwords, for which users are simply asked to change their password. This risk is compounded if the same fingerprints are used to authenticate multiple systems—known as function creep. There are ways to use fingerprints more securely; however, we must use different techniques than those for traditional text passwords. We can protect text passwords with a one-way transformation. The same input will yield the same output, but the output shouldn't be able to be manipulated to reveal its formation. This works for text passwords because they require an exact match. Fingerprints require a close match, making a one-way hash impractical. One option is to encrypt the template data, but this still relies on key secrecy and decrypting template data to determine matches. Another alternative is the biometrics cryptosystem, wherein a fingerprint can be used to generate a cryptographic key.3 The fingerprint template is discarded and only the cryptographic key is stored. A third option is template transformation, wherein users can supply an additional key used to generate a hash of their biometric information.4

Fingerprints are a useful and convenient biometric, but care must be taken when storing the sensitive data. Major smartphone vendors have been appropriately cautious by restricting access to the fingerprint sensor and taking steps to ensure that templates are protected and can't be removed from the device on which they were generated.
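A two-line experiment makes the hashing problem above concrete: a single bit of sensor noise in the template yields an unrelated digest, so a stored hash can never recognize a "close" fingerprint. The byte values here are arbitrary.

import hashlib

template  = bytes([0b10110010, 0b01101001])  # toy enrolled template
resampled = bytes([0b10110011, 0b01101001])  # same finger, one bit of noise

print(hashlib.sha256(template).hexdigest()[:16])   # this digest is completely...
print(hashlib.sha256(resampled).hexdigest()[:16])  # ...unrelated to this one

This is exactly why the text turns to encrypted templates, biometric cryptosystems, and keyed template transformations instead.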
Facial Recognition
Facial recognition is the oldest and most popular biometric. It has special appeal on mobile devices, as camera hardware is relatively inexpensive and widely deployed on smartphones. Faces don't require physical contact to sample, and many users are accustomed to having their photo taken. Facial recognition functions similarly to fingerprint recognition, in the sense that the face is first sampled (often, a 2D picture is taken in the visible spectrum) and then transformed into a set of features that can be used for matching.5

There are challenges to implementing facial recognition reliably on mobile devices. Mobile devices, as their name suggests, can be used in any environment at any time and can be held at many different angles. This presents immense variability, compounded by the natural changes in our appearance in reaction to different weather, moods, stress, and other factors. Furthermore, we might change our appearance, perhaps by wearing glasses or growing a beard. It can be tempting to lower the acceptance threshold to help overcome the wide range of environmental factors that might interfere, but this could result in lower system security.

Facial recognition is easy to implement today, and major vendors have created libraries to simplify its deployment. However, practitioners must exercise caution and verify any solution before deployment. For example, liveness detection is particularly important in facial recognition. Liveness detection uses algorithms that can detect whether a sample is from a person who is truly in front of the device, or a potential replay attack, where the attacker attempts to record and reuse credentials that were
previously validated. Without liveness detection, attackers can use a different image of a user (often easily found on the Internet) to gain access. Facial detection can provide convenient access, but this convenience should be weighed against security needs.
Behavioral Biometrics
Behavioral biometrics are patterns of human actions such as handwriting, voice, gait, and keystroke dynamics. One example of a behavioral biometric is the written signature, which has long been used to determine a document's authenticity. Behavioral biometrics are acquired and learned over time, although influenced by our physical characteristics, and are generally easy to collect. Often, sampling of behavioral biometrics can be done alongside another task without a user's direct involvement. Behavioral biometrics are subject to great intrauser variation, meaning that the same metric can vary widely for a given user. Because of this, it's currently difficult to use behavioral biometrics alone for authentication, as a sufficiently low threshold of match acceptance must be used. Certain behavioral biometrics, such as
voice, could be impersonated by others or subject to replay attacks.

Biometric authentication could also be combined with other methods of authenticating a system, such as initially requiring a lengthy (but secure) text password and then using biometrics for subsequent authentication attempts. If the quality of the biometric match decreases, the user can be prompted to reenter the text password instead.
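The combined scheme suggested above can be expressed as a small policy: a confident biometric match suffices, and anything weaker falls back to the enrolled text password. The threshold and scores here are notional.

GOOD_MATCH = 0.90  # notional confidence threshold

def authenticate(biometric_score: float, ask_password) -> bool:
    # Accept a confident biometric match; otherwise require the text password.
    if biometric_score >= GOOD_MATCH:
        return True
    return ask_password()

print(authenticate(0.95, ask_password=lambda: False))  # biometric alone suffices
print(authenticate(0.70, ask_password=lambda: True))   # degraded match: password required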
Biometrics are a promising method of authenticating users of mobile devices due to their convenience, but they must be used properly. System security must be maintained and user biometrics must be protected. Biometrics are more usable than lengthy text passwords and are certainly better than no authentication at all, but their use must be tempered against security needs until biometric authentication on mobile devices can be proven to be secure and reliable.
References
1. M. Harbach et al., "It's a Hard Lock Life: A Field Study of Smartphone (Un)Locking Behavior and Risk Perception," Symp. Usable Privacy and Security (SOUPS 14), 2014; www.usenix.org/system/files/conference/soups2014/soups14-paper-harbach.pdf.
2. D. Maltoni et al., Handbook of Fingerprint Recognition, Springer, 2005.
3. U. Uludag, S. Pankanti, and A.K. Jain, "Fuzzy Vault for Fingerprints," Proc. 5th Int'l Conf. Audio- and Video-Based Biometric Person Authentication (AVBPA 05), 2005, pp. 310–319.
4. A. Nagar, K. Nandakumar, and A.K. Jain, "Biometric Template Transformation: A Security Analysis," Proc. SPIE 7541, Media Forensics and Security II, 2010; doi:10.1117/12.839976.
5. M.A. Turk and A.P. Pentland, "Face Recognition Using Eigenfaces," Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR 91), 1991, pp. 586–591.
Liam M. Mayron is a faculty member of the School of Computing, Informatics, and Decision Systems Engineering at Arizona State University. Contact him at lmayron@asu.edu.
SYSTEMS SECURITY Editors: Patrick McDaniel, mcdaniel@cse.psu.edu | Sean W. Smith, sws@cs.dartmouth.edu
Bolt-On Security Extensions for Industrial Control System Protocols: A Case Study of DNP3 SAv5
J. Adam Crain | Automatak
Sergey Bratus | Dartmouth College
Industrial control system (ICS) protocols—key to public utility operations—have developed alongside the Internet but are largely isolated from it, carried by dedicated serial lines between closed networks with trusted software. However, as leased lines are replaced with transmission control protocol (TCP) or wireless connections to serve the needs of "smarter" energy systems and as ICS traffic comingles with other kinds of packets, legacy ICS protocol design becomes a problem. Protocols previously designed for isolated networks must receive "bolt-on" security extensions, compatible with the bulk of legacy implementations already deployed; implementations never meant to be exposed to maliciously malformed input must be hardened to reject it gracefully. Attempting to realize visions of smart utilities with the current ICSs' attack surfaces will dramatically increase risks of their catastrophic failure due to hostile actions.
DNP3 (IEEE Standard 1815-2012) is widely used in the US power grid and is a typical representative of the supervisory control and data acquisition (SCADA)/ICS protocol family.1 As with many other such protocols, DNP3's original design didn't include security features such as authentication. The recently standardized secure authentication (SA) extends DNP3 to provide optional and multiuser authentication services, with characteristic tradeoffs between security and bandwidth. These extensions modify the existing DNP3 application layer by creating additional function codes and object types that selectively apply to a subset of protocol features, leaving others unprotected and increasing overall syntactic complexity.

We believe that this manner of extension represents a security antipattern—a design that will keep producing bugs and weaknesses—and considerably increases the attack surface associated with protocol encoding, parsing, and implementation complexity. Reviews of SA have overlooked this additional attack surface, focusing instead on its cryptographic primitives and message flows. In this article, we discuss this increased attack surface and how to avoid its worst pitfalls.
DNP3 Overview
The DNP3 protocol stack is split into three layers: link, transport, and application (see Figure 1). The protocol is transport agnostic—all three layers are used regardless of whether the underlying network is a serial communications channel or a TCP stream (with its own open systems interconnection model layers below DNP3's link).

The link layer is concerned primarily with framing, point-to-multipoint addressing, and error detection in a manner similar to Ethernet datagrams, but it also includes simple stateless functionality such as heartbeat messages.

The transport layer reassembles multiple link layer frames into larger application messages. This reassembly is based on a single-byte transport header with first, final, and sequence parameters that allow for only in-order reassembly. Unexpected transport segments are simply dropped, and the reassembly buffer is emptied. The maximum default size of a reassembled
application layer message is 2,048 octets, although this size is adjustable if both ends agree on the value of using out-of-band configuration. The application layer handles messages called application data service units (ASDUs) that derive their semantics from a combination of function codes and objects. Messages can consist of zero or more object headers that follow the main application layer header (see Figure 2). Object headers describe the type and quantity of objects that follow. The beginning of the next header is discoverable only by parsing the previous one. The rules for determining object payload lengths are complex and varied. This complexity gives implementations of the DNP3 application layer a large attack surface due to potential programmer errors. For example, programmers might fail to check a payload’s multiple object lengths for consistency, interpret a payload’s contents differently than intended, or assume the presence of objects that are actually absent from a maliciously crafted payload. As usual in software exploitation scenarios, acting on incorrect assumptions while allocating or copying maliciously crafted payload data results in memory corruptions, which attackers can leverage to crash or control ICS processes. Most types of valid messages require at least one object header. Notable exceptions are confirm, cold restart, warm restart, d e l a y m e a s u r e m e n t , and record current time, which are never paired with any objects. The specification exhaustively defines which objects can be paired with which function codes.1 The vast majority of object headers can be processed independently—that is, they aren’t context sensitive with regard to other object headers in the ASDU. Nonsecure DNP3 has only one notable exception to this rule: common
time of occurrence fields and the measurements with which they can be paired. This combination of headers requires a common reference time header to precede one or more relative times and acts as a crude way of compressing what’s normally 48-bit time stamps on measurement values. In any protocol, there’s an inherent design tradeoff between structural flexibility and attack surface, which doesn’t favor cryptography. Indeed, underlying complexity or ambiguity of encoding gave rise to a variety of attacks such as Serge Vaudenay’s and Daniel Bleichenbacher’s as well as the more recent BEAST, CRIME, Lucky13, POODLE, BERserk, and others, which all worked around the enduring strength of the cryptographic primitives. With its high level of flexibility, the DNP3 application layer is a poor candidate for encoding cryptographic functions. Despite the constraints placed on function and object combinations, the number of valid combinations of objects for many DNP3 function codes is practically infinite. The ability to associate multiple objects to a single function makes the DNP3 application layer powerful in terms of flexibility and bandwidth but also particularly vulnerable when it comes to parsing and processing attacker-supplied input. By contrast, SCADA protocols of similar functionality, such as IEC 60870-5-104, have more rigid application layer structures in which the function code completely defines the type of data that follows, reducing the combinatorial complexity of valid inputs (and thus the complexity of the code that must validate them). Not surprisingly, DNP3’s complexity is reflected in its distribution of vulnerabilities.
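To make this class of defect concrete, the following minimal sketch shows the kind of bounds checking an object-header walk requires. It's illustrative only: the header layout it assumes (a 1-byte type and a 2-byte count followed by that many single-byte objects) is a hypothetical simplification, not DNP3's actual wire format.

import struct

class ParseError(Exception):
    pass

def parse_object_headers(payload: bytes):
    """Walk a chain of object headers without trusting declared lengths."""
    offset, objects = 0, []
    while offset < len(payload):
        # Check that the fixed-size header is actually present.
        if len(payload) - offset < 3:
            raise ParseError("truncated object header")
        obj_type, count = struct.unpack_from("<BH", payload, offset)
        offset += 3
        # A declared count must be backed by real bytes; skipping this
        # check is exactly the kind of programmer error described above.
        if len(payload) - offset < count:
            raise ParseError("object count exceeds remaining payload")
        objects.append((obj_type, payload[offset:offset + count]))
        offset += count
    return objects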
Figure 1. Abstract DNP3 communication stack: application, transport, and link layers. The link layer has direct access to the communication channel.

Figure 2. DNP3 application layer messages consist of a main header (including the function code) and zero or more object headers and associated data.

Fuzzing Vulnerable Implementations
Vulnerabilities in DNP3 implementations that arise due to the protocol's complexity aren't theoretical. For nearly a decade, such vulnerabilities in DNP3 and other SCADA protocol implementations have been found by fuzzing2,3; however, little information has been made publicly available on DNP3 vulnerability specifics. A 2010 US government–funded report specifically mentioned the dire need to improve input parsing routines in DNP3 implementations without citing specific failure modes.4 The most comprehensive study of DNP3 vulnerabilities was conducted by Crain (coauthor of this article) and Chris Sistrunk from 2013 to 2014 and resulted in numerous disclosures coordinated with vendors and asset owners (www.automatak.com/robus); a small representative fraction of the raw vulnerability data was released publicly.5 We recap the results of this study here, as they pertain to DNP3 security extensions.
Examples of Vulnerabilities
Crain and Sistrunk tested the effects that crafted malformed frames could have on DNP3 implementations in master controllers and outstation (remote) equipment.
Figure 3. A DNP3 frame with source address 100 and destination address 100 (function: unconfirmed user data; first/final transport segment with sequence number 0). It contains no application layer payload and caused a fault in a real system owing to poor input validation. CRC is cyclic redundancy check.
Nearly all vendor products were found to be vulnerable to single-frame attacks for certain frame types; these types were chosen to exercise the protocol's syntactic complexity and to trigger programmer errors that would likely result from this complexity. A single crafted frame received by a vulnerable implementation could crash the receiving process or drive it into an infinite loop, rendering the entire protocol stack inoperable. Moreover, for many vendors, broadcast frames could trigger such effects, which doesn't require any attacker knowledge about the link endpoint configurations. For example, ASDUs that are too short to contain a valid object header could be delivered in a frame with a correct lower-layer cyclic redundancy check (CRC) value to cause an unhandled exception in the receiving code. An infinite loop could be exploited in another implementation by setting an object count to the maximum possible value of 65,535 but failing to provide these bytes. A response with two control objects unexpected in such a frame would cause a buffer overrun and crash—an example of a payload that's syntactically valid according to the specification but meaningless. This creates room for ambiguity of payload interpretation. In other cases, malformations in simpler lower layers caused crashes, for example, a link frame encapsulating a single-byte malformed transport protocol data unit and no application protocol data unit (APDU).
A single frame triggering an unhandled exception can be as simple as a payload that contains no APDU under the valid link and transport layer checksums (see Figure 3).
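A generational fuzzer models the protocol grammar layer by layer; the toy mutation step below merely gestures at the idea. The frame layout is left abstract, and a real tool would also need to recompute lower-layer CRCs so malformed payloads survive link layer validation.

import random

def mutate_frame(frame: bytes, rng: random.Random) -> bytes:
    """Produce one malformed variant of a valid frame (toy example)."""
    data = bytearray(frame)
    choice = rng.randrange(3)
    if choice == 0 and len(data) > 1:
        # Truncate mid-structure, e.g., an ASDU too short for its header.
        del data[rng.randrange(1, len(data)):]
    elif choice == 1 and len(data) >= 2:
        # Inflate a plausible count/length field to 65,535.
        pos = rng.randrange(len(data) - 1)
        data[pos:pos + 2] = (0xFFFF).to_bytes(2, "little")
    elif data:
        # Flip all bits in one byte.
        data[rng.randrange(len(data))] ^= 0xFF
    return bytes(data)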
Distribution of Vulnerabilities
The generational fuzzer used in this study was designed to stress each layer of the protocol individually to expose weaknesses in each layer's implementation. The tool was iteratively improved using code coverage analysis obtained from an open source implementation of DNP3. More than 80 percent of discovered vulnerabilities were found in the application layer. This isn't surprising given how DNP3's complexity is distributed. The DNP3 specification devotes hundreds of pages to describing the application layer, its state machine, and the numerous object encodings, whereas the link layer is covered in only 21 pages and the transport layer reassembly gets a mere seven pages. We find similar ratios by counting the source lines of code associated with each layer in an open source implementation of the protocol. Simply put, when it comes to robustness and security, less is more. Of the application layer vulnerabilities, a disproportionate number were associated with the unsolicited response functions. A crude way of explaining this is to analyze the specification to see how many object types can be paired with certain function codes. Performing this analysis on the
Parsing Guideline Tables in IEEE Standard 1815-2012 reveals that the most overloaded functions in terms of the number of possible object types are read, response, and unsolicited response.1 The near absence of vulnerabilities in the read function code is best explained by the fact that client ASDUs don’t carry data payloads but merely describe what data is being requested, resulting in a simpler syntax. Responses and unsolicited responses can be associated with the majority of the object headers and types in the specification, giving these functions the highest attack surface.
Underrepresentation of the Application Layer
Although the majority of the failures were discovered in the application layer, there's reason to believe that this layer is underrepresented in the results compared to the link and transport layers. The open source package used to verify the fuzzer is a conservative implementation that doesn't include even more complex protocol feature subsets such as file transfer, datasets, and device attributes. The fuzzer was developed to verify this open implementation and therefore doesn't model these optional features. Many of these features use more complicated encodings that include variable-length fields, many of which can be specified in multiple ways and can be internally inconsistent and potentially confusing to a parser. It's almost certain that significant latent vulnerabilities exist in these complex but untested areas of the various protocol implementations.
Optional Authentication
The SA specification lists a set of function codes that must always be authenticated as well as a smaller subset of function codes that can be optionally authenticated. This decision was made to conserve communication bandwidth.1 However, selective authentication of
application layer messages is a counterproductive and dangerous design pattern, especially in SCADA. Optional authentication conveys a false sense of security to users, fails to address the threats posed by parsing and processing payloads, and substantially increases the protocol's overall complexity by requiring security mechanisms to be protocol aware.
Unauthenticated Closed Loop
The spoofing of measurement data has been a component of several major attacks against ICSs, including Stuxnet, allowing attackers to cause more undetected damage or losses to a process over time than with a sudden catastrophic event. In this context, not providing mandatory authentication of measurement data from the field is an important oversight. Man-in-the-middle attackers on an SA link with only mandatory authentication enabled can allow authenticated control information to pass but subtly alter measurement data in such a way that gradually degrades the process or damages equipment.
Lack of Stateless Functionality
Because almost no stateless functionality can be found in the protocol, configuring an SA system to not authenticate any particular function code is inadvisable. The DNP3 application layer has only a handful of completely stateless function codes. The delay measurement function code, for instance, doesn't alter any server-side state in the outstation when processed. However, because of the event-oriented nature of DNP3, a combination of read and confirm functions allows attackers with access to the network to flush all queued event messages from an outstation if these functions aren't authenticated. DNP3 SAv5 requires the authentication of 21 out of 34 total function codes, whereas the remaining function
codes can be optionally authenticated based on the configuration. Mandatory function codes—listed in IEEE Standard 1815-20121—are primarily those that can alter the outstation’s state and the process’s output state. A notable exception is the assign class function code, which can be used to silence an outstation’s reporting mechanism by assigning all event data in the outstation to class 0. This would have an effect similar to disable unsolicited but could be even more harmful because it would likely persist across device reboots and remove event data from responses to normal event polls.
Responses Present the Most Risk for Exploitation
As we discussed, the most complex response and unsolicited response function codes present the highest attack surface and therefore the most risk of exploitation. Furthermore, remotely compromising a physically well-protected master from an isolated and less-protected field asset was until recently an underdiscussed attack vector.6,7 The attack model under which the specification was designed doesn't seem to include implementation defects as a viable threat. Selectively authenticating subsets of the protocol by function alone—and not for complexity—is a major oversight and should be regarded as a secure protocol anti-pattern. Conversely, requiring authentication prior to parsing these complex areas of the specification would turn a preauthentication exploit into one requiring compromised credentials.
Protocol Complexity
DNP3 is a complex protocol, mostly due to the way it implements the transfer of event data using server-side state. A lot of bookkeeping and additional messages such as confirms are required to keep things synchronized. SA adds even more complexity to the same set of
application layer state machines in a manner that’s difficult to untangle. This presents real challenges for implementers who must now support both the secure and nonsecure versions of the protocol. This complexity also extends into parsing and ambiguous encodings. Cryptographic protocols should always defer as much parsing and processing as possible until after the sender’s identity has been established to both derive the most benefit from cryptographic integrity protections and avoid the so-called “cryptographic doom principle.”8 Unfortunately, SA’s design contradicts this principle.
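A hypothetical framing that obeys the doom principle might look like the sketch below, where a fixed-size HMAC-SHA256 tag simply trails the message and nothing is parsed until it verifies. This is not how SA frames its data; it's the pattern SA's interleaved design makes hard to follow.

import hmac, hashlib

TAG_LEN = 32  # HMAC-SHA256 digest size

def handle_message(raw: bytes, key: bytes) -> None:
    """Verify the sender before any complex parsing happens."""
    if len(raw) < TAG_LEN:
        raise ValueError("too short to carry an authentication tag")
    body, tag = raw[:-TAG_LEN], raw[-TAG_LEN:]
    expected = hmac.new(key, body, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")  # unauthenticated input is never parsed
    parse_application_layer(body)

def parse_application_layer(body: bytes) -> None:
    pass  # stand-in for the complex, stateful object parsing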
Challenge–Response versus Aggressive Mode
After an initial session key exchange, normal DNP3 traffic can be authenticated using one of two modes: challenge–response and aggressive, which is a form of one-pass authentication using sequence numbers for replay protection. The challenge–response mode introduces two additional messages into the normal traffic flow and can substantially impact latency and throughput for a serial link. This two-pass authentication mode is more resistant to replay attacks because each message is authenticated using a unique nonce for each challenged message. In this mode, it's fairly easy for the challenging party to treat everything after the function code as opaque "payload data" that isn't parsed until the remote side authenticates. Figure 4 shows challenge–response mode's traffic flow. Aggressive mode adds a user object with a sequence number as the first object header in the ASDU and a hash message authentication code (HMAC) value as the last object header. The purpose of this mode is to reduce bandwidth and latency by authenticating messages in a request–response exchange. Figure 5 shows the request's structure.
Figure 4. Challenge–response message flow: a normal request (function plus payload bytes) triggers a challenge (function plus nonce), answered with an authentication HMAC over the key, request, and nonce, followed by the normal response. Parsing of the message payload can be deferred until after authentication. HMAC is hash message authentication code.
Figure 5. Aggressive mode request structure: the normal function code, then the user ID and CSQ, then the payload objects, then the HMAC. In aggressive mode, application data service units sandwich the payload to be processed inside an ad hoc envelope consisting of user and sequence information and a trailing message digest. The challenge sequence number (CSQ) protects messages from replay attacks.
Aggressive Mode Ambiguity
The first issue with aggressive mode request encoding is the ambiguity of the request. Normally, DNP3 message payloads can be processed solely based on the function code. In aggressive mode, the first object header must be inspected to determine whether the ASDU is a normal request or an aggressive mode request. The lack of a proper envelope for the payload data requires implementers to perform special-case parsing in multiple places to safely handle aggressive mode requests. The most dangerous issue with aggressive mode encoding is that many implementers will naively parse the entire payload data to reach the HMAC trailer. Recall that DNP3 object headers can't normally be skipped over without at least some level of light parsing. Numerous vulnerabilities were identified in the parsing of these object headers, particularly integer overflow issues related to handling object headers that use start and stop indices. At first blush, it appears necessary to interpret the inner payload data to be able to determine the trailing HMAC's position. Fortunately, in this case, there's a nonintuitive and undocumented workaround. The HMAC object and its header are of a known size and can be speculatively parsed off the end of the ASDU. Future versions of the specification should make explicit recommendations to implementers to use this methodology for reading the aggressive mode HMAC. We note that the signing schemes for Linux's loadable kernel modules have finally converged on a similar design in which a fixed-size signature is simply appended to the end of the module object file after a string of unsuccessful designs that attempted to use more complex formats and metadata.
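A sketch of that workaround follows. The trailer size is a placeholder (in practice it depends on the negotiated MAC algorithm), but the point stands: the HMAC object can be peeled off the end of the ASDU so the inner payload stays opaque until it's authenticated.

HMAC_TRAILER_LEN = 16  # hypothetical: object header plus fixed-width digest

def split_aggressive_asdu(asdu: bytes):
    """Speculatively take the trailing HMAC object off an aggressive mode ASDU."""
    if len(asdu) < HMAC_TRAILER_LEN:
        raise ValueError("ASDU too short for an aggressive mode trailer")
    inner = asdu[:-HMAC_TRAILER_LEN]   # left unparsed until verified
    trailer = asdu[-HMAC_TRAILER_LEN:]
    return inner, trailer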
Conflicting Encodings of Length
Many variable-length objects related to security functionality have inconsistent encodings between objects as well as encodings with multiple ways of representing the length of certain fields in a single object. Having two sources of truth for lengths of certain payload elements has been a common source of implementation defects in various protocols, most recently OpenSSL's Heartbleed and the GNU TLS Hello bug, as well as classic preauthentication bugs such as OpenSSH's challenge–response vulnerability. In DNP3, all variable-length objects are preceded by a UINT16 length that defines the entire object's length. Fixed-length fields come first in the object, and variable-length fields come last. All but the last variable-length field is preceded by its own UINT16 length field. The last field's length is implicitly established as the remainder of the envelope length. Figure 6 shows this pattern.
Figure 6. A session key status object with two variable-length fields, challenge data and message authentication code (MAC) value. The MAC value's length is the remainder of the length field framing the entire object.1

In this encoding, the MAC value length is unambiguous in the sense that there's only one way to determine its value. If the total size of the object is N and the length of all fields preceding the MAC value is P, then the length of the MAC value is N – P. However, this encoding scheme isn't applied consistently. Some objects have a preceding length field for the final variable-length field, as Figure 7 shows.

Figure 7. Update key change request with two variable-length fields, user name and master challenge data. The length of the challenge data is explicitly encoded in the length field and implicitly encoded as the remainder of the length field framing the entire object.

Thus, there are two ways to determine the master challenge data field's length in an update key change request. In a valid encoding of this object, the entire object's length must agree with the final field's explicit length value. To complicate the issue, the specification informs implementers that they can use either method to establish the final field's length,1 which can lead to implementations that disagree on the cryptographic data's contents. If the protocol can't be redesigned to remove such encoding ambiguities, the parsing recommendation should be to always check that these two methods produce the same length value.
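In code, the recommended cross-check is a one-line comparison. The helper below is a hypothetical sketch in which fixed_len counts all bytes preceding the final field's own UINT16 length prefix:

import struct

def read_final_field(obj: bytes, fixed_len: int) -> bytes:
    """Reject objects whose two length encodings disagree."""
    if len(obj) < fixed_len + 2:
        raise ValueError("object too short for its final length prefix")
    explicit = struct.unpack_from("<H", obj, fixed_len)[0]  # the field's own UINT16
    implicit = len(obj) - fixed_len - 2                     # remainder of the envelope
    if explicit != implicit:
        raise ValueError(f"conflicting lengths: {explicit} vs {implicit}")
    return obj[fixed_len + 2:]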
DNP3 SA contains a number of anti-patterns that will likely serve as a significant source of bugs. Vendors and standards bodies adding security to SCADA/ICS protocols should strongly favor a layered approach to security in which legacy protocol issues can be decoupled from SCADA object models and semantics.

Acknowledgments
This specification review was performed as part of the process of implementing the specification in a preexisting open source project. The DHS S&T HOST program award partially funded this work.
References
1. IEEE Std. 1815-2012, IEEE Standard for Electric Power Systems Communications-Distributed Network Protocol (DNP3), IEEE, 2012; https://standards.ieee.org/findstds/standard/1815-2012.html.
2. G. Devarjaran, "Unraveling SCADA Protocols: Using Sulley Fuzzer," DEFCON 15, 2007; www.dc414.org/download/confs/defcon15/Speakers/Devarajan/Presentation/dc-15-devarajan.pdf.
3. D.G. Peterson, "Iccpsic Assessment Tool Set Released," Digital Bond, 2007; www.digitalbond.com/blog/2007/08/28/iccpsic-assessment-tool-set-released.
4. NSTB Assessments Summary Report: Common Industrial Control System Cyber Security Weaknesses, tech. report INL/EXT-10-18381, Idaho Nat'l Laboratory, May 2010; http://fas.org/sgp/eprint/nstb.pdf.
5. D. Peterson, "S4x14 Video: Crain/Sistrunk—Project Robus, Master Serial Killer," Digital Bond, 23 Jan. 2014; www.digitalbond.com/blog/2014/01/23/s4x14-video-crain-sistrunk-project-robus-master-serial-killer.
6. D. Peterson, "Why Crain/Sistrunk Vulns Are a Big Deal," Digital Bond, 2013; www.digitalbond.com/blog/2013/10/16/why-crain-sistrunk-vulns-are-a-big-deal.
7. E. Byers, "DNP3 Vulnerabilities Part 1 of 2—NERC's Electronic Security Perimeter Is Swiss Cheese," Tofino Security, 7 Nov. 2013; www.tofinosecurity.com/blog/dnp3-vulnerabilities-part-1-2-nerc%E2%80%99s-electronic-security-perimeter-swiss-cheese.
8. M. Marlinspike, "The Cryptographic Doom Principle," Thought Crime blog, 13 Dec. 2011; www.thoughtcrime.org/blog/the-cryptographic-doom-principle.
J. Adam Crain is a software engineer, security researcher, and open source advocate. He's also a partner at Automatak, which aims to improve the penetration of robust open source software in the utility space. Contact him at jadamcrain@automatak.com.
Sergey Bratus is a research associate professor in the Computer Science Department at Dartmouth College. His research interests include Unix security and wireless networking. Contact him at sergey@cs.dartmouth.edu.
SECURITY & PRIVACY ECONOMICS
Editors: Michael Lesk, lesk@acm.org | Jeffrey MacKie-Mason, jmm@umich.edu
Scaring and Bullying People into Security Won’t Work Angela Sasse | University College London
Usable security and privacy research began more than 15 years ago. In 1999, Alma Whitten and J.D. Tygar explained "Why Johnny Can't Encrypt,"1 and Anne Adams and I pleaded that, even though they don't always comply with security policies, "Users Are Not the Enemy."2 Today, there are several specialist conferences and workshops: publications on usable security and privacy are featured in top usability conferences, such as the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), and top security conferences, such as the IEEE Symposium on Security and Privacy.
An ongoing topic in usable security research is security warnings. Security experts despair that the vast majority of users ignore warnings—they just "swat" them away, as they do with most dialog boxes. Over the past six years, continuous efforts have focused on changing this behavior and getting users to pay more attention. SSL certificate warnings are a key example: all browser providers have evolved their warnings in an attempt to get users to take them more seriously. For instance, Mozilla Firefox increased the number of dialog boxes and clicks users must wade through to proceed with the connection, even though it might not be secure. However, this has made little difference to the many users who decide to ignore the warnings and proceed. Creating more elaborate warnings to guide users toward secure behavior is not necessarily the best course of action, as it doesn't align with the principles of user-centered design.

Refining Warnings
At ACM CHI 2015, two studies reported on efforts to make more users heed warnings. Adrienne Porter Felt and her colleagues at Google designed a new SSL warning for Google Chrome, applying recommendations from current usable security research: keep warnings brief, use simple language to describe the specific risk, and illustrate the potential consequences of proceeding.3 The authors hypothesized that if users understand the risks associated with a warning, they will heed rather than ignore it. They tested these improved warnings in a series of mini surveys and found a modest but significant (12 percent) improvement in the number of participants who correctly identified the potential risks of proceeding, but no significant improvement in the number of participants who correctly identified the data at risk. In addition, compared to existing browser SSL warnings, there was no improvement in the number of participants who thought the warning was likely to be a false positive.
Felt and her colleagues reasoned that if they couldn’t improve users’ understanding, they might still be able to guide users toward secure choices. They applied what they called opinionated design to make it harder for participants to circumvent warnings, and visual design techniques to make the secure course of action look more attractive. In a field study, this technique led to a 30 percent increase in the number of participants who didn’t proceed upon seeing the warning. The authors concluded that it’s difficult to improve user comprehension of online risks with simple, brief, nontechnical, and specific warnings, yet they urge fellow researchers to keep trying to develop such warnings. In the meantime, they advise designers to use opinionated design to deter users from proceeding in the face of warnings by making them harder to circumvent and emphasizing the risks associated with doing so. In the second paper, Bonnie Anderson and her colleagues examined 25 participants’ brain responses to warnings using a functional magnetic resonance imaging (fMRI) scanner.4 Previous studies using eye tracking showed that users habituate: the first time around, a warning catches their attention, but after repeated showings, it does not. Anderson and her colleagues found that the brain mirrors this habituation: when encountering a warning for the first time, participants’ visual processing center in the superior parietal lobes showed elevated activation levels, but these disappeared with repeated showings of the warning. The authors hypothesized that varying a warning’s appearance, such as its size, color, and text ordering, should prevent habituation and keep participants paying attention. They found that participants indeed showed sustained activation levels when encountering these
polymorphic warnings; participants’ attention decreased on average only after the 13th variation of the same warning. They concluded that users can’t help but habituate, and designers should combat this by creating warnings that force users to pay attention.
Usability: When Does "Guiding" Become "Bullying"?
Both teams' work was motivated by an honorable intention—to help users choose the secure option. But as a security researcher with a usability background and many years of studying user behavior in the lab as well as in real-world settings, I am concerned by the suggestion that we should use design techniques to force users to keep paying attention and push them toward what we deem the secure—and hence better—option. It is a paternalistic, technology-centered perspective that assumes the security experts' solution is the correct way to manage a specific threat. In the case of SSL, the authors recommended counteracting people's habituation response and keeping their attention focused on security. However, habituation is an evolved response that increases human efficiency in day-to-day interactions with the environment: we stop paying attention to signals we've deemed irrelevant. Crying wolf too often leads to alarm or alert fatigue; this has been demonstrated over many decades in industries such as construction and mining and, most recently, with the rapid increase of monitoring equipment in hospitals. In 2013, the US Joint Commission issued an alert about the widespread phenomenon of alarm fatigue.5 The main problem was desensitization to alarms, which led to staff missing critical events. An increase in workload and decrease in patient satisfaction were also noted.
Eminent software engineer and usability expert Alan Cooper identified the use of warnings in software as a problem more than a decade ago.6 He pointed out that warnings should be reserved for genuine exceptions—events software developers couldn’t reasonably anticipate and make provisions for. Perhaps on their legal advisors’ suggestion, most developers have ignored Cooper’s recommendation, and the increasing need for security has led to a marked increase in the number of dialog boxes or warnings that users have to “swat” away today. Strategies such as opinionated design and forcibly attracting users’ attention do not align with usability. As Cooper pointed out, usability’s overall guiding principle is to support users in reaching their primary goals as efficiently as possible. Security that routinely diverts the attention and disrupts the activities of users in pursuit of these goals is thus the antithesis of a usercentered approach. And where, in practical terms, would this approach lead us? A colleague with whom I discussed the studies commented: “Even with this polymorphic approach, users stop paying attention after 13 warning messages. I suppose the next step is to administer significant electrical shocks to users as they receive the warning messages, so that they are literally jolted into paying attention.” (The colleague kindly allowed me to use the quote, but wishes to remain anonymous.) Scaring, tricking, and bullying users into secure behaviors is not usable security.
Cost versus Benefit
In 2009, Turing award and von Neumann medal winner Butler Lampson pointed out that7

[t]hings are so bad for usable security that we need to give up on perfection and focus on essentials. The root cause of the
problem is economics: we don’t know the costs either of getting security or of not having it, so users quite rationally don’t care much about it. … To fix this we need to measure the cost of security, and especially the time users spend on it.
Lampson's observations haven't been heeded. User time and effort are rarely at the forefront of security studies; the focus is on whether users choose the behavior that researchers claim to be desirable because it's more secure. Even if users' interaction time with specific security mechanisms, such as a longer password, is measured, the cumulative longer-term effect of draining time from individual and organizational productivity isn't considered. Over the past few years, researchers have declared the task of recalling and entering 15- or 20-character complex passwords "usable" because participants in Mechanical Turk studies were able to do so. But being able to do something a couple of times in the artificial constraints of such studies doesn't mean the vast majority of users could—or would want to—do so regularly in pursuit of their everyday goals. Factors such as fatigue as well as habituation affect performance. In real-world environments, authentication fatigue isn't hard to detect: users reorganize their primary tasks to minimize exposure to secondary security tasks, stop using devices and services with onerous security, and don't pursue innovative ideas because they can't face any more "battles with security" that they anticipate on the path to realizing those ideas.8 It's been disheartening to see that, in many organizations, users who circumvent security measures to remain productive are still seen as the root of the problem—"the enemy"2—and that
the answer is to educate or threaten them into behavior security experts demand—rather than considering the possibility that security needs to be redesigned. A good example is the currently popular notion that sending phishing messages to a company’s employees, and directing them to pages about the dangers of clicking links, is a good way to get their attention and make them less likely to click in the future. Telling employees not to click on links can work in businesses in which there’s no need to click embedded links. But if legitimate business tasks contain embedded links, employees can’t examine and ponder every time they encounter a link without compromising productivity. In addition, being tricked by a company’s own security staff is a negative, adversarial experience that undermines the trust relationship between the organization and employees. Security experts who aim to make security work by “fixing” human shortcomings are ignoring key lessons from human factors and economics. In modern, busy work environments, users will continue to circumvent security tasks that have a high workload and disrupt primary activities because they substantially decrease productivity. No amount of security education—a further distraction from primary tasks—will change that. Rather, any security measure should pass a cost–benefit test: Is it easy and quick to do, and does it offer a good level of protection? Cormac Herley calculated that the economic cost of the time users spend on standard security measures such as passwords, antiphishing tools, and certificate warnings is billions of dollars in the US alone— and this when the security benefits of complying with the security advice are dubious.9 SSL warnings have an overwhelming false-positive
rate—close to 100 percent for many years9—so users developed alarm fatigue and learned to ignore them. In addition, longer (12- to 15-character) passwords, which are associated with a very real cost in recall and entry time and increased failure rates—especially on the now widely used touchscreens—offer no improvement in security.10
Fitting the Task to the Human
The security-centered view assumes that users want to avoid risk and harm altogether. However, many users choose to accept some risks in pursuit of goals that are important to them. Security experts assume that users who don't choose the secure option are making a mistake, and thus preventing mistakes and educating users is the way forward. However, a combination of usability and economics insights leads to a different way of thinking about usable security:

■ Usable security starts by recognizing users' security goals rather than by imposing security experts' views on users.
■ Usable security acknowledges that users are focused on their primary goals—for example, banking, shopping, or social networking. Rather than disrupting these primary tasks and creating a huge workload for users, security tasks should cause minimum friction.
■ Security experts must acknowledge and support human capabilities and limitations. Rather than trying to "fix the human," experts should design technology and security mechanisms that don't burden and disrupt users.

Techniques from the human factors field can maximize performance while ensuring safety and security. A key principle is designing technology that fits users' physical and mental abilities—fitting the
task to the human. Rarely should we fit the human to the task, because this requires significant organizational investment in terms of behavior change through education and training. Security education and training are only worthwhile if the behavior fits with primary tasks. An organization could train its employees to become memory artists, enabling them to juggle a large number of changing PINs and passwords. But then employees would need time for routines and exercises that reinforce memory and recall. Changing security policies and implementing mechanisms that enable employees to cope without training are more efficient. For instance, Michelle Steves and Mary Theofanos recommend a shift from explicit to implicit authentication8; in most environments, there are other ways to recognize legitimate users, including device and location information or behavioral biometrics, without disrupting users’ workflow. They also point out that infrequent authentication requires different mechanisms that complement the workings of human memory—something Adams and I recommended after our first study 15 years ago2—but this rarely occurs in practice.
Users will pay attention to reliable and credible indicators of risks they want to avoid. Security mechanisms with a high false-positive rate undermine the credibility of security and train users to ignore them. We need more accurate detection and better security tools if we are to regain users' attention and respect, rather than scare, trick, and bully them into complying with security measures that obstruct human endeavors.
References
1. A. Whitten and D. Tygar, "Why Johnny Can't Encrypt: A Usability Evaluation of PGP 5.0," Proc. 8th Conf. USENIX Security Symp., vol. 9, 1999, p. 14.
2. A. Adams and M.A. Sasse, "Users Are Not the Enemy," Comm. ACM, vol. 42, no. 12, 1999, pp. 40–46.
3. A. Porter Felt et al., "Improving SSL Warnings: Comprehension and Adherence," Proc. Conf. Human Factors and Computing Systems, 2015; https://adrifelt.github.io/sslinterstitial-chi.pdf.
4. B.B. Anderson et al., "How Polymorphic Warnings Reduce Habituation in the Brain—Insights from an fMRI Study," Proc. Conf. Human Factors and Computing Systems, 2015; http://neurosecurity.byu.edu/media/Anderson_et_al._CHI_2015.pdf.
5. "Medical Device Alarm Safety in Hospitals," Sentinel Event Alert, no. 50, 8 Apr. 2013; www.pwrnewmedia.com/2013/joint_commission/medical_alarm_safety/downloads/SEA_50_alarms.pdf.
6. A. Cooper, The Inmates Are Running the Asylum: Why High-Tech Products Drive Us Crazy and How to Restore the Sanity, Sams–Pearson, 2004.
7. B. Lampson, "Usable Security: How to Get It," Comm. ACM, vol. 52, no. 11, 2009, pp. 25–27.
8. M.P. Steves and M.F. Theofanos, Report: Authentication Diary Study, tech. report NISTIR 7983, Nat'l Inst. Standards and Technology, 2014.
9. C. Herley, "So Long, and No Thanks for the Externalities: The Rational Rejection of Security Advice by Users," Proc. 2009 Workshop New Security Paradigms, 2009, pp. 133–144.
10. D. Florencio, C. Herley, and P.C. van Oorschot, "An Administrator's Guide to Internet Password Research," Proc. USENIX Conf. Large Installation System Administration, 2014, pp. 35–52.
Angela Sasse is a professor of human-centered technology at University College London. Contact her at a.sasse@cs.ucl.ac.uk.
BUILDING SECURITY IN
Editor: Jonathan Margulies, jonathan@qmulos.com
A Developer’s Guide to Audit Logging Jonathan Margulies | Qmulos
In the September/October 2014 issue of this magazine, three researchers from Hewlett-Packard Laboratories wrote a detailed analysis of the role that security information and event management systems (SIEMs) play in security operations centers (SOCs).1 That article emphasized that SIEMs are responsible for collecting audit logs from all areas of an enterprise and presenting SOC analysts with only the most critical security events to act on. Processing audit logs to separate wheat from chaff is called audit reduction, and it's even harder than it sounds. I've spent the past two years writing software that sits on top of SIEMs and related audit reduction tools, using the collected audit logs to help comply with monitoring standards like Intelligence Community Standard 500-27. During that time, I wrestled with audit logs from various products and vendors, and I've been almost universally disappointed with how poorly the developers of those logs understand SIEMs and SOCs. When writing audit logs, those developers made decisions they would never have made if they understood how SIEMs work. But this isn't the developers'
fault. Readers of this magazine know that developers’ lack of security understanding has been an ongoing problem for decades. To help improve this situation, I present a simple how-to guide for writing good audit logs so developers don’t need to become security experts to work with SIEMs.
SIEM Basics
A typical large enterprise collects hundreds of Gbytes of logs per day—some enterprises collect tens of Tbytes of logs—and every entry that reaches the SIEM must be processed. With so much to process and the need to keep up in real time, SIEMs work best when their functionality is kept simple. Think of a SIEM as a simple state machine: it receives a new event, matches the event against a handful of critical attributes, and defines the event's category and criticality. I like the state machine analogy because it helps convey what SIEMs are bad at: context. Imagine that a SIEM receives a malware alert showing that a user on a certain workstation executed malicious code, but the alert doesn't include any of the user's identifying information. A human analyst might
look through the workstation's audit log, find the most recent user login events, and figure out who the user is. A SIEM can be programmed to do that, but it becomes much slower to respond and harder to manage as it takes on such complex, context-sensitive tasks. Context-sensitive tasks need to be customized to a given environment—the particular combination of deployed devices, system configurations, architectures, and knowledge of which events are important and meaningful to an organization. SIEM creators have to build their products to be customizable, but this comes at a price: smart people are dedicated to the full-time care and feeding of elaborate audit reduction systems that could be made much simpler with just a little effort from IT product developers. It's the difference between households separating their recyclables from trash and having workers do this at the landfill. Proper use of event codes, session identifiers, documentation, and other techniques can help improve SIEM efficiency and security.
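The state machine view can be caricatured in a few lines. The rules and field names below are invented for illustration; the point is that each event is triaged on its own attributes, with no cross-event context.

RULES = [
    # (predicate, category, criticality)
    (lambda e: e.get("event_code") == "4719", "audit-policy-change", "high"),
    (lambda e: e.get("signature") == "malware", "malware-alert", "critical"),
]

def triage(event: dict) -> tuple:
    """Match one event against the rule list; no state is kept between events."""
    for predicate, category, criticality in RULES:
        if predicate(event):
            return category, criticality
    return "uncategorized", "low"

print(triage({"event_code": "4719", "host": "ws-042"}))
# -> ('audit-policy-change', 'high')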
Specific Event Codes
Event codes are simple alphanumeric sequences that act as shorthand to quickly sort events into categories. Microsoft Windows event codes define hundreds of categories. Table 1 shows a few examples to give you an idea of their specificity. Specific event codes are great for audit reduction, as they make similar events easy to identify with minimal processing. Event codes can also be extended easily for hierarchical organizations in which, for example, the first digit could be an
event category, the second digit a subcategory, and the third another subcategory, giving SIEM users an easy way to choose the appropriate granularity level. However, specific event codes aren't always used correctly. For instance, in the virtual private network (VPN) logs generated by the Cisco Adaptive Security Appliance (ASA), the VPN client sends a great deal of security-related information about the client computer: the user's OS, applied hotfixes, and other system state descriptions. Unfortunately, each of these attributes, no matter how different from the others, uses the same event code.
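With hierarchical codes of the kind sketched above, choosing a granularity level reduces to a cheap prefix match. The three-digit scheme here is hypothetical:

def in_category(code: str, prefix: str) -> bool:
    """True if an event code falls under a category prefix."""
    return code.startswith(prefix)

assert in_category("421", "4")     # everything in category 4
assert in_category("421", "42")    # only subcategory 4.2
assert not in_category("431", "42")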
Table 1. Windows event codes.

Event code | Event description
4800 | Workstation was locked
4725 | User account was disabled
4647 | User initiated logoff

Session Identifiers
Although session identifiers are most closely associated with helping Web servers track sessions, they can serve an important logging purpose in any kind of session. Continuing with the Cisco ASA example, when monitoring VPN events, it's important to be able to attribute an activity to a single user, computer, and session. This typically involves attaching a relatively unique session ID to every logged event, which is a good practice for any security-relevant, session-oriented application. However, the Cisco ASA VPN doesn't log a session ID, and its log messages don't include a consistent user identifier. Some messages log usernames, others log client IP addresses, and still others log both; this variety of identification methods allows most session information to be reconstructed, albeit with unnecessary complexity and processing costs.

Critical Attributes
Sometimes even the most popular products' event logs are missing basic security attributes. One example is Microsoft Windows event 4719—"system audit policy was changed." This event triggers when an administrator changes the event categories that the OS logs. But instead of including the username of the administrator who made the change, the event log places the computer's name in the "account name" field. Although it might be possible to comb through previous logs to identify which administrator was active on the system at the time, doing so is both unreliable and processor intensive. National Institute of Standards and Technology (NIST) Special Publication (SP) 800-53, a commonly referenced standard for enterprise security best practices, requires audit records to include six measurements: event type, event time, event location, event source, event outcome, and the identity of any individuals or subjects associated with the event.2 Given the large number of organizations that use SP 800-53 to define enterprise security requirements—both within and outside the US federal government—IT enterprise
product developers should consider these six measurements the bare minimum requirements for every logged event. One product that does a particularly good job of meeting the minimum SP 800-53 requirements—and giving customers the customizability to meet just about any other logging requirement—is the Blue Coat ProxySG Web proxy. For every Web request, it can log up to 40 different attributes, and both the format and content of those logs are highly configurable. Table 2 shows how the ProxySG's log attributes match up against the SP 800-53 requirements (https://bto.bluecoat.com/sites/default/files/tech_pubs/SGOS_Vol8_AccessLogging_5.4.pdf).

Table 2. NIST Special Publication 800-53 requirements versus Blue Coat ProxySG.

NIST SP 800-53 requirement | ProxySG attributes
Event type | Request method, proxy action taken, and protocol
Event time | Unix timestamp, Greenwich Mean Time, local time, duration, and time taken
Event location | ProxySG IP address and ProxySG host name
Event source | Web server IP address, Web server host name, client IP address, and client host name
Event outcome | Protocol status code, proxy action taken, and proxy filter result
Identity of individuals or subjects associated with the event | Client username
Descriptive Documentation
The best publicly available documentation for Windows logs is the Windows Security Log Encyclopedia (www.ultimatewindowssecurity.com/securitylog/encyclopedia), a website run by the Monterey Technology Group.
Microsoft offers no comparable service. Documenting logging subsystems appears to be a low priority for many enterprise technology vendors, perhaps because "excellent log documentation" has yet to break through as a differentiating product feature. However, because log messages tend to be terse and cryptic, documentation can be extremely helpful.
Typical Formats and Locations
The most common format for logs is ASCII or Unicode text with a text-based delimiter, such as a carriage return or a line of asterisks, between events. Logs are generally written out in one of four forms: over the network via the syslog protocol, to a file, to the Windows Event Log (Windows allows third-party applications to write to its application log), or as a database entry. Audit reduction tools are therefore designed to accept syslog messages, query databases, and read text files and the Windows Event Log. Although some tools can also read XML or JSON, query an API for logs, or even parse command
line output, these features are generally less reliable, harder to use, and more taxing than the aforementioned collection methods. Writing custom audit log parsers for atypical formats can cost tens of thousands of dollars in development time. If possible, it's better to offer users the option of logging in a text file with a simple data format like comma-separated values or key-value pairs. Recent versions of Microsoft Exchange provide an interesting example of what not to do. Importing their audit logs requires either ingesting PowerShell command output or emailing an XML file of log events to the SIEM. These methods are so difficult to use that multiple companies sell expensive products designed to perform this one task on behalf of SIEMs. Moreover, having a system rely on email as part of its logging infrastructure creates an obvious and unnecessary single point of failure.
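A key-value text log of the kind recommended here costs only a few lines to emit and nothing special to parse. The field names and file name below are invented; note that a consistent session identifier rides along on every event:

import time

def log_event(fh, **fields):
    """Append one event as sorted key=value pairs on a single line."""
    fields.setdefault("ts", int(time.time()))
    fh.write(" ".join(f"{k}={fields[k]!r}" for k in sorted(fields)) + "\n")

with open("vpn.log", "a") as fh:
    log_event(fh, event="vpn-login", user="alice",
              client_ip="203.0.113.7", session_id="a1b2c3")
# vpn.log gains a line like:
# client_ip='203.0.113.7' event='vpn-login' session_id='a1b2c3' ts=1431648000 user='alice'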
Tamper Protection
Tamper protection is the most difficult problem because the typical solution—a digital signature—is processor intensive and might not be feasible with some products, particularly at enterprise scale. Still, a key threat that audit logging is meant to address is an attacker with administrative privileges. Logs that lack tamper protection are susceptible to that attacker's manipulation. Although research on cryptographic protection of audit logs dates back to at least 19973 and signing of audit records has been recommended by the SP 800-53 standard since 2009, digital signatures in audit logs remain exceedingly rare.
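Where per-record digital signatures are too expensive, a cheaper forward-integrity construction in the spirit of Bellare and Yee3 MACs each record and then evolves the key one way, so a later key compromise can't silently rewrite earlier records. A minimal sketch, with key provisioning and verification deliberately simplified:

import hmac, hashlib

def seal(record: bytes, key: bytes):
    """MAC one record, then retire the key by evolving it one way."""
    tag = hmac.new(key, record, hashlib.sha256).digest()
    next_key = hashlib.sha256(b"evolve" + key).digest()
    return tag, next_key

key = b"initial secret, provisioned out of band"
log = []
for record in [b"user=alice action=login", b"user=alice action=logout"]:
    tag, key = seal(record, key)
    log.append((record, tag))  # store record and tag; the old key is discarded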
The security community can expect developers to understand security in the same way that mechanics can expect drivers to understand cars: enough to follow some basic care and maintenance rules and to know when to ask an expert for help. This article is an attempt to summarize a care and maintenance rule without assuming the security mindset. I hope it's short enough to be easy on developers and long enough to teach them something.

Disclaimer
This article focuses on Microsoft and Cisco only because their products are so broadly deployed in many enterprises and they generate a large volume of audit logs; this makes their logs particularly impactful and makes examples easy to find.

References
1. S. Bhatt, P.K. Manadhata, and L. Zomlot, "The Operational Role of Security Information and Event Management Systems," IEEE Security & Privacy, vol. 12, no. 5, 2014, pp. 35–41.
2. "Security and Privacy Controls for Federal Information Systems and Organizations," NIST Special Publication 800-53, revision 4, Nat'l Inst. Standards and Technology, Apr. 2013; http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r4.pdf.
3. M. Bellare and B.S. Yee, Forward Integrity for Secure Audit Logs, tech. report, Dept. Computer Science and Eng., Univ. California, San Diego, 23 Nov. 1997.

Jonathan Margulies is the chief technology officer at Qmulos. Contact him at jonathan@qmulos.com.

Got an idea for a future article? Email editor Jonathan Margulies (jonathan@qmulos.com).
IN OUR ORBIT
Editors: Angela Sasse, a.sasse@cs.ucl.ac.uk | Alessandro Acquisti, acquisti@andrew.cmu.edu
Effortless Privacy Negotiations
Kat Krol | University College London
Sören Preibusch | Google
Protecting our privacy is second nature in the physical world. We are guided by simple rules of thumb that enable quick decision making based on cues from our environment. We modulate our protective measures based on context: we share different information with the postman than with our doctor, and we discuss our strengths and weaknesses differently when talking to a prospective employer than when confiding in a close friend. These behaviors are learned at an early age and require little effort. Our sensitivity as to what we disclose, to whom, and when is shaped by many factors, including social norms, previous experiences, and expected benefits.

The ease of offline privacy decisions contrasts sharply with the skill and effort required to protect our data online. How can we engineer privacy controls for the online world that are as simple and effective as those for the offline world? This article is a call to action: we highlight the differences between offline and online sharing of personal information and demonstrate that customary take-it-or-leave-it privacy policies are at odds with
the Internet’s individualism and users’ diverse privacy preferences. In bringing these two thoughts together, we argue that effortless privacy negotiations can guide Web users through privacy choices. Yet, designing and engineering effortless privacy negotiations is a challenging endeavor that requires substantial commitment from both academia and industry.
Heuristic Decision Making, Feedback, and Information Sharing
Humans make decisions by employing heuristics, or mental shortcuts that allow people to solve problems and make judgments quickly and efficiently. For instance, if individuals were asked whether London or Leeds is bigger, the recognition heuristic would lead them to choose London, because they are more likely to be familiar with London. However, biases introduced by heuristics can work to users' detriment when making decisions online. Employing the recognition heuristic online leads users to recognizable brands, which can make them vulnerable to phishing. In computer-mediated communication, users
have been shown to judge a site's legitimacy using signals learned from offline interactions,1,2 and sites can imitate trust cues to deliberately mislead users. Heuristic decision-making cues are much less reliable online, making it difficult for users to make sound privacy decisions.

In the physical world, we constantly receive feedback from our immediate surroundings: we can see who is looking in our direction and assess the friendliness or hostility of the environment. But with online privacy decisions, we hardly ever get feedback, and the consequences of our choices are rarely tangible. In addition, each transaction between a service provider and a customer is isolated from other customers' transactions; this invisibility makes social learning difficult.

A 2013 behavioral lab experiment on Web search privacy found that three times as many users were willing to pay extra to enhance the quality of their search results than to enhance their queries' privacy.3 This can indicate that quality matters more than privacy in a modern search engine. Another explanation is that the quality of the results is immediately perceivable, improving user experience by saving time and effort. The privacy benefits are less tangible, and users might not be able to imagine them easily. Due to the ambiguity of online cues and the invisible consequences of privacy choices, Web users can exhibit a strong present bias: their desire for immediate gratification takes over when making impactful privacy choices with unforeseeable consequences.4
These examples consider vertical privacy: the relationship between an individual and a service provider. Similar challenges arise for horizontal privacy: information sharing among individuals. In casual offline interactions, individuals rarely have a planned strategy for the disclosure of their personal information; sharing is often reciprocal during interactions as relationships develop.5 On today's social networks, however, we are often asked upfront to decide what we want to share and with whom. A pronounced information asymmetry between the observed and the observer means we disclose personal information without knowing who is receiving it and have little control over what others will do with it.

The unreliability of heuristic decision-making cues, the lack of feedback, and the need to commit to information sharing upfront are just three factors that make online decision making difficult. However, one-size-fits-all policies that do not account for individual differences can be even worse.

Heterogeneous Privacy Preferences
The monetization of personal information powers the Internet. In the UK, online advertising that supports Internet services is worth US$5 billion—twice the total amount that UK users pay in ISP fees.6 Even though the majority of Web users choose discounts over privacy, some consumers would rather pay with their wallet than with their data. Pilot programs such as Google Contributor demonstrate that some Web users are willing to pay a small monthly fee not to see banner ads. Our own research provides evidence that a sizable proportion of consumers are willing to pay for privacy. When buying cinema tickets and confronted with a choice between two providers with varying prices and data collection practices, one third of users paid extra not to disclose their phone number.7

In our 2013 study on disclosure warnings for Web forms, the majority of the participants wanted insight into and control of the privacy protection process.8 One of the eight warnings we tested offered the option of erasing the entries from all optional form fields, which would save a lot of time. However, participants disliked this option and preferred to go from question to question and decide whether to share each item, even if it meant more effort. The motives behind users' privacy choices varied: some enjoyed disclosing information about themselves, and others hoped for some benefit in return for the requested information. This indicates that nonhomogeneous, one-size-fits-all privacy solutions are unsuitable. A good privacy-enhancing technology should be receptive to users' needs and allow for informed, low-effort disclosure.

The Web is personalized in many areas, but not for the exchange of personal data. Internet companies improve user experience by anticipating and tailoring their offerings to users' needs and expectations. Web pages are displayed in users' local language, returning customers are often recognized and greeted by name, retailers recommend products based on past purchases, and so on. Admittedly, some of these advances are possible only through customer tracking, but this is beyond the scope of this article.

In the push toward customer-centric personalization, the privacy experience has stagnated in its user-denying inflexibility. A single privacy statement posted on a website must be accepted by all users who want to perform a transaction on the website. Users must agree in full to this take-it-or-leave-it privacy statement, or abandon what they came for and navigate away. In a large observational study on dropout behavior conducted over approximately 95,000 Web browsing sessions, we saw that even major service providers like Microsoft, Google, and Yahoo see conversion rates as low as 10 percent on their account registration pages.9

Users who dislike a single privacy statement must cancel their transaction or override their privacy preferences. Quitting a transaction results in wasted time and effort, and further effort will be spent in searching for and learning about an alternative company. This can result in disgruntled consumers and online retailers that lose transactions or accumulate fake information in their databases when buyers feel forced to provide personal details they are unwilling to disclose. In a 2015 survey by Symantec in Europe, one in three respondents admitted to falsifying data to protect their privacy online.10

A single, inflexible privacy statement cannot satisfy a user population with heterogeneous privacy preferences. Fortunately, privacy negotiations provide the opportunity to reach and implement individual agreements about data collection and use between companies and consumers.

Privacy Negotiations
Incentivized privacy negotiations can resolve cost and privacy tradeoffs according to users' willingness to pay for privacy.11
Privacy negotiations require no overhead or haggling, as service alternatives can be explored through ongoing choice; there is no need to turn a browser into a bazaar. Service providers can offer incentives such as discounts or work with smart defaults to guide their customers. In return, users navigate effortlessly through a series of simple choices, often either providing or withholding information at will or deciding on permissible uses for a data item. Customers are free to decide whether they want the free version of an app that shows advertisements, sustained through the monetization of their personal information, or to pay for an ad-free experience. Legal protection ensures minimum privacy standards. This is similar to how a supermarket offers a variety of tomato sauces at different prices, but each one—from the inexpensive store brand to the organically grown specialty brand—must meet food safety standards.

Incentivized privacy negotiations are particularly powerful in online shopping. Retailing was one of the first activities that transitioned to the Web, and the US Census Bureau estimated a total of $305 billion in e-commerce sales in 2014.12 At checkout, online stores require certain information, some of which is necessary in the context of distance selling, such as payment details and a shipping address. Other information might be required for fraud prevention or marketing purposes, such as subscribing to a newsletter. Depending on the type of store, providing lifestyle information can greatly improve product recommendations, such as an online grocer being aware of your allergies or a specific diet. Not all consumers want to reveal these sensitive details, whereas others might appreciate the added convenience and peace of mind.

With privacy negotiations, consumers can pay the risk premium
if they are unwilling to provide details required for fraud screening. Product recommendations could become available when additional profile information is shared, and a newsletter opt-in might unlock a discount on the purchase.

Companies that offer privacy negotiations benefit from higher data quality, as their customers no longer need to resort to providing fake details when coping with mandatory form fields. It follows from standard economics that a company offering privacy negotiations appeals to a larger audience, as its privacy practices are compatible with the preferences of a larger population. Variants of the same product can be tailored to privacy personas—segments of the user population who exhibit similar preferences.13 By offering different privacy options, an online store can gain a competitive advantage as customers do not have to turn elsewhere to get the privacy choices they want.
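As a thought experiment, the checkout scenario above can be expressed as a simple pricing rule. The item names, amounts, and policy in this Python sketch are hypothetical, chosen only to show how incentives and risk premiums could compose into a negotiated price:

```python
from dataclasses import dataclass

@dataclass
class DataItem:
    name: str
    required: bool       # needed to complete the transaction at all
    incentive: float     # discount granted if the item is disclosed
    risk_premium: float  # surcharge if the item is withheld

CHECKOUT_ITEMS = [
    DataItem("shipping_address", required=True, incentive=0.00, risk_premium=0.00),
    DataItem("phone_number", required=False, incentive=0.00, risk_premium=1.50),
    DataItem("newsletter_optin", required=False, incentive=2.00, risk_premium=0.00),
    DataItem("dietary_profile", required=False, incentive=0.50, risk_premium=0.00),
]

def negotiated_price(base_price, disclosed):
    """Apply the user's disclosure choices to the base price."""
    price = base_price
    for item in CHECKOUT_ITEMS:
        if item.required and item.name not in disclosed:
            raise ValueError("cannot transact without " + item.name)
        if item.name in disclosed:
            price -= item.incentive   # reward disclosure
        else:
            price += item.risk_premium  # price in the withheld data
    return price

# Shares the address and opts into the newsletter, withholds the rest:
print(negotiated_price(20.00, {"shipping_address", "newsletter_optin"}))  # 19.50
```

Each item is priced individually, so a user can trade effort for money choice by choice instead of accepting a monolithic policy.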
How Can We Achieve Effortless Privacy Decisions?
It is easy to argue that users need to be given a choice when it comes to privacy, but this is difficult to achieve. Service providers face high implementation and management costs. Users must absorb the increased effort, as choice comes at a cost. Our study on Web form warnings showed that users liked having fine-grained control, but we doubt this would hold at scale. Security and privacy decisions are not users' primary concerns when using the Web, and choice fatigue has been demonstrated in empirical research:14 Stefan Korff and Rainer Böhme showed that users who had more privacy options to choose from reported more negative emotions, experienced more regret, and were less satisfied with the decision they made.15
Effortless online privacy choices might not be achievable. However, academia and industry must work together to offer consumers a better privacy experience on the Web. Offline heuristics were developed over centuries, some dating back to the dawn of humanity. The first graphical browsers appeared less than 25 years ago, and today's Web paradigms were shaped over the past decade. Until evolution rewires our brains to handle online privacy choices with ease, it is up to research to make these choices as effortless as possible.
Four major research questions need to be answered: How can we offer cues that would enable low-effort decision making online? How do we enable effortless choice? What kinds of options do we display, to whom, and when? How do we provide feedback and transparency to enable learning?

We need further insight into smart privacy defaults and how to meaningfully narrow down options for users. Privacy choices could be explained through metaphors from the physical world that promise to be better understood by users. Crowdsourced efforts have tried to reproduce social signals about the trustworthiness of websites (http://tosdr.org), but many have failed to reach critical mass. Commitment is required not only from privacy activists who maintain such services but also from academia and industry—they must team up and conduct research in the wild to find the right balance between effort and control in online privacy choices.

References
1. I. Kirlappos and M.A. Sasse, "Security Education against Phishing: A Modest Proposal for a Major Rethink," IEEE Security & Privacy, vol. 10, no. 2, 2012, pp. 24–32.
2. M. Wu, R.C. Miller, and S.L. Garfinkel, "Do Security Toolbars Actually Prevent Phishing Attacks?," Proc. SIGCHI Conf. Human Factors in Computing Systems (CHI 06), 2006, pp. 601–610.
3. S. Preibusch, "The Value of Privacy in Web Search," Workshop Economics of Information Security (WEIS 13), 2013; http://weis2013.econinfosec.org/papers/PreibuschWEIS2013.pdf.
4. A. Acquisti, "Privacy in Electronic Commerce and the Economics of Immediate Gratification," Proc. 5th Ann. ACM Conf. Electronic Commerce (EC 04), 2004, pp. 21–29.
5. I. Altman and D.A. Taylor, Social Penetration: The Development of Interpersonal Relationships, Holt McDougal, 1973.
6. C. Kalapesi et al., "The Connected Kingdom: How the Internet Is Transforming the UK Economy," The Boston Consulting Group, Oct. 2010; www.bcg.com/documents/file62983.pdf.
7. N. Jentzsch, S. Preibusch, and A. Harasser, "Study on Monetising Privacy: An Economic Model for Pricing Personal Information," European Network and Information Security Agency (ENISA), Feb. 2012; www.enisa.europa.eu/activities/identity-and-trust/library/deliverables/monetising-privacy.
8. K. Krol and S. Preibusch, "Control Versus Effort in Privacy Warnings for Personal Information Disclosure on Web Forms," under review at ACM Trans. on the Web (TWEB).
9. M. Malheiros and S. Preibusch, "Sign-up or Give-up: Exploring User Drop-out in Web Service Registration," Symp. Usable Privacy and Security (SOUPS), 2013; http://cups.cs.cmu.edu/soups/2013/trustbusters2013/Sign_up_or_Give_up_Malheiros.pdf.
10. "State of Privacy Report 2015," Symantec, 2015; www.symantec.com/content/en/us/about/presskits/b-state-of-privacy-report-2015.pdf.
11. S. Preibusch, "Implementing Privacy Negotiations in E-commerce," Frontiers of WWW Research and Development—APWeb 2006, LNCS 3841, X. Zhou et al., eds., Springer, 2006, pp. 604–615.
12. "Quarterly Retail E-commerce Sales 4th Quarter 2014," US Dept. Commerce, US Census Bureau, Feb. 2015; www.census.gov/retail/mrts/www/data/pdf/ec_current.pdf.
13. A. Morton and M.A. Sasse, "Desperately Seeking Assurances: Segmenting Users by Their Information-Seeking Preferences," 12th Ann. Int'l Conf. Privacy, Security and Trust (PST 14), 2014, pp. 102–111.
14. D.V. Thompson, R.W. Hamilton, and R.T. Rust, "Feature Fatigue: When Product Capabilities Become Too Much of a Good Thing," J. Marketing Research, vol. 42, no. 4, 2005, pp. 431–442.
15. S. Korff and R. Böhme, "Too Much Choice: End-User Privacy Decisions in the Context of Choice Proliferation," Proc. Symp. Usable Privacy and Security (SOUPS 14), 2014; www.usenix.org/conference/soups2014/proceedings/presentation/korff.

Kat Krol is a PhD student at University College London. Contact her at kat.krol.10@ucl.ac.uk.

Sören Preibusch is a user experience researcher at Google. Contact him at mail@soeren-preibusch.de.
Got an idea for a future article? Email editors Angela Sasse (a.sasse@cs.ucl.ac.uk) and Alessandro Acquisti (acquisti@andrew.cmu.edu).
LAST WORD
What a Real Cybersecurity Bill Should Address
Steven M. Bellovin | Columbia University

The US Congress is currently considering what to do about computer security. Unfortunately, it's concentrating on information sharing between the private sector and the government. Although information sharing isn't inherently bad, especially if done with enough attention to privacy, it won't solve the problem. At best, it's like downstream flood warnings based on what just happened upstream. What we really need is a stronger dam; better yet, we need to prevent the floods in the first place.

It's not likely that any law or set of laws will solve the problem. Nevertheless, there are some concrete things that can be done. Some of these can be addressed by legislation, though often only in the form of incentives for companies to do the right thing.

The biggest security problem we face stems from one simple fact: software is often buggy. These bugs are often exploitable by attackers. Even when better software is available, companies don't install patches promptly. We're not going to solve the software problem anytime soon, but we can do better. The single best thing Congress can do for cybersecurity is attack these problems. It won't be easy. Favored tax treatment for software security efforts would help, but it's tricky to come up with correct definitions. A better approach would be to outlaw disclaimers of liability in end-user license agreements; most insist that the vendor isn't responsible for anything, up to and including software-related zombie outbreaks. If financial incentives for better software security exist, the market will be able to work its magic.

Improving system administration would be an immense help. Congress might not be able to do anything about it for the private sector, but it can and should do something for government organizations by raising the pay, status, and professionalism of the job.

The government should also encourage the use of cryptographic technology. Cryptography is hard for people to use properly, but much
of its complexity arises from its relative rarity. If everything is supposed to be encrypted, life would be a lot simpler. Encouraging cryptography can be done through both requirements and incentives. All storage devices and traffic going into and out of critical infrastructure computers (including those back-office desktops) should be encrypted. This is, by definition, a matter of national security. For less critical systems, companies could be liable for information theft, depending on what was or wasn't encrypted. Many state data breach notification laws already include similar provisions.

Using cryptography properly requires a secure way to store secret keys, preferably in tamper-resistant hardware. Industry-developed voluntary standards should suffice; if not, the National Institute of Standards and Technology could develop them. Furthermore, properly implemented cryptography could help stamp out passwords.
Finally, people need data on security failures. Airplanes are so safe today because every crash is investigated and the results are made public. Pilots, airlines, and manufacturers have all learned from past problems. In cybersecurity, we don't know if a particular penetration was due to lack of firewalls, bad passwords, employee mistakes, or any of a dozen other causes. Insurance companies need data, too, both as an actuarial basis for setting liability rates and so they can adjust those rates based on risk factors. Congress should mandate external, published investigations for security problems at publicly traded companies.

None of these ideas is a panacea, and none is a short-term fix. Together, though, they'll help in the long run.
Steven M. Bellovin is a computer science professor at Columbia University. Contact him via www.cs.columbia.edu/~smb.