BUILDING SECURITY IN Editors: Brian Chess, bchess@vantuyl.com | Brad Arkin, barkin@adobe.com
Driving Secure Software Development Experiences in a Diverse Product Environment
Barbara Fichtinger, Frances Paulisch, and Peter Panholzer | Siemens
At Siemens, owing to the nature of our business, our markets and end customers have different security expectations. We offer electronics and electrical engineering products and solutions for industry, energy, and healthcare, as well as infrastructure solutions. Many of these products and solutions are systems people experience either directly or indirectly in their everyday life, such as automation systems for factories, wind turbines, X-ray units, or trains. Even though each of these systems contains a significant amount of software, Siemens still isn't a typical software company. Many of our products and solutions also contain substantial hardware or consist of integrated third-party products. For our security program, this diversity means our strategy must adapt to our different customers' requirements.

For example, in healthcare, data privacy is the most important requirement, regulated in the US by the Health Insurance Portability and Accountability Act (HIPAA)1 and the Health Information Technology for Economic and Clinical Health Act (HITECH, part of the American Recovery and Reinvestment Act of 2009).2 For the energy market, the most relevant requirements are customer or regulatory requirements such as the Requirements for Secure Control and Telecommunication Systems3 in Germany, the Cyber Security Standards for Critical Infrastructure Protection of the North American Electric Reliability Corporation,4,5 and the Security Requirements for Vendors of the International Instrument Users' Association.6

Even though security expectations differ in the individual fields, our experience is that the security activities we need during development and maintenance are similar in terms of workflow and methodology. However, the checklists and content we use in those activities often differ. As the central security team at Siemens, we strive to give
guidance that is adequate from a corporate perspective but still gives business units enough freedom to make the right decisions for their business. The challenge is finding a sound balance.

1540-7993/12/$31.00 © 2012 IEEE | Copublished by the IEEE Computer and Reliability Societies
A Process Approach
Regarding security in product and solution development, our experience is that it's most effective to closely link the necessary security activities with the established development processes. Our internal standard for measuring the quality of the software development process is based on CMMI (Capability Maturity Model Integration) for Development.7 Each business unit tailors its development processes and defines roles and responsibilities, activities, and quality gates. We see a strong trend toward establishing more iterative, incremental, and agile life-cycle models, and we've had good experience in allowing this tailoring and flexibility while still meeting the CMMI goals. Depending on the business area, these processes are also designed to meet regulatory requirements (for example, safety); consequently, they form a sound basis for integrating security activities.

The strong focus on process capability helps enable security throughout the organization. It also requires a change of mindset that we're encouraging in the organization. In the past, development teams often focused only on building a high level of security into one particular product or feature. Although both approaches have their merits, we aim to establish an
Table 1. Goals for the +SECURE process areas.

Process area: Organizational preparedness for secure development
Goals: Establish and maintain a set of process assets and work environment standards for developing secure products. Establish and manage the project's secure development process.

Process area: Security management for projects
Goals: Plan and implement secure supplier and third-party-component selection. Evaluate and manage product security risks throughout the project.

Process area: Security requirements and technical solution
Goals: Develop security requirements to meet the relevant stakeholders' security needs. Ensure secure architecture, design, and implementation as part of the technical solution. Ensure that work products meet their specified security requirements.

Process area: Security verification and validation
Goals: Demonstrate that the product fulfills the security expectations when placed in the intended operational environment.
adequate set of basics for the organizational process first.

In 2009, when we started propagating the secure development process, we found that simply using the available secure development best practices wasn't useful for driving such development on a company-wide level. For example, the Microsoft Security Development Lifecycle8 (SDL) focuses on specific Microsoft technologies and more on software development than on the integration of third-party components. However, in our heterogeneous product landscape, you can find almost every technology (for example, embedded systems, PCs, Web applications, and Web services) somewhere. And because the development teams not only develop their own software but also integrate third-party components, secure configuration of these components is also important. We also looked at OpenSAMM (Open Software Assurance Maturity Model; www.opensamm.org), but it didn't address the process level enough. In addition, we considered BSIMM (Building Security In Maturity Model; http://bsimm.com). However, its lack of a process view and its descriptive approach didn't fit our needs.

So, we developed +SECURE, an extension to CMMI for Development that's applicable to both agile and traditional life-cycle models. We did this because we already have a culture of CMMI-based process improvements. Piggybacking on CMMI is a big lever to foster secure development practices in our development organizations. Development teams and quality organizations are familiar with the improvement methodologies and metrics. Adapting those methodologies and metrics for security gives us a high acceptance level in the organization.

Table 1 lists the process areas that +SECURE covers. Even though +SECURE is one level more abstract than, for example, Microsoft SDL, OpenSAMM, or BSIMM, they all complement each other at the implementation level. We use +SECURE to evaluate our development organizations' security capability and to plan and motivate necessary improvement steps. The evaluation is a first step to get an as-is picture so that we can start the improvement appropriately. It's important for our program's success because, owing to the diversity of our businesses, each development organization differs slightly. We tailor the improvement steps to the business (for example, by adapting existing documents or style guides and integrating security-related requests into request-tracking tools).
Roles and Responsibilities
The process alone won't bring the necessary security; after all, the team members must implement the activities and have the required skills. Siemens has more than 15,000 software engineers worldwide, often in geographically distributed teams. The roles (including the responsibilities and needed capabilities) for product development are defined on the basis of process documentation. This principle is independent of the development team's size or organizational setup. In small teams, one person might hold two roles.

When integrating security activities in the development teams, we found that just providing training isn't enough. We must also hold team members accountable for the security activities. To promote this, we extended the existing role descriptions with security responsibilities and required competencies and created the role of "product and solution security expert." This expert implements the required security in products and solutions and helps development teams conduct the corresponding security activities during development. We have two goals. First, each expert should focus on one of three areas: secure architecture and design, secure coding, or security testing (see Table 2). Second,
Table 2. Focus areas for product and solution security experts.

Focus area: Secure architecture and design
Security responsibilities: The main responsibility is to conduct product security risk analysis to ensure early identification of potential security requirements and constraints. The expert also acts as the interface and mediator between product management and development.

Focus area: Secure coding
Security responsibilities: During software implementation, the expert ensures that secure coding practices are followed and conducts code analysis to identify security vulnerabilities.

Focus area: Security testing
Security responsibilities: During testing, the expert supports the verification of security requirements and conducts penetration tests to identify security vulnerabilities. The expert also evaluates the mitigations' effectiveness.
each expert should support multiple development teams at the same time, depending on the project size and product categorization.

Regarding training, we try to piggyback on other company initiatives; for example, the basic security know-how for our architects is currently integrated into the Siemens-wide curriculum for system and software architects.9 Regarding the product and solution security experts, we primarily develop people who already have the domain knowledge (for example, architects, developers, and testers) into that role by providing them with the necessary security knowledge. The basis for this education is our internal secure-coding training, which we started building in 2004. We've extended this training to cover a more comprehensive curriculum realized through a combination of in-house, Web-based, and external classes.

Our experience is that there are people at different locations throughout the company who are interested in product security and could help us push it forward. By creating the role of the product and solution security expert, we leverage these individuals' engagement and give them the mandate to perform security activities and develop themselves in this direction. We also aim to make the job attractive to them by establishing an official career path in the company. One advantage is
that these individuals already have the domain-specific know-how to adequately perform the job in our diverse environment.

We plan to use the network of product and solution security experts to enable the development teams. The most challenging part of this enablement is making development teams understand how the security world works, particularly the difference between the developer mindset and that of hackers or security researchers. A good way to do this is to actively involve development teams in product-specific threat and risk analysis.
Threat and Risk Analysis and Risk Management
At Siemens, our methodology for threat and risk analysis has evolved significantly. Until 2008, security experts primarily conducted the analysis without actively involving the development teams. Often, the result was that the teams addressed some of the findings but continued to make the same mistakes. What we really wanted was to give the teams the initial push toward security, so that they have sufficient awareness and try to improve their security practices. In 2009, we initiated a threat and risk analysis workshop moderated by security experts from the central security team. This workshop effectively promoted secure development activities. For the
development projects, this was an optional one-time event. It significantly raised the development teams' awareness. However, once again, the teams mitigated the most serious and easily fixed risks and then stopped handling risk. They often considered the list of risks identified during the workshop a final document rather than a living document that must be maintained as development proceeds and the operational environment changes.

So, we implemented two strategies. First, we augmented the list with the appropriate risk management activities. For product security risk management, senior management defines the risk parameters and risk treatment strategy. Our experience is that a Siemens-wide definition isn't useful. Instead, management defines the parameters and strategy at the business unit level, owing to the different types of business that each unit conducts. The parameters include likelihood and impact levels and the risk matrix (defining which combination of likelihood and impact results in which risk level). The strategy defines how to proceed with the risks at each risk level. This includes escalation paths, time frames, and the acceptable risk depending on the product's high-level security categorization. Second, instead of the optional one-time workshop, we introduced
risk analyses at different development stages. During project definition, product managers determine the required security effort by answering questions such as these:

■ What's the product's expected lifetime?
■ Does the product provide security functionality (for other products)?
■ How exposed to attacks is the product in its intended operational environment? For example, will the component be accessible via the Internet?

The answers let us define a high-level security categorization of the product. We want to identify the products that require more attention (for example, those with remote service interfaces) so that the project (and budget) planning can be adjusted.
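A risk matrix of the kind described earlier, which maps each combination of likelihood and impact to a risk level, can be sketched as a simple lookup. The level names and matrix entries below are hypothetical; as noted, each business unit defines its own parameters.

```python
from enum import IntEnum

class Likelihood(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2

class Impact(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2

# Hypothetical matrix: rows are likelihood, columns are impact.
# Each business unit would define its own combinations and levels.
RISK_MATRIX = [
    # Impact: LOW      MEDIUM    HIGH
    ["low",    "low",    "medium"],   # Likelihood LOW
    ["low",    "medium", "high"],     # Likelihood MEDIUM
    ["medium", "high",   "high"],     # Likelihood HIGH
]

def risk_level(likelihood: Likelihood, impact: Impact) -> str:
    """Look up the risk level for a likelihood/impact combination."""
    return RISK_MATRIX[likelihood][impact]
```

The risk treatment strategy would then attach an escalation path and time frame to each level returned by such a lookup.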
Detailed risk analysis occurs during requirements engineering and architecture and design. These analyses usually take the form of workshops. The workshops consist of the traditional steps: scoping, identifying threats and evaluating their probability and impact level, and discussing risk treatment. In general, we can apply this methodology to any particular product or solution (for example, power plant control systems, software modules, or our own production systems). During the scoping, the participants must define the intended operational environment. Many Siemens products have several intended operational environments with different protection mechanisms and interfaces. The definition must include at least the intended use, users, trusted zones, trust boundaries,
and assumptions (for example, we assume that system users are well trained and follow sound information security practices). Making assumptions is necessary for the analysis, but it's important to critically question those assumptions. Why do we assume the users are well trained? Is this training part of the product delivery? Are there processes to ensure new staff are trained?

We sometimes have to analyze large, very complex systems. In these situations, we need to identify the most critical product components so that we can focus on those first. So, we conduct a high-level impact analysis of the components. We assess each component on its potential maximum impact according to the defined impact levels, without evaluating the likelihood. A component can be excluded from the detailed analysis if its potential maximum impact is within the acceptable range and it doesn't provide security services on which higher-impact components rely.

To provide a broad picture of the usage scenarios, the workshop participants include at least

■ members of product management, development, and service, who bring know-how about the product itself, and
■ a product and solution security expert, who acts as a catalyst for the attack views.

This ensures that the analysis covers many perspectives on the system. Involving the service people usually lets us get the best picture of the product's actual use, which is important for risk estimation. When identifying threats, we don't use predefined checklists; most often, we use brainstorming methodologies. This is effective because the threats are often highly individual, depending on the product and use cases.
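The component-scoping rule above can be expressed as a filter: a component is excluded from detailed analysis only if its maximum impact is within the acceptable range and no higher-impact component relies on its security services. The data model below is an illustrative sketch, not actual Siemens tooling, and all names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Component:
    name: str
    max_impact: int  # potential maximum impact level (higher = worse)
    # Names of components that rely on this component's security services.
    provides_security_services_to: List[str] = field(default_factory=list)

def needs_detailed_analysis(component: Component,
                            all_components: List[Component],
                            acceptable_impact: int) -> bool:
    """Apply the two exclusion criteria from the high-level impact analysis."""
    if component.max_impact > acceptable_impact:
        return True  # potential maximum impact is outside the acceptable range
    by_name = {c.name: c for c in all_components}
    # Keep the component in scope if any higher-impact component
    # relies on its security services.
    for dependent in component.provides_security_services_to:
        if by_name[dependent].max_impact > acceptable_impact:
            return True
    return False
```

For instance, a low-impact authentication module would still stay in scope if a high-impact control component depends on it for access control.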
The workshops bring together people from different organizational domains to exchange ideas and relate experiences regarding security. First-time participants are often reluctant at the beginning. However, as soon as they begin to understand what we're looking for when talking about security threats, they think of ways to misuse the system that only experts on the product under analysis could come up with. The workshops also create high awareness of security issues and of the importance of secure development activities.

After the workshops, we must ensure that the participants treat the identified risks according to the defined risk treatment strategy. With the participants, we discuss the findings and talk about possible risk mitigations. Most often, there's no generic mitigation for recurring risks. Depending on the product, we must find individual solutions, starting with additional security requirements, architectural changes, or user documentation changes, through to additional development activities. We track these mitigations by linking product security risk management to the existing mechanisms in the development process (for example, project risk management, error-tracking systems, and bug reports). Consequently, development teams can't neglect the findings anymore because those findings are visible and trackable. However, again, no one-size-fits-all solution exists. Depending on the business, we must devise a strategy for the best integration in the development organization.

We've observed that product security risk management gives us the necessary transparency to communicate about secure product development with senior management. Equipping development teams with a methodology that helps them allocate the available
resources to the relevant security topics is also a key argument we use to promote threat and risk analysis and risk management in those teams.
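Linking risk findings to existing tracking mechanisms, as described above, amounts to treating each finding as a work item with a tracker reference and a mitigation status. The sketch below illustrates the idea under hypothetical names; real integrations would use the APIs of the project's own error-tracking system.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class RiskFinding:
    description: str
    risk_level: str                   # e.g., "low", "medium", "high"
    tracker_id: Optional[str] = None  # id of the linked error-tracking item
    mitigated: bool = False

def open_findings_above(findings: List[RiskFinding],
                        minimum_level: str) -> List[RiskFinding]:
    """Return unmitigated findings at or above a given risk level,
    e.g., for escalation to senior management."""
    order = {"low": 0, "medium": 1, "high": 2}
    return [f for f in findings
            if not f.mitigated
            and order[f.risk_level] >= order[minimum_level]]
```

Keeping findings in such a trackable form is what makes them visible across development iterations rather than a one-time workshop artifact.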
Defining roles and responsibilities and starting product security risk management are important first steps toward providing secure products. But of course, that isn't all. For example, we're defining harmonized guidelines for security requirements engineering and testing. Our participation in SAFECode (www.safecode.org) also helps drive our internal software assurance methods and improve the published best practices for the worldwide software engineering community.
References
1. Health Insurance Portability and Accountability Act, Public Law No. 104-191, 1996; www.gpo.gov/fdsys/pkg/PLAW-104publ191/html/PLAW-104publ191.htm.
2. American Recovery and Reinvestment Act of 2009, Public Law 111-5, 2009; http://frwebgate.access.gpo.gov/cgi-bin/getdoc.cgi?dbname=111_cong_bills&docid=f:h1enr.pdf.
3. "Requirements for Secure Control and Telecommunication Systems," ver. 1.0, white paper, Bundesverband der Energie- und Wasserwirtschaft e.V. (German Assoc. of Energy and Water Supply), 2008.
4. Cyber Security—Electronic Security Perimeter(s), North Am. Electric Reliability Corp., standard CIP-005-4a, Jan. 2011; www.nerc.com/files/CIP-005-4a.pdf.
5. Cyber Security—Systems Security Management, North Am. Electric Reliability Corp., standard CIP-007-4, Jan. 2011; www.nerc.com/files/CIP-007-4.pdf.
6. Process Control Domain—Security Requirements for Vendors, WIB report M 2784-X-10, Int'l Instrument Users' Assoc., Oct. 2010.
7. CMMI for Development, ver. 1.3, Software Eng. Inst., Carnegie Mellon Univ., Nov. 2010.
8. Microsoft Security Development Lifecycle (SDL), ver. 5.1, Microsoft, 2011; http://msdn.microsoft.com/en-us/library/cc307748.aspx.
9. F. Paulisch and P. Zimmerer, "A Role-Based Qualification and Certification Program for Software Architects: An Experience Report from Siemens," Proc. 2010 Int'l Conf. Software Eng. (ICSE 10), ACM, 2010, pp. 21–27.

Barbara Fichtinger is a security consultant at Siemens Corporate Technology in Munich. Contact her at barbara.fichtinger@siemens.com.

Frances Paulisch drives cross-company initiatives related to software and to IT security for products and solutions at Siemens. Contact her at frances.paulisch@siemens.com.

Peter Panholzer is a security consultant at Siemens Corporate Technology in Munich. Contact him at peter.panholzer@siemens.com.