

A BZ Media Publication

Best Practices: Unit Testing

VOLUME 4 • ISSUE 1 • JANUARY 2007 • $8.95

www.stpmag.com

Putting Together A Clean Case For Automation

Using Models to Drive Automated Java Testing

Three Principles For Successful Offshoring

Patching the Holes In AJAX Security Learn How to Scrub Vulnerabilities Throughout the Development Life Cycle


“A conference with something for everyone.” — Scott L. Boudreau, QA Manager, Tribune Interactive

Attend The SPRING Event — A BZ Media Event

April 17-19, 2007 • San Mateo Marriott • San Mateo, CA

REGISTRATION IS NOW OPEN!

“This is absolutely the best conference I have attended. The instructors were extremely knowledgeable and helped me look at testing in a new way.” —Ann Schwerin, QA Analyst, Sunrise Senior Living


• OPTIMIZE Your Web Testing Strategies
• LEARN How to Apply Proven Software Test Methodologies
• NETWORK With Other Test/QA & Development Professionals
• ACHIEVE a Better Return on Investment From Your Test Teams
• GET the Highest Performance From Your Deployed Applications

The Software Test & Performance Conference provides technical education for test/QA managers, software developers, software development managers and senior testers. Don’t miss your opportunity to take your skills to the next level. Take a look at what your colleagues had to say about the last two sold-out conferences:

“I learned things I didn’t know existed! I met people from all ranges of QA, all of whom were brimming with information they were willing to share. 25-50% of the things I learned were in between classes.”

“This conference is great for developers, their managers, as well as business-side people.”

— Rene Howard, Quality Assurance Analyst IA System

“This conference is a wonderful tool to gain insights into the QA world. A must-attend conference!”

“This was the most practical conference I have been to in 18 years.”

— Ginamarie Gaughan, Software Consultant Distinctive Solutions

— Mary Schafrik, Fifth Third Bank B2B Manager/QA & Defect Mgmt.

—Steve Margenau, Systems Analyst Great Lakes Educational Loan Services

“The Conference was quite useful. If you get one impact idea from a conference, it pays for itself. I got several at the ST&P Conference.”

“Excellent conference — provided a wide range of topics for a variety of experience levels. It provided tools and techniques that I could apply when I got back, as well as many additional sources of information.”

— Patrick Higgins, Sr. Software Test Engineer SSAI

— Carol Rusch, Systems Analyst Associated Bank

For Sponsorship and Exhibitor Information Contact David Karp at 631-421-4158 x102 or dkarp@bzmedia.com

April 17-19, 2007 • San Mateo Marriott San Mateo, CA

Register Online at

www.stpcon.com



TESTING SUPERSTAR!

Introducing: TestTrack® TCM

The ultimate tool for test case planning, execution, and tracking. How can you ship with confidence if you don’t have the tools in place to document, repeat, and quantify your testing effort? The fact is, you can’t. TestTrack TCM can help.

• Issue & Defect Management
• Test Case Planning & Tracking
• Configuration Management
• Automated Functional Testing
• Essential Process Management

In TestTrack TCM you have the tool you need to write and manage thousands of test cases, select sets of tests to run against builds, and process the pass/fail results using your development workflow. With TestTrack TCM driving your QA process, you’ll know what has been tested, what hasn’t, and how much effort remains to ship a quality product. Deliver with confidence—be a testing superstar!

• Ensure all steps are executed, and in the same order, for more consistent testing.
• Know instantly which test cases have been executed, what your coverage is, and how much testing remains.
• Track test case execution times to calculate how much time is required to test your applications.
• Streamline the QA…Fix…Re-test cycle by pushing test failures immediately into the defect management workflow.
• Cost-effectively implement an auditable quality assurance program.

Download your FREE fully functional evaluation software now at www.seapine.com/stptcm or call 1-888-683-6456.

Managing Process, Change & Quality Throughout the Enterprise

©2006 Seapine Software, Inc. Seapine TestTrack and the Seapine logo are trademarks of Seapine Software, Inc. All Rights Reserved.


VOLUME 4 • ISSUE 1 • JANUARY 2007

Contents

12

A BZ Media Publication

COVER STORY

Stronger Security: How to Clean Up Your AJAX Applications

Apps built with AJAX offer more options than their page-based counterparts—along with a higher risk of vulnerabilities. Here’s how to keep your apps squeaky clean. By Billy Hoffman and Bryan Sullivan

18

Check Out Model-Driven Testing

To keep your automation running smoothly and boost all phases of the Java development life cycle, put UML 2.0 behind the wheel. By Scott Niemann

24

Develop Your Defense: Making The Case for Automation

Departments

In an age of shrinking budgets, you’ve got to persuade the powers-that-be that automation is a compelling priority for your organization. A softer view of ROI can help. By Bob Galen

7 • Editorial
My New Year’s Resolution: More of the same.

8 • Contributors
Get to know this month’s experts and the best practices they preach.

9 • Feedback
Readers get their turn to tell us where to go.

31

Three Keys To Offshore Development

The new economy is global—and as you tackle the challenges of outsourcing your development projects, learn from Parasoft’s example: The secret lies in separation—and sharing. By Matt Love



10 • Out of the Box New products for developers and testers.

36 • Best Practices Don’t forget that intangible ingredient that goes beyond basic unit-test aptitude—attitude. By Geoff Koch

38 • Future Test Shoring up system verification: Let’s rob Peter to pay Paul. By Kingston Duffie




VOLUME 4 • ISSUE 1 • JANUARY 2007

Editor Edward J. Correia +1-631-421-4158 x100 ecorreia@bzmedia.com

EDITORIAL Editorial Director Alan Zeichick +1-650-359-4763 alan@bzmedia.com

Copy Editor Laurie O’Connell loconnell@bzmedia.com

Contributing Editor Geoff Koch koch.geoff@gmail.com

ART & PRODUCTION Art Director LuAnn T. Palazzo lpalazzo@bzmedia.com

Art /Production Assistant Erin Broadhurst ebroadhurst@bzmedia.com

SALES & MARKETING

Publisher
Ted Bahr +1-631-421-4158 x101 ted@bzmedia.com

Associate Publisher
David Karp +1-631-421-4158 x102 dkarp@bzmedia.com

Advertising Traffic
Phyllis Oakes +1-631-421-4158 x115 poakes@bzmedia.com

Marketing Manager
Marilyn Daly +1-631-421-4158 x118 mdaly@bzmedia.com

List Services
Nyla Moshlak +1-631-421-4158 x124 nmoshlak@bzmedia.com

Reprints
Lisa Abelson +1-516-379-7097 labelson@bzmedia.com

Accounting
Viena Isaray +1-631-421-4158 x110 visaray@bzmedia.com

READER SERVICE Director of Circulation

Agnes Vanek +1-631-421-4158 x111 avanek@bzmedia.com

Customer Service/ Subscriptions

+1-847-763-9692 stpmag@halldata.com

Cover Photograph by The Design Diva With Special Thanks to Marilyn Daly

President Ted Bahr Executive Vice President Alan Zeichick

BZ Media LLC 7 High Street, Suite 407 Huntington, NY 11743 +1-631-421-4158 fax +1-631-421-4130 www.bzmedia.com info@bzmedia.com

Software Test & Performance (ISSN- #1548-3460) is published monthly by BZ Media LLC, 7 High St. Suite 407, Huntington, NY, 11743. Periodicals postage paid at Huntington, NY and additional offices. Software Test & Performance is a registered trademark of BZ Media LLC. All contents copyrighted 2007 BZ Media LLC. All rights reserved. The price of a one year subscription is US $49.95, $69.95 in Canada, $99.95 elsewhere. POSTMASTER: Send changes of address to Software Test & Performance, PO Box 2169, Skokie, IL 60076. Software Test & Performance Subscribers Services may be reached at stpmag@halldata.com or by calling 1-847-763-9692.


Ed Notes

Expect More Of The Same in ’07

By Edward J. Correia

In 2007, I promise to bring you more articles like our lead story, a terrific tutorial on AJAX security by Billy Hoffman and Bryan Sullivan.

They gave us a thoughtful examination of where the Asynchronous JavaScript and XML development technique—better known as AJAX—is vulnerable, and that’s mainly in the Asynchronous JavaScript part. AJAX’s necessity to expose business logic to the outside world (in the form of JavaScript) means giving potential hackers information about your internals. And from there, your systems are an open book. Hoffman and Sullivan show us how to minimize these potential vulnerabilities before they become unauthorized access points into your private data.

To their credit, the authors nixed a single mention of their company’s products, despite being employed by security software developer SPI Dynamics. The opportunity that publishing dangles for shameless hawking is irresistible to many. Thanks, Billy and Bryan, for not taking the bait.

In 2007, I also pledge to bring you more opinions like the one you’ll find in this month’s Future Test by Kingston Duffie, whose company Fanfare Group develops and markets an IDE for testing high-tech systems. By borrowing wisdom from other industries—some more mature, some relatively new—he tells us where to find help solving the testing and QA issues facing software developers.

“If you go back 15 years, the race to the top in market position and in IT in general was all about coming out with the best, the fastest; developing on innovation, bailing wire and duct tape,” said Duffie in a phone interview about his article. Product testing, he warned, was something to avoid. “Test was in the way. Whoever got to market fastest was the winner.”

But as the early 1990s became the late 1990s, Duffie related, “IT woke up and realized that it can’t sell what it can’t test.” Software quality became more important by orders of magnitude, and eventually became the main differentiator it is today. Find out which industries Duffie believes can teach us the most on page 38.

A New Trilogy, and Fresh Pages

As Bob Galen’s excellent three-part series on the Automation SDLC rings in the new year with advice on building the business case for automation, next month I expect to begin a new trilogy, this time on boundary testing. This opus’s author will be Rob Sabourin, a perennial favorite at our Software Test & Performance Conferences. I’ve already had a peek at his opening installment, which explores boundary risks based on software requirements, input and data processing, and exposes some important boundary-related bugs. You won’t want to miss it.

Finally, you may notice some new elements this month. We welcome our Contributors on page 8, and Feedback on page 9. In Contributors, we bring you a little about the authors and the articles they’re writing in this issue. In this way, we hope you get better acquainted with our authors and get a more personal feel from ST&P. And in Feedback, we’ll publish your input about the magazine, Test & QA Report newsletter and STPCon. Please tell us what you think; we’ll tell everyone else.



Contributors

This month’s lead story was written by BILLY HOFFMAN and BRYAN SULLIVAN, and explores methods of securing applications built with AJAX, the popular development technique that combines JavaScript and XML. Hoffman is lead security researcher at SPI Dynamics, a maker of Web application security testing solutions headquartered in Atlanta, where he focuses on crawling technologies and automated vulnerability discovery. Hoffman gained notoriety when, as a computer science student at Georgia Tech, he discovered a flaw in the campus security system. He later created StripeSnoop tools for magstripe data capture, analysis, manipulation and validation.

Also employed by SPI, Sullivan is a development manager for the company’s DevInspect and QAInspect Web security products, and contributed to the Application Vulnerability Description Language (AVDL), an OASIS standard for security interoperability. Hoffman and Sullivan are collaborating on a book on AJAX security that is expected to be published this summer.

Performance Issues? Get the cure for those bad software blues. Don't fret about design defects, out-of-tune device drivers, off-key databases or flat response time. Software Test & Performance is your ticket to runtime rock and roll.

We can help... Subscribe Online at www.stpmag.com

SCOTT NIEMANN, director of telecom industry solutions at Telelogic, explores how Java application testing can be automated using a model-driven approach. Niemann illustrates how, by using tools and techniques commonly available today, developers can automate testing to reduce the time currently required by their manual test cycles while improving code coverage and adherence to requirements. A 12-year telecom industry veteran, Niemann joined Telelogic in 1999 as a field consultant.

In the final installment of his three-part series on the Automation SDLC, BOB GALEN outlines the business case for building automated processes as part of the software development life cycle. Galen is a 25-year veteran of software development. His experiences range from design and construction of real-time systems to Web applications. Currently a senior QA manager at information services company Thomson Dialog, based in Cary, N.C., Galen is also a principal of RGalen Consulting Group, a software development consultancy.

For companies considering or currently engaging in offshore development, MATT LOVE, a software development manager at test tools maker Parasoft, draws parallels between Parasoft’s 10-year expansion into other countries with today’s expanding practice of sending application development overseas. He details three techniques to help ensure the success of offshore projects. Love has been a Java developer since version 1.1 was released in 1997, and has been involved in the development of Parasoft’s Jtest unit-testing tool since 2001.

TO CONTACT AN AUTHOR, please send e-mail to feedback@bzmedia.com.



Feedback

VIRTUAL—BUT RESTRICTED?

Edward J. Correia’s article “Make Testing Virtually Risk Free” (T&QA newsletter, Oct. 24, 2006) should also have mentioned the commercial tools specifically meant to automate test labs using virtualization, such as Surgient’s Virtual QA/Test Lab Management System (VQMS), Akimbi Slingshot (now VMware) and VMLogix. Server class virtualization can be very complicated, and controlling a consolidated, virtualized test lab requires new skills for most testing organizations. Using a commercial test lab–management tool can accelerate time to benefit for a test team and brings along features that would be very difficult for a team to develop on its own, such as lab scheduling, reporting and a self-service provisioning portal.
Erik Josowitz
Surgient
Austin, Tex.

In ”Make Testing Virtually Risk Free,” Edward J. Correia states, “Microsoft is well on its way to broad adoption of virtualization within its operating systems.” That is not my understanding at all. I understand that Microsoft’s latest licensing restrictions for the new Vista operating system severely limit installations on virtual machines. Even for real physical machines, the Vista EULA permits only one license transfer—full retail—for an upgrade or new computer. After that, Microsoft expects its users to buy another whole new license.
Phil Boettge
Moorestown, N.J.

From the editor: Microsoft in May revealed its extended virtualization strategy and product road map, which included claims that Windows Server virtualization would become a part of the “Longhorn” release of Windows Server six months after Longhorn is released to manufacturing. As of this printing, Longhorn was in beta 2 testing. The company’s plans also include System Center Virtual Machine Manager, a “centralized enterprise management solution for the virtualized data center,” according to company documents. VMM is set for release in the middle of this year.

As for the desktop, Microsoft’s Vista end-user license agreement permits the Ultimate, Business and Enterprise editions to be run virtually, but forbids virtualization using the less expensive Home and Home Premium editions. In July 2006, Microsoft completed the acquisition of Softricity, and has since added that company’s virtualization and software streaming functionality to its IT administrative tools, including the November 2006 release of the SoftGrid virtualization platform for desktops and terminal services.

BUG UBIQUITY

I liked Geoff Koch’s “Still Buggy After All These Years” (Future Test, July 2006). I’ve learned through time I’m actually not a technical buff when it comes to computers; I excel in program methodologies and tool applications. Koch’s story revealed how unpredictable and sometimes (useless) it is to make software bug-free. One of the big issues tends to fall in the coding department. Imagine making a simple change for a piece of software to work well with one application and find out that it works with only a particular application when your goal was for global app compatibility. Ho-hum! So while I eye these “computer geek” publications with a secondary glance, your particular article caught my eye.
James Richards

Ellenwood, Ga.


THE CARROT AND THE STICK?

Regarding Edward J. Correia’s Nov. 14, 2006, T&QA newsletter article “Get Results or Get Fired,” you ruined a perfectly good idea by mixing into it the stick; i.e., by adding right at the end the notion of “get results or get fired.” The motivation technique you describe works because it comes naturally. Once it gets explicitly mixed with the negative sanction of firing, it will stop working in the same positive way. On people’s minds will not be the motivation to better their peers, but the need to avoid firing. All questions of ethics and logic aside, the second motivation does not work as well to surpass oneself as the first.
Manuel Delafortrie
Brussels, Belgium

In “Get Results or Get Fired,” Edward J. Correia said, “Parasoft offers a system similar to CQ2—I learned at STPCon—called Group Reporting System.... But you won’t find information about GRS on Parasoft’s Web site. According to a Parasoft spokesperson, the company thinks its capabilities are too far afield from Parasoft’s main product line and doesn’t want to confuse its customer base.” Well, OK, some reluctance is understandable, but is there anything preventing Parasoft from setting up a separate Web site to promote/describe/market its GRS product? It could even be on Parasoft’s main Web site, just on pages to which the usual customers wouldn’t have links. The cost should be pretty minimal, especially if some actual sales resulted from posting the information. (Is there something I’m missing here?)
Vincent Johns
Johns Consulting Service
Oklahoma City, Okla.

FEEDBACK: Letters should include the writer’s name, city, state, company affiliation, e-mail address and daytime phone number. Send your thoughts to feedback@bzmedia.com. Letters become the property of BZ Media and may be edited for space and style.



RadView Takes the Long View on Testing: Giving Something Back to the Community

Why would a company give a $30,000 load-testing utility away for free? Ask RadView Software, which last month announced that it would offer a free license to WebLOAD, its Web application testing tool, to any organization that could prove it runs a bona fide open-source project.

According to RadView founder and chief strategy officer Ilan Kinreich, the move is largely reciprocal. “We are based on open standards, such as JavaScript by Mozilla. We recognize contributions of the open-source industry and we’d like to give back.” To qualify for the free license, the projects must be distributed under an OSI-approved license, he said.

WebLOAD can be used to test Java, .NET and Web apps running on Linux, Solaris and Windows systems, and is being offered fully featured and unrestricted. “One console can [simulate] up to 300 concurrent users,” said Kinreich, adding that a couple of “strong PCs” with single- or dual-core processors and 1GB of RAM would be sufficient to generate that kind of load. “That’s a pretty high scale for most software you might need to test.” Support contracts can be purchased for US$6,000 per year.

Kinreich said that WebLOAD simulates application load by generating threads rather than with virtualization, which is used by some of its competitors. “We are creating threads at a low level of operating system so there’s as low a level of overhead as possible.” WebLOAD controls test scripts through a Windows-only console, which also receives performance reports. WebLOAD tracks CPU, memory usage and other server statistics, connect times, page load times, transaction times and round-trips, which Kinreich said can include a complete transaction at Amazon.com, for example. Report data can be exported to a database or in formats for Excel, HTML and others.

Also included will be a cruise control feature that Kinreich said can be used to gradually step up simulation loads until a failure occurs. “Every application sometimes breaks. This lets you see how much yours can handle, and you don’t have to run it by hand; it does it automatically.”

Beginning with version 7.7—set for release in April—WebLOAD will also include a test-script development environment. “An easy-to-use IDE will help [developers] visualize, create, debug and run test scripts that [they] generate,” which Kinreich said will broaden the product’s appeal beyond its current utility as a general testing tool. “The scripts let you customize what use case and transactions to run on your application. For example, if you’re testing a bank application, you could generate 10,000 different user IDs and IP addresses, randomly select them from a file [during simulation] and log them in over a weekend. Our environment allows you to easily do that.”

At press time, WebLOAD 7.6 was in beta testing and scheduled to be generally available by Dec. 11, 2006. But that date has since slipped to April 2007, and its planned features will be rolled into version 7.7. Also driving the decision to give away WebLOAD licenses—source code is not included—was what Kinreich said was a lack of high-end testing software available to open-source developers. “We also look at fostering open-source projects and making sure they can benefit from high-quality testing. Use of standards is important and it’s been proven that use of open source has boosted standards.”

WebLOAD is now free for bona fide open-source projects, and can test most Web apps.



Heroix Gives Testers a MIB

AutomatedQA in mid-December 2006 began shipping TestComplete 5 with support for Windows Vista, the Windows Presentation Foundation, XAML, and Firefox and IE 7 browsers. The testing tool for Windows, .NET and Web apps can also now recognize and test functions added to desktop apps through third-party components. Pricing for TestComplete 5 has increased slightly. The Enterprise edition now costs US$999 per seat (up from $899), and the runtime component TestExecute now costs $89 per tester (up from $75). The upgrade is free for recent purchasers of version 4.x; discounted for others. According to Drew Wells, AutomatedQA’s vice president of product development, the testing tools for Windows, .NET and Web apps can now take advantage of advances in Microsoft’s newest operating system related to the separation of presentation and business logic in applications. For handing off testing jobs, TestComplete 5 now lets developers build and deploy user-defined forms that can be displayed during tests. Developers can pull from dozens of prebuilt controls that present testers with options for selecting test scenarios or setting test parameters. Testers can execute tests with the full package or the

lower-cost TestExecute runtime. A form generator is included that Wells said is similar to the point-and-click experience of Visual Basic. Wells said that forms also can be used to test apps that might not otherwise be capable of automated testing, such as those that involve audio. “For instance, it can be used for testing an outgoing greeting on a voicemail system. This would allow them to create a form so they can have a human listen to it to make sure it’s OK.” TestComplete can now test functionality added to desktop applications using components purchased from ComponentOne and Infragistics. By working directly with those and other component vendors, including Borland, Developer Express, Janus Systems, Microsoft and Syncfusion, Wells said that TestComplete is more complete. “We’ve done our own analysis, so when you test an Infragistics component, [for example], you can test it with its own cells and rows.” This gives developers the ability to test specific characteristics of those components. Version 5 also includes improved name mapping, which Wells said simplifies the testing of apps in which multiple functions might be tied to the same component, such as an OK button displayed in five different languages.

TestComplete 5 now works with Internet Explorer 7 and Firefox browsers (shown). www.stpmag.com •

11


With Their Need to Expose Business Logic, AJAX Apps Should Be Handled With Care By Billy Hoffman and Bryan Sullivan

synchronous JavaScript and XML technologies have taken the Web by storm. The popularity of the AJAX technique for

A

building Web UI is growing at a tremendous rate and shows no sign of stopping. It isn’t hard to understand why—AJAX applications can provide a much richer and more intuitive user interface than their traditional page-based counterparts. However, AJAX is far from secure, and the decision to “AJAXify” a Web application shouldn’t be made lightly. It demands that serious security implications be addressed across all phases of the application development life cycle.

AJAX 101 for Hackers AJAX applications can be thought of as a cross between traditional Web applications and Web services. On the surface, they seem to be more like Web pages, since they provide a Web-based graphical user interface and accept input directly from a human user. However, when applications use AJAX techniques to update

12

• Software Test & Performance

JANUARY 2007


portions of the page, they act more like a Web service (see Figure 1, page 14). Calls made through AJAX to fetch a current stock price, calculate an order cost or retrieve a weather forecast, for example, are all calls that have historically been made through Web services. In fact, some AJAX frameworks require the use of Web services on the server side to provide data to client-side controls. Far from creating a secure application architecture, this dual nature actually creates a “best of both worlds” scenario for attackers. Web pages and Web services are frequent targets for hackers, and both types of applications have their own inherent security strengths and weaknesses. The key to hacking any application is information. The more insight an attacker can gain about his intended target, the more likely he is to successfully find and exploit any security holes present in that target. Ironically, one of a hacker’s best sources of information is the application’s own user interface. Many Web sites display detailed error messages when unexpected errors occur in the application. These certainly help the programmers to debug the problem, and even help legitimate users to work through the issue with technical support—but they also help attackers to learn about the application. On the other hand, while Web services can return detailed errors to their clients, it’s not a common occurrence. Since Web services aren’t meant to be directly consumed by a human user, they often don’t provide much human-readable information in their errors. Many Web service frameworks also help by catching errors on the server side and returning nondescript error messages to the client.

XSS attacks work by injecting attacker-defined JavaScript code into a vulnerable Web site, where it can be unintentionally downloaded and executed by unsuspecting users. Since these attacks require the use of a Web browser as part of the attack vector (to execute the malicious JavaScript), any output from a Web service that isn’t directly displayed on a Web page is likely to be safe. However, Web services can be susceptible to attack in other ways. Again, the key to hacking is information. To attack an application, a hacker first tries to understand how the application works; to learn its capabilities. For example, if he attacks a bank Web site, he tries to learn if it’s possible to transfer funds between accounts and if a user can apply for loans. If so, it’s possible that this functionality can be manipulated, and the hacker can plot ways to transfer other users’ money into his account and apply for loans in someone else’s name. Traditional page-based Web applications have an advantage over the hacker because their application logic remains relatively hidden. This forces him to spend a considerable amount of time analyzing (or “fingerprinting”) the application to determine its capabilities.

Machine-to-Machine Security

Logic Exposed—and Vulnerable

Another security advantage of Web services lies in the machine-to-machine nature of their communication, which makes certain attacks that require a user interface much more difficult to execute. Cross-site scripting attacks are a good example. XSS is one of the most prevalent and dangerous vulnerabilities on the Web.

The same can’t be said of Web services. The Web Service Definition Language (WSDL) document of a Web service is usually freely available to anyone who requests it. This document lists all of the available methods provided by the Web service, as well as the parameters needed to call those methods. In short, it tells the user exactly what the service’s capabilities are and exact-

JANUARY 2007

www.stpmag.com •

13


AJAX SCRUBDOWN

ly how to use them. To continue the bank example, our hard-working hacker will find it much easier to attack the bank through its Web service when he discovers that the service provides methods such as ApplyForLoan and TransferFunds. If you look at the relative security weaknesses of both Web pages and Web services, you’ll see that AJAX applications, unfortunately, share all of them. Just like a regular Web page, AJAX applications have a rich user interface. Descriptive error messages are common in AJAX Web sites. Any attack that can be made against a non-AJAX page, such as XSS, can also be made against an AJAX page. In fact, AJAX can actually amplify the danger of these attacks since they can be made silently, without the user ever knowing about the problem. And just like a Web service, much of an AJAX application’s business logic is exposed and freely available to anyone who knows where to look for it. While AJAX applications generally don’t provide WSDL documents, they do provide business logic in the form of client-side JavaScript code. Anyone can see this code simply by clicking the “View Source” command in their browser. Furthermore, while WSDL documents can be encrypted or completely

disabled (the Web server can be configured to refuse requests for WSDL documents), client-side JavaScript can’t be—it must be available and unencrypted so

problem arises when other security defects exist in the method’s code, such as SQL injection, buffer overflow vulnerabilities or improper credential authentication. These defects may have gone unnoticed by an attacker when “TransferFunds” was internal to the server-side code, but now that the application has been “AJAXified,” this formerly inaccessible function is now available to everyone. Its visibility is much greater, and its vulnerabilities are much more likely to be discovered by an attacker.

AJAX can amplify the danger of silent attacks, without the user ever knowing about the problem.

Minimize Functionality, Not Methods

When designing a new AJAX application or method, or when refactoring an existing application to use AJAX, be aware of the dangers involved. Simply minimizing the number of these meththat the browser can interpret and exeods isn’t usually an option, since the cute it. Otherwise, the client-side portion value of AJAX is its ability to quickly of the application won’t run. In the case update small, individual portions of a of AJAX applications, disabling Web page. If you combine all of the asynJavaScript would disable the application. chronous methods on a page into a sinThe real danger of exposing busigle call—something like “RefreshAll” ness logic is not that it’s a security threat that updates all or many of the fields on in itself, but rather that it increases the the page—the value of AJAX is lost. The chance that an attacker can find and call will take too long to execute, and exploit any other security threats presAJAX will end up actually detracting ent in the system. from the user experience rather than There’s nothing inherently insecure enhancing it. about exposing a “TransferFunds” Instead of minimizing the number method in a banking application. The of methods, minimize the amount of functionality. FIG. 1: LOOKS LIKE A PAGE, ACTS LIKE A SERVICE If you were designing the sample banking application, you’d need to determine whether it’s really Web Page Web Service necessary to be able to apply for a mortgage via an AJAX call, or if it would be acceptable to do this through a standard form submission. If AJAX doesn’t significantAJAX Application ly improve the application’s usability, it should probably be removed. You must decide for yourself how much risk you’re willing to accept for a more responsive user interface. Applications with exposed logic also have a greater “attack surface” through which they can be compromised. sdfewkjhs fjdskfhewkrh djsafhe fdjaf;weoih fndsa;wier fda;woeir fda;o feioa3 j23565 56 hifdoa fjdiao;seiru cjdiao; fjdioa;oer

fghduaoia;slkdfj eia sdfewkjhs fjdskfhewkrh djsafhe fdjaf;weoih

fndsa;wier fda;woeir fd3664648 4564 6546 a;o feioa3 jhifdoa fjdiao;seiru cjdiao; fjdioa;oer fghduaoia;slkdfj eia da;woeir

fd3664648 4564 6546 a;o feioa3 jhifdoa fjdiao;seiru cjdiao; fjdioa;oer fghduaoia;slkdfj eia

14

• Software Test & Performance

JANUARY 2007


AJAX SCRUBDOWN

TABLE 1: VULNERABILITIES IN COMMON Web Pages

Web Services

AJAX

Detailed error messages User interface attacks possible Application logic exposed

Bank Vault or Shopping Mall? Consider a bank vault, which has exactly one entrance—the vault door. It can be guarded well or poorly, but there is just one way in. A non-AJAX Web page can be compared to this bank vault. The single entrance is the request to the Web server for the page. There are certainly many different ways to make this request, such as varying the HTTP method (GET vs. POST) or the data sent (form field values, query string parameters, cookie values, etc.), but all such requests must go through this single “door.” On the other hand, an AJAX-enabled Web page is more like a shopping mall. Each AJAX method call is another door into the server. There could be dozens or hundreds of entry points, and if any one of them is left unguarded, the entire system could be open to attack. With all these inputs into the shopping mall that is your AJAX application, how do you, as a developer, secure them? AJAX inputs should be secured the same way that you deal with traditional inputs—such as query string parameters, post data values or cookies—through input validation. Input validation simply ensures that you have the proper type and format of data before you operate on it. It’s not a new concept, nor is it one used solely to ensure security or integrity. If you look at some early books about computers from the 1950s or ’60s, you’ll find that input validation is a well-covered topic and is part of any reliable application. If you feed malformed data into a program, the program will malfunction. When processing power was at a premium and crashing a computer had serious economic ramifications, programmers had to ensure that their programs not only recognized corrupt data, but would also stop running and alert the user. Input validation is an effective way to ensure that your programs aren’t consuming “garbage.” Proper input validation also offers the benefit of securing an application because it filters out JANUARY 2007

malicious input as well as garbage input. The key word here is proper. Consider a text box on a Web form used for entering a person’s name. Because you’re inserting this into a database and because you’re aware of the potential for attacks such as SQL injection and XSS, you decide to ensure that the input cannot contain characters like ’, < or >. You write a simple regular expression to look for these characters and reject the input if any of them are detected. This technique is known as blacklist input validation, because you’re creating a list of characters your application won’t accept. But what if some aspiring attacker finds a new way to execute SQL injection that doesn’t use the single quote? Your blacklist no longer protects you (see Figure 2). This is why blacklist input validation doesn’t work—you must constantly update your blacklist as new attacks come out, and you’ll always be one step behind the attackers. By its very nature, user input is chaotic and unpredictable. It can come in any shape, size, or structure. To control this chaos, you must constrain it to an expected format. Again, blacklisting can’t accomplish this because it defines what is excluded, not what is expected. A more effective solution is to ensure that the input data can be only what you explicitly specify it to be. This technique is known as whitelist input validation. Again, consider the text box for a person’s name. You might decide that a name should consist only of letters and possibly spaces. Symbols like

parentheses, tabs, asterisks or pipes should never appear in a name. If you write a regular expression that allows only letters and spaces, no malicious symbol characters will get through. Even if an aspiring attacker finds a new obscure way to execute a XSS attack that uses only a semicolon, it won’t get past your whitelist filter. Using whitelisting to check input against a known acceptable data format is more secure than using blacklisting to check input against a potentially incomplete list of illegal characters. Sometimes, the ideal solution is a combination of a whitelist followed up with a blacklist. For example, a whitelist for a person’s name allows only letters through, but as a further defense against SQL injection, you might want to check that the name isn’t a SQL keyword, such as SELECT, DELETE or WHERE. The combination whitelist and blacklist also makes creating a blacklist easier because you’ve already severely limited what could possibly get through the whitelist. An important part of whitelist validation that developers often forget is validation of the length or range of the input. For example, consider a zip code. Instead of using a regular expression, a developer commonly ensures that a given input is a number by trying to convert the input to a number and then observing if an error is generated. If it is, the input is obviously not a number. However, this line of reasoning is flawed. All U.S. zip codes are indeed numbers, but all numbers are not zip codes. The input might contain a negative number, or it might be a number 10 digits long. Both of these cases would pass the “convert to a number” validation scheme, and while you’ve ensured that you’re dealing with a number, you haven’t verified that number’s correct range. This input may overflow your database column or some other part of the program. Zip codes in the U.S. have a specific format: They’re either five digits or five digits followed by a dash and another four digits. Other stats, like credit card

With dozens or hundreds of entry points, if any one is left unguarded, the entire system could be open to attack.

www.stpmag.com •

15


AJAX SCRUBDOWN

FIG. 2: BLACKLIST BLOW-BY

Blacklist Input Filter

OR 1=1

Database

%27%20OR%201%20=%201

numbers, phone numbers and Social Security numbers, all have well-known formats that simple regular expressions can validate.

Where to Validate? Equally important as the input validation method is the location where the input is being validated. User input should be validated on both the client and the server, but for different reasons. Making an HTTP request from a Web browser across the Internet to a Web server is expensive in terms of time and user experience. Before a user submits a form, it makes sense to use JavaScript to ensure that they have completed the form properly. Did they fill in the required fields? Did they type in the correct number of digits for their telephone number? Did they select a country? Client-side validation ensures that the user input is in the correct format before taking the costly step of submitting the data to the server for processing. In short, client-side validation should be used to increase the performance and usability of the Web application. But does client-side validation provide security? Without answering the question directly, what happens if a user has turned off JavaScript? He may inadvertently fill in the Web form with malformed data. If your application doesn’t validate the information on the server, the request will fail. You have no control over the user’s Web browser. You can’t force certain code to run or ensure that the JavaScript is

16

• Software Test & Performance

used in an appropriate way. Because you can’t control this, you can’t trust that any instructions you send to the client’s browser in the form or HTML input tags or JavaScript will be used the way you intended. Thus, you can’t trust that any of the data you receive from the client is “safe.” The only way to enforce application security is by whitelist input validation on the Web server before processing any of the data.

Security Misconceptions

internal systems, such as your database servers, but it’s completely useless for securing Web sites. Another common misconception is that using Secure Sockets Layer keeps hackers out of a system. SSL does an excellent job of keeping third parties from eavesdropping on the “conversation” taking place between the user and the Web server. If an attacker tried to intercept this communication in order to steal the user’s authentication credentials, or to steal his credit card number when he makes an order on a shopping site, then indeed SSL would be a valid defense. But what if the attacker was not a third party, but rather the user himself? In this case, all SSL does is keep other parties from eavesdropping on the attack taking place (see Figure 2). Now, using SSL actually works in the attacker’s favor. The attack isn’t prevented, and it’s much less likely that it will ever be noticed, since there are no request details logged about the attack (because a logging utility–a form of eavesdropping—is prevented by SSL).

A firewall is a great tool to keep unauthorized users out of your internal systems, but it’s useless in securing Web sites.

Similar to the misunderstandings regarding proper input validation, other misconceptions about AJAX security persist. The most common of these is that a firewall protects the server from attack. The reality is that a firewall can block only certain ports on the Web server. Closing off port 80 (or whatever port your Web server uses) results only in the denial of requests to the server on that port. In other words, no user, legitimate or otherwise, would ever be able to access the Web site. A firewall is a great tool for keeping unauthorized users from directly accessing your

Flawed Frameworks

Many developers believe that by using an AJAX framework like those at script.aculo.us or proto type.conio.net, their code is safer than if they created an AJAX application from scratch. These views stem from the belief that third-party frameworks are “battle tested” and thus more secure. While framework libraries do keep developers from having to reinvent the wheel, they don’t often include security features. In the case of purely client-side JavaScript libraries, there can be no security features at all. Since client-side code can be selectively disabled or modified by JANUARY 2007


AJAX SCRUBDOWN

the user, any security functionality implemented solely on the client is useless. Additionally, since frameworks hide implementation details from the programmer, the programmer can write an application that works but that he doesn’t completely understand. This practice can be dangerous, and often leads to security defects. The programmer can’t ensure the integrity of software he creates if he doesn’t understand large portions of it. Just like AJAX itself, the use of framework libraries isn’t inherently insecure. However, it’s important that the development team understands not only how to make its application work, but why it works the way it does.

ous that moving the cursor over an image displayed on the page is also causing requests to be sent, as can be the case on an AJAX-enabled page. And even if a tester were aware that

program’s designer, such as “CA” or “NY.” But the app might also accept input such as “XX” “1234”, or “@@.” The application’s ability to cope with these invalid state values can’t be exercised simply by testing through a browser. To ensure that an AJAX Web site is secure, you must also test it at the raw HTTP request level. This can be daunting, especially for testers unfamiliar with HTTP protocols or application hacking techniques, but several commercial tools can help.

The team must understand not only how to make its application work, but why it works the way it does.

To AJAX or Not to AJAX

The Burden of Testing In addition to the extra burden that AJAX applications place on the development team during the design and development phases of the product life cycle, there are also extra burdens felt during the testing phase as well. A security vulnerability is really just a type of software defect. And testing for security defects is critical to ensuring that the application won’t be compromised when it’s deployed. Unfortunately, AJAX applications are much more difficult to FIG. 3: SSL test than standard Web pages; there’s simply more code to test. There are more entry points on the server that must be tested, and more clientside JavaScript code. All of the JavaScript required to send the asynchronous requests, process the asynchronous responses and perform the partial page updates is new code that adds time and complexity to the testing process. But the real difficulty comes from the fact that so much of an AJAX application’s functionality is invisible to the tester. Most testers would be aware that when they press the Submit button on a standard Web page, a request is being sent to the Web server. But it may not be immediately obviJANUARY 2007

requests were being sent, it may be impossible to test the security of these requests using only a Web browser. Consider an AJAX application that displays a map of the United States, and as the user scrolls his mouse pointer over each state, an asynchronous request is made to retrieve and display the current population of that state. When testing with a Web browser, the only state labels that would ever get sent to the server are the state abbreviations entered by the

Whether creating a fresh AJAX application or retrofitting existing apps with AJAX techniques, you must consider security throughout the entire development life cycle. AJAX isn’t inherently insecure, but it does carry a greater potential for abuse than do page-based applications. So it’s important to evaluate the potential risks associated with exposing your business logic as JavaScript that increase the application’s attack surface and add time and complexity to the testing process. But if you’re impressed by what AJAX can do for the user experience and want to try your hand, the results might well be worth it. ý

HACKS EVADE LOGGING

Logging Utility

HTTP

Web Server

HTTP

Request product “ABC” from catalog

Request product “ABC” from catalog

Web Server

Logging Utility

HTTPS

HTTPS

???????????????????

Request product “ABC” from catalog

www.stpmag.com •

17


By Scott Niemann

new way of automating Java testing promises to help developers make tremendous strides in

A

productivity, quality and time to market. In the fast and furious mobile-device marketplace, the ability to react, anticipate and quickly execute on requirements changes can mean the difference between a healthy business and a failed one. Using a model-driven approach to Java development allows for automatic code testing to ensure that the application meets requirements and allows programmers to graphically manage the impacts of change. Here, I’ll explain how using the Unified Modeling Language 2.0 (UML 2.0) and an MDD Java development platform that allows for easily tested applications can help drive your organization’s productivity, enhance product quality, meet project deadlines, and exceed customer expectations.

The Integration and System Levels To create Java designs—particularly Java designs that make use of an automated testing process—you must first understand modeling Java systems using UML 2.0, and how executable models traced to the requirements can create a process of verification through automation. To better define the term, automation at this level refers to integration-level and system-level testing. The UML model defines what the system should do, and how its architectural software components fit together in enough detail. The spec provides detail and executable semantics that can be fed into UML 2.0 design environments to provide visual execution of the Java code. As a caveat, this paper will not address unit testing. The primary components of modeling that synch with an automated testing process can be found in architectural modeling and requirements modeling. While both use the same basic notation—UML 2.0—they’re not always done in parallel, and at many times in the requirements phase, modeling is omitted.

Model Test UML 2.0 Can Help Automate and Improve All


Additionally, you must consider the Java coder workflow, which may be the exact workflow your development teams are using today. This means that Java coders are “coding� today, and that modeling of a UML 2.0 design and coding can go hand in hand. Model- and code-centric design tools allow a process of dynamic model and code associativity, so that the code view and model view are always in synch, and changes in either view are dynamically reflected in the other.

Photograph by Sharon Dominick

Designing With UML 2.0 The process of modeling a Java application using UML 2.0 not only provides a visual reference for the application, but can lead to other valuable design artifacts that MDD tools can produce, such as automated design documentation. The aspects of UML 2.0 modeling with Java that make it appropriate for an automated testing process are class diagrams, use case diagrams, behavior diagrams (activity or statemachines) and sequence diagrams. Other diagrams can be used to describe your software architecture, but these specific diagrams contain enough semantics to allow the most important factors: executable Java code, requirements trace and change impact. Class and behavior diagrams contain enough semantics to easily generate Java code. For example, Figure 1 (see page 20) depicts two different UML 2.0 diagrams. The lower image represents a class diagram describing the high-level architecture of a mobile phone and a Bluetooth headset, and how they interface with each other. To define the behaviors of either the mobile phone or headset, you could do several things. In one scenario, you could decompose one of these software structures into many software structures or parts that further define the behavior; for example, what a mobile phone consists of, or what the specific applications in the mobile phone might be. They could also define behavior with simple Java operations defined in each class in this diagram. Another approach is depicted in the upper image: the behavior diagram, which in this case referred to as a statema-

Driven ing Phases of the Java Development Life Cycle


DRIVING MISS JAVA

chine. The statemachine defines a particular state in which the object exists and describes how it may respond to external events, such as transitioning to a new state or calling some operation. By defining diagrams like this, a complete Java application can be generated and tested, as there are enough semantics behind the design to produce an executable system. UML 2.0 provides diagrams that can be used to capture the expected interaction of defined components. Class and statemachine diagrams can produce an executable system, such as those represented in Figure 1.

Requirements Modeling In addition to its ability to illustrate the expected interaction of components defined in architecture diagrams (as in Figure 1), UML 2.0 also can provide an overall context for what the system you’re creating should do. In the case illustrated in Figure 2, the system context may be an entire mobile phone network or simply an applet running inside a mobile phone, which largely depends on the level of abstraction of the particular system you’re designing. In this example—of a mobile phone and a Bluetooth headset—a simple use case of the system may require that the mobile phone be able to communicate remotely with the Bluetooth headset. A use case, as most of us know, is a functional requirement of a system that specifies what the system is supposed to do, not how it should do it. However, the requirements for what

FIG. 1: IN SUCH A STATE

20

• Software Test & Performance

the device should or should not do are almost never supplied from the customer to the device vendor in the form of UML 2.0 use cases. Typically, these are supplied in a text document or spreadsheet. To cope with that reality, requirements modeling in UML 2.0 is intended to show how the model traces to the customer’s textual requirements. The modeler must create the links in the model to the textual requirements to show traceability, and be able to prove that the system being designed meets all requirements. The textual requirements are fed into the design tool manually or by using commercial automation tools that can parse a textual document and automatically create the equivalent textual requirements in a use case diagram. This use case, labeled “Allow a mobile phone to be used remotely,” is tied to actual requirements represented as classes in a use case diagram. The requirements in this case were read in from a textual document that provides a detailed description of how the functional requirement should be achieved. The “satisfy” relationship ensures that the traceability of the functional requirement—represented by the use case—is traced to the actual customer requirements (labeled ID=1, 2 and 3).

Detailing Scenarios

After traceability is established, designers can start detailing the functional requirements, or use cases, through scenarios represented as sequence diagrams.

THE ESSENTIALS OF MODEL-BASED JAVA TESTING
1. Define the architecture through class, structure, use case, sequence and statemachine diagrams.
2. The two diagrams necessary to achieve automated testing are the use case and sequence diagrams; the others are needed to create a deployable Java application.
3. Import customer requirements as UML 2.0 classes; trace use cases to the requirements.
4. Detail scenarios of functional requirements; create a scenario for each customer requirement showing how each requirement will be met in the software.
5. Execute all scenarios and get feedback on how your Java code is behaving.
6. Record (via UML 2.0 tags) whether scenarios have passed or failed and which requirements are and are not being met.

This phase is critical, since the testing results of the executable Java designs are matched against these scenarios to determine whether the software is behaving as expected. A sequence diagram represents one particular path through a use case or functional requirement. There may be several scenarios that fully define a particular use case, so the designer would create a sequence diagram, or test, that describes how the software should react in each specific scenario. For example, your software must be able to handle the condition of the mobile phone failing to connect to the headset. You must therefore model this scenario in a way similar to that shown in Figure 3. The sequence diagram in the right pane displays what happens during a failure scenario. The diagram can be used to describe how the software components in the design react to anything outside the context of your software design. In the Bluetooth headset system, a failure could represent a keypress coming from a keypad driver in the phone.



Semantics can be specified from the statemachine itself, such as which code to execute (for example, flashing the blue LED) as you enter the state.

FIG. 2: WHAT’S THE USE CASE


The right pane in Figure 3 also shows different Java objects in the UML 2.0 design and how we expect them to react upon receiving a keypress event. The diagram in the center pane shows the complete failure scenario that references, via the UML 2.0 ref operator, the keypress event shown on the right. Operators are an easy way to capture a scenario once and reuse it. This practice allows scenario capture to scale to even the most complex scenarios.

Graphical Java Programming

Without placing processes behind the modeling, automating Java application testing would be impossible. But where's the Java code? Modeling and coding can coexist harmoniously. Automating the model-driven Java testing process shouldn't drastically alter your current design processes. The goal is to improve productivity so you and your team can spend more time on analysis and design and less on testing. When moving to a model-driven development environment, many manual Java tasks can be automated. For example, Figure 4 shows a structure diagram depicting various components of the sample app, including a GUI used in the mobile phone. The GUI can be represented as a UML class, which is, of course, translated to a Java class. The operations and attributes of the GUI UML class are operations of the Java GUI class, whose behavior the user is free to define. In Figure 4, the dialog at right shows the operation code of the GUI operation called initGUI(), depicting code that the user may have imported, written from scratch or generated automatically in the design tool.

Each software design component in Figure 4 could translate itself into a Java file, automatically specifying all data and operation code input into the model. It also could autogenerate the complete behavior of a statemachine or activity diagram used to represent behavior. For example, the statemachine shown in Figure 1 can automatically be translated into Java code, as the semantics behind UML 2.0 statemachines are simple enough to translate to a set of operations of the Java class—in this case, the headset example—to fully specify the behavior defined in the graphics. In Figure 5, the connecting state in the headset statemachine maps to the connectingEnter() Java operation that is produced automatically by the MDD environment.
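What that generated code looks like varies by MDD tool; the following is only a minimal hand-written sketch of the mapping, in which every name except connectingEnter() and the blue-LED behavior mentioned earlier is invented for illustration:

public class Headset {
    // Hypothetical state constants such as a generator might emit
    private static final int IDLE = 0;
    private static final int CONNECTING = 1;
    private static final int CONNECTED = 2;

    private int state = IDLE;

    // Entry action for the connecting state (Figure 5); the body holds
    // whatever code the modeler attached to the state in the diagram
    private void connectingEnter() {
        flashBlueLed();
    }

    // Generated event handlers dispatch on the current state
    public void connectRequested() {
        if (state == IDLE) {
            state = CONNECTING;
            connectingEnter();
        }
    }

    public void connectionEstablished() {
        if (state == CONNECTING) {
            state = CONNECTED;
        }
    }

    private void flashBlueLed() {
        // device-specific code would go here
    }
}

The point is that each state and transition in the diagram has a direct, mechanical counterpart in the Java source, which is what makes the model and the code equivalent for testing purposes.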

The Automation Process

Now that you're aware of the main components of the automated testing process for Java applications, you can begin to put the automation process together. Model-based testing is requirements-based testing. The requirements are the sequence diagrams already defined by the user, and the testing occurs upon execution of the code. In this case, testing means that you're executing Java code and getting visual feedback of the results through the model. For model-driven development to facilitate automated testing, the model and Java code must be identical. So while executing code, you get a visual representation of how scenarios play out in the system by viewing the execution results in UML 2.0 sequence diagrams.

Let's take a look at the execution of the Java code for turning on the headset. The objects involved in turning on the headset are the Button and the Headset. Figure 3 shows three diagrams: a side-by-side comparison of two sequence diagrams, and a simulated view of the Java Button statemachine. The sequence diagram comparison matches the requirement view (the one that was captured by hand) against the execution results of the Java code. Figure 3 thus illustrates a comparison of execution results against the requirements.

FIG. 3: SEQUENCE OF CONSEQUENCE


The sequence diagram on the left is the execution view, and was drawn dynamically as the Java code executed. The sequence diagram on the right is the analysis, or requirements, view. Magenta lines reveal the differences between the execution view and the requirements view. Visual, design-level debugging is one advantage of moving to a UML 2.0 model: Not only does it permit developers to create a static view of the architecture, but it also lets them view Java code execution dynamically, making it easy to spot where code doesn't match requirements.

Also in Figure 3 is the dynamic view of the Java Button statemachine class, where the button currently exists in the idle state. As a Java developer, you can step through the model as you can step through code—using the Eclipse JDT debugging environment—and view how the code and model behave as the code executes. In this way, you're able to combine multiple views of information to get a picture of how your software is performing and quickly locate areas in the model where behavior is failing. Repairs can be made in either the code or the model view. This rapid iteration of design and execution can help you find defects early, when they're cheaper and easier to fix.

Automating the Process

The next step is to ensure that your code meets requirements, can be automated to run several test suites and gives feedback on the results.

FIG. 5: MAP OF THE STATE

FIG. 4: SOFT AND GUI

Ultimately, your goal will be for automated regression tests to be performed nightly. Many UML 2.0 tools allow for scripting of such a process so that you can execute several scenarios and compare the results. Some technologies are available that provide this requirements-based testing solution built into the MDD tool suite, enabling you to create the regressions easily from requirements written as sequence diagrams. For example, your application may have several functional requirements or use cases, and each use case may contain several scenarios describing sunny- and rainy-day situations. This could equate to dozens, hundreds or even thousands of sequence diagrams that serve as your test vectors. Manually verifying these scenarios is not realistic. Fortunately, there are commercial tools that can help with this, too.

Referring back to Figure 2, the dialog on the right shows the execution of test scripts against the customer requirements. Each requirement can become a scenario designed through a sequence diagram, with a test script that executes the scenarios representing each requirement. These runs will tell you whether your Java code is behaving as expected. The test script dialog is dynamic and is updated automatically by the requirements-based testing tool, giving the user an indication of whether each test has passed, failed or is still in progress. The browser on the left indicates which functional requirement is being tested, the requirements associated with the use case, and the pass/fail status of the scenario attached to the requirements.
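The scripting interface differs from tool to tool, so the sketch below is deliberately generic: a self-contained Java stand-in that shows the shape of such a nightly run. Every type and method name here is hypothetical rather than any vendor's API.

import java.util.LinkedHashMap;
import java.util.Map;

public class NightlyScenarioRegression {
    // A scenario pairs a requirement ID with an executable check that
    // compares recorded execution against its sequence diagram
    interface Scenario {
        String requirementId();
        boolean execute(); // true if execution matched the diagram
    }

    public static void main(String[] args) {
        Map<String, Boolean> results = new LinkedHashMap<String, Boolean>();
        for (Scenario s : loadScenarios()) {
            results.put(s.requirementId(), s.execute());
        }
        for (Map.Entry<String, Boolean> e : results.entrySet()) {
            System.out.println(e.getKey() + ": " + (e.getValue() ? "PASS" : "FAIL"));
        }
    }

    private static Scenario[] loadScenarios() {
        // In practice, scenarios would be generated from the sequence
        // diagrams; one hard-coded stand-in is shown here
        return new Scenario[] {
            new Scenario() {
                public String requirementId() { return "ID=1"; }
                public boolean execute() { return true; } // placeholder check
            }
        };
    }
}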

Early and Iterative

Automating the testing process for Java applications can be achieved through Model-Driven Development based on UML 2.0. This development model defines an architecture that is tied to requirements, founded on a model-based execution engine, and that can exercise the code while giving visual feedback at the model level. By following this process for Java development, which incorporates iterative debugging, you can find defects prior to the integration testing phase, where they are more difficult to correct. This requirements-based testing approach delivers software that truly meets all customer requirements. ý





Making the Case: Develop Your Defense For Test Automation

By Bob Galen

A Successful Case Should Focus on More Than the Bottom Line — Intangibles Such As Productivity Also Come Into Play

At a software testing conference I attended a year ago, I sat in on a track presentation about establishing an automation business case. The talk centered on hard return-on-investment factors and inevitably on cost saving and cutting as the primary value proposition. I enjoyed the discussion, but felt it was too narrow. While hard ROI savings is certainly important, other elements are necessary to create a compelling test automation business case. Simply put, it isn't all about dollars and cents. And in fact, in my own experience, the real differentiating factors are actually never about the money. A solely hard-ROI focus can also lead stakeholders to consider trimming and compressing the test team—rarely a good idea.

Instead, to create a compelling business case for test automation—one that not only drives organizational support, but raises your team members' excitement and expectations to do a great job in automating their testing—think beyond the bucks. There are three vital activities that I believe are essential for developing a successful business case:
• Assess the climate for your test automation
• Explore and define your hard ROI goals
• Explore and define your soft ROI goals
By keeping these three elements in mind, you can construct a broad and effective business case for your automation efforts—one that will be compelling not only to your stakeholders, but to your product development teams as well. Let's take a closer look at the three factors and how they interact for success.

Assessing the Climate For Test Automation

It's a simple, oft-forgotten phase in business case development, but it's essential: First, you've got to determine the compelling drivers for automation within your organization. Often, teams dive into developing test automation with a specific view toward time and cost savings. While these can surely be a result, I caution against overzealously cutting costs—even if it seems logical. Instead, take a broader investigative approach to sorting out your core motivations—one that assesses your unique context or climate, and then forms a set of motivating factors targeted at those specific challenges. Some of the most common assessment areas to examine in a review of the automation business climate include:

Application characteristics. Often, your targeted application(s) for automation determine the level and capacity of your automation efforts. For example, GUI- or Web-based applications are often easy targets for test automation, so the relevance of and opportunity for these applications might be very clear and broad. However, what if your GUI applications have high data-quantity dependencies in their back-end databases? Data that takes literally weeks to populate from a variety of sources and that is renewed on a monthly basis will clearly impact your automation efforts. Other sorts of applications—for example, embedded medical systems, system software or SOA applications—also affect your automation viability, breadth and strategies. Another factor that clouds application support is the varying ability of your tools to support the languages, libraries and third-party components used to build your applications. I've found that many tools have difficulty interacting with certain controls and other components. While this sounds merely inconvenient, it can actually inhibit testing of specific features that leverage these components. In fact, I've seen component or control incompatibility virtually halt large areas of my automation efforts.

The key here is to thoroughly analyze your toolsets against your applications. It's not enough to simply perform an evaluation. Plan a pilot phase to truly ascertain the actual support level and identify problem areas. Any real limitations or suspected risks that could impact your test automation development should be surfaced and factored into your business case.

Team capabilities. The team's experience and capabilities are a frequently underestimated assessment factor. Evaluate the team's familiarity with basic software and automation development. Have they built tools and software before? Have they built test automation? Do they understand the difference between writing automated test cases and developing an automation architecture and framework that supports efficient automation development and execution? Usually you'll need architectural capabilities on staff—someone with deep technical skills in the automation tools and in the product technologies. You'll rely heavily on this team member to set the stage for your automation efforts.

Often, teams lack the requisite depth in development experience to start an automation program on their own. And certainly, sitting through a few courses in tools and techniques won't be enough to build the requisite skills. In these cases, you might want to identify a consulting partner who can supplement your team's experience gaps and jump-start your efforts. Don't underestimate the length of time you'll need this help. In my experience, longer-term contracts for core competencies are necessary for ongoing success.

RECOMMENDED COST-ANALYSIS ITEMS, FIRST 18 MONTHS
• Startup licenses–development
• Startup licenses–execution
• First-year maintenance
• Training
• Consulting assistance
• Additional hardware
• Additional tools infrastructure
• Any anticipated project expansion

It's important to identify the obvious and hidden costs of the automation effort to ensure that your stakeholders fully understand the realities and up-front costs associated with automation development.

Budget constraints. Today's production automation tools aren't cheap. And even once you've purchased a tool, it's common to encounter additional licensing costs and vendor-supplied start-up training. Additional training and mentorship are often needed to bring your team fully up to speed. You should also factor in recurring maintenance costs for at least the first year.

There also might be execution costs. Often, the tools used for development must be replicated to physically run the automation. This may require two sets of licenses: one to support development and another for your runtime requirements. This also leads to additional costs for infrastructure (physical space, power, networking support and basic software assets) and hardware to support your automation environment. It's not atypical for every seat of automation to drive an additional two to five systems supporting automation development and execution.



Leadership understanding. One of the final points (and probably the most important) is to assess your leadership team’s understanding of automation and their overall expectations. You must determine whether they understand that developing test automation is equivalent to developing software, and requires the same development processes, ongoing maintenance and support of the infrastructure and toolset. Your team also must understand that you have to establish a baseline set of tools and infrastructure before you begin automating test cases, and that this can often take quite a bit of time and effort—perhaps even months. They must also realize that automation follows a prescribed software development life cycle, and that its introduction needs to be properly planned. Finally, they must understand that automation needs to integrate with the development project stream—a tricky merger that often creates priority conflicts and skews the automation’s ability to impact existing product-line development. If there are any gaps in these areas, you can normally map them as risks in your business case that need to be tackled along the way. In almost all of my experiences, stakeholders initially have unreasonable expectations for automation development that need to be remapped back to reality. In many cases, the business case must be adjusted to compensate for this as well. Wrapping up the assessment. After you’ve gathered assessment data, which is usually informal and conversational, take a little time to write it up. Don’t worry about capturing all of the data—just think about the key forces or goals that will be driving your automation efforts. Also look to identify forces that could block your automation success so that you can work to resolve them either within your business case or by taking actions across the organization.

Determining 'Hard' ROI

Hard ROI refers to savings related to testing time or results quality. Typically, you'll make some general assumptions about automation's effect on your test execution and extrapolate savings versus the automation's overall cost.

Test automation impacts your current testing-expense structure in four primary areas:
• Improved test coverage
• Reduced test execution time
• Decrease in test escapes into production
• Improved test repeatability
Let's look at these in detail.

Improved test coverage. This is your ability to execute more test cases in less time, thereby improving the overall coverage per unit of time. Given that testing is very much a cyclical activity, increasing coverage in your standard cycle times can significantly improve your ability to influence quality. To calculate the savings, you'll need historical insights into the time required to execute manual test cases and the average number of tests executed in your typical cycle times. Based upon your estimated automation rates, you can derive coverage increases versus time saved. One of the initial challenges you face when establishing ROI, hard or soft, is insufficient data about your execution cycle time. Many teams don't establish baselines from which to measure. If that's the case, you should consider establishing performance baselines now.

Reduced test execution time. Related to coverage improvement, this is the reduced time required to execute tests. Again, in this case, a baseline estimate should be associated with your previous execution speed, usually measured in average test cases per person per day. Then, depending on the number of testers assigned to the project, you can calculate the speed of your release-testing phase. In my experience, testing cycles usually take place in one-to-two-week intervals. Implementing automation can have the effect of driving coverage up, driving time down or doing both in predetermined moderation. Your personal focus should be determined by project requirements and your current coverage levels. Initially, you may wish to focus on driving coverage up because it usually has the greatest exposure. Later on, you can work to drive time down and then establish a more effective balance between the two.
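To make the arithmetic concrete, consider an illustration with purely hypothetical numbers: suppose five testers each execute an average of 20 manual test cases per day, or 1,000 tests across a two-week, 10-working-day cycle (50 person-days of execution). If 600 of those tests are automated to run overnight, the team recovers 30 person-days per cycle. Spent on execution, that buys roughly 60 percent more coverage in the same cycle; spent on schedule, it cuts manual execution from 50 person-days to 20.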

SPEED KILLS

I was working as a QA manager for a large storage software application developer. Our testing team was tasked with developing automation that would be deployed and executed within our current product development iterations. This meant that we strove to develop the automation virtually in parallel with the development team's product development efforts and execute it before product release. We were so serious about this strategy that our product's release criteria included successful execution of the relevant automation. While this strategy was fraught with higher costs, product instability and loads of rework, it empowered our development team to do innovative things, such as:

Develop product features late in the game. We found that this empowered our analysts to freely suggest late-breaking features based on continuous customer feedback.

Change their minds (often) based on customer feedback. We worked hard to create automation that was modular and insensitive to change. While this required solid engineering, it enabled the development team to be more adaptive in meeting the needs of our customers.

Iterate often. As a way of gaining code-quality feedback, many development teams prefer to deliver small chunks of code to their testing teams. However, they also prefer seeing the entire regression suite run against these change sets. In our automation-intensive model, we could quickly run our automation suites and give the development team nearly real-time feedback on the quality of the changes and important insight into any side effects.

These three techniques increased our development team's confidence in making risky changes late in the game, since they knew we could mitigate much of the risk. We became a visible partner in increasing the organization's nimbleness and its embrace of change. Over time, we worked with our organization's leaders and stakeholders to explain and illustrate the value and possibilities that automation could bring to our development programs. As we gained traction and momentum, our success was evidence that a well-architected and fully invested automation program can truly enable huge product development competitive advantages.

Decrease of test escapes into production. Improving coverage and execution time is relatively easy to prove. However, demonstrating improved quality is a bit more difficult. One good measure is to review defects escaping into the customer base. You'll need to mine your defect data for production-level problems and identify root causes that relate to your verification testing. In some cases, you'll establish this metric as part of your automation development and then measure improvement over time. You'll also need to extrapolate the cost of a production- or customer-detected defect (on average). Typically, these costs are unique to your organization and can be derived with your QA or operations teams' help. Once you have a baseline for escapes and understand the cost, you can track improvement and potential cost savings as a result of expected and/or actual trending after your automation is running.

Improved test repeatability. Often, stakeholders look to offshore testing as an exercise in moving less interesting, repeated test activity to lower-cost, manufacturing-oriented testers. Their intent is to improve the human repeatability. However, boring is boring, and virtually everyone gets tired of repetitive work and makes more and more mistakes, particularly if working overtime is part of the equation. While this is the softest of the hard ROI measures, it can be significant on large-scale systems. Automation resolves this problem entirely. Once a test case is effectively reviewed and automated, it executes the same steps time after time. One of the most crucial factors for selecting an automation candidate is its repetition requirements.

I'm intentionally leaving the baseline capture, performance planning and savings calculations as an exercise for the reader; Shaun Bradshaw's article "Automate Application Tests to Achieve Maximum Benefit" (Nov. 2006) offers these details. As part of the hard ROI calculations, you obviously must offset any savings against the cost of initiating the automation program. That's why I recommend performing a complete budget analysis as part of your climate assessment. You can then factor those findings clearly against the savings to determine ROI potential.


Determining 'Soft' ROI

Two principles from the Lean Software Development philosophy described by Mary and Tom Poppendieck (in their book of the same name) are useful in determining soft ROI. First, deliver features as late as possible. Second, deliver them as quickly as possible. This just-in-time and just-enough view of development implies a focus on real and demonstrable—rather than expected or anticipated—value. However, to make this vision a reality, development teams need the capacity to make software changes quite late while retaining their overall product quality goals. Test automation is one of the few ways to provide this late-change quality safety net and still allow your development team to remain as nimble as possible. See "Automation Business Case Template" for a good illustration of the difference this can make. The Lean methodology interplays nicely with a focus on soft ROI, and incremental speed, developer confidence and reinvestment are the three primary aspects supporting soft ROI.

Raw, incremental speed. Incremental speed can be a powerful differentiator in itself. Again, let's look to agile practices to make the point. Most agile teams use a practice called continuous integration. With CI, you build and run a set of unit tests at every check-in of code to continuously integrate every small change and catch any integration issues immediately—and then repair them. Instead of a big-bang approach, you take small, incremental steps that maximize discovery and minimize corrective rework. Test automation can enable the same incremental savings, but on the broader application feature-set level, drastically increasing the overall quality of your products while reducing traditional testing integration cycle times.

Another benefit arising from speed is an expansion of testing coverage. In today's iterative development models, it's virtually impossible for test teams to test everything—but I'd argue that it never was possible, and that the goal of 100 percent coverage was a management oversimplification.

AUTOMATION BUSINESS CASE TEMPLATE

To be effective in gaining stakeholder buy-in and understanding, an automation business case should identify specific key areas, including:
• Driving forces for automation
• Key challenges opposing automation
• Exhaustive automation costs
• Primary hard ROI factors and measurable goals
• Primary soft ROI factors and goals
• An implementation plan with a three- to 12-month view, including frequent milestones
• Contrasting organizational prior states and intended future states to illustrate resulting possibilities
• A sustainable strategy for the overall automation effort, including how to maximize initial momentum and how to minimize or eliminate challenges



However, increasing automation capabilities allows your testing teams to iterate faster, testing more and covering the remainder of your applications with alternative testing techniques such as Pareto-based or other risk-based approaches, to more thoroughly test the application's high-risk areas.

Development confidence. Testing can provide a safety net for incremental application changes by the development team. Traditional software methods consider code freezes and change as the enemy, and so largely discourage change. This attitude arose in part because of the lack of a dependable, traditional way to mitigate the side effects that late-coming changes can bring about. However, a properly defined and well-implemented automation program can substantially boost team confidence in making changes, even in late-coming features that impact competitive advantage or in refactoring the architecture when and where necessary. This confidence can foster more aggressive development that results in more competitive product feature sets. It can also lead to improved collaboration and mutual respect between the development and testing organizations—moving them toward more of a partnership than the battles that are all too common.

Continuous investment. A final soft ROI factor is connecting your savings to the continuous improvement of your testing team. This can take a couple of directions. First is the notion that time saved begets time—time that may be used to trim staff or, more compellingly, to train your team and improve its methods. You can also invest the time saved in tools and process improvement. For example, I've used automation time saved to invest in training testing teams in Exploratory Testing techniques. ET can be a fast and effective testing approach for application areas that aren't yet automated or aren't good automation candidates—but that you still want to test nontraditionally and quickly. Automation then becomes a fundamental part of your overall strategy for testing effectiveness and team improvement. This logic can be extended to team skill improvement, tools introductions and larger-scale process improvements.

BUSTING THROUGH WITH A FORCE FIELD ANALYSIS

Force field analysis is an effective tool for identifying your goals and the internal organizational forces that oppose them. Opposition can take many forms: change resistance, existing culture, overall costs, skillset and experience, level of effort and comfort with the status quo are among the key factors. The forces for become the most compelling outcomes from executing test automation, while the forces against are mostly mapped back to your assessment results. Once you've identified the driving and opposing forces, the opposing forces spur your thinking around implementation strategies and planning, since they must be overcome for success.

Automation Assessment — Force Field Analysis

Forces Supporting Test Automation:
• Cycle time reduction
• Long-term reduced costs
• Increased investment in team performance

Forces Opposing Test Automation:
• Overall team skillset
• Lack of management understanding, leading to unrealistic deadlines
• Costs beyond the tools

Strategy for Amplifying Support: Run a pilot project to illustrate applicability of approaches and speed potential.

Strategy for Overcoming Resistance: Train executive team members in aspects of automation.

The essence of soft ROI is to remind your stakeholders of the intangible but powerful benefits of reinvesting savings within the testing team. They must know that reinvestment can create an ongoing competitive advantage for your technical projects—allowing for late-coming and just-enough application changes that can mean the difference between mediocrity and competitive advantage. I've found time and again that the soft ROI advantages can impact organizations in much broader and more profound ways than their hard ROI counterparts.

Business Cases in Search Of Clarity

Possibly the most important part of a good business case is the assessment process. It serves not only as a mechanism to gather critical focus points and key goals, but to establish the major risks that you'll encounter along the way. In my consulting, I get bombarded by stakeholders who are frustrated with their automation efforts. They're annoyed at the unexpected costs and the length of time it takes to make a visible impact. By the time they get to me, this frustration is overflowing, and they want to take drastic actions, often looking to throw away existing efforts and start again, or to perform the work elsewhere—usually offshore.

To some degree, they're looking for a silver bullet. The root cause of their frustration arises from the absence of a clear business case that establishes the requirements, goals, timing, effort, costs and, ultimately, the potential return for automation. These folks probably didn't focus on establishing a proper case, nor on understanding what exactly was feasible. Remember, too, that once you understand the landscape, you'll need to balance across hard and soft criteria to measure your effectiveness. And you should never make bold early promises to cut staff by large percentages before having some experience and hard facts behind you.

Stakeholders often make another mistake—assuming that ROI is simply a cost-cutting exercise. While cost cutting and/or increased revenue will always be an important part of the discussion, it's often better to frame the view of ROI as a quality improvement and competitive advantage exercise. That's a much more powerful and contagious model. With that progressive view of ROI, coupled with an adequate assessment procedure that unveils the facts about your application's characteristics, your team's abilities and that all-important budget, you're in a sweet spot to create a truly functional business case that can guide your automation to success. ý





Three Keys to Effective Offshore Development

By Matt Love

"In the past three years, offshore programming jobs have nearly tripled, from 27,000 to an estimated 80,000."
– Forrester Research, February 2004

Offshore software development is becoming standard practice in the modern global economy. Thanks to advances in culture and technology, diverse nations around the world can now provide a pool of well-educated programmers to contribute to any software project. This new labor pool's ability to provide increased productivity at low cost has enticed many onshore organizations, but offshore development requires collaboration among groups with different geographical and cultural backgrounds—and in software development, collaboration is neither intuitive nor trivial.



For example, software designed and implemented by a single team in a single geographic location usually fits the specific niche for which it was created, but might not be able to compete on a global scale. Other cultures have different expectations of software—expectations that may not be obvious to a single development team. Joint development involving several culturally distinct groups exposes each feature to additional scrutiny during the early stages of planning and development, resulting in a product that can compete in the global economy.

Parasoft, where I work as a software development manager, has spent the past decade expanding its development offices from its U.S. base to countries around the world. This offshore push requires close collaboration among various development groups. Our initial offshore strategy was to assign each group to develop a different product. Although this increased the number of products we could offer, each product appeared to be produced by a different company. We found that large deployments of multiple products became more successful after development groups started to collaborate and share a common code base.


The Challenge of Sharing

Sharing a source code base among development groups requires clearly defined goals. Developers and managers stumble across plenty of obstacles during the transition to a shared code base, and many question the desired benefits. One obvious plus is that a unified development process results in similarities in product appearance and functionality. One group's shared experiences may prevent other groups from making the same mistakes.


However, the added complexity of each development group relying on code written by other groups makes this level of sharing difficult. For instance, different groups and cultures tend to favor different programming styles, hindering collaboration and a unified development process. Moreover, development work spread out across multiple time zones decreases the odds of the full code base building successfully. At any given time, someone somewhere is likely in the middle of changing something. When one component does break, whether functionally or in compilation, every other group that is attempting to build the full code base, or that depends on that one component, is negatively impacted.

Nevertheless, overcoming these and other challenges of implementing a process for shared development is well worth it, considering that the end result will be uniform applications and a unified code base. Most organizations attempt incremental adoption of offshore development collaboration, where each phase is a learning experience and the disadvantages of one approach are typically addressed in the next iteration. Working in this manner, Parasoft has developed three key principles for sharing code with an offshore development team:
1. Keep source code separate and share compiled binaries.
2. Share all source code and compile binaries for the entire code base at each location.
3. Use automation to make sharing source code a permanent process.

SHARE AND SHARE ALIKE

Consider the Java code example in Listing 1. Suppose interface IShareable and class BusinessComponent are developed by two different groups within the organization. Now suppose a developer for interface IShareable adds a new method String getComponentType() to the interface. No compilation errors will be reported when rebuilding against a compiled jar containing BusinessComponent. Of course, if the development group responsible for class BusinessComponent rebuilds its source, it will notice a compilation error if it has a jar containing the latest IShareable on the classpath. However, the owner of IShareable might not know how many other groups are implementing the interface, and running an older compiled implementation of that interface will work as long as the new getComponentType() method isn't called. But if that method is called, it will throw a java.lang.AbstractMethodError exception unless both code sets are rebuilt against the latest versions of other code sets. Problems like this underscore the need for early detection at compile time or sooner, rather than seeing an exception if a certain code path is executed.
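Spelled out in code, the sidebar's scenario looks like the sketch below. The added method is the one the sidebar names; the Client class is a hypothetical caller built against the BusinessComponent of Listing 1:

// Version 2 of the shared interface, rebuilt only by its owning group
public interface IShareable {
    String getComponentName();
    String getComponentType(); // newly added method
}

// Hypothetical caller that links at runtime against a stale
// BusinessComponent binary (the Listing 1 class, compiled
// before getComponentType() existed)
class Client {
    public static void main(String[] args) {
        IShareable c = new BusinessComponent();
        System.out.println(c.getComponentName()); // still works
        System.out.println(c.getComponentType()); // throws
            // java.lang.AbstractMethodError with the stale binary
    }
}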


These three principles, along with careful planning, can help smooth your organization's move to offshore involvement. Let's examine each in detail.

Separate Source, Share Binaries

The first step toward integrating code from other development groups is to produce prepackaged bundles of compiled code for each module. Each team creates a bundle for its own modules at stable points during development, and additional development is done based on the stable code bundles provided by other groups. Typically, these bundles contain only compiled code, such as jar files in Java and DLLs or static libraries in C++. The bundles may be uploaded to file servers or saved within a source control system as binary files. The final product combines modules from multiple groups to create a complete application.

Creating code bundles for distribution is one of the easiest ways to integrate development efforts among teams. Most groups are already set up to produce these distribution bundles for the release process. The same procedures used for public releases can be repeated at times of code stability to create internal development releases. Errors introduced into an individual module won't affect other development groups as long as those errors are identified and resolved in time for that module's next internal development release. Development groups can continue working without being inhibited by momentary errors in dependent modules.

Inevitably, some confusion ensues when code written by foreign developers is first used by other groups. Naturally, assumptions about specifications or programming techniques that might seem intuitive to one group—such as whether null objects are handled or ignored—could be discouraged practices to another. Moreover, the differences in standard procedures for programming interfaces, compounded with the lack of documentation about shared code and limited visibility of raw shared source code, make integrating and understanding another group's code contribution a daunting task.

An understanding of shared code requires visibility into how that code is implemented and consistency with other programming practices within the organization. A lengthy turnaround time is normal when something in one group's module needs to change to accommodate something in another. In the best-case scenario, after a change request is made by one group, the next group can implement the change during working hours in its time zone, and the group originating the change gets it the next day.

However, developing against precompiled bundles of code from other teams has some disadvantages. In the worst-case scenario, one group waits far too long for the next stable internal bundle, or the change request is misinterpreted, causing further delays. Some changes to the source for one project might break a bundle depending on that project in such a way that wouldn't be noticed unless that bundle were rebuilt or executed at runtime. Breaking changes need to be detected as soon as they're introduced, not later when another group tries to rebuild its own source or when the application runs in a production environment. "Share and Share Alike" illustrates how usage misunderstandings and uncertainty about integration quality can arise from use of precompiled code bundles.

LISTING 1: I 'NOT' SHAREABLE

public interface IShareable {
    String getComponentName();
}

public class BusinessComponent implements IShareable {
    private static final String NAME = "Business Logic Module";

    public String getComponentName() {
        return NAME;
    }
    …
}

Some of the shortcomings of this approach can be addressed with small changes to the process. A regularly scheduled nightly build and drop for each module detects problems sooner and keeps individual modules in sync with each other within one or two days. Attaching source code to the compiled bundles for internal use clears up some misunderstandings about code implementation and behavior. Documentation levels for code, as well as programming patterns and runtime integration errors, can be monitored and enforced using automation. Simple steps to improve the process go a long way toward meeting shared code integration challenges.


Share Once, Compile Everywhere

The next step toward a shared code base is to share only the raw source code. Each group can access all of the raw source code and compile the entire code base from scratch along with its own changes. Typically, whenever it's needed, the source code is shadowed from a source control system, and all developers use a similar procedure to construct the entire code base. Precompiled bundles of code are used only for public releases, and all development is performed against the latest versions of source code committed into the source control system.

Sharing raw source code overcomes several of the drawbacks of sharing precompiled bundles of code. One clear advantage of having source-level access is the visibility into how shared functionality is implemented.


Another is that in-code comments become accessible without needing to be published to an HTTP server. Most important, by compiling all shared source at once, you gain the ability to make local changes in the shared code. Experimentation is one of the best ways to understand algorithms written by others, and it usually provides answers much sooner than email queries to groups in other time zones that can provide, at best, a one-day turnaround—if they understand the question the first time, that is. Time-critical changes to shared code can be made by one group and later reviewed by the group that owns the shared code, as long as source control permissions allow multiple groups to modify the same set of source code. Moreover, integration problems arising from structural changes to shared code are detected at compile time rather than later at runtime, when some modules were already compiled. The end result of sharing raw source code among all groups is that software development gains more flexibility to respond quickly to change requests and more understanding of the entire code base.

However, you've got to tackle a new set of issues when you're sharing raw source code. The initial setup of a unified build process may not be trivial if some groups have been using different build configurations. Source tree structure, compiler versions and build script targets must be unified to build the whole code base at once. To achieve the required unity, groups must reach consensus on which build procedures the entire organization will adopt and which it will discard. Delay is another difficulty: Developers retrieving source code from a repository on another continent can sometimes be hindered by slow network traffic. Moreover, with the added freedom of modifying shared code from other groups comes the added risk of erroneous modifications by developers who are unfamiliar with a set of code.


Incremental changes to the code base may put the code in an unstable state or prevent it from even compiling until a set of changes is complete. Shadowing shared source at the wrong time could interrupt a developer's workflow. More time can be wasted when developers who prefer different coding styles and naming conventions go back and forth changing a common piece of code. Error detection and standards compliance before committing to source control become even more important when development groups in other time zones share the same source-code base.

Shared source-code adoption can be smoothed with careful planning and policies for development practices. Setting up a source control server in each location minimizes the impact of slow network traffic across continents. Each group keeps the source code for which it's responsible in its local source-control server. With this setup, frequent modifications to the repository are fast. It's also effective to shadow source from other locations at scheduled times to reduce the impact of slow network traffic from those repositories. The scheduled times should be set up to shadow source when it's least likely to be modified. When scheduling nightly builds, the team should try to identify a time during the day when none of the development groups is making modifications to the code base. If that's impossible, establish a hands-off policy prohibiting modifications for a specified time period during the nightly build.

Also helping to smooth the adoption of shared source code is peer code review. Each group can potentially offer a fresh perspective when reviewing changes from other groups, and interoperability problems are often identified sooner. When using the practice, code quality becomes even more important because modifications to source are used by many groups soon after check-in. Unit testing and compliance with coding style and standards prior to check-in become more critical than ever, and should be part of standard operating policy.

Make Code Sharing Automatic And Permanent


Policies for sharing source code among groups are more effective when they're enforced using automation. Coding guidelines can be enforced automatically, and compliance can be required before committing code to source control. Reporting on overall compliance, source modifications and testing results gives managers and architects better insight into the state of the shared code base. Automation is important for reducing dependency on the manual resources required to make source code sharing effective. It's also required to ensure that set policies are followed every day, and to ward off human laziness. An effective plan for sharing source code, together with automation to enforce that plan, is the best solution for offshore development. Each development group can leverage automation to document, protect, enhance and secure the source code for each module. Automated enforcement of policies to document public interfaces, parameter inputs and return values increases understanding of code by developers who read documentation as well as those who write it.
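In Java terms, such a policy simply demands complete Javadoc on anything another group may call. Here is a hedged example of the level of detail a policy might require; the interface itself is invented for illustration:

public interface CurrencyConverter {
    /**
     * Converts an amount between two currencies. Because this
     * interface is shared across development groups, its contract
     * must be fully specified here.
     *
     * @param amount the amount to convert; must not be negative
     * @param from ISO 4217 code of the source currency, e.g. "USD"
     * @param to ISO 4217 code of the target currency, e.g. "EUR"
     * @return the converted amount, rounded to two decimal places
     * @throws IllegalArgumentException if either currency code is
     *         unknown or the amount is negative
     */
    double convert(double amount, String from, String to);
}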



The dangers of erroneous modifications or style changes by developers from foreign groups are mitigated when automation is used to enforce quality and compliance. Any noncompliant changes can either be immediately denied or flagged for review. Automation also can verify that unit testing is covering a majority of the source code, and in some cases can even create supplemental tests to expose defects and enhance quality. Automated code analysis—with the ability to identify custom patterns—can expedite the process of one group learning from the mistakes of another. A defect code pattern identified in one group can be shared with the rest of the groups and applied against the entire code base to check for similar errors.

The automation process is not trivial; obstacles include deployment resource limitations and objections to policy enforcement by individual developers. Another obstacle is time. Automated tools take time and effort to configure to specific environments (see Bob Galen's Automation SDLC stories in the Nov. and Dec. 2006 issues). Fortunately, there exist many excellent tools for helping to automate enforcement of development policies, including JUnit and other free tools, as well as numerous commercial products. What might be harder to find are developers who won't resist such enforcement, because it may conflict with their preferred programming practices. Naming conventions, coding style and unit testing requirements are among topics that are potentially contentious.

Supplementing collaborative development with automation brings problems of its own, but it also supplies their solutions. The investment of time and money is unavoidable, but the benefits far outweigh the sacrifice once the system is up and running. Most automated approaches to policy enforcement are customizable, which can help to appease stubborn developers.

An important historical lesson has taught us that offshore development can't encompass all development activities, but with targeted collaboration between local and offshore groups, it can provide significant benefits. By following a collaborative process based on shared source code, backed up with appropriate automation, your organization has a better chance of finding success in offshore development. ý



Best Practices

Good Unit Testing: More Attitude Than Aptitude

By Geoff Koch

First, a disclaimer: Vinu Varghese seems to be an altogether nice chap. He is good-humored, modest and patient in fielding a wide range of questions related to unit testing.

Varghese also occasionally suffers an ailment afflicting most programming pros. Namely, he harbors a quiet belief that his code is rock-solid, airtight and mostly error-free.

"By nature, developers have big heads and think that they write code that never breaks," says Varghese, a Java programmer with Vanguard Group, a mutual fund management company in Malvern, Pa. "I'm guilty of doing that."

Unit testing, the art of validating ultra-thin slices of source code, is a powerful antidote to such feelings of superiority. For those seeking how-to-type unit testing information, the Web is positively bloated with whitepapers, case studies, news articles and blog postings on the topic. In this column, I'll toss in my two cents about sufficient code coverage, effective test-driven development and unit testing multithreaded code.

Attitude Is Everything

However, if there's one topic that's frequently omitted from the discussion, it's the nagging issue of developer attitudes, whether marked by overconfidence or insecurity. Not to get too warm and fuzzy on you, but I'm fairly certain that good unit testing, like much in the test and performance world, is as much about grokking mindsets as honing skillsets—and this goes for both developers and their bosses.

Until recently, Varghese and his Vanguard colleagues concentrated mostly on writing straightforward test cases that narrowly confirmed consistency with expected uses outlined in the requirements documents.

The problem with this approach? Code is invariably used in unexpected ways, either by end users or other programs. Such surprises often create the biggest software snafus.

Varghese, though still in his first job after graduating just two years ago from Drexel University, is not naïve about his former "it-seems-to-meet-the-requirements" testing approach. He says that of course he knows of the well-documented advantages of embracing full functional and negative testing that probes extreme and unusual use cases. The sooner you can see what causes your code to throw exceptions, the better.

Still, it wasn't until about a year ago that Varghese's group adopted a more rigorous unit-testing regime. By combining an implementation of Parasoft's Jtest with an in-house library of tests built with the open-source JUnit framework, the team now happily practices what Varghese describes as destructive testing—breaking the code early and often to send a more robust application downstream. The only question: What took so long?

"The problem is our group has lots of timelines and deadlines," Varghese says, explaining why there just wasn't time to address sub-par unit testing sooner.

The Flip Side of Grandiosity

That's a lament common to nearly all coders, and though he didn't say as much, I'd wager that Varghese also suffers from the flip side of those occasional bouts of grandiosity. I'm talking about that nagging uneasiness associated with the increasing pressure to produce, a concern that's exacerbated by the hypercompetitive and ever-more-complex coding world.


When a deadline looms, it's flat-out easy to accept a good-enough approach to application development and unit testing, especially when you're the low programmer on the totem pole. So you'll forgive Varghese if he didn't demand a complete overhaul of Vanguard's unit-test methodology in the months after he was hired. Developer managers, though, should take note. When a talented young programmer shrugs and points to the calendar instead of advocating for solid unit-testing approaches, it may be time to call for a breather. After all, most agree that unit testing is the cheapest point in the development cycle to catch and fix bugs.

How Much Is Enough?

One such approach is to ensure that unit tests collectively exercise a good chunk of the code base, though just how much is an open question. In his excellent article in the Jan. 2006 issue of MSDN Magazine, Israel-based consultant Roy Osherove seems to suggest that code coverage should approach nearly 100 percent.

"How do you know if you have good coverage for your new code?" Osherove asks in "Write Maintainable Unit Tests That Will Save You Time and Tears." "Try removing a line or a constraint check. If all tests still pass, you don't have enough code coverage and you probably need to add another unit test."

Varghese says that his group must demonstrate a minimum of 80 percent unit-test coverage to pass their code into the weekly build, which includes modules from other Vanguard teams.
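Osherove's removal check is easy to try. In this illustration (hypothetical class under test, JUnit 3-style test), deleting the constraint check should make the test fail; if it doesn't, the check isn't really covered:

import junit.framework.TestCase;

public class DiscounterTest extends TestCase {
    public void testRejectsOutOfRangeRate() {
        try {
            new Discounter().discounted(100.0, 1.5);
            fail("expected IllegalArgumentException");
        } catch (IllegalArgumentException expected) {
            // pass: the constraint check is covered
        }
    }
}

class Discounter {
    double discounted(double price, double rate) {
        if (rate < 0 || rate > 1) { // try removing this check
            throw new IllegalArgumentException("rate must be between 0 and 1");
        }
        return price * (1 - rate);
    }
}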


Best Practices unit-test coverage to pass their code into the weekly build, which includes modules from other Vanguard teams.
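Osherove's experiment is easy to stage. In the sketch below (the class and method names are mine, not his), deleting the range check in apply() would leave testHappyPath() green; only the second test would notice the missing guard, which is exactly the coverage gap his remove-a-line trick exposes.

import junit.framework.TestCase;

// Hypothetical class under test.
class Discounter {
    double apply(double price, double rate) {
        // The constraint check to try removing, per Osherove: if no test
        // fails once this guard is deleted, coverage is too thin.
        if (rate < 0.0 || rate > 1.0) {
            throw new IllegalArgumentException("rate must be between 0 and 1");
        }
        return price * (1.0 - rate);
    }
}

public class DiscounterCoverageTest extends TestCase {

    public void testHappyPath() {
        assertEquals(90.0, new Discounter().apply(100.0, 0.10), 0.0001);
    }

    // Without this test, the guard above could vanish unnoticed.
    public void testOutOfRangeRateIsRejected() {
        try {
            new Discounter().apply(100.0, 1.5);
            fail("Expected IllegalArgumentException for rate > 1");
        } catch (IllegalArgumentException expected) {
            // Pass.
        }
    }
}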

'Test It, Write It, Release It'
Another hot unit testing–related topic is test-driven development. According to Houston-based programmer Louis Thomas, TDD follows a "debug it, write it, release it" approach, which, at first blush, seems to have a slightly bollixed order.

"It sounds more intriguing that way, doesn't it? Well, if you replace the word debug with the word test, it becomes much more familiar: Test it, write it, release it," says Thomas, who divides his time between Texas and Minnesota, home base of the small hedge-fund company he works for. "That's TDD in a nutshell."

The TDD advantage, Thomas says, is that it tackles up front the hardest part of debugging software: creating a consistent, reproducible test case.
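In that spirit, one round of test-first development might look like the following sketch. Rounder and toCents() are hypothetical names; the point is the order of operations. The test exists first and fails (it won't even compile), and only then is just enough code written to turn it green.

import junit.framework.TestCase;

// Step one, "test it": written before Rounder exists.
public class RounderTest extends TestCase {

    public void testRoundsToNearestCent() {
        assertEquals(10.35, Rounder.toCents(10.3456), 0.0001);
    }
}

// Step two, "write it": the least code that makes the test pass.
class Rounder {
    static double toCents(double amount) {
        return Math.round(amount * 100.0) / 100.0;
    }
}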

Creating such a test case can be especially difficult for multithreaded code, which is novel to many developers who aren't steeped in server-side applications. In multithreading, as in most other programming contexts, the best way to create unit tests is simply to think about how you might manually step through the code. Here, for instance, is how Thomas would test whether thread A can successfully notify thread B using a condition variable in a fictional multithreaded application:

"I'd want to make sure thread B had checked the condition, found it unsignaled, and waited on it. I'd pause thread B, then have thread A signal the condition. When that had completed, I'd wake up thread B and make sure that it looped back to check the condition, saw that it was signaled, and decided to proceed to the necessary task rather than sleeping again. This is easy enough to do with a good debugger, but how can I automate this? What I do is place checkpoints in the code at the same places I would place breakpoints with the debugger."
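One plausible way to automate those checkpoints in Java is with java.util.concurrent latches standing in where the debugger breakpoints would go. The sketch below is my reading of Thomas's description, not his code; every name in it is invented.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import junit.framework.TestCase;

public class ConditionHandoffTest extends TestCase {

    private final Object lock = new Object();
    private boolean signaled = false;

    // Checkpoints, placed where breakpoints would go: B has checked the
    // condition and is waiting; B has seen the signal and proceeded.
    private final CountDownLatch bIsWaiting = new CountDownLatch(1);
    private final CountDownLatch bProceeded = new CountDownLatch(1);

    public void testThreadANotifiesThreadB() throws InterruptedException {
        Thread threadB = new Thread(new Runnable() {
            public void run() {
                synchronized (lock) {
                    while (!signaled) {
                        bIsWaiting.countDown(); // checkpoint: about to wait
                        try {
                            lock.wait();
                        } catch (InterruptedException e) {
                            return; // give up; the latch below will time out
                        }
                    }
                }
                bProceeded.countDown(); // checkpoint: saw signal, moved on
            }
        });
        threadB.start();

        // Hold here until B has verifiably checked the condition and waited.
        assertTrue("thread B never waited on the condition",
                bIsWaiting.await(5, TimeUnit.SECONDS));

        // Play the part of thread A: signal the condition.
        synchronized (lock) {
            signaled = true;
            lock.notify();
        }

        // B should loop, see the signal and proceed rather than sleep again.
        assertTrue("thread B never proceeded past the condition",
                bProceeded.await(5, TimeUnit.SECONDS));
        threadB.join(5000);
    }
}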

Thomas's example is relatively straightforward, but that doesn't change the fact that threading is seen by some as an impending sea change in programming methodology for which much of the software world is largely unprepared. Most chip vendors, having crashed into the laws of physics a few years ago, are now moving aggressively to multicore CPUs to continue to boost performance. Even programmers steeped in writing applications for desktop PCs and mobile devices may soon need to write clean unit tests for code running on quad-core chips.

Yes, to prepare for this somewhat scary testing future, developers should start reading those whitepapers, case studies, news articles and blog postings. Yet they shouldn't forget that intangible ingredient that goes beyond basic unit-test aptitude.

"I think the most important thing is attitude," says Thomas. "The first test is always the hardest to write."

Geoff Koch has a great attitude about technology journalism and loves hearing from hands-on developers about how his column can be improved. Write to him at koch dot geoff at gmail dot com.

Index to Advertisers

Advertiser                                  URL                             Page
HP                                          www.OptimizeTheOutcome.com      40
Parasoft                                    www.parasoft.net/SDtimes        23
Perforce Software                           www.perforce.com                39
Pragmatic Software                          www.SoftwarePlanner.com/STPA    37
Software Test & Performance Conference      www.stpcon.com                  2-3
Software Test & Performance Magazine        www.stpmag.com                  8
Software Security Summit                    www.S-3con.com                  6
Seapine Software                            www.seapine.com/stptcm          4
Test & QA Report                            www.stpmag.com/tqa              30
Urbancode                                   www.anthillpro.com              35


Future Test

Let's Rob Peter To Pay Paul
By Kingston Duffie

For high-tech equipment manufacturers, success in the current business climate hinges primarily on time to market and customer satisfaction. Yet, with the rising complexity of today's equipment, many manufacturers have to make tough trade-offs between the two. Product quality efforts may be cut short to meet tough timelines, resulting in field failures and unhappy customers.

The underlying problem here? System verification tools and processes haven't kept pace with increasingly sophisticated equipment, thus slowing product release cycles. While software developers now enjoy IDEs and sophisticated tools that offload mundane programming tasks, QA teams still spend much of their time performing repetitive tasks such as configuring test beds, manually executing regression tests, documenting existing test cases and attempting to reproduce bugs. Little time, if any, is left for the creative testing that can tackle the most important aspects of equipment verification.

The industry needs a new approach to system verification because of:

An exponential rise in equipment complexity. As Moore's Law continues to hold true, powerful hardware enables equipment manufacturers to add more features to differentiate their products. They now offer a variety of highly integrated devices, requiring verification of operation with new protocols, middleware and applications.

Software validation emerging as the new bottleneck. Previously, time to market was achieved in great part by the speed at which the software could be developed. This is no longer an issue; the bottleneck has now become validating that the software works.

Globalization. Globally distributed system verification complicates coordination of product releases. When development and system verification teams work in different countries and speak different languages, they lack a standard way to describe defects and test results. Miscommunication and delays can greatly affect field quality.

A less sophisticated customer base. Many high-tech manufacturers have expanded their products to target home users. The average consumer can now purchase equipment such as broadband modems, wireless routers and networked digital video recorders. These users are not the traditional technical customers (network administrators) and are therefore less able to troubleshoot on their own. As a result, support lines are clogged with calls, adding new costs.

A lack of commercially available system verification tools. Until recently, the industry lacked commercially available verification solutions on par with the systems under development. Consequences of this include steady degradation of field quality, declining customer satisfaction, slower time to market and reduced revenue windows.

Historically, commercial verification tools couldn't be delivered because early high-tech equipment was controlled exclusively via custom interfaces. But today's devices are configured, controlled and monitored by a variety of standard mechanisms and APIs. Such standardization is starting to enable new types of commercial automation tools that will dramatically leapfrog today's approaches.

Leverage Lessons Learned
Armed with those test automation solutions, QA organizations also need to adopt a new quality mindset based on successes borne out by other industries.

Apply modern programming philosophies to application software development. Inspiration comes from modern programming movements such as agile and Extreme Programming. For example, the Agile Manifesto (www.agilemanifesto.org) advocates incremental, collaborative development, testing each small change as it's developed to more quickly identify and fix defects. Interweaving verification with development makes it easier to fix bugs by narrowing the scope of inquiry. Verify often, not just at the end of the life cycle.

Make quality a central element of the equipment design philosophy. Like the commitment to personal safety in civil engineering, attention to quality can no longer be casual; the stakes have grown too high. Whether the building stands or the levee holds should not be an afterthought, but the primary goal.

Take advantage of tools to operate at a higher level. Use simulation tools such as the microchip design industry's Hardware Description Language to save time, enabling more rigorous verification. Separating design from process improves quality, accelerates development and relieves designers from having to be experts about underlying processes, so that they can focus on their core expertise.

Be a pragmatist, not a purist. Rather than battling all defects at once, follow the example of Six Sigma manufacturing companies, using Pareto lists to correct defects according to their impact.

These industries have demonstrated that breakthroughs in quality begin with a change in attitude. By increasing expectations for system verification, and by providing testers with the tools to meet those expectations, high-tech equipment manufacturers can position themselves to significantly increase quality, improve customer satisfaction and sharpen their competitive edge.

Kingston Duffie is founder and CTO of The Fanfare Group and creator of FanfareSVT, an IDE for testing high-tech equipment. Duffie has more than 20 years in the computer networking and telecommunications industries, and has founded several high-tech companies.



There's a new way to look at QA software.

HP software is turning I.T. on its head by pairing sixteen years of Mercury's QA experience with the full range of HP solutions and support. Now you can have the best of both worlds. Learn more at OptimizeTheOutcome.com.

©2006 Hewlett-Packard Development Company, L.P.

