

A BZ Media Publication

VOLUME 5 • ISSUE 3 • MARCH 2008 • $8.95 • www.stpmag.com

Building’s Not in the Cards? Minimize Risk When Buying

Motivate a Team With Some Spade Work
The Foundation of Good Testing

Manage Performance By Testing Early and Often

Best Practices: Change Management



April 15-17, 2008 San Mateo Marriott San Mateo, CA

SPRING

A BZ Media Event


break your old testing habits

Learn the Latest Tips and Techniques— Try Out the Newest Technology—All at STPCon! SUPERB SPEAKERS

TERRIFIC TOPICS

Michael Bolton, Jeff Feldstein, Michael Hackett, Jeff Johnson, Bj Rollison, Rob Sabourin, Mary Sweeney, Robert Walsh

Improving Web Application Performance Optimizing the Software Quality Process Developing Quality Metrics Testing SOA Applications Charting Performance Results Managing Test Teams

AND DOZENS MORE! “Great, informative conference for software testers, leads and managers alike. Useful tutorials and technical classes of wide variety—A must-attend for all serious QA/SQE professionals!”

—Alan Abar Software Quality Engineering Manager, Covad Communications

“It solidifies the total testing experience and opens your eyes to alternative approaches and methods that you simply cannot get from books.” —John Croft QA Manager, I4Commerce

AND OVER 70 MORE TO CHOOSE FROM!

Register By March 28 to Get The Early-Bird Rate SAVE OVER $200! Platinum Sponsors

Gold Sponsors

Silver Sponsor

“You’ll find information outside of your daily activities, and options/alternatives to think about new approaches to testing.” —Alex Kang Staff Engineer, Tellabs

www.stpcon.com



VOLUME 5 • ISSUE 3 • MARCH 2008

Contents

14

A BZ Media Publication

COVER STORY

Keep Your Web App From Falling Like a House of Cards

Don’t wait until the end stages to discover that your application’s architecture doesn’t scale well. Test early and often to keep your Web apps from falling apart. By Ernst Ambichl

24

Buy vs. Build: Minimize Risk

Custom-developed and COTS software bring a slippery slope of opportunity—and risk—to system quality. Learn the dangers of buying vs. building, and strategies that can transform risk into profit. By Rex Black

29

Departments

Motivate Your Team With A Few Simple Tricks

7 • Editorial Selling software vulnerabilities to the highest bidder—free market or shakedown?

To find the best practices that charge up your team, look for its motivators and demotivators. Shrink management, dump groupthink, encourage circulation, and watch your team get going.

By Alan Berg

8 • Contributors Get to know this month’s experts and the best practices they preach.

33

Testing From The Ground Up

In software testing, as in construction, a solid foundation is crucial. Ground your project with comprehensible requirements, a well-prepared test strategy and continuous enhancement of the test suite. By Kiran Vankatesh

9 • Feedback It’s your chance to tell us where to go.

11 • Out of the Box New products for testers.

36 • Best Practices From tulips to leaky levees: a comparative study in change management. By Geoff Koch

38 • Future Test Usage metering and software security join forces for IP protection. By Kevin Morgan

MARCH 2008

www.stpmag.com •

5


TAKE THE

HANDCUFFS OFF

QUALITY ASSURANCE

Empirix gives you the freedom to test your way. Tired of being held captive by proprietary scripting? Empirix offers a suite of testing solutions that allow you to take your QA initiatives wherever you like. Download our white paper, “Lowering Switching Costs for Load Testing Software,” and let Empirix set you free.

www.empirix.com/freedom


Ed Notes

VOLUME 5 • ISSUE 3 • MARCH 2008

Editor Edward J. Correia +1-631-421-4158 x100 ecorreia@bzmedia.com

EDITORIAL Editorial Director Alan Zeichick +1-650-359-4763 alan@bzmedia.com

Copy Editor Laurie O’Connell loconnell@bzmedia.com

Contributing Editor Geoff Koch koch.geoff@gmail.com

ART & PRODUCTION Art Director LuAnn T. Palazzo lpalazzo@bzmedia.com

Art /Production Assistant Erin Broadhurst ebroadhurst@bzmedia.com

SALES & MARKETING

Publisher: Ted Bahr +1-631-421-4158 x101 ted@bzmedia.com

Associate Publisher: David Karp +1-631-421-4158 x102 dkarp@bzmedia.com

Advertising Traffic: Phyllis Oakes +1-631-421-4158 x115 poakes@bzmedia.com

Director of Marketing: Marilyn Daly +1-631-421-4158 x118 mdaly@bzmedia.com

List Services: Lisa Fiske +1-631-479-2977 lfiske@bzmedia.com

Reprints: Lisa Abelson +1-516-379-7097 labelson@bzmedia.com

Accounting: Viena Ludewig +1-631-421-4158 x110 vludewig@bzmedia.com

READER SERVICE Director of Circulation

Agnes Vanek +1-631-443-4158 avanek@bzmedia.com

Customer Service/ Subscriptions

+1-847-763-9692 stpmag@halldata.com

Cover Photograph by Alexey Kashin

President Ted Bahr Executive Vice President Alan Zeichick

BZ Media LLC 7 High Street, Suite 407 Huntington, NY 11743 +1-631-421-4158 fax +1-631-421-4130 www.bzmedia.com info@bzmedia.com

Software Test & Performance (ISSN #1548-3460) is published monthly by BZ Media LLC, 7 High Street, Suite 407, Huntington, NY 11743. Periodicals postage paid at Huntington, NY, and additional offices. Software Test & Performance is a registered trademark of BZ Media LLC. All contents copyrighted 2008 BZ Media LLC. All rights reserved. The price of a one-year subscription is US $49.95, $69.95 in Canada, $99.95 elsewhere. POSTMASTER: Send changes of address to Software Test & Performance, PO Box 2169, Skokie, IL 60076. Software Test & Performance Subscriber Services may be reached at stpmag@halldata.com or by calling 1-847-763-9692.


Hackers and The Free Market Security Shakedown

By Edward J. Correia

In a newsletter last October, I wrote about a Swiss company with a name as unusual as its mission. WabiSabiLabi is one of a growing number of companies to begin selling software security vulnerabilities to the highest bidder. As I reported at the time, the model encourages security companies, researchers and others to capitalize their findings in an open marketplace. The idea was that buyers and sellers would be vetted by the company, and transactions would be limited to legitimate organizations. After only two months in business, the company had logged 160,000 unique visitors, 1,000 registered sellers and 150 vulnerabilities.

WSLabi attributed the quick success to a security community anxious for an opportunity to spread their experience and research to an “eager and ready audience of vetted buyers” prepared to pay for the latest information. Patrons of the site (wslabi.com) include enterprises, government agencies and major software vendors in the IT security sector keen on learning about the vulnerabilities as they enter the world.

All that may sound good on paper, but there’s a dark side.

A Russian security research firm called GLEG Ltd. is one of a number of companies that analyze software for security defects and offer the information for sale to the software’s developer. The company on January 1 announced that it had identified a zero-day vulnerability in RealNetworks’ RealPlayer 11 (build 6.0.14.74) that reportedly allows for code execution when RealPlayer opens a malicious song file. GLEG gives this information to its customers and wants to be paid by RealNetworks before revealing the exploit.

Although this is perfectly legal, it might seem more like legalized extortion. Somewhat akin to the local locksmith, after you’ve purchased a new lockset for your home, shaking you down so he won’t sell copies of your house keys.

“A protocol that better protects the security of our software ecosystem would be for vulnerability finders to contract directly with the vendor to find vulnerabilities,” says Chris Wysopal, CTO and co-founder of Veracode, of the incident on his blog. Veracode too offers security testing solutions and services, but operates a bit differently.

If a company is concerned about the security of software it’s about to buy, it can hire Veracode to conduct an assessment. “We will contact the vendor and have them upload their software binary executable to our portal,” Wysopal explains. “We analyze the software and deliver a detailed report of the security issues we find in the code. We also generate a summary report for the customer to understand the security risks of the software.”

This seems a more reasonable approach; Veracode customers know about the vulnerability and can weigh the risks of using the product, while the application’s developer gets what it needs to fix the flaw. There’s just one problem: The solution completely overlooks vulnerabilities of the type found by GLEG—in software that is free. And for software that’s not free, Veracode serves only people who ask for its services, leaving a lot of software unchecked.

I’m a firm believer in the free market, as long as its solutions are fair to all sides. I suppose that the simplest answer in RealNetworks’ case would be to become a customer of GLEG.


Contributors

We’re pleased to welcome ERNST AMBICHL, Borland’s chief scientist, to our pages. Ernst served as chief technology officer at Segue Software until 2006, when the maker of SilkTest and other QA tools was acquired by Borland. He joined Segue in 1998 and helped build it into a leader in its field. At Borland, Ernst is responsible for the architecture of Borland’s Lifecycle Quality Management products. In our lead feature, which begins on page 14, Ernst will school you on methods of load testing early in the development cycle—even when parts of an application aren’t yet completed—with an eye toward preventing downstream performance issues.

REX BLACK has a quarter-century of software and systems engineering experience, and is president of RBCS, a software, hardware and systems testing consultancy. In this issue, Rex lends his considerable expertise to the practice of minimizing the risks of testing and integrating outsourced application components. Beginning on page 24, Rex mixes practical wisdom with real-world experience from working with corporations in dozens of countries to bring you an analysis of the risk factors of integration, how to select a component vendor and how to test its products and processes.

We once again bring you the enjoyable style and wit of ALAN BERG, the author of numerous articles and papers on software development and testing. This time, he draws from his experience on numerous teams to enlighten us on motivating a development team, beginning on page 29. And yes, bribery is one of several techniques he espouses. Alan is the lead developer of Central Computer Services at the University of Amsterdam, a post he has held for more than seven years. He holds a bachelor’s degree, two master’s degrees and a teaching certification.

KIRAN VANKATESH is test lead of the Testing Practice at MindTree Consulting, an IT services and consulting company with offices in the U.S., Europe and Asia-Pacific. Beginning on page 33, Kiran offers a tutorial covering the basics of good testing practice. Kiran has been a software tester for four years, and has a strong conceptual background in financial, healthcare and asset management systems. He is proficient in functional testing, verification and general software testing, and also has worked on real-time transactional applications. Kiran works in MindTree’s Bangalore office and holds a Software Test Engineer certificate from the International Software Testing Qualifications Board.

TO CONTACT AN AUTHOR, please send e-mail to feedback@bzmedia.com.

8

• Software Test & Performance



Feedback


The following letters refer to Edward J. Correia’s editorial “Defect Tracker for Politicians” (Software Test & Performance magazine, Feb. 2008; retitled “Track Politicians Like Bugs” in the Feb. 5, 2008, edition of Test & QA Report newsletter; see http://stpmag.com/retrieve/stp-0802.htm).

FROM FANTASTIC… Just received and read today’s Test & QA Report. I just wanted to say that was fantastic. Jo Compton Los Angeles, CA

… TO REFRESHING… A note to let you know how absolutely refreshing your “Ed Notes” column was in the February 2008 issue of Software Test & Performance. Frankly, I did not even once have to mumble under my breath and grind my teeth as I have been told that I do when reading some of the liberal, progressive propaganda that always seems to work its way—I am sure by no accident— into just about every issue of eWeek. Bravo! Michael Hyman San Diego, CA

… TO IGNORANT POOR TASTE I found this article to be in very poor taste. First of all, it is probably a very bad idea to inflict your political views on the readership of your publication. Second, many of the statements you made were based on faulty logic or ignorance of the facts, or were just plain simplistic and/or not reflective of very high intelligence. You may want to consider avoiding this kind of content in the future. Steve Munger Portland, OR

FEEDBACK: Letters should include the writer’s name, city, state and e-mail address. Send your thoughts to feedback@bzmedia.com. Letters become the property of BZ Media and may be edited for space and style.

SPRINGTIME MEANS SUNSHINE, BASEBALL AND STPCON

Here they come again. No, not Derek Jeter and Barry Bonds. I’m referring to Michael Bolton, Hans Buwalda, Mary Sweeney and Rob Sabourin, who also delivers the keynote on testing in Scrum. These are just a few of the instructors you’ve told us are your favorites, so we’ve brought them back to the Software Test & Performance Conference in San Mateo, along with a few new faces too.

The San Mateo Marriott is where we’ll break out of the box; the performance box, that is. You’ve told us you wanted more performance classes—and we’ve delivered. This year’s conference will be loaded (so to speak) with nearly a dozen classes designed specifically to help you find ways of improving the performance of your applications. We’ve also brought Karen Johnson to town, and she’ll offer a two-part class on charting and presenting performance results using graphical analysis and proven storytelling techniques.

If you were with us last year, you might remember the Hands-On Testing Showcase, a successful event we introduced in San Mateo and expanded last fall in Boston. Well, HOTS is back and will be better than ever, with multiple vendors inviting you to test their latest products while enjoying copious quantities of fabulous food and bottomless bins of potent potables.

We’ll also be introducing Lightning Talks to STPCon, where conference-goers can hear as many as 10 speakers in a single hour give short, targeted lectures on the essence of a subject relevant to your job. Speakers might test-drive a new topic, promote one of their classes or a new pet project, or just provoke thought among the audience with a brilliant concept.

[Photo caption: STPCon in San Mateo this April will feature a demo hall that’s bigger than ever before, stocked to the rafters with the newest products for software testers, and knowledgeable company reps to explain how to put them to use.]

So here it is, your ticket to advancing your testing skills, expanding your contact base and broadening your mind—all at the Software Test & Performance Conference. I hope to see you there, April 15-17, at the San Mateo Marriott.

—Edward J. Correia



Out of the Box

SOAPscope Trio Spots a Test-Team Oasis

SOAPscope Server 6.1, the latest version of Mindreef’s SOA and Web services testing platform, now includes three desktop modules aimed specifically at testers and developers. The company also increased the platform’s support for OASIS WS-Security specifications.

Among the new trio is SOAPscope Architect 6.1, which the company describes as a “design-time governance and SOA quality and testing platform” for authoring policy rules, design-time support, prototyping, change-time and runtime support. The tool incorporates industry standards and specifications for SOA applications and enables design teams to build compliant components in combination with their own customized best practices.

Also new is SOAPscope Tester 6.1, which brings load testing and test automation to the SOA quality platform and helps QA engineers, testers and consultants “identify quality problems and potential performance bottlenecks early in the life cycle.” SOAPscope Developer 6.1 integrates tools for “problem diagnosis and resolution, unit testing and supporting service customers.” The tool allows teams to create, test, deliver and support Web services and SOA components, and automates XML-oriented tasks.

The three new modules are included with SOAPscope Server 6.1, a server-based solution intended for use and collaboration by all members of the SOA and Web services team, including analysts and managers.

An Oasis of Interoperability

As OASIS and other specifications advance, it becomes ever more important for companies to remain compliant so their applications continue to interoperate with those of other organizations. According to the company, all version 6.1 Mindreef products can be used to test Web services that use WS-Security. They do this by invoking and resending protected SOAP messages, running scenario tests using the specified X.509 Token Profile, signing and encrypting. Testers can use SOAPscope tools to create working security profiles for different WS-Security configurations and switch between them for testing.

[Screen shot: Break-out apps in SOAPscope Server 6.1 target application designers, testers and developers.]

Frank Grossman, president and CTO of Mindreef, said, “Project teams have been lacking the ability to quickly and easily check for adherence to standards as services are being created, tested and implemented.” The expanded line was designed with this problem in mind, he added.

SOAPscope Server 6.1 introduces the concept of the service space, a “container that allows teams to organize, collaborate and share assets with other project team members,” and run tests based on predefined profiles, the company said in a statement announcing the new products. SOAPscope Server 6.1 is available now; pricing is based on project scope.

From Here to Aternity

Aternity, which makes user experience management tools, in late January began shipping the Frontline Performance Intelligence Platform, which it claims can pre-emptively detect software problems, monitor application usage and usability, analyze end-user productivity, correlate business performance and help with capacity planning. Licensing starts at US$75,000.

At the heart of the system is a series of Microsoft Certified Agents, which gather data about end-user activities and transactions, and report back to an aggregation service. According to company claims, the agents consume a maximum CPU utilization of 3 percent, and 0.1 percent on average. Other services handle data analysis and management.

“By transforming every desktop into a self-monitoring platform that is end-user-experience aware, we’re enabling these enterprises to harness the frontline intelligence they need to make effective business decisions that will drive increased productivity, performance and usability,” said Aternity president and CEO Trevor Matz in a statement introducing the product at the DEMO 08 Conference in Palm Desert, Calif.



Open Studio Goes Live Talend updated its flagship Open Studio data integration solution in February, adding more than 30 new components, connectivity to more databases and support for event triggering based on realtime data conditions. The tool also now can execute groovy scripts, dynamically load and execute Java classes, and generate graphs compatible with the Portable Network Graphics (PNG) lossless compression specification. According to the company, the latest version, Open Studio 2.3, now fully supports the WSDL specifications, enabling Talend’s data integration processes to become data services components of an SOA. The company also claims performance gains of as much as 600 percent over the previous versions, and major enhancements to debugging and trace modes for viewing data as it flows through processes. These enhancements add expand/collapse, pause/resume and step-by-step viewing modes to the viewing capabilities. Connectivity in Open Studio 2.3 now

Open Studio 2.3 now supports event triggering based on real-time data conditions.

includes JasperSoft iReports, Microsoft Dynamics and SQL Server 2008, Mondrian, Palo, Sage CRM and Vertica, all of which can be used for integration as data targets or sources, the company said. The release also expands support for the data warehousing phenomenon of Slowly Changing Dimensions (types 1, 2 and 3), adding IBM DB2, Ingres, MySQL, SQL Server, Oracle, PostgreSQL and Sybase ASE to its supported list. Talend Open Studio 2.3 is available now; pricing was not disclosed.

In related news, Talend in late January struck a deal under which Microsoft will dedicate resources to help the company optimize performance and integration of Talend’s software products with Windows. In a statement, Microsoft director of platform technology strategy Sam Ramji said that the company’s motivation for the move was “expanding our customers’ options for data integration and extending both Windows and SQL Server.”

The Credo of Zephyr QA Test Management: Of Desktops and Dashboards By now it would be a stretch desktops customized for their to claim that software as a servspecific roles on the team. ice is a new thing, particularAll relevant applications are ly when companies are reportcontained in the desktop and ing half-billion-dollar fiscal can open in multiple windows. years, as Salesforce.com did in Managers might see project 2007. and resource management But a scant few have apps while testers see test case offered SaaS solutions for creation and execution protesters, and none as complete grams. as promised by the forthcomChanges to any data shared ing Zephyr from D Software. Zephyr’s sleek, dynamic interfaces take on the look of a high-end hi-fi among multiple team memZephyr consists of a series systemand present real-time data on project status. bers are updated on all screens of modern-looking, dynamic instantly, according to inforWeb pages centering around the conExecutives, managers and test team mation on the company’s Web site cept of “desktops and dashboards.” members access the system through (www.getzephyr.com).



VDI Spreads The Virtual Love VMware claims to have simplified the way administrators using its tools can connect to and manage the virtual desktops under their control. Virtual Desktop Manager 2 is an enhancement to VMware Virtual Desktop Infrastructure (VDI) that the company claims streamlines secure connections to the data center and provides continuity services that were previously offered only for mission-critical applications. VDI is available now starting at US$150 per concurrent user. Virtual Desktop Manager 2 can connect from a PC or thin client, can manage thousands of desktops at once and “reduces the time it takes to provision a new desktop from hours to minutes,” according to a company news release. The tool also is available in various bundles.

Insight on Byte Code Analysis Source code analysis tool maker Klocwork on Feb. 12 began shipping a new version of Insight for Java, its automated analysis tool that it claims now delivers accurate bug and security vulnerability results from byte code scans, regardless of the compiler and framework used to build it. Insight for Java supports all versions of Java up to and including 1.6, Java EE and ME. It also works with AWT, GWT, Hibernate and JavaMail, and integrates with Eclipse, IBM Rational Application Developer, IntelliJ IDEA and JBuilder 2007 IDEs, as well as ANT and Maven build tools.

Springing Into .NET Development

SpringSource has released Spring.NET 1.1, extending the Spring open source framework for Java to the .NET environment. The tool is available now for free download at www.springframework.net/download.html. According to a company news release, features implemented or improved in version 1.1 include an inversion of control container for configuring application

classes using dependency injection; an ASP.NET framework for Web development with bi-directional data binding and improved localization support, data model and process management; externalized navigation through result mapping; and a UI-agnostic data validation framework. “We believe Spring.NET will prove beneficial to both the .NET developer community as well as the growing number of developers who work on both [Java and .NET] platforms,” said Rob Johnson, CEO and founder of SpringSource, which prior to November was known as Interface21. Johnson also founded the Spring Framework for Java. Also implemented are an aspect-oriented programming framework, portable service transactions, an aspect library, an ADO.NET data access framework and declarative transaction management via XML configuration and attributes. It reportedly integrates with ASP.NET AJAX, NUnit and NHibernate 1.0 and 1.2, and can mix ADO.NET and NHibernate operations in a single transaction.

Linux App? Now You Can GuardIT Arxan Technologies has released a Linux version of GuardIT, giving Linux developers a solution for protecting their applications from tampering. According to Arxan, its solutions are deployed using a binary solution that isn’t intrusive to application performance. “Through an interconnected mesh of small security units called Guards scattered across a compiled binary and then dissolved into the application, Arxan’s GuardIT fortifies the overall software product” against piracy, reverse engineering, insertion of malware and other forms of attack. With the release in late January, GuardIT now works with Linux desktop, server and embedded platforms on x86 and PowerPC systems as well as on Windows and .NET. GuardIT for Linux offers feature parity with the Windows version on both 32- and 64-bit architectures. The new version also introduces anti-tamper, anti-debug, obfuscation and encryption technologies, the company said, as well as the ability to selectively analyze and aim at specific portions of

the binary for targeted code protection. GuardIT is available now; pricing was not disclosed.

GlobalLogic: Here’s Version 1.0 Version 1.0 No, it’s not a misprint. GlobalLogic last month unveiled Version 1.0, a conceptualization and software development service that it says is designed to help startups and small shops get new software applications or ventures off the ground quickly and with relatively low financial outlay. “With Version 1.0, GlobalLogic will provide everything entrepreneurs need to rapidly and qualitatively take an idea scribbled on a napkin to a product of service in the market,” said GlobalLogic CEO Peter Harrison in a statement announcing the new service. “By providing early innovators with end-to-end product engineering services, we let them focus on strategy, marketing, customer acquisition and go-to-market challenges.” Harrison compared the idea to what has been common practice in the semiconductor industry for decades. “We are seeing the emergence of a new breed of ‘fabless’ software company, and we are excited to be an enabler of this new trend.” For its part, GlobalLogic offers to produce early applications prototypes to help companies attract customers and investor feedback, and even fill in as head of engineering or CTO when necessary. The service also is offered to established companies looking to “overcome the roadblocks they typically face when launching an entirely new product, such as slow internal procedures, lack of domain experience and scarce software engineering talent,” according to a document announcing the release. Though pricing wasn’t disclosed, the company claims it can cut timelines and operating costs by as much 60 percent compared with in-house development. GlobalLogic employs nearly 3,000 people and has offices in the U.S., China, India, and Ukraine. Send product announcements to stpnews@bzmedia.com www.stpmag.com •

13


Testing Early and Often Can Help Prevent Web Applications From Crumbling Under Pressure Like a House of Cards



By Ernst Ambichl

Ernst Ambichl is chief scientist at Borland.

Many organizations wait until the end stages of application development to perform load testing. This practice often leads to the discovery that the architecture doesn’t scale well, at a time when it’s too late to do anything about it. The earlier you start load testing during the application life cycle, the earlier the underlying infrastructure’s software defects, design flaws and bottlenecks will be found. A methodology that establishes quality- and performance-related activities early in the application life cycle helps to mitigate the risk of project failure, reduces overall project costs, and increases the application’s quality and performance.

Despite the well-known fact that the cost of correcting an issue increases in each downstream phase, project teams often wait until the end of development to set up and integrate load testing. It’s good practice to perform end-to-end load tests on an application shortly before going live with a new or updated product to prove that it performs and scales as expected, but if the results don’t meet expectations, you can’t do much to salvage the project at such a late stage. Usually these activities are limited to tuning the hardware or software configurations and often, as a last resort, throwing more or faster hardware at the problem. If neither of these activities is successful, it’s back to development to find the root cause of the problem in the application code. In the worst-case scenario, the core architecture isn’t suited for scalability and performance, and you have to redo core parts of the application.

With the emergence of application technologies such as SOA and the Web, you also need to adapt your load testing process to the new requirements and challenges that new technologies bring.

What to Test Early

Decisions about infrastructure and application architecture are usually made early in the application life cycle. Both have a strong impact on application design, implementation and operation, and reverting them late in the development process can be painful. If you want to prove your architectural concept, or compare architectural alternatives, you often start with a prototype that implements your major concepts. By applying the prototype to the planned hardware/software infrastructure early, you can test how well the chosen architecture is suited to the infrastructure it will run on.

Component load testing can be done against business logic components as soon as they’re ready, without the need of a fully developed UI or other software components. With SOA components, early load testing becomes even more critical. The earlier you start developing load tests for the components of your system, the earlier you can start to find performance regressions when those components change. By integrating load tests into your regression test suite, you can avoid detecting performance problems long after they were introduced.
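To make the idea of a regression-style load test concrete, a harness can be as small as a function that drives a component from several concurrent virtual users and fails the suite when a latency budget is blown. The sketch below is illustrative only; `price_lookup`, the user counts and the budget are invented stand-ins, not part of any specific load-testing product:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def load_test(component, users=20, calls_per_user=50):
    """Drive `component` (a zero-argument callable) from several
    concurrent virtual users and return the 95th-percentile latency."""
    latencies = []

    def virtual_user():
        for _ in range(calls_per_user):
            start = time.perf_counter()
            component()
            # list.append is thread-safe in CPython
            latencies.append(time.perf_counter() - start)

    with ThreadPoolExecutor(max_workers=users) as pool:
        for _ in range(users):
            pool.submit(virtual_user)
    # the pool's context manager waits for all virtual users to finish

    latencies.sort()
    return latencies[int(len(latencies) * 0.95) - 1]

# Guard a (stubbed) business-logic component with a latency budget.
def price_lookup():
    # Stand-in for a real component call; burns a little CPU.
    sum(i * i for i in range(1000))

P95_BUDGET = 0.25  # seconds; generous for the stub, tune per requirement
assert load_test(price_lookup) < P95_BUDGET
```

Run as part of the nightly suite, an assertion like this turns a silent performance regression into an ordinary test failure, caught when the offending change is still fresh.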

Focus on Infrastructure And Architecture Some could argue that testing with a focus on infrastructure is a classic benchmarking domain and doesn’t have much to do with load testing an application. Basic hardware/software infrastructures such as network switches, Web servers, firewalls, application servers, DBMSs or messaging middleware are already well known and mature. Often you can even find standard benchmarks for most parts of your infrastructure. But be careful: Standard benchmarks have down sides, as they: • Ignore your application’s individual structure and workload • Exist only for discrete infrastructure parts, not for the specific combination of infrastructure parts that make up your application infrastructure • Usually aren’t available for new application technologies The benefits of early load tests of parts of the application within the target infrastructure are: • Early capacity assessment of the application infrastructure • Early check for scalability of your architecture

16

• Software Test & Performance

• Early identification of relevant performance indicators and configuration settings • Early information for infrastructure tuning By load testing the infrastructure early, you’re able to learn about the configuration settings and metrics that are relevant to performance. Knowledge of the relevant performance indicators and configuration settings is highly valuable, not only for later testing and tuning, but also for setting up the right set of infrastructure monitors for your live application. For this kind of test, especially within large IT organizations, two or more

Let’s assume you need to build a highly scalable architecture for a Web-based application with a high standard for usability and speed of the user interface. The application will be delivered to all locations using the existing corporate intranet. As part of the application development, you’re designing a new HTML/UI framework including third-party AJAX components. You want to ensure early in the process that the existing network infrastructure—as well as the company’s standard Web server infrastructure—will deliver the required performance and responsiveness for the new application. To accomplish this goal, you’ll load test a prototype of the application UI using a new UI framework. The prototype includes only a small subset of the planned application’s UI logic and is already using the framework’s UI controls. Since you don’t yet have the business logic in place, you’re emulating the business logic as “hard-coded” parts

FIG. 2: WITH FULL-TIER PROTOTYPE Load Test

Intranet

App. Server (Business Logic)

Web Server (UI) UI Component 1

BL Component 1

UI Component 2...n

BL Component 2...n

Load Balancer

App. Server

Database Server Data Access Component 1

Web Server

App. Server

Data Access Component 2

MARCH 2008


WEB-APP LOAD TESTING

inside the UI prototype (see Figure 1). Having this UI prototype in place, you already can test how well the UI framework performs on the planned infrastructure. Stepwise, you can do tests against a single Web server, load-balanced Web servers, and across the corporate intranet. The idea of this type of early testing is to determine whether some UI components might not be suitable for your intranet’s network latency, for example, or are consuming too much memory on the Web server to scale well. This can and should be done before you base your whole application on these components.
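The stepwise prototype testing described above boils down to driving the prototype with concurrent virtual users and watching response times. The article doesn't prescribe a tool, so here is a minimal, tool-agnostic sketch in Python; `fake_ui_request` is an invented stand-in for a real HTTP call against the prototype:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_load_test(request_fn, virtual_users=5, iterations=10):
    """Drive request_fn from several concurrent virtual users and
    collect one latency sample (in seconds) per request."""
    def user_session(_):
        samples = []
        for _ in range(iterations):
            start = time.perf_counter()
            request_fn()
            samples.append(time.perf_counter() - start)
        return samples

    latencies = []
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        for samples in pool.map(user_session, range(virtual_users)):
            latencies.extend(samples)

    latencies.sort()
    return {
        "requests": len(latencies),
        "avg_s": sum(latencies) / len(latencies),
        "p90_s": latencies[int(0.9 * (len(latencies) - 1))],
    }

def fake_ui_request():
    """Stand-in for fetching one page of the UI prototype."""
    time.sleep(0.001)

stats = run_load_test(fake_ui_request, virtual_users=5, iterations=10)
```

Swapping `fake_ui_request` for a real request against a single Web server, then a load-balanced pair, then a client across the intranet yields the stepwise comparison the scenario calls for.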

Load Testing a 'Full-Tier' Prototype

In another scenario, you may want to verify that your application's planned distributed architecture actually runs and scales as expected on the infrastructure chosen for deployment. To accomplish this, you can use another prototype of your application for load testing. The prototype needs to contain only a small subset of the real application; it doesn't need to be complete in terms of the functionality it will deliver. It's important that the prototype allows you to test against a small set of use cases that already touch all tiers of the application using the proposed distributed architecture. For a typical Web-based application, these tiers are Web server, application server, database server and external providers, if applicable.

Load tests using a "full-tier" prototype on the target infrastructure can help you get answers to the following questions:

What is the viability of the infrastructure? With a small subset of functionality touching all tiers, you can determine whether the different infrastructure components can work together to deliver acceptable performance.

What are the design flaws that result in bottlenecks? Your software architecture's scalability and performance, which define how the different parts of the infrastructure will work together, can also be verified by a prototype that implements the architectural framework used by the application's components. Even different design alternatives (if available as prototypes) can be tested for performance, scalability and reliability.

FIG. 3: COMPONENTS IN A MULTI-TIER APP [diagram: UI component load tests against UI components on the Web server, BL component load tests against business logic components on the application servers, and DA component load tests against data access components on the database server, behind a load balancer on the intranet]

FIG. 4: EXEMPLARY SOA [diagram: consumer applications (App 1, App 2) on top of a services framework hosting Service A and Service B, with provider applications (App 2, App 3) underneath]

Are there incompatibilities between the different technologies used? Early detection of incompatible parts of the infrastructure can be accomplished when a "full-tier" prototype (as in Figure 2) exists that touches all tiers of the application. I ran into such a problem when testing the servlet engine used in certain Web-based products (in our case, Tomcat) with one of the Web servers we needed to support (in this case, IIS). The problems occurred only under load conditions. Testing our application on Tomcat without IIS as the servlet container never showed similar problems.
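To make the "design flaws that result in bottlenecks" question measurable, a full-tier prototype can be instrumented so that each tier reports its share of the response time. A small sketch, where the tier names and sleep calls are invented placeholders for the real tier work:

```python
import time
from collections import defaultdict
from contextlib import contextmanager

timings = defaultdict(list)

@contextmanager
def tier(name):
    """Record wall-clock time spent in one tier of the request path."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name].append(time.perf_counter() - start)

def handle_request():
    """One use case touching all tiers; sleeps stand in for real work."""
    with tier("web"):
        time.sleep(0.001)   # render the page
    with tier("app"):
        time.sleep(0.002)   # run the business logic
    with tier("db"):
        time.sleep(0.02)    # execute the data access

for _ in range(10):
    handle_request()

totals = {name: sum(samples) for name, samples in timings.items()}
bottleneck = max(totals, key=totals.get)
```

Run under load, a per-tier breakdown like `totals` points directly at the tier that dominates response time, which is exactly the early feedback a full-tier prototype is meant to provide.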

Component Load Testing

Modern multi-user applications are usually built with frameworks that allow for modular, componentized design and architecture. Componentizing your application is the first and most important step toward beginning your testing earlier, as individual components become ready. Especially with components that are accessible remotely and/or concurrently from multiple clients, functional testing should be expanded to component load testing as soon as possible.

Often, functional tests for components are already completed by developers with the help of standard unit-testing frameworks like JUnit or NUnit. With the right tools in place, it's only a small step to extend these JUnit/NUnit-based tests into small component load tests. My experience has shown that many elusive performance problems can easily be found when exposing remote and multi-instance components to moderate load conditions.

What components should be load tested? It's important to concentrate your load testing on remote components and/or components that are used concurrently by multiple clients (Figure 3). From a technology view, these are components that expose their functionality via interfaces like RMI, RPC, CORBA, (D)COM, .NET Remoting and, of course, Web services. (SOA will be handled in more detail later in this article.) I also typically include SQL-based data access components in my roster of candidates for load testing. Database performance remains one of the critical elements in a distributed application architecture.

With the evolution of SOA technology comes the need to adapt your load testing approach to SOA's new requirements and challenges. First, let's define SOA. As defined by XML.com:

SOA is an architectural style whose goal is to achieve loose coupling among interacting software agents. A service is a unit of work done by a service provider to achieve desired end results for a service consumer. Both provider and consumer are roles played by software agents on behalf of their owners.

Loose coupling is the magic phrase in this definition, and is the enabling factor that allows us to start testing as soon as the service contract (or interface) between the software agents is defined. In theory, SOA architectures are well suited to "testing early" principles, as services should be built with a high degree of autonomy and with minimal dependencies on the environment they run in. The terms Web services and SOAP are purposely omitted from this definition of SOA, which is a much broader architectural concept.

FIG. 5: TESTING SERVICE B IN TEST FRAMEWORK [diagram: a simulator standing in for consumer App 2 drives Service B inside a services test framework, alongside Service A and the provider applications]
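The "small step" from a functional test to a component load test is simply to re-run the existing check from many concurrent clients and verify that it still passes and that a shared invariant holds. The article's context is JUnit/NUnit; Python is used here only to keep the sketch short, and `CounterService` is an invented stand-in for a real remotely accessible component:

```python
import threading

class CounterService:
    """Invented stand-in for a component used concurrently by many clients."""
    def __init__(self):
        self._lock = threading.Lock()
        self.value = 0

    def increment(self):
        with self._lock:
            self.value += 1
            return self.value

def functional_test(service):
    """The original unit-style check: one call must return a positive count."""
    assert service.increment() > 0

def component_load_test(test_fn, service, clients=20, calls_per_client=50):
    """Re-run an existing functional test from many concurrent clients."""
    errors = []

    def client():
        for _ in range(calls_per_client):
            try:
                test_fn(service)
            except AssertionError as exc:
                errors.append(exc)

    threads = [threading.Thread(target=client) for _ in range(clients)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return errors

service = CounterService()
errors = component_load_test(functional_test, service)
# With the lock in place, no increments are lost. Remove the lock and the
# shared-state invariant below is likely to fail under load: the kind of
# elusive problem moderate concurrency flushes out.
```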

Factors that influence your load testing approach for SOA applications include:

A decreased predictability of use. The agility SOA provides for building new applications based on existing services leads to more unpredictable usage patterns and workloads compared to classic "n-tier" applications. As a service provider, you may not know who might ultimately consume your service at the time you're developing it. Hence, testing early for the scalability of your services is important.

Increased complexity. Since applications based on SOA often consume multiple services (such as composite applications), the service call chain to fulfill an application request can get quite long, especially when using services that themselves consume services.

Availability of service providers comes late in the application life cycle. This is especially true when your application depends on a service provided by a third party, such as a business partner. If this is the case, you need to ensure that you can test your application when not all service providers are available.

Availability of service consumers comes late in the application life cycle. You need to ensure that you can test your service before the service consumers begin consuming it.

SOA facilitates distributed development. Often, distributed teams or even different organizations work on service providers and service consumers. To avoid finger-pointing when performance problems are found during system testing, it's important to test the services in isolation.

Complex root-cause analysis. Due to the complexity and the distributed nature of SOA applications, identifying the root cause of SOA performance problems is harder than in traditional n-tier systems. The earlier you detect problems in isolation, the easier it will be to fix them.

Increased impact of change. SOA-based applications typically evolve over time and change constantly: new applications are added on top of existing services, new providers are added for existing services, and new services are created on top of existing services. A simple change in a service can impact multiple applications consuming that service. This also introduces the need to constantly retest and carefully monitor your services whenever you change them.

Different types of load tests can be done in different stages of system development, depending on your testing strategy and the availability of your SOA components.

FIG. 6: TESTING SERVICE A IN TEST FRAMEWORK [diagram: a simulator standing in for consumer App 1 drives Service A inside a services test framework, with Service B replaced by a mock service]
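Figure 6 shows Service B factored out behind a mock. A controllable mock of the kind described later under "Testing Without a Provider" might look like this sketch; the class and its knobs are invented for illustration and belong to no real framework:

```python
import random
import time

class MockService:
    """Invented stand-in for a dependent service that isn't available yet.
    Its knobs let a load test emulate tardy calls, timeouts and bad data."""

    def __init__(self, delay_s=0.0, timeout_rate=0.0, bad_data_rate=0.0, seed=42):
        self.delay_s = delay_s
        self.timeout_rate = timeout_rate
        self.bad_data_rate = bad_data_rate
        self._rng = random.Random(seed)   # seeded: misbehavior is reproducible

    def call(self, payload):
        time.sleep(self.delay_s)                      # emulate a tardy service
        roll = self._rng.random()
        if roll < self.timeout_rate:
            raise TimeoutError("mock dependency timed out")
        if roll < self.timeout_rate + self.bad_data_rate:
            return {"status": "ok", "result": None}   # emulate incorrect data
        return {"status": "ok", "result": payload.upper()}

# Exercise the dependency 200 times and tally the injected outcomes.
mock_b = MockService(timeout_rate=0.2, bad_data_rate=0.1)
timeouts = bad_data = good = 0
for _ in range(200):
    try:
        reply = mock_b.call("order-123")
        if reply["result"] is None:
            bad_data += 1
        else:
            good += 1
    except TimeoutError:
        timeouts += 1
```

A load test against Service A can then dial up `delay_s` or `timeout_rate` on the mock to observe how Service A degrades when its dependency misbehaves.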

Isolation Load Test

Load testing should be done before you integrate the service with your consumer applications or integrate it into the services framework. Isolation load tests are the "cheapest" load tests because you can do them without having the whole infrastructure in place. In addition, you typically won't need a lot of virtual users to test a single service's behavior under load conditions (a synthetic workload, in contrast to the realistic loads used for end-to-end load tests). This makes such tests good candidates for regression testing. As soon as the service changes, you can run isolation load tests to check whether the behavior of the service has changed under load conditions. Often, a fix for a defect in a component's functional behavior introduces a degradation of performance.

Testing Without a Consumer

When developing services, you often have no access to the client application, or the application isn't ready for testing. Also, if a service is consumed by multiple applications, you won't reach sufficient test coverage when testing is done with only one client application. In the absence of a client application, traditional test-script creation techniques such as recording client interactions aren't possible. So, even if you aren't working in an agile development shop, developing functional tests as part of service implementation is a good practice. You might even say it's a necessity. These functional tests can also be reused for load testing.

Testing Without a Provider

Although SOA fosters loose coupling between components and therefore minimizes dependencies between components, real dependency always exists and can't be reduced. Real dependency is the set of features or services that a system consumes from other systems. So how can you test a service that depends on another service before that service is available? In object-oriented programming, you use mock objects, which are simulated objects that mimic the behavior of real objects in controlled ways. Similarly, you can create mock services for services that aren't available or that you want to factor out of your test (Figure 6). Factoring out services by emulating their behavior through mock services offers the advantage of allowing testers to control the behavior of the emulated service. This allows you to easily build load testing scenarios in which you emulate the misbehavior of dependent services, such as service calls that are tardy, time out or return incorrect data.

Testing Without a Services Test Framework

Developers usually don't work within the deployment infrastructure. They typically use a small subset of the deployment infrastructure or develop within a test framework (Figure 5) to execute and debug their work, which is different from the target framework. Conducting small load tests as part of developer activities (tests that can most often be derived directly from unit tests), without the burden of setting up big infrastructures for testing, helps to move load testing nearer to the developer and earlier into the application life cycle. You can run small load tests with your nightly developer builds, which can signal changes in performance as soon as possible.

FIG. 7: TESTING SERVICE B IN SERVICES FRAMEWORK [diagram: a simulator standing in for consumer App 2 drives Service B deployed in the real services framework, alongside Service A and the provider applications]

Integration Load Test: Services Framework Integration Test

After isolation testing, in which you test the service in your services test framework, you can replace your test framework with the services framework used for deployment. This lets you test how well your service works in the target environment. While this usually adds the work of deploying your services and providing a test environment with the target services framework, you can reuse the tests




you've already written. You won't perform integration load testing (Figures 7 and 8) as often as your isolation tests (which can run on every check-in), but it should be done on a regular basis, such as every time development passes a build to QA. This ensures that QA isn't wasting time testing builds that don't pass the performance criteria checked by your services framework integration tests. You'll also most likely increase the workload, testing the scalability of the services framework in combination with your service.

Extending your isolation tests to services framework integration tests helps to answer questions like:
• How does the service scale within the services framework?
• How much overhead is the framework adding to the service?
• Does the framework correctly handle the life cycle of the service?
• What is the overhead (payload) of enabling security?

FIG. 8: TESTING SERVICE A IN SERVICES FRAMEWORK [diagram: a simulator standing in for consumer App 1 drives Service A deployed in the real services framework, with Service B replaced by a mock service]

Integration Load Test: Service Interaction Test

As important as it is to test services in isolation as early as possible to detect performance problems, it's equally crucial to test the services in combination to detect problems related to their interaction with other services. No isolation test will ever give you absolute certainty that your system will pass even the simplest integration test, even if your isolation tests cover almost all your code. This is especially true of the performance, scalability and stability aspects of your SOA-based application.

Establishing integration load tests as soon as two interacting services are available helps to find integration problems early. Rerunning integration load tests (regression testing) as soon as dependent services change helps to identify performance degradations at the time they're introduced. With service interaction tests, you'll extend the test infrastructure to better reflect the target system and extend the workload patterns to more realistic scenarios (Figure 9). Also, your test scripts will need to reflect that they're testing the integration aspect, not the isolation aspect, of the services.

FIG. 9: SERVICE A AND B TESTING IN SERVICES FRAMEWORK [diagram: simulators standing in for consumer Apps 1 and 2 drive Service A and Service B together in the services framework, with the provider applications attached]

System Load Test: End-to-End Test

Loosely coupled architectural implementations such as those of an SOA create additional complexities for end-to-end load testing (Figure 10). Services that share a common infrastructure or platform require coordinated load testing to truly replicate production-like states. Providing the test infrastructure, creating and setting up these tests, identifying production-like workloads, analyzing results and finding the root cause of performance problems are all more difficult than in more traditional n-tier systems. Everything that can be done to identify possible performance problems before you actually perform your system load test helps to lower the cost of fixing performance problems and mitigate the risk of project failure due to wrong architectural decisions that can't be undone at the end.

Regression Load Test

Every change in a system might introduce regressions not only in functionality, but also in performance, scalability and stability. Focusing only on functional test automation leaves performance problems undetected until the final system load tests. Integrating load tests into your regression test suite avoids the danger of detecting performance problems long after they're introduced. Because it's expensive to set up and integrate load testing into a test automation process, not all types of load tests are suited for regression load tests. Some good candidates for regression load testing are:

Isolation load tests. Such load tests can

be done on a regular basis (ranging from per-check-in tests to nightly scheduled builds).

Services framework integration tests. Isolation load tests also should be executed regularly in the target services framework.

Functional tests have simple success conditions (usually pass/fail per test case, based on assertions in your test script), which makes it easy to automate results analysis. This isn't the case for load tests, which usually require analysis of multiple metrics to determine a pass or fail status. To automate that process and flag "failed" load tests, you can use the following methods:
• Compare performance-relevant metrics such as response times, throughput rates and resource consumption to defined baselines (static thresholds) that you've set up for each individual load test.
• Compare the change/delta of performance-relevant metrics to historic measurements of the same test. In this case, you don't need to set up thresholds for each test.
Both methods have their advantages and disadvantages. Decide case by case which one best suits your requirements.

Testing in Production: Application Monitoring

Load testing SOA applications under real-life conditions is extremely complex (Figure 11). It's therefore valuable to extend your testing approach to the production phase of your application to gather feedback for your testing. Two techniques extend testing into production, and both provide valuable feedback about the accuracy of your load testing:

Active service monitors. By reusing existing load-testing scripts and executing them on the live system, you get an accurate picture of how the performance of the system under test compares with that of the live system. Leading load-testing tools have integrations with application performance monitoring frameworks, which makes it easy to reuse your load testing assets as active monitors.

System and in-depth monitors. By using system monitoring techniques, you can keep track of service usage patterns. Input/output data can be monitored with in-depth monitoring techniques. Results for service execution counts and input data can be fed back into the testing process to create more accurate workloads.

FIG. 10: END-TO-END TESTING [diagram: simulators standing in for all consumer applications drive Service A and Service B together in the full services framework, with the provider applications attached]

FIG. 11: TESTING IN PRODUCTION [diagram: real users and active monitors exercise Apps 1 and 2 on the live services framework while in-depth monitoring collects service performance metrics (e.g., service response time), service system metrics (e.g., service execution count) and service in-depth metrics (e.g., service input data)]

Early and Integrated

Load testing can be done in the early stages of development and applied to various components of an application before the final end-to-end load test. Early infrastructure load tests can mitigate the risk of investing in a specific infrastructure that doesn't scale or perform as needed. By using prototypes of the application for load testing, you can prove architectural concepts before you base your whole application code on these concepts. Component load tests help to isolate performance problems early—before they become difficult to find and expensive to fix.

The integration of load testing throughout the development process has never been more important: due to increasing complexity, we face less predictability of usage and more dynamic changes in applications. Because of SOA's loosely coupled nature, unit and component testing approaches can be adapted for load testing, delivering early results about the performance and scalability of your services-based components. Integrating load tests into your regular regression testing suite will help you detect performance regressions as soon as they're introduced. You can extend your testing approach to the production phase of your application by reusing load testing assets for application monitoring, gathering feedback about real usage and real performance.

For optimal success, load testing should be conducted throughout the project life cycle, starting soon after an application is conceived and continuing until it's retired.

REFERENCES
• "What Is Service-Oriented Architecture?" Hao He, Sept. 30, 2003, O'Reilly xml.com, www.xml.com/pub/a/ws/2003/09/30/soa.html
• Web Services Glossary, W3.org, http://dev.w3.org/2002/ws/arch/glossary/wsa-glossary.html
• "Best Practices for Web Application Deployment," keynote, Ernst Ambichl, Segue Software, Total Performance Management Symposium, Mar. 18, 2004
• "Choosing a Load Testing Strategy," whitepaper, Ernst Ambichl, Segue Software, 2005
• "Adjusting Testing for SOA," David S. Linthicum, SD Times, Aug. 15, 2007
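The two flagging methods listed under "Regression Load Test" (static baselines versus deltas against history) can be sketched as follows; the metric names and the 15 percent tolerance are illustrative choices, not values from the article:

```python
def over_baseline(metrics, baseline):
    """Method 1: flag any metric that exceeds its static threshold."""
    return [name for name, value in metrics.items() if value > baseline[name]]

def over_history(metrics, history, max_regression=0.15):
    """Method 2: flag any metric that degraded more than 15% vs. its historic mean."""
    flagged = []
    for name, value in metrics.items():
        mean = sum(history[name]) / len(history[name])
        if value > mean * (1 + max_regression):
            flagged.append(name)
    return flagged

latest   = {"response_time_ms": 480, "cpu_pct": 62}
baseline = {"response_time_ms": 500, "cpu_pct": 55}
history  = {"response_time_ms": [400, 410, 405], "cpu_pct": [60, 61, 59]}

baseline_failures = over_baseline(latest, baseline)
history_failures  = over_history(latest, history)
```

Note that the two methods disagree on this data: the static threshold misses the response-time creep (480 ms is still under the 500 ms budget) that the historical comparison catches, while only the baseline flags the CPU budget. That is the kind of trade-off the article says to weigh case by case.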



By Rex Black

Photograph by David Franklin

More and more projects involve integration of custom-developed or commercial-off-the-shelf (COTS) components, rather than in-house development or enhancement of software. In effect, these two approaches constitute direct or indirect outsourcing, respectively, of some or all of the development work for a system. While some project managers see such outsourcing of development as reducing overall risk, each integrated component can bring with it significantly increased risks to system quality.

If your organization does or is planning to outsource, you'll need to understand the factors that lead to these risks, and some strategies you can use to manage them. I'll illustrate the factors and the strategies with a hypothetical project. In this project, assume you're the project manager for a bank that is creating a Web application that allows homeowners to apply for a home equity loan. You've purchased components from two suppliers, including a COTS database management system from one of them. You'll hire an outsourced custom development organization to develop the Web pages, the business logic on the servers, and the database schemas and commands to manage the data.

First, let's analyze how to recognize the factors that create quality risks, and identify strategies you can use to manage those risks.

Rex Black is president of RBCS, a software, hardware and systems testing consultancy.


When You Must Buy Versus Build, There Are Ways To Help You Avoid Any Slip-ups

Quality Risk Factors in Integration

Figure 1 (page 27) shows four factors that lead to increased quality risk for a system. Let's take a look at each, one at a time.

One factor that increases quality risk is component coupling, which creates a strong interaction with the system—or consequence to the system—when the component fails. For example, suppose the customer table on the Web application database becomes locked and inaccessible under normal load. In such a case, most of the other components of the system, being unable to access customer information, would also fail. The database is strongly coupled to the rest of the system.

Another factor that increases risk is irreplaceability. This occurs when few similar components are available, or the replacement is costly or requires a long lead time. If such a component creates quality problems for your system, you're stuck with them. For example, the database package you choose might be replaceable, provided that you don't do anything non-standard with it. However, the development organization will want to be paid for the custom-developed Web application, and should you choose to try to replace it, off-the-shelf products might not exist.

Yet another factor that increases risk is essentiality, where some key feature or features of the system will be unavailable if a certain component doesn't work properly. For example, suppose you planned to include a pop-up loan planner on the first page of your application to allow customers to evaluate various payment scenarios. If that component

failed, you could still deliver most of your application's major features, since the planner is not essential to the system. But if the subsystem that accesses a credit bureau to check customer credit scores doesn't work, you can't process loan applications. Checking credit scores is essential to the application.

The final factor that increases risk entails vendor quality problems. This factor can be compounded if it's accompanied by slow turnaround on bug fixes when problems are reported. If there's a high likelihood of the vendor sending you a bad component, the level of risk to the quality of the entire system is higher. For example, if you buy a commercial database from a reputable, established vendor, or if you select a custom development organization with a proven track record, then you'll probably have fewer problems. If you use a new open source database that has never been used in commercial applications before, or if you use a newly opened custom development organization, you'll probably have more problems, particularly if technical support is poor or absent altogether.

It's obvious how these factors could affect a typical data center application. Now imagine a weapons system for which defense contractors intend to develop software to run on COTS platforms. Here the situation is similar, though the irreplaceability and vendor quality problems could be exacerbated by limited choices for components and vendors.

How might these risks be mitigated? In my experience, I've seen and used four effective strategies.

Trust Your Vendor

One strategy is simply to trust the vendor's component quality and testing, and assume they'll deliver a sufficiently good, more-or-less working component to you. This approach may sound naïve on its face, but project teams do it all the time. If you choose this course, I suggest you do so with your eyes open. Understand the risks you're accepting. Allocate time and finances as a contingency for poor component quality. The more coupled, essential and irreplaceable the component, the greater the impact of such a situation.

To continue with our example, you might choose to trust both the custom development organization and the database vendor. You could make such a decision rationally by checking the development organization's references, assuming they can provide references for customers who used them for projects very similar to yours in design and scale. The same is true for the database vendor, though you might have to do your own research if their sales and marketing staff cannot or will not provide references.

Relying solely on an acceptance test is practically the same as trusting your partners in the custom development situation. For the COTS database, you could run an acceptance test at the beginning of the project, using simple models to evaluate database performance, reliability and data quality under your intended load conditions. However, for the custom-developed component, you'll have to wait until you receive the component before you can acceptance-test it. And if the component fails, what options do you have? Even if the contract stipulates that you don't have to pay under these circumstances, you face a good chance of a lawsuit, and you also bear the actual (and opportunity) costs of starting over with a new custom development organization.

26

• Software Test & Performance

Manage Your Vendor Another strategy is to integrate, track and manage the vendor testing of their component as part of an overall, distributed test effort for the system. This involves up-front planning, along with sufficient clout with the vendor to insist that they consider their test teams and test efforts subordinate to (and contained within) yours. To continue with our hypothetical project, imagine that you’re working at a large bank and that the custom development organization is a small firm. They’ll probably be motivated to get and retain your business. They’ll be especially flexible if they think that you have particularly good testing processes and that they can learn something from you. In exchange for the effort you expend managing their testing, you’ll have early warning should quality problems emerge, and therefore more options to deal with such an outcome. Conversely, though, if you’re buying the database from a large COTS vendor, they probably see your business as a small part of their larger product sales picture. They have their own test processes, product road map and target release dates. It’s highly unlikely that they’ll be receptive to offers—much less insistence—that you manage their testing operation. Even smaller COTS vendors, when selling a COTS component, want to sell you what they’re offering. They’ll likely be averse to the possibility of an open-ended situation under which you might redefine the component’s requirements through expansive testing and ambiguous or evolving pass/fail criteria for the tests. I’ve seen more than one COTS vendor get burned by customers when they allowed this to happen. Smart COTS vendors (large or small) would probably insist that this management of their testing, and any resulting bug fixes and change requests, be considered a customization of their component subject to time-and-materials billing. 
The only likely exceptions to such a condition would arise when the COTS vendor saw a strong possibility that working with you to fix problems and change the product would benefit their current or future customers sufficiently to justify the risks they’d be taking.


REDUCING RISK

Fix Your Vendor
Another option is to fix the component vendor’s testing or quality problems. In other words, you go into the situation expecting to either revamp the vendor’s processes or build new processes for them from scratch. Both sides must expect that substantial effort, including product modifications, will result. Once again, a key assumption is that you have the clout to insist that you be allowed to go in and straighten out what’s broken in their test and quality processes.

This might sound daunting, but on one project the client hired me to do exactly that, and it worked out well. The vendor was compensated for their part of the work, including the modifications. And my client felt that the vendor brought enough technical innovation and capability to the project to justify their management of the quality and testing problems. With expectations aligned from the start, both sides were happy.

Going back to our example, suppose you assess the outsource development organization before the project and find their testing and quality processes lacking. They accept your assessment. You offer to help them fix the issues that were identified, and they accept that offer. If your assessment identified the major problems, and if you and the vendor can resolve those problems within the scope, budget and schedule for the project, and if continuing to work with that vendor makes sense for other reasons, this can succeed.

However, it’s difficult to imagine that the database vendor would accede to the request for an assessment of their testing to begin with, not to mention allowing you to come in and implement changes to it. The very fact that a COTS vendor might agree to such a request should set off alarm bells in your mind. You should then ask yourself if they actually have a COTS product to sell or if you’re dealing with a prototype masquerading as a product.

Test Your Vendor’s Component
A final option, especially if you have proof of incompetent testing by the vendor, is to disregard their testing, assume that the component is coming to you untested, and retest the component. You’ll have to allocate time and effort for this, and realize that the vendor will likely push to have every bug report you submit reclassified as a change request except in the most egregious cases. You also have to ask yourself if the vendor might decide, at some point, to cut their losses and disengage from the project. You’ll want to make sure you have contingency plans in place should that happen.

I’ve had to do this for clients on system testing projects. On one notable project, a vendor sold my client a mail server component that was seriously buggy. We became aware of the problems by a series of misadventures in which promised deliverables continued to show up late and with substantive bugs, as well as fit-and-finish problems that gradually eroded our confidence in them. Eventually, the component did work and was included in the system, but the entire process took a few months, not the one-week deliver-and-integrate that was in the project plan. Fortunately, slack elsewhere in the schedule prevented this from becoming a project-endangering episode.

[FIG. 1: THE FOUR CORNERS OF QUALITY RISK — Component Coupling, Component Irreplaceability, Vendor Quality Problems and Component Essentiality all feed the increased risk to system quality posed by a component.]

Returning once again to our example, suppose that you become aware of serious quality problems in the early prototypes delivered by the custom development organization. You can no longer trust their testing. There’s not much point in managing a test process that is clearly broken. There’s no time remaining in your schedule to go in and fix their testing process. So, if you intend to stick with this vendor, you’ll need to start a serious testing effort to take over where they’ve failed.

Suppose you become aware of similar problems with the database vendor. You can confront the vendor with the problems. But if they delivered something to you with the assertion it would work, can you really trust them to resolve the problems now? Would they be likely to let you manage their testing? If you try to do the testing yourself, do you think they’ll fix the problems you find? If the component isn’t essential, you’re best off omitting it, or if it is replaceable, you’re best off replacing it.

Whether for a COTS component or a custom-developed component, these are clearly nasty scenarios, and at some point you’d have to ask yourself how you managed to get into such trouble. If you ran acceptance tests on a COTS component, why weren’t the problems identified? If you thoroughly vetted your custom developer, why did they prove incompetent? How should your quality risk-mitigation strategy for outsourced components change for future projects? These are good questions, and should be saved for the project retrospective. During the project, the focus must remain on achieving the best possible outcome.
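The four factors in Fig. 1 can be turned into a rough comparative rating for deciding where to spend mitigation effort. The 1-to-5 scales and the simple additive model below are illustrative assumptions of mine, not something the article prescribes:

```python
def component_risk(coupling, irreplaceability, vendor_quality_problems, essentiality):
    """Combine the four Fig. 1 factors (each rated 1-5) into a single
    4-20 risk score for a component. Illustrative scoring only."""
    for factor in (coupling, irreplaceability, vendor_quality_problems, essentiality):
        if not 1 <= factor <= 5:
            raise ValueError("each factor must be rated 1-5")
    return coupling + irreplaceability + vendor_quality_problems + essentiality

# A loosely coupled, replaceable utility from a solid vendor scores low...
low = component_risk(1, 1, 1, 2)    # 5
# ...while an essential, irreplaceable, tightly coupled component from a
# vendor with known quality problems sits at the top of the range.
high = component_risk(5, 5, 5, 5)   # 20
```

Even a crude score like this makes the trade-offs discussable: it shows why the buggy database in the example, being essential and tightly coupled, deserves far more contingency planning than a swappable utility.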

Implications, Considerations and Success
All of these options can carry serious political implications. Should problems arise, the vendor is unlikely to accept your assertion that their testing staff is incompetent or their quality unacceptable. They might well attack your credibility. If a senior manager made the choice to use that vendor—and it might have been an expensive choice—that person might side with the vendor against your assertion. So, you’ll need to bring data to the discussion about these strategies if the triggering conditions arise during the project.

Better yet, if you’re dealing with a custom-developed component, see if you can influence the contract negotiations up front to require the vendor to submit their tests and their test results, along with the offer to let you perform acceptance testing by your team prior to payment. Build sufficient contingency plans into your schedule, including an allowance for replacement of the vendor during the project if things start looking bad. Make sure the vendor understands that you’re paying attention to quality and that payment depends on delivery of a quality product on time. It’s amazing how motivational such clauses can be.

For COTS components, arrange a careful component selection process, including vendor research, talking to references and acceptance-testing using carefully designed tests. Identify alternative sources if possible. Consider the possibility and the consequence of omitting the component if it isn’t essential.

Finally, DIY
Finally, with the risks to system quality managed at the component level, it’s still possible to make a serious mistake in the area of testing. Even the best-tested and highest-quality components might not work well in the particular environment in which you intend to use them. Plan on integration-testing and system-testing the integrated system yourself.

Integration of COTS and outsourced custom-developed software is a smart choice for many organizations. It’s a trend that continues to grow as organizations gain experience with it. To ensure success on your next integration project, consider the factors that create quality risk in such scenarios. Select strategies that mitigate those risks. Build risk mitigation and contingency plans into your project plan. If you do these things and execute the project carefully, with an eye on testing and quality, you can control the risks and reduce the likelihood and impact of component quality problems.


By Alan Berg

Does your development team listen and respond to all your requests? Not likely, but if so, you’re one of the lucky few who may not need to read this article. However, if you’re like most development managers, your team has productivity highs and lows, feels down about taking blame unfairly, or is frustrated by any number of other problems common to teams of all kinds.

Developers come in all shapes and sizes, ages and mentalities, and are wrapped in many project experiences and development methodologies. In my years as a developer, I’ve met a broad range of interesting personalities—strong-willed and submissive alike. And for as many types of developers, there are probably as many specific techniques for motivating them.

Alan Berg is the lead developer at the University of Amsterdam.

What follows are my random observations and nonscientific suggestions based on my experience of seeing what works to motivate developers—and what does not. Disclaimer: Any resemblance to true life is purely coincidental. I’m a happy developer in a highly motivated, highly literate and mildly agile team. The stories I write about here are about others and not myself, nor the great organization that I work for.

Motivators/Demotivators
In my daily life, I’m a developer, part of a small team that builds big things for lots of error-prone humans—human end users who never cease to amaze me with their ability to find the weakest of weak links in any given application. Luckily for my well-managed group, the team remains motivated. Occasionally I get to go to conferences or meet other like-minded workers of a similarly cynical nature. If you exist long enough in your trade, over time, repetitive patterns and practices emerge that work for or against the good of the collective. I call these practices motivators and demotivators.

Some Real-World Advice for Getting the Most From Your Team Players

Acceptance Environments
In general, a well-balanced developer is not the same as a well-balanced system administrator. System administrators love to administer such that their systems remain solid, strong and stable. System administrators attend to many machines and don’t expect or enjoy being woken up at 2 a.m. because of a rampaging cron job or a badly configured persistence manager. They strive for gray consistency and quiet boredom. Conversely, developers are born (or in my case, genetically manipulated) to build new products and risk error through the creation of futuristic services.

A point of potential friction between the two professions—one that loves stability and the other risk—is the delivery of shiny new things into production. For custom solutions, there will be a burn-in time where application issues need to be resolved and the bugs patched. Crisis is in the air, and downtime, though unpopular, may become an infectious reality. Planning decreases this burn-in time by giving you proper development, acceptance and production infrastructure as part of the logistics. The acceptance environment needs to be as similar as possible to the production environment to catch the unexpected sniffle or rude belch early. By using acceptance systems, administrators can effectively learn the implications of the new and how to manage the shiny application, giving feedback that helps them develop their own elusive functional requirements.

The acceptance environment is a point of contact between developers and system administrators, and generally helps decrease tension between the two warring tribes in preparation for the time when systems fail and new systems cost real administrators real headaches. When the system administrators are happy, the developer’s pain level is eased and he can sleep nights safe from any long-term retribution. The developer can walk confidently to the coffee machine without fear of shadows and evil end users, not to mention sharp, pointy sticks.
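One concrete way to keep an acceptance environment “as similar as possible” to production is to diff their configurations automatically. This sketch (the config keys are invented for illustration) flags the mismatches that tend to surface only at 2 a.m.:

```python
def config_drift(acceptance: dict, production: dict) -> dict:
    """Report settings that differ between an acceptance and a production
    environment config, mapping each drifted key to (acceptance, production)."""
    keys = set(acceptance) | set(production)
    return {k: (acceptance.get(k), production.get(k))
            for k in keys
            if acceptance.get(k) != production.get(k)}

prod = {"db_pool_size": 50, "jvm_heap_mb": 4096, "debug": False}
acc  = {"db_pool_size": 5,  "jvm_heap_mb": 4096, "debug": True}

drift = config_drift(acc, prod)
# {'db_pool_size': (5, 50), 'debug': (True, False)} -- the two settings
# that would make acceptance results misleading for production.
```

Run as part of provisioning, a check like this gives administrators and developers a shared, neutral artifact to argue over instead of each other.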

Everything Is NOT a Priority
There are many types of heroic project leaders leading the charge into the daily battle, making plans, shielding resources and generally managing to control and give insight into the evolution of their pet project. Sometimes things have to happen yesterday, which is fine, as developers are well-known time travelers. However, when project leaders plan every target for the day before and give the target A1 priority, it’s difficult to fit the entire team into the time machine. At some point, the mini-black hole will fail and the team will be sucked into the resulting implosion.

Therefore, motivating developers requires realistic priorities and planning—unless management can generate more warped space to enfold the whole company and its customers. Development teams tend to learn quickly that project leaders who set realistic pain levels gain more respect and actual man-hours than the more strident and commanding warriors. Developers tend to hate exhaustion, and all humans in the long term are motivated by a marathon pace, not an energy-draining sprint.

Note: Agile project tactics with short iterations of weeks, when implemented correctly, make planning resources for the current and next iterations easier to predict and achieve than longer cycles. I suspect that the “everything is top priority” mentality is soon washed out of the system.

Remove Management Layers When Things Go Wrong
A percentage of all software projects fail. Therefore, the more your team does, the more failures you can realistically expect. A corollary truth is that the people who don’t work hard tend not to fail as often as those who do, and therefore have an advantage whenever finger pointing starts. If your team doesn’t occasionally fail, you’re not producing something real, you’re not keeping up with changes in the market, or your average throughput is suboptimal. Sure, failures occur due to human error, and some organizations are more error-prone than others. But all in all, it’s not healthy to always succeed. Constant success probably implies that you have low standards or your QA or marketing mechanisms have failed to detect the obvious.

With project management in traditional organizations, developers tend not to be in direct communication with stakeholders and high management. Instead, they depend on the comfortable, user-friendly layers in between. This lack of direct communication can be a cause of frustration and demotivation for any human wishing to defend themselves against perceived injustices. The person on the deck of the ship shouting “Turn left!” may have looked like part of the problem just before the ship struck the iceberg. But in hindsight, it’s clear that he may have been offering the correct solution at just the appropriate moment.

The closer you are to the wheelhouse, the more responsibility you have to talk and listen to the deck hands. Give the poor sods the opportunity to defend their isolated position, even if they do speak what might seem a foreign language and in excessive detail. And by listening to them, you just might learn a thing or two. Agile project management techniques emphasize the need for active stakeholders who, by default, are in more intimate contact with the team than more hierarchical planning approaches.

Motivate developers by not blaming them. Realize that people who work hard and are productive statistically chalk up more failures. It may be better for a team to turn in nine successful projects a month with one failure than to deliver only six per month with zero failures.
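The closing comparison (nine successes with one failure beats six with none) is simple throughput arithmetic. A quick sketch, using the numbers from the paragraph above:

```python
def successes(attempted_per_month: int, failures: int) -> int:
    """Projects actually delivered per month, after subtracting failures."""
    return attempted_per_month - failures

# The bold team attempts 10 projects and loses 1;
# the cautious team attempts only 6 and loses none.
bold = successes(10, 1)      # 9 delivered
cautious = successes(6, 0)   # 6 delivered

# More real output, despite the visible failure on the scoreboard.
assert bold > cautious
```

The point the arithmetic makes is cultural, not mathematical: judging teams by failure count alone rewards the team that attempts the least.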

Avoid Groupthink
Just because your organization favors 286-based systems or has never heard of AJAX doesn’t mean the same groupthink will help in a competitive marketplace. Developers tend to be the nearest to the source of change and usually have the least mental inertia. Years of training in specific problem domains, days of courses learning about the implementation of subtly chosen patterns, and minutes of training by the candy machine force many in the profession to constantly reexamine their biases. Therefore, grant the team some early inclusion in the project filtering. This can act as a sanity check for slowly decaying, aging technologies, but only if you listen to and act on their input. Change is good if managed and maintained at a palatable rate. Motivate your developers further by including them rather than landing memos on their desks or e-mail inboxes about projects hidden from their view until the moment before the nuclear bomb impacts the asteroid.

Just-Right Documentation
Think of documentation as the porridge in the story of Goldilocks and the Three Bears—it should be just right. How many people will read the documentation and how often? Know the documentation’s real value and don’t underestimate how burdensome it is to write similar words for similar products at similar parts in the process. Do you really need that extra diagram when a use case, sequence and class diagram are enough to signal the true intent of the product? Motivate your developers by asking for the paper as needed and not “that paper that’s like the last project’s” or that fills in a template from your expensive project-tracking software.

Catch Bugs Early
Nothing is more irritating than catching a bug late in the process. Not only has the developer moved on from that project, he may not know the product so well any more, or the bug may be hidden by nasty unaccountable special effects. Poof! The user enters a big number. Bang! The machine rattles. Clunk! An unexpected output occurs, or, worse still, a blank page pops up. It seems that the bug only rears its ugly, wart-ridden head when the relevant parties are really, really busy, or on holiday in Hawaii.

No single technique will catch all lifestyle-threatening coding errors. However, a consistent and wide-spectrum approach is more likely to succeed than five minutes of inspection at the end of a cycle. A nightly build infrastructure—or better still, a continuous build infrastructure—can eliminate some classes of issues. Regular code reviews and function tests also are helpful. Sure, these methodologies cost more energy in the short term, but at that point, the code base is hard-etched into the team’s five-second memory span. Again, agile projects with short iterations have the potential to catch bugs in coding early.

Open Standards
Call it a personal bias, but I like open standards. Because when a particular piece of software fails or delivers less than promised in the marketing blurb, I can pull the software out and replace it with another standards-compliant product. For example, if you’re building a Java-based portal system, it’s nice to know that if you write your code based on JSR 168, most of the personalization functionality via portlets is transferable to other similarly developed infrastructures. And if you have a problem, you’re more likely to find answers by Googling. In comparison, code written for hidden or proprietary APIs can be fragile and cause extra work. Developers like working smart—not hard.

New Hardware
Eight minutes may not seem like a long time, but when you’re waiting for a local build to finish so you can continue your work, it can be an eternity. Multiply that by 10 and you’re close to the number of builds performed in a typical developer’s workday. And if you’re busy and need to accomplish a lot of complex code refactoring, those eight minutes can feel like 40. Worse still, when a bug appears at 4 p.m. on a Friday and you’ve got a date at 7 p.m., a slow debug cycle is like scratching a chalkboard with your fingers. Developers need powerful machines. And while it might be company policy that development take place only on central servers, there are always real reasons to work on some issues locally. Buy decent-sized hardware for your developers, and you’ll see code bashers turn to smiley faces when the new equipment is unveiled.

Play’s Cool
Like many IT workers, developers enjoy exploring and learning new things. To them, playtime can mean surfing the Web for design ideas, hunting for useful code snippets or downloading a new application framework and taking it for a spin. Three years ago, the best practice for building large-scale, dynamic Java-based Web sites was (arguably) to use the Struts framework. Two years ago, perhaps, the JavaServer Faces framework was more in style. And with today’s social communities, AJAX is all the rage. Of course, mixing and matching these well-known frameworks with newer ones also can prove favorable. But forcing the mass production of code under a single framework—though it may gain you short-term efficiencies—is bound to result in outdated products. The monolithic approach risks market agility.

It’s therefore important to be open-minded about how developers spend their time. To keep your company and its applications competitive, your developers need to stay up-to-speed on the latest standards, specifications and techniques. Otherwise, your products will be slow and appear old and worn. Developers tend to be the first to see and intuitively understand new technologies, and being unable to bring a product forward can be frustrating for them. I recommend allocating an amount of time each week for developers to play with and learn new things, and to channel that effort through the group. This can be done with weekly presentations, which help keep the group motivated toward building the best product possible.

Don’t Overdo It
At the other extreme are the time and mental energy required to learn a new framework. This is especially true for enterprise-level programming languages such as Java. Multitudes of helpful open source code exist: Spring, Hibernate, Struts, Cocoon, Xerces, iBatis, JDOM, MyFaces and AJAX methodologies, to name just a few. And every year, more community standards, portlets, repositories, Enterprise JavaBeans, XML databases, Java Management Extensions and security specs are added. It’s hard to know where to start. Too much information can cause overload and a feeling of loss of control. It’s important to have a well-defined mechanism in place with which learning tasks are distributed fairly across the group by the software architect or development manager. This will help reduce unnecessary anguish and mental burden, and keep everyone motivated toward learning.
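Under “Catch Bugs Early,” the failure mode was the user who enters a big number. That is exactly what a small boundary test, run in a nightly or continuous build, flags the day the bug is introduced rather than weeks later. A sketch, with the function under test and its limit invented for illustration:

```python
def format_invoice_total(amount_cents: int) -> str:
    """Render a money amount, rejecting values outside the supported range
    instead of producing the mysterious blank page described above."""
    if not 0 <= amount_cents < 10**15:
        raise ValueError("amount out of supported range")
    return f"${amount_cents // 100}.{amount_cents % 100:02d}"

# Tests like these belong in the build that runs every night:
assert format_invoice_total(123456) == "$1234.56"

try:
    format_invoice_total(10**18)   # the "big number" a user will eventually enter
except ValueError:
    pass                           # caught and reported, not a blank page
```

The value is less in any single check than in the habit: every build exercises the boundaries, so the bug surfaces while the code is still fresh in the developer’s five-second memory span.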

Pumping Up Circulation
Hard as it may be to believe, not all developers are reclusive. I’ve known a few bouncy, happy types who have no problem socializing and spreading the word. Others like to stay focused and plug away at code generation. A third group would like to be more in contact, but is stuck under an avalanche of tasks. For those with their feet sticking out from the snow, circulation is needed to motivate and increase the ability to remain sharp.

Sending the team to conferences is a good way to get the blood circulating again. Select and budget for a number of developer and tester conferences every year and send two or more team members at a time. Which ones to send first can be based on seniority, productivity or some other factor that might breed friendly competition. Without this occasional change of scenery, efficiency will fall over time and motivation will decrease. Conferences are a great way to recharge batteries and wipe away cynicism while at the same time keeping the team well-trained.

Blame Light
In an organization that strives for a consistently high volume of throughput, a percentage of failure is inevitable. When channeled properly, it can teach valuable lessons and serve to mature the team. Unfortunately, failure is usually accompanied by the side effect of blame. When managed badly, blame can be like a ball with explosives inside. It might be passed between team members for months without incident, but when the ball explodes, most of the players are maimed.

As with most humans, developers like to understand why things went wrong without having their heads shot off; no one likes to be blamed unfairly. If given the chance, developers would like nothing more than to systematically find and eliminate mechanisms of negative process. If the GUI was not as required, perhaps the methodology needs a little “wire framing.” If performance is always an issue, preproduction stress testing may need to be the norm. But all that goes out the window if fingers are pointed at someone who’s sure it wasn’t their fault.

To avoid that corrosive will-sapping, I recommend code reviews, nightly builds and functional testing by outsiders. Code reviews, if managed correctly, can hammer out bad programming habits without causing unnecessary dust-ups. Nightly builds can catch the big compile errors and divide the blame into small pieces. That helps keep fixes easy, fast and relatively lightweight. Finally, some good, old-fashioned functional tests by neutral players can preemptively find faults. One thing to watch out for is that the neutral players aren’t perceived as a task force being brought in as a rescue mission. What developers absolutely hate is to be blamed and “bailed out” at any point during a project.

IDE Conformity
For reasons that include IT support, security and economies of scale, many companies—particularly larger enterprises—require all employees to use the same desktop applications and operating system. For some employees, this can be a mistake that results in a severe drain on energy, morale and productivity.

Developers, especially experienced ones, are often set in their programming ways. For example, I’m partial to Eclipse, which has many advantages over a simple text processor, such as syntax checking, and also boasts a large and thriving marketplace of free plug-ins. And for a small amount, I can get the MyEclipse environment for enhancements and extra polish. On the other hand, one of the most experienced developers and proficient problem-solvers I’ve ever met uses Emacs for his Java, Perl and PHP programming. And a good many system administrators still invoke vi from a command line to bang away at configuration files.

To the managers upstairs, the choice of a single IDE and a few plug-ins appears sound in terms of training and other reasons already listed. But these are just words when thrown at a frustrated developer, because managers may not realize the years that developers spend personalizing their environments and learning all the time-saving control codes and keystrokes that boost their productivity. Many a flame war has been fought over this and lesser issues. And many a manager skeleton is and will be hidden under the rubble of assumption. My advice is to fight such changes as if the life of your project depends on it, because it just might be true. So unless you have no choice, leave the IDE be.

Likes and Dislikes
Developers like to be included and valued in project discussions. We like to know about problems up front—the earlier the better. Buy us fast new hardware, and we smile. Allow us to play and force us to circulate with others of our kind, and our attitude will stay positive, our energy fresh and our skills agile. Keep us in the loop with communication and allow us to investigate and defend ourselves when things go wrong. Don’t place blame or allow others to do so before all the facts are in.

We know about technology, but often have less influence over technological decisions than others. We don’t like writing documentation, so please don’t make us if it’s never going to be used. Don’t pit us against IT, but foster alliances and good relationships with support and system admins. This will minimize confrontation, which we abhor, and help us to diffuse unwanted tensions when working within the acceptance environment. Keep iterations short, challenge assumptions and spot failures early. This keeps fixes quick, and blame light.


To End Up With a Quality Product, Your Team Must Begin With a Good Set of Testing Practices

By Kiran Vankatesh

Q Photograph by Chris Hall

uality is a key differentiator for business growth. Good software testing practices help ensure a quality process, which ultimately leads to quality software. These good practices include understanding requirements, a wellprepared test strategy and continuous enhancement of the test suite. It’s also important to know when testing is complete—something usually based Kiran Vankatesh is a senior test engineer with MindTree Consulting. MARCH 2008

on specific indicators of product readiness. This article addresses the fundamentals of the testing process, explores the good practices of product testing and offers recommendations for a phased approach to component-, feature- and system-level testing. Testing effectiveness can be improved by testing with all possible combinations and techniques. However, it’s difficult to do this and sometimes impossible to ensure that the product has zero defects. There are various approaches to improving the effectiveness of testing

an application. Can it be installed from a CD? What other factors or inputs are going unseen or unchecked? It’s easy to overlook such factors. But as software testers, we need to be aware of and test them, because they may not always function as expected. Inputs should be selected uniformly among the different input domains. If the testing effort is limited due to cost or schedule constraints, I suggest that you base your testing on highly used input factors only. These include product specification, design documents, product reviews, competitive informawww.stpmag.com •

33


TESTING PRACTICES

the application’s intended purpose.

Develop Test Cases Early One good practice is to start developing test cases during requirement analysis. Once the requirements are available, the testing team can write high-level test cases to demonstrate an understanding of customer requirements and business scenarios. As the project progresses, test cases should adhere to the test strategy, performance of the tests and enhancement of the test suite, and adherence to quality and value additions. Test case documents can be provided to the customer for continual review and approval. Once software begins to be sent to the testing team, test cases are executed and the test case documents are updated for features that weren’t clearly defined in the requirements

Photograph by Lou Oates

tion, schedules, feedback from previous versions, customer surveys, look-and-feel specifications, software architecture, test plans, usability data and software code. By focusing on these factors, testers can adequately determine operational reliability.

Processes used by testers are verified by the software quality auditor, who also ensures that the testing processes are followed correctly. This involves an internal audit of processes, from the preparation of test case documents to the release of the product. Central to the process is measuring the quality of the developed application and authenticating its correctness, completeness, security, capability, reliability, efficiency, portability, maintainability, compatibility and usability. In terms of technical investigation, the process can be performed on behalf of the stakeholders and may also include reviews, walkthroughs or inspections.

The testing team should receive new builds at regular intervals, each new build containing defect fixes identified in the previous release. For every release, the testing team must follow the same practices to help minimize testing time and increase efficiency.

End products of good quality blend user satisfaction, a compliant product, good quality and delivery within budget and schedule. An end product should be reliable, modifiable, understandable, efficient, usable and portable.

Software is developed to fulfill the specific needs of a person or a group; i.e., the customer. Part of the testing team's job is to understand the customer's business model, wants and needs. This can be done by using surveys, direct feedback and face-to-face meetings. Once this information is understood, the exact approach for testing can be determined without assumptions. The only assumption the testing team should ever make, in fact, is that of the customer's perspective, such that it reveals any defects or diversions from

BENEFITS OF GOOD SOFTWARE TESTING

Reduced testing time. Consistent tasks over time can be done more quickly. Automated test suites run faster and with fewer errors than manual tests, and require less human intervention.

Improved product quality. Better processes result in more complete coverage, more organized and well-defined procedures, and better overall quality.

Consistent test results. Consistent methods ensure uniformity of testing scope and uniform test results.

Improved productivity. Documented processes, techniques and implementation help to identify defects early and prevent regression.

Improved test coverage. Test cases created by manual testers can drive the automation team, which can more easily create and maintain the automation test scripts.

Reduced cost. Automation reduces tester involvement, improves effectiveness, productivity and consistency of test results, and reduces redundancy.

34 • Software Test & Performance
MARCH 2008
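The automation benefits described in the sidebar come from scripting checks once and replaying them on every build. Here is a minimal sketch of such an automated suite; the `login` and `total_price` functions are hypothetical stand-ins for a product's API, not anything from the article.

```python
# Minimal sketch of an automated check suite: scripted once, replayed
# against every new build with no human intervention.
# The functions under test are hypothetical stand-ins for a product's API.

def login(user, password):
    return user == "admin" and password == "secret"

def total_price(items):
    return sum(price for _, price in items)

def run_smoke_suite():
    """Run every check; return a {check_name: passed} report."""
    return {
        "login accepts valid credentials": login("admin", "secret"),
        "login rejects bad credentials": not login("admin", "wrong"),
        "cart totals line items": total_price([("book", 10), ("pen", 2)]) == 12,
    }

if __name__ == "__main__":
    report = run_smoke_suite()
    for name, passed in report.items():
        print(f"{'PASS' if passed else 'FAIL'}: {name}")
    # A failing check should stop the pipeline from promoting the build.
    assert all(report.values()), "build is not stable enough for further testing"
```

Because the same checks run identically every time, the suite also delivers the "consistent test results" benefit: any difference between two runs points at the build, not at the tester.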


TESTING PRACTICES

the requirements document. The team confirms that the test case documents comply with the requirements. Documents should be maintained in a CMS tool to ensure that document versions and changes are tracked.

Designate an Automation Team
Setting aside resources for an automation team saves manual testing time and resources. The team should be dedicated not just to development of the automated testing, but also to maintaining incremental advancements of the product. Regression testing also can be assigned to this team. When frequent builds or executable files are to be tested, test automation plays a vital role in reducing manual testing time. Specialized automation testers are required to maintain a structured testing setup. This requires proficient and certified testers for advanced testing methodologies and comprehensive usage of popular testing tools.

Unconventional Test Matrices
An unconventional test matrix (decision table) is used for testing complex functions that frequently need regression testing. The unconventional test matrix provides insight into the important concept of the product testing life cycle.

Monitoring the specifications in the requirements document and verifying that all requirements have been met by the time of release can be cumbersome and laborious. If this aspect of the testing life cycle isn't considered high priority, it can result in confusion and arguments between the QA team and stakeholders. This problem can be solved by using a traceability matrix.

Created before test cases are written, the traceability matrix is a complete listing of all that has to be tested. Sometimes that means one test case for each requirement; sometimes several requirements can be validated by a single test scenario. This depends on the kind of application that is available for testing. Irregular test matrices or decision tables for testing complex functions can be created whenever necessary during the testing activity, using matrix elements such as requirements, function specifications, design specifications, source code files and test cases.

A traceability matrix provides a cross-reference between a test case document and the functional/design specification document. This can show whether the test case document contains tests for all the identified unit functions from the design specification. From this matrix, the percentage of test coverage can be calculated from the ratio of total tested to total untested functionality.

"If you don't identify defects, the customer will." This well-known axiom is likely to pop up whenever a software product isn't appropriately tested. Releasing a product with defects can occur for many reasons. But common to all scenarios is speculation about tested and untested parts of the software. The traceability matrix is an important tool for helping avoid this scenario.

Define Roles and Responsibilities
A vital practice for a high-quality product is to divide the product into modules and assign them to individuals. Define roles and responsibilities that allow the testers to perform their functions with minimal overlap and without uncertainty as to which team member should perform which duties. One way to divide testing resources is by specialization in particular application areas and nonfunctional areas. The testing team also may be able to take on more roles than those of testing the product. And it's usually a good idea to assign maintenance and automation responsibilities of modules to the module leads.

Maintain Test Case Design Documents
Testing is a continuous process involving frequent software builds and multiple product releases and updates. It's therefore important to create test case documents and keep them up-to-date along with change requests and release notes.

The consistent use of good testing practices will help improve the quality of the tested software, help teams overcome the challenges and effectively deal with software defects. If you're open to learning from others who have come before, you're much less likely to repeat their errors. !

ALL GOOD PRACTICES GO TO TESTING
Good practices should be followed for every new version until the product is stable enough for release.

Smoke testing. This should be standard practice for any new build before any other testing begins. This is a series of "shallow and wide" tests to ensure the build can perform minimum functions and is stable enough for further testing.

Bug verification. All the bugs reported against the previous release need to be verified in the new build and closed if they are fixed.

Regression testing. The testing team must verify whether the areas around fixed bugs have affected other dependent functionalities.

Installation testing. Installation of the software on customer-specific environments to ensure that it installs and functions properly.

Compatibility testing. Verification of how well a product performs on various hardware, databases, operating systems and network environments. Testing should be done on all intended target platforms.

REFERENCES
• J. Bayer et al., "PuLSE: A Methodology to Develop Software Product Lines," Proceedings of the Fifth ACM SIGSOFT Symposium on Software Reusability (SSR'99), Los Angeles, CA, May 1999
• B. Beizer, "Software Testing Techniques," Second Edition (Van Nostrand Reinhold, 1990)
• A. Bhor, "Component Testing Strategies," Technical Report UCI-ICS-02-06, University of California, Irvine, June 2001
• P. Clements and L. M. Northrop, "Software Product Lines: Practices and Patterns" (Addison-Wesley, 2001)
• Ron Patton, "Software Testing: Master the Testing Concepts," Wikipedia

www.stpmag.com • 35
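The traceability-matrix bookkeeping the article describes, requirements cross-referenced to test cases, with coverage computed from the tested-to-untested ratio, can be sketched in a few lines. The requirement and test-case IDs below are invented for illustration; a real matrix would be exported from the team's test-management tool.

```python
# Sketch of a traceability matrix: each requirement maps to the test
# cases that validate it. IDs are invented for illustration.
traceability = {
    "REQ-001 user login":     ["TC-01", "TC-02"],
    "REQ-002 password reset": ["TC-03"],
    "REQ-003 audit logging":  [],          # identified, but not yet tested
}

def coverage_report(matrix):
    """Return (percent_covered, untested_requirements)."""
    untested = [req for req, tests in matrix.items() if not tests]
    covered = len(matrix) - len(untested)
    percent = 100.0 * covered / len(matrix) if matrix else 0.0
    return percent, untested

percent, untested = coverage_report(traceability)
print(f"requirement coverage: {percent:.1f}%")   # 66.7% for the sample data
for req in untested:
    print("untested:", req)
```

The untested list is the point of the exercise: it replaces speculation about which parts of the software were tested with an explicit gap list the QA team and stakeholders can review before release.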

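The unconventional test matrix (decision table) the article recommends pairs combinations of input conditions with expected outcomes, so a complex function can be regression-tested by simply replaying the table on every build. A sketch follows; the discount rule and its table are a hypothetical example, not taken from the article.

```python
# Sketch of a decision table (an "unconventional test matrix") for a
# hypothetical discount rule: each row pairs a combination of input
# conditions with the expected outcome.

def discount(is_member, order_total):
    """Hypothetical function under test: discount percentage for an order."""
    if is_member and order_total >= 100:
        return 15
    if is_member or order_total >= 100:
        return 5
    return 0

DECISION_TABLE = [
    # (is_member, order_total, expected_discount_pct)
    (True,  150, 15),
    (True,   50,  5),
    (False, 150,  5),
    (False,  50,  0),
]

def replay(table):
    """Replay every row; return the rows whose actual result differs."""
    return [(m, t, expected, discount(m, t))
            for m, t, expected in table if discount(m, t) != expected]

assert replay(DECISION_TABLE) == [], "decision table regression failed"
print(f"{len(DECISION_TABLE)} condition combinations verified")
```

Because the table is data rather than code, adding a newly discovered condition combination is a one-line change, which is what makes this format convenient for functions that need frequent regression testing.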

Best Practices

Shore Up Those Levees ’Cause Change Is A-Comin’
Geoff Koch

California and the Netherlands provide two appropriate settings for consideration of what works and what doesn’t when it comes to change management. Sure, both are hotbeds of technology. The Golden State famously gave rise to the Silicon Valley, while Holland boasts what may be the highest per capita rate of broadband Internet usage in the world. However, the two also share a more dubious distinction: They’re smack dab in the path of rising sea levels that are likely to be an inevitable consequence of global warming.

Here you’re forgiven for flipping to the cover of the magazine to confirm that you’re still in the familiar confines of Software Test & Performance. Rest assured, there are absolutely no references to Al Gore in the paragraphs that follow, though I’ll argue with any of you that he does deserve that Nobel Prize for calling for what amounts to a worldwide change management exercise. But perhaps the disparate approaches to the dangers of climate change will provide a useful lesson or two about dealing with disruption in your programming work.

Let’s start and spend most of our time in California, where the most serious problems related to rising sea levels will likely be felt in the Sacramento Delta northeast of San Francisco. Once one of the largest estuaries on North America’s west coast, the region is now crisscrossed with hundreds of miles of manmade levees, many of which date back to work done by Chinese laborers in the late 1800s.

The levees by and large prevent ocean saltwater from the western edge of the delta from mixing with the freshwater pouring into the region from the east by way of the Sacramento and San Joaquin rivers. More than 20 million Californians get their drinking water from the freshwater half of the delta, which also supplies much of the state’s huge agribusiness industry. So, higher sea levels predicted by all those credentialed experts in fact imperil far more than the rich farmland and the tens of thousands of people that comprise the communities immediately abutting the levees.

Despite this backdrop of such staggering potential cost and many striking failed-levee lessons associated with Hurricane Katrina, change has been slow. About the only sign of activity is the formation, by California Gov. Arnold Schwarzenegger, of the to-date toothless Deltavision Blue Ribbon Task Force to plot a more productive future for the delta. Task force chair Phil Isenberg says in a Jan. 14 story on NPR’s All Things Considered newscast that things have gone so slowly because “any decision that gets taken in the delta in one sense or another involves over 200 different government agencies.”

Golden State’s Gaffe
Since software developers are so good at seeing patterns, I’m hoping that certain aspects of this story might tickle the gray matter in your brain. One such pattern is that profound change is on the horizon in the software development world in the years ahead. Climate change is being fueled in large measure by the fact that every year, more people on the planet have the ability to produce and consume more goods and services. Similarly, software development is no longer an exclusive, high-end pursuit. More code is being written and maintained by teenagers and $12-an-hour offshore


programmers, and more innovation is coming from the nonprofessional ranks of tech aficionados in the United States and abroad.

First among the sweeping trends in change management are “advances in Web 2.0 and the increasing role of end users in more interactive Web-based applications,” says Arash Shaban-Nejad, a researcher and Ph.D. candidate at Concordia University in Montreal. “This shifts the end user’s position toward being a knowledge producer rather than an absolute consumer.”

Another pattern that should be obvious is that the response varies wildly depending on the source of the change. “What we resist, as if our lives depended on it, is being changed,” says Peter de Jager, an organizational change management consultant in Brampton, Ontario. De Jager points out that IT is littered with the detritus of well-intentioned new product features that end users might well have loved if they’d had a hand in shaping these features in the first place. I’d wager even money that when it comes to global warming, most of those resisting changes to the status quo in California’s delta are personally concerned about the issue but are annoyed at being told by others, especially politicians, how to respond.

Of course, attempts to seek consensus can go too far, which is why a third clear pattern is visible: A well-defined and somewhat imperious change management process becomes imperative as complexity increases. This flies in the face of one of the noble norms of programming, particularly among the open source crowd, which holds that writing software should be a fairly egalitarian exercise. Surf message boards and you’ll invariably stumble upon earnest pleas to always hear from various constituencies and seek consensus about how the code base should evolve.

However, the idea that lots of quasi-independent, fair-minded coders will necessarily converge on what’s best for the wider community breaks down as scale increases, as do attempts to cajole the democratic process into addressing climate change. “It’s not a wonder nothing happens rapidly,” complains Deltavision chair Isenberg in another Jan. 14 story, this one for NPR’s Morning Edition newscast. “It’s a wonder anything happens at all.” This is precisely why “large companies in all sectors, including the Microsofts of the world, will always practice change management,” says Jessica Keyes, president of New Art Technologies, Inc. and author of several books on software development, including “Software Configuration Management,” published in 2004. “Without it, projects tend to become chaotic very quickly.”

Best Practices columnist Geoff Koch relied heavily on Joe Palca’s excellent reporting for NPR’s Climate Connections series for this column. Write to Koch at gkoch@stanfordalumni.org.

Gouda—and Good Approaches
The approach to rising seas in the Netherlands, a low-lying country wedged against the North Sea and known almost as much for its dikes and surge barriers as for its windmills, clogs and conspicuously liberal attitudes toward recreational drugs, is a hopeful alternative to California’s chaos. And the pragmatic stance adopted in Holland holds powerful change management lessons, as well.

For starters, Dutch policymakers seem to readily acknowledge that their country’s current infrastructure simply won’t be sufficient much longer to deal with the ocean’s encroaching edges. Proposed solutions include building barrier islands in the North Sea and beginning to allow controlled flooding even if it means relocating farms and communities, ideas that represent an everything-is-on-the-table way of thinking that’s utterly absent in California.

Years ago when I worked at Intel, Yahoo co-founder Jerry Yang spoke at our annual sales conference. Alluding to the importance of embracing change, Yang reminded attendees of Einstein’s oft-cited definition of insanity: doing the same thing over and over again and expecting different results. I think both Yang and Einstein would agree that simply shoring up antiquated version control systems or leaky levees is unlikely to be sufficient to hold back rising tides, metaphorical or otherwise.

Beyond their embrace of the immediate problem, residents of the Netherlands also trump Californians in their ability to take the long view and treat change as more of a permanent fixture than a passing fad. “Dealing with climatic change is a flexible process that will never end any more,” Dutch water engineering executive Piet Dircke told NPR’s Joe Palca. “We will have to adapt again and again.”

Dircke’s sentiment was echoed nearly verbatim during my interview with Roger

S. Pressman, author of “Software Engineering: A Practitioner’s Approach,” of which more than one million copies have been sold. “Being adaptable is probably the most important thing that any software organization can be,” says Pressman. Asked whether change management is getting easier, Pressman is quick with a reply: “No. In fact, if you look at the trajectory of software over the next 20 years, some truly scary and amazing things are going to be happening. But none of them, in my view, are going to mitigate the need to change the journey midway through it.” In other words, it’s time to start thinking well beyond the next few release cycles. There’s little doubt that everything from open APIs to global development teams to the semantic Web points to increasing change in software. The only question is whether you’ll be ready before today’s bootstrapped processes and tools for channeling all this change finally give way. !

Index to Advertisers

Advertiser                                           URL                               Page
Automated QA                                         www.testcomplete.com/stp          10
Checkpoint Technologies                              www.checkpointech.com/BuildIT     28
Empirix                                              www.empirix.com/freedom           39
Hewlett-Packard                                      www.hp.com/go/quality             40
iTKO                                                 www.itko.com/lisa                 8
Seapine                                              www.seapine.com/qualityready2     4
Software Test & Performance Conference Spring 2008   www.stpcon.com                    2-3
Test & QA Newsletter                                 www.stpmag.com/tqa                6



Future Test

Painless Piracy Prevention
Kevin Morgan

Over the next four years, an estimated US $200 billion in software revenues will be lost due to piracy. According to the Business Software Alliance’s latest global piracy study, conducted by IDC, piracy impacts not just the bottom line, but extends beyond revenue loss. Counterfeit software, sold by professional pirates on so-called “cheap OEM software” sites, is buggy and often carries malware payloads. Customers who unknowingly buy such counterfeits can account for as much as 20 percent of your technical support costs.

Like most software vendors, you may be considering an anti-piracy solution to combat this problem. There are two prongs to any solution: usage metering measures such as license management and node locking, and software security measures against hacking and tampering with the application and the usage metering measures themselves.

Durably protecting software against an attacker with administrative privilege is challenging. Solutions that apply conventional strategies such as obfuscation give rise to new problems that can impact every aspect of software development, quality assurance and end-user experience. And most software packages get hacked anyway, often within hours or days of release. Faced with these challenges, which directly impact success metrics for the software development team, many companies elect to lower their security bars and therefore knowingly incur piracy. Fortunately, recent advances in application hardening technology provide a viable alternative that successfully secures software applications while being nondisruptive to the SDLC and to end users.

Application hardening solutions comprehensively protect software against reverse engineering, tampering and hacking. Provided by companies whose core competency is preventing attack by professional hackers, they’re nondisruptive in nature and offer rugged protection. Application hardening ensures that software metering measures can run untampered, and protects software applications from sophisticated attack vectors such as disassemblers and debuggers. It also goes far beyond simplistic and one-dimensional security techniques. The goal is to not only protect the license management functionality against attack, but also to protect all of the software-based IP—including algorithms, APIs and data—against tampering and piracy. By making application hardening the cornerstone of your software protection strategy, you can ensure that you get maximum revenues from software usage.

In considering which application hardening solution to adopt, consider these seven key properties to ensure that your solution is seamless, effective and efficient:

Transparent to end user. The best software protection tools shouldn’t impact your application’s runtime performance. They should be self-contained within your application and should not affect the user experience.

Works on the compiled binary. There should be no impact on the source code development process for you, or for any partners who embed or build plugins for your system. Binary-based solutions also can protect legacy code and existing builds.

Does not disrupt development. The


hardening solution should seamlessly integrate into your build environment. It should be fully automated and run in real time. Configuring (and reconfiguring) the protection should be easy, and should not disrupt your software structure or development.

Is easy to customize. One-size-fits-all security is attractive in theory, but impractical in reality. Choose a solution that can be easily tuned to your specific threat profile and application structure, and can be quickly yet securely blended across your software logic. This provides maximum security without a large integration effort.

Doesn’t disrupt QA or field maintenance. Quality assurance always occurs on a compressed schedule and therefore has the least tolerance for risk and overhead. Leading application hardening solutions are transparent to the functional testing process, assist the security testing process, and interoperate with field debugging and crash analysis tools.

Has a proven track record. Cut through the hype and choose a solution that’s tested by independent organizations and proven with real-world successes. Recognize that any protection can eventually be broken, and consider solutions that offer breach management.

A winning anti-piracy strategy tightly binds application hardening with license management, protecting revenue and the development process and timetable. The value for your customers is their confidence that the software they’re purchasing is legitimate and untainted with malware or other Trojan horses. There should be no performance impact for protected software. Protection should exist seamlessly in the background and without disruption to any application function. Realizing the fullest return on your R&D means maximizing investments in new features and capturing new markets without fear that the software is being compromised or that your IP investment will be plundered. !

Kevin Morgan is vice president of engineering at Arxan Technologies, which makes IP protection tools.
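The integrity-checking idea at the heart of tamper protection can be illustrated, very loosely, with a digest comparison. This is only a conceptual sketch and not the technique of any vendor named in the article: real hardening products operate on the compiled binary, embed many redundant checks and resist debuggers, whereas this Python fragment is trivially bypassed by anyone who can edit it.

```python
import hashlib

# Toy illustration of an integrity check: record a known-good digest of
# the code at build time, and refuse to run if the shipped payload no
# longer matches it. Real hardening embeds this far more durably.

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_integrity(payload: bytes, expected_digest: str) -> bool:
    """True if the payload still matches its build-time digest."""
    return digest(payload) == expected_digest

shipped = b"licensed feature code"
expected = digest(shipped)            # recorded at build time

assert verify_integrity(shipped, expected)        # untouched build runs
tampered = shipped.replace(b"licensed", b"cracked ")
assert not verify_integrity(tampered, expected)   # patched build refuses
```

The same comparison is what makes the protection testable: QA can verify both that an untouched build passes the check and that a deliberately patched build is rejected, without needing to know how the checks are hidden.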


MORE OF WHAT YOU NEED. EVERYTHING ABOUT JAVA™ TECHNOLOGY. AND SO MUCH MORE.

You won’t want to miss the JavaOne conference, the premier technology conference for the developer community. This year’s Conference presents the latest and most important topics and innovations today to help developers access even more richness and functionality for creating powerful new applications and services.

200+ technical sessions
More than 100 Birds-of-a-Feather sessions
15 Hands-on Labs

LEARN MORE ABOUT
• Java Platform, Standard Edition (Java SE)
• Java Platform, Enterprise Edition (Java EE)
• Java Platform, Micro Edition (Java ME)
• Web 2.0
• Rich Internet applications
• Compatibility and interoperability
• Open source
• E-commerce collaboration
• Scripting languages

Save $200 on Conference registration! Register by April 7 at java.sun.com/javaone
Please use priority code: J8PA1SDT

JavaOne℠ Conference | May 6–9, 2008
JavaOne Pavilion: May 6–8, 2008, The Moscone Center, San Francisco, CA

Platinum Cosponsors | Cosponsors

Copyright © 2008 Sun Microsystems, Inc. All rights reserved. Sun, Sun Microsystems, the Sun logo, Java, the Java Coffee Cup logo, JavaOne, JavaOne Conference, the JavaOne logo, Java Developer Conference, Java EE, Java ME, Java SE and all Java-based marks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and other countries.


ALTERNATIVE THINKING ABOUT QUALITY MANAGEMENT SOFTWARE:

Make Foresight 20/20. Alternative thinking is “Pre.” Precaution. Preparation. Prevention. Predestined to send the competition home quivering. It’s proactively designing a way to ensure higher quality in your applications to help you reach your business goals. It’s understanding and locking down requirements ahead of time—because “Well, I guess we should’ve” just doesn’t cut it. It’s quality management software designed to remove the uncertainties and perils of deployments and upgrades, leaving you free to come up with the next big thing.

Technology for better business outcomes. hp.com/go/quality ©2007 Hewlett-Packard Development Company, L.P.

