stp-2007-03



VOLUME 4 • ISSUE 3 • MARCH 2007 • $8.95 • www.stpmag.com

Don't Judge a Book... Judge Functional Tests By Code Coverage
Boundary Testing II: Here Are Dragons

Should the Newbie Carry Everyone’s Stuff?

Load-Test Your Web Apps End to End
How to Build A Custom DLL That Piles on the Protocols

BEST PRACTICES: Change Management



Take the handcuffs off quality assurance

Empirix gives you the freedom to test your way. Tired of being held captive by proprietary scripting? Empirix offers a suite of testing solutions that allow you to take your QA initiatives wherever you like. Download our white paper, “Lowering Switching Costs for Load Testing Software,” and let Empirix set you free.

www.empirix.com/freedom



The days of

‘Play with it until it breaks’ are over!

Introducing:

TestTrack TCM The ultimate tool for test case planning, execution, and tracking. How can you ship with confidence if you don’t have the tools in place to document, repeat, and quantify your testing effort? The fact is, you can’t. TestTrack TCM can help. In TestTrack TCM you have the tool you need to write and manage thousands of test cases, select sets of tests to run against builds, and process the pass/fail results using your development workflow. With TestTrack TCM driving your QA process, you’ll know what has been tested, what hasn't, and how much effort remains to ship a quality product. Deliver with the confidence only achieved with a well-planned testing effort.

• Ensure all steps are executed, and in the same order, for more consistent testing. • Know instantly which test cases have been executed, what your coverage is, and how much testing remains. • Track test case execution times to calculate how much time is required to test your applications. • Streamline the QA > Fix > Re-test cycle by pushing test failures immediately into the defect management workflow. • Cost-effectively implement an auditable quality assurance program.

Download your FREE fully functional evaluation software now at www.seapine.com/stptcmr or call 1-888-683-6456. ©2007 Seapine Software, Inc. Seapine TestTrack and the Seapine logo are trademarks of Seapine Software, Inc. All Rights Reserved.

See us at

Booth 502


VOLUME 4 • ISSUE 3 • MARCH 2007

Contents

12



COVER STORY

Tune Your Load-Tester for End-to-End Web Applications

Many performance-test tools can be extended for your needs. Learn how to add protocols with a custom-built DLL. By Francisco Sambade

18

You Cannot Keep A Lid On Code Coverage
Don't blow your top. Code coverage testing can provide a pragmatic sanity check to complement the use of functional tests—a white box to balance the black-box approach. By Alan M. Berg

24

Departments

Don’t Let The Unknown Burn Your Software Project

In this second of three articles on boundary testing, you’ll learn to use techniques in exploratory testing to ferret out the dangers lurking at the edges of your apps. By Rob Sabourin

7 • Editorial Though the Web is friendly to consumers, it’s also hospitable to hackers.

8 • Contributors Get to know this month’s experts and the best practices they preach.

31

Newbie: Can I Just Say One Thing?

Regardless of your experience, when you start a new job, you’ve got a lot to learn. These 10 tips can help ease that tricky transition. By Prakash Sodhani

9 • Letters Now it’s your turn to tell us where to go.

10 • Out of the Box New products for developers and testers.

36 • Best Practices Change is good: from cocktail napkin to change management tool. By Geoff Koch

38 • Future Test What’s the best requirements tool? The one between our ears. By Robin Goldsmith




VOLUME 4 • ISSUE 3 • MARCH 2007

EDITORIAL
Editor: Edward J. Correia +1-631-421-4158 x100 ecorreia@bzmedia.com
Editorial Director: Alan Zeichick +1-650-359-4763 alan@bzmedia.com
Copy Editor: Laurie O'Connell loconnell@bzmedia.com
Contributing Editor: Geoff Koch koch.geoff@gmail.com

ART & PRODUCTION
Art Director: LuAnn T. Palazzo lpalazzo@bzmedia.com
Art/Production Assistant: Erin Broadhurst ebroadhurst@bzmedia.com

SALES & MARKETING
Publisher: Ted Bahr +1-631-421-4158 x101 ted@bzmedia.com
Associate Publisher: David Karp +1-631-421-4158 x102 dkarp@bzmedia.com
List Services: Nyla Moshlak +1-631-421-4158 x124 nmoshlak@bzmedia.com
Advertising Traffic: Phyllis Oakes +1-631-421-4158 x115 poakes@bzmedia.com
Reprints: Lisa Abelson +1-516-379-7097 labelson@bzmedia.com
Marketing Manager: Marilyn Daly +1-631-421-4158 x118 mdaly@bzmedia.com
Accounting: Viena Isaray +1-631-421-4158 x110 visaray@bzmedia.com

READER SERVICE
Director of Circulation: Agnes Vanek +1-631-421-4158 x111 avanek@bzmedia.com
Customer Service/Subscriptions: +1-847-763-9692 stpmag@halldata.com

Cover Photograph by Rosemarie Gearhart

President Ted Bahr Executive Vice President Alan Zeichick

BZ Media LLC 7 High Street, Suite 407 Huntington, NY 11743 +1-631-421-4158 fax +1-631-421-4130 www.bzmedia.com info@bzmedia.com

Software Test & Performance (ISSN- #1548-3460) is published monthly by BZ Media LLC, 7 High St. Suite 407, Huntington, NY, 11743. Periodicals postage paid at Huntington, NY and additional offices. Software Test & Performance is a registered trademark of BZ Media LLC. All contents copyrighted 2007 BZ Media LLC. All rights reserved. The price of a one year subscription is US $49.95, $69.95 in Canada, $99.95 elsewhere. POSTMASTER: Send changes of address to Software Test & Performance, PO Box 2169, Skokie, IL 60076. Software Test & Performance Subscribers Services may be reached at stpmag@halldata.com or by calling 1-847-763-9692.


Ed Notes

To Live and Die By the Web 2.0
By Edward J. Correia

I'm not a big fan of industry buzzwords. To me, terms like Y2K, paradigm shift and DSO—mainly for marketing types—lose their meaning the more they're used. Even the definition of SOA, a term now firmly entrenched within today's technology lexicon, still varies based on who's defining it. OASIS currently has six different technical committees working on its various parts. But hey, we've all got our jobs to do, marketing and standards bodies alike.

Then along comes Web 2.0, which until recently I lumped in the same category as the terms robust and scalable. But after hearing Tim O'Reilly explain his use of Web 2.0—as technologies that allow the users of Web sites to create their own content for those sites—I'm convinced there's more to it than marketing hype. For proof, look at sites like YouTube and Wikipedia; their meteoric rise proves that the user-populated Web is more than just a fad.

The Cost of Equality
Unfortunately, Web 2.0 also has a downside. The friendly nature of such sites knows no prejudices, and well-meaning content creators are treated in precisely the same manner as malicious hackers. It should come as no surprise that the hackers are taking advantage of that.

A study published late last year by the Malicious Code Research Center, a division of security appliance maker Finjan, pointed to a trend of uploads of malicious code to popular user-created sites. The malicious code then infects visitors to those sites who, because they weren't alerted by any URL filtering that might have been present, unsuspectingly view everything found there as trusted or genuine. The study cites specific incidents in 2006 involving Wikipedia in November and MySpace in December.

Another alarming trend is the use of obfuscation to propagate malicious code, which can circumvent signature-based solutions. Always ahead of countermeasures, hackers are now apparently employing dynamic code obfuscation, a technique that eludes detection by randomly generating variations in functions and parameter names. Aren't there any legitimate jobs for these people?

There's also been a rise in a phenomenon known as vulnerability auctions, in which "just discovered" software or Web site security flaws are offered for sale to the highest bidder. You may recall a highly publicized case in December 2005, in which an exploit of Microsoft Excel was offered for sale on eBay, but then quickly yanked by the online auctioneer. The seller's auction included this description: "Since I was unable to find any use for this by-product of Microsoft developers, it is now available for you at the low starting price of $0.01—a fair value estimation for any Microsoft product."

Microsoft should have auctions like this on its watch list. Or, better yet, put sellers like this on its payroll.

Cover Your Apps
The moral of this story is to perform your due diligence. Supplement your functional tests with an examination of your testing's code coverage. This is particularly important for Web-based applications and any that are intended to face the public. This issue contains some great techniques by Alan Berg and Francisco Sambade that will show you how.



Contributors FRANCISCO SAMBADE is a senior consultant in the financial services practice at Acumen Solutions, a business and technology consulting firm. His experiences with performance and functional application testing led him to develop the end-to-end load testing technique for Web applications described in our cover story, which begins on page 12. Sambade has experience in risk assessment and compliance management (SOX, ISO 17799), and holds CompTIA A+ and CompTIA Network+ certifications as well as a degree in computer science from the State University of New York at Stony Brook.

The author of numerous articles on software development and testing, ALAN M. BERG lends his unique style to a tutorial on coverage testing for Java Web applications that starts on page 18. Berg is the lead developer at the Central Computer Services at the University of Amsterdam, a post he has held for the last seven years. His achievements while working at the university include the implementation of smart cards, an LDAP provisioning system, a metadirectory, single sign in/out, search engines and automated code review, code coverage and profiling integrations with the Sakai open-source collaboration and learning environment for educators. ROBERT SABOURIN wrote his first computer program in 1972. Now an accomplished software engineer and SQA management expert, Sabourin is fond of saying, “Don’t tell me it can’t be done.” He continues his three-part series on boundary testing beginning on page 24, where he explores analytic boundaries, singularities, searches and boundary-related exploratory testing techniques. Sabourin is president of AmiBug.com, a Montrealbased consultancy. The company’s core values include building trust relationships, keeping commitments, fostering “gung-ho employees” and creating “raving fan” customers. As a quality control specialist with a global IT services organization, PRAKASH SODHANI is involved in various activities related to product quality and assurance. He has experience as a quality professional working in different capacities with several top IT organizations. His list of the top 10 challenges for new testers starts on page 31. In addition to a master’s degree in computer science, he holds Certified Software Test Engineer, Certified Quality Improvement Associate, Sun Certified Java Programmer, MCP (SQL Server 2000, C#) and Brainbench's Software Testing certifications.

TO CONTACT AN AUTHOR, please send e-mail to feedback@bzmedia.com.



Feedback

COMMUNICATION CHAOS
In reference to Edward J. Correia's "The Problem With Stealth Deployments" (T&QA newsletter, Jan. 23, 2007), what we have here is a problem of communication, and with project management, a failure to adhere to the division of responsibility and authority, poor communications, inadequate procedures—all leading to not just stealth deployment, but to potential chaos.
Joseph De Natale
San Carlos, CA

AGILE ANARCHY? To me, this is what the “agilists” really never address (re: “The Problem With Stealth Deployments”). They put lots of premium on people, not processes, and they like the world in which specs and regulations do not interfere with the development process. The problem is that if you don't follow specs and regs, chaos ultimately ensues. We all know the populist views of agilists. It would be good if one day they would discuss the role of rules, regs, requirements and discipline. Andrew Binstock San Carlos, CA

JITTER AND CONCRETE
In reference to "The Problem With Stealth Deployments," it seems to me that the notion of "agile" doesn't preclude the rules, regs and reqs, though many of the "agilists" (particularly the XP guys) seem to use agile as an excuse to do away with them. Though I have no hard evidence, my guess is that these sort of agile projects fail as often as the non-agile projects that ignore the requirements. Lately I've been thinking a lot about the gather-requirements/build/release cycle, and it seems that there's a sweet spot in terms of timing. At one extreme, you have a sort of jitter caused by a too-quick reaction to a poorly thought-out requirement, where a change is made for specious reasons, the version is released and rejected, and another equally ineffective change comes in to take its place. Agile projects in the hands of inexperienced programmers (or experienced ones who buckle to pressure too easily) can get very jittery. At the other extreme is the SEI-5 guys who treat requirements as if they're bolted into concrete, even when they have to change. A classic example is the IBM project to rebuild the air traffic–control software for the FAA. By the time they gathered requirements and built the system, the underlying environment had changed to the point where the system they built didn't satisfy the current requirements. It's possible, I think, to balance in the middle of these extremes—to apply critical-damping, if you will, to a system that tends to oscillate erratically. There aren't many people writing about how to do that, though—probably because they're too busy working.
Allen Holub
Berkeley, CA

DITCH THE DEVELOPMENT!
The last issue of ST&P (January 2007) made me realize that a major change is occurring either at the testing industry or in your editorial content, and I wish to validate my feeling that the latter is the correct answer. In the past (at least since 2005 until the first issue of this year) I, as a senior software test engineer, felt I was reading relevant, interesting articles about my profession, or at least as the title of your magazine implies, software test and performance, but in the last couple issues it seems there has been a tremendous shift toward code development and its quality aspects. The last issue was setting new records for this transformation. Even the most promising topic, "Three Principles for Successful Offshoring," was heavily involved with code development. Since I am far from being a magazine editor, I'm finding it hard to make a good argument about the content of this magazine, but from my personal perspective and experience, I can tell you that the amount of content I find interesting reading has decreased to almost none. I am familiar with agile and automation, but I doubt this plus software development deserves almost 100 percent of the articles in the magazine. For that, there are more targeted magazines bearing more suitable titles, just as I don't think these are the entire essence of the software testing industry. I don't even want to guess what drove this change, but I wish it didn't involve neglecting software testers for other highly respected professionals (i.e., developers) to increase the total number of subscribers/circulation. Please thought-provoke me with at least one article about software testing in future issues.
Itay Haver
Boston, MA

KNOWLEDGE IS POWER
Regarding Edward J. Correia's "The Problem With Stealth Deployments," my comment: I would have thought this to be obvious. I have been testing products for a long time (over 20 years) and even at the beginning of my career, I always wanted to know what changes were going on with the test system. I think the key is communication and without that, you (as a test engineer) will not know if you are testing the correct configuration or software version. Knowledge is power, and anyone testing needs all the power they can muster to know the configuration and changes being made.
Gretchen Henrich
Dayton, OH

FEEDBACK: Letters should include the writer's name, city, state, company affiliation, e-mail address and daytime phone number. Send your thoughts to feedback@bzmedia.com. Letters become the property of BZ Media and may be edited for space and style.



Out of the Box

Check Remote Services Without Source

If you're a tester who's been confounded by an inability to adequately examine and verify third-party Web services in your SOA, SOAtest 5.0 from Parasoft may solve the problem. Released in late January, the new version, according to the company, can test components on remote systems without access to the corresponding source code.

"Interacting with a service within its visible interface is no longer sufficient to see all the errors," asserted Rami Jaamour, product manager for SOA solutions at Parasoft. SOAtest 5 works with the help of an agent that gets dropped into the target Java app. "From SOAtest, you can trace the execution remotely within the code," he said. "I can run through a business scenario and isolate problems and bugs without having the source code available locally. The agent resides on an app server and points to an argument that lets us see inside."

From the data captured in such tests, Jaamour said testers can invoke the code that was executed and isolate a particular component in the code to identify problems. "SOAtest lets you test distributed environments rapidly and incrementally, to make sure that assets are as interoperable as they need to be and are keeping with standards," he said.

Version 5 also includes enhancements to the SOAtest policy enforcement engine to allow rules to be extended across multiple teams, such as those for enforcing WS-I compliance, for example. "Before, rules were one-to-one with the testing tools," said Wayne Ariola, Parasoft's vice president of corporate development. "Now it's available to everyone using SOAtest so checks can be integrated into the development process, and not haphazardly."

SOAtest now integrates with the asset registry of BEA's AquaLogic. "With our centralized registry view, you can now see all the information associated with an asset, life cycle, project, quality of service and policies applied, and whether it's conforming," said Jaamour. "We bridge the gap between operation status visible in the registry and the quality status of the assets."

Ariola cautioned that a failure by others to adhere to standards could lead to defects in your own applications. "A noncompliance might reside as a bug in other products, but with SOAtest, it's a living set of metadata that resides with an asset, like a credit report." An architect, he continued, makes the decision to reuse an asset based on this credit report. "If it doesn't pass this hurdle, I'm not using it in my next application." Pricing for a team of five starts at around US$50,000.

SOAtest 5 can help teams ensure quality of service and locate defects in remote Web services.

Complexity Gets Bum's Rush

Java developers building new high-performance computing applications for dual-core systems are faced with a choice. They can either immerse themselves in threading libraries, deadlock detection algorithms and concurrency process design, or they can employ libraries written by someone who has. Helping teams to adopt the latter strategy is Pervasive's DataRush, a set of Java components and a framework for building highly parallel data-processing applications intended for multicore systems. According to company documents, components can be quickly assembled into data-flow operations using XML scripting. An SDK is supplied for extending or including custom components.

"There's zero tooling available to let developers attack hyperparallel platforms," claimed Pervasive CTO Mike Hoskins, adding that the JVM handles multicore systems only in the rawest sense. "You're totally on your own worrying about how threads collide with each other. As soon as multithreading is available, it's complex."

Intended as a framework for applications written from scratch, DataRush is designed for data- and compute-intensive applications. "We hide all kinds of complexities: memory management, concurrency, locks, threading," Hoskins said, "so the typical Java [application] can fully exploit all the threads on a machine… using pipeline parallelism."

Now in beta, DataRush is set for release at JavaOne in June. Subscription pricing is not yet set, but will be assessed per-processor when applications are ready for deployment. Until then, the development environment will be free.


QualityLogic Makes Sense of XPS

If your company plans on developing applications that will support Microsoft's XML Paper Specification—the print format used in Vista—you'll want to know about QualityLogic's XPS Test Tools and Suites for testing and debugging XPS implementations. According to a company document released with the products in late January, version 1.0 includes XPS Functional Test Suite, for checking integration of XPS within an application; XPSLab Utilities Suite, for manipulation and analysis of XPS documents; and XPS Comprehensive Evaluation Test, which ensures that an XPS implementation is capable of handling all capabilities of the XPS specification.

"We developed these test tools to help [developers] get high-quality XPS implementations to market quickly, and to help ensure compliance with the XPS specification and interoperability between producers and consumers of XPS documents," said James Mater, president and CEO of QualityLogic.

Also introduced were XPS Native Application Test Suite, for testing and debugging XPS implementations for compatibility with Vista. QualityLogic also offers XPS Conversion Path ATS, which it says is useful for testing that XPS implementations are compatible with existing Win32 applications, and HD Photo Functional Test Suite, which "exercises each container and bitstream syntax element that influences behavior of an HD photo decoder," according to a company news release. Pricing starts at around US$15,000 per suite.

QualityLogic, which develops and markets testing tools for a variety of computer peripherals, has pledged to update all of its products for Vista by the end of March, including its PageSense printer-performance measurement tool.

Send product announcements to stpnews@bzmedia.com

Test Management Becomes Part of Integrity

Application life-cycle management solutions provider MKS in mid-February added test management capabilities to MKS Integrity, its flagship ALM and enterprise workflow suite. The new tool, MKS Test, includes a set of workflows for Integrity that, according to Brad Van Horne, MKS Test product manager, help to automate many of the typical tester’s activities. “You run a test more than once, nightly or every night for three months. With MKS Test, you can set them up to run automatically or as part of another workflow.” Also included, Van Horne said, is a series of charts and reports for defect management, allowing the team to easily track “which defects have been found, verified, assigned and are ready for testing. You get a nice picture of where things are going and who’s doing what.” MKS Test introduced no new coding to Integrity, Van Horne said. “We’ve formalized and created new types,” he said, referring to MKS’s nomenclature for Integrity workflows. These types permit Integrity

users to create test cases and suites, which he described as logical groupings of test cases. “Suites may be grouped by function, or by who they’re going to be assigned to,” he said. The tests are then passed to a company’s existing test tool or framework. “We connect and trigger it to run tests and capture and report results into our system,” Van Horne said. A connector for JUnit is included. Additional connectors are under development for HP Mercury’s QuickTest Professional, Compuware’s TestPartner and iTKO’s LISA; and will be free. Also new is the Test Results Editor, a place that Van Horne said testers will likely spend much of their time. “This becomes a dashboard, a simple place for them to do their daily work,” which he said can include creation or validation of defects, marking them as passed or failed, and attachment of comments or a screenshot. “They can also do bulk editing.” MKS Test was set for general availability in mid-February. Pricing had not been set at press time.

The Test Results Editor permits issues to be handled individually or in bulk. www.stpmag.com •





Learn To Build A Custom DLL For Piling On The Protocols

By Francisco Sambade

Have you ever needed to design a performance test bed using a product that worked wonders on some requirements, but didn't quite meet your expectations on others? Or perhaps you've been waiting for the day when you could combine functionality from leading performance-test tools into a single product, giving you access to the most advanced protocol emulator/performance monitor/results-analyzer test tool on the market?

Well, don't hold your breath. Although progress has been made, there's still no silver bullet—no single, unified product that does it all. But many of today's commercial performance-test tools allow developers to extend their off-the-shelf functionality through well-documented APIs. Organizations can hook right into a product's operating bits and bytes, and even load external DLLs that can be called at runtime by your virtual users (VUs).

Hypothetical Enterprise Testing Scenario Suppose you’re required to build a suite of performance scripts for a system that processes credit applications. Let’s call it the Quick Credit Application Processor, or QCAP. The platform supports interfaces from multiple vendors, queries credit reports from credit bureaus, maintains external data feeds and processes credit application requests, all in real-time. To accomplish these requirements, QCAP is deployed on the company’s mainframe and communicates through industry standard middleware services. A message broker transforms XML messages to and from copybook (COBOL) format for QCAP. Platform users have the ability to submit credit applications for processing from any of the vendor interfaces, as well as directly from QCAP dumb terminal emulators. Figure 1 illustrates QCAP’s infrastructure.

Business Challenge

While reviewing the workflows that need to be automated, you notice that most test scenarios require user access from either QCAP or a vendor interface to quantify system performance under load. However, a few of these workflows define both those systems as entry points, one of which includes starting business processes at a vendor's Web-based interface, and requires user intervention in QCAP to overwrite pending transactions into an approved state (see Figure 2).

Having previously worked with mainframe and Web-based systems, and skeptical of the socket protocol available on most performance-test products, you recognized that both Web and Remote Terminal Emulator (RTE) communication protocols are required to implement a scalable, maintainable testing framework. My performance-test product of choice is Mercury LoadRunner, which can emulate both Web and RTE protocols independently but can't combine the two in a multi-protocol script. And while the technique I'm about to describe is specific to LoadRunner, its principles can be applied to any load-testing tool with a published API.

Since RTE can't be added to a multi-protocol script, unique identifiers from the vendor interface must be provided to QCAP over two separate scripts, increasing the overall difficulty in automating end-to-end business transactions. My technique overcomes this difficulty, which is explained in greater detail below. Figure 2 shows a sample end-to-end transaction workflow across interfaces using different communication protocols.

Francisco Sambade is a QA expert at technology consultant Acumen Solutions.


Photograph by Rosemarie Gearhart




Solution Overview

Look at the workflow presented in Figure 2. HTTP requests are emulated from the test tool to the vendor interface to process a credit application to a pending state. At this point, the application is retrieved (see Figure 2) and displayed by the vendor interface.

The client enters a polling state, refreshing the current page at a predetermined interval (1/2 to 1 sec.), until the application status changes from Pending to Approved. Since the approval status must be triggered directly from QCAP, an RTE user needs to log into the system, retrieve the application in question, and submit the necessary requests to overwrite the pending status with the Approved state.

Typical interface implementations offer various data retrieval methods, including ad hoc searches by common identifiers such as first or last name, telephone number and Social Security number. For our discussion, QCAP will be able to perform application searches by a sequenced number generated by the vendor interface. This unique identifier must be passed between multiple scripts, increasing the complexity in emulating these transactions if the testing product doesn't support Web and RTE protocols in a single script. Numerous methods are available to implement inter-script communication.

TABLE 1: REFERENCE DESK

Script: VU#1 0_Web_Application_Ingestion
Transaction: 1. Credit Application Request (a. Login, b. Click on New Application, c. Submit Application)
Action(s): VUser_Init, New_Application, Submit_Application

Script: VU#2 1_QCAP_Application_Approval
Transaction: 2. Application Lookup (a. Login, b. Navigate to Search Panel, c. Application Search)
Action(s): VUser_Init, Search_Panel_Nav, Search_App

Script: VU#2 1_QCAP_Application_Approval
Transaction: 3. Application Approval (a. Approve Application)
Action(s): Approve_App

Script: VU#1 0_Web_Application_Ingestion
Transaction: 4. Poll for Approval Response (a. Navigate to Search Page, b. Application Search, c. Display Application, d. Poll for Approval)
Action(s): Search_Page_Nav, Search_App, Display_App, Poll



FIG. 1: HYPOTHETICAL INFRASTRUCTURE (diagram: credit applicants reach Web applications hosted by a real estate agency, a marine agency and an auto dealership; traffic crosses a WAN and firewall into middleware services, an external gateway, queue managers and a message-transformation broker, and on to the z/OS-hosted QCAP application and its users, with a load generator driving the VUs against both entry points)






The following three methods are common:

File I/O: File I/O provides virtual users with the ability to read and write identifiers at runtime. This method requires synchronization logic to be written that prevents multiple users from working with the same account, and avoids concurrency issues such as accessing the same file for writing. On the other hand, if a unique file is used for each virtual user, I/O overhead increases significantly as the number of VUs increases.

Memory allocators: C-language routines could be used to allocate and address a global memory block, providing virtual users with access to set and get arguments between scripts. This method, however, requires careful coding to avoid race conditions and prevent memory leaks.

Centralized database: A database schema could be defined with the tables and columns needed to host the required identifiers at runtime. This method provides virtual users the ability to query and update the database on demand, and offers an efficient implementation for data coordination among scripts.

Of the listed methods, a centralized database is the easiest and most efficient way to pass data arguments between test scripts. The implementation described here refers to MySQL, but can apply to any open-source or commercial product in use at your company.

FIG. 2: CHECK-OUT SYSTEM (workflow diagram: the Web user logs in, clicks on New Application and submits it, reaching a Pending status, then navigates to the search page, searches for and displays the application, and polls while it remains pending; the QCAP user logs in, navigates to the search panel, searches for the application and approves it, reaching an Approved status; requests travel over HTTP and RTE, with transparent requests and server responses passing between the interfaces)

FIG. 3: CARD CATALOG (diagram of the RTEWEBDATA table schema)

A Database for Your Protocols

The first step is to record and data-correlate the single-protocol scripts for the transactions that need to be performed at each entry point. This creates an automated, reusable framework to emulate sub-transactions of the larger, end-to-end workflow. For our sample project, the workflow defined in Figure 2 could be implemented using two scripts: one for the mainframe RTE-based transactions, and another for the HTTP-based transactions. Table 1 lists the transaction-script-action relationships defined for this workflow. This high-level implementation plan maps scripts and actions to business transactions.

Now that scripts are available to emulate transactions for each entry point, it's time to coordinate data values between them. This is where the database comes into play (see the "Defining the Database" sidebar on page 19).

Once you've set up your database for inter-script communication, all that's left is to define a connectivity method that allows virtual users to access the database at runtime. To simplify the performance-test product's interaction with MySQL, the proposed implementation relies on libmysql.dll for connecting, updating and querying the database. However, to avoid library dependency issues associated with loading libmysql.dll directly from the test tool (for example, external references to header files, etc.), I compiled a wrapper to this DLL as rteweb.dll, using LCC-Win32, an open-source C compiler and IDE (see Figure 4 on page 19). Figure 4 displays the various levels of abstraction used to connect to MySQL.

To provide updating and querying capabilities to virtual users, rteweb.dll exports the following method, which issues function calls to MySQL such as mysql_connect() and mysql_query() through libmysql.dll:

char * __cdecl __declspec(dllexport) runQuery(char *sql, int qType, int colNum)

The function runQuery() takes the following three arguments:

• char * sql: This specifies the query that needs to be executed against MySQL.
• int qType: Query type; 1 for UPDATE, 2 for SELECT.
• int colNum: Integer specifying which column, out of the record set produced by the query, should be returned. The default is 0, which retrieves the first column of the record set.
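The article doesn't reproduce the wrapper's internals, so here is a minimal sketch, under stated assumptions, of what a runQuery() built on the MySQL C API might look like. The connection settings, error strings and the choice to open and close a connection on every call are illustrative guesses, not details from the article; the text names mysql_connect(), while current libmysql builds expose the equivalent mysql_real_connect().

#include <stdio.h>
#include <string.h>
#include <mysql.h>   /* MySQL C API headers; the DLL links against libmysql.dll */

static char resultBuffer[256];

__declspec(dllexport) char * __cdecl runQuery(char *sql, int qType, int colNum)
{
    MYSQL *conn = mysql_init(NULL);
    MYSQL_RES *res;
    MYSQL_ROW row;

    /* Placeholder connection settings: substitute your own host, account,
       password and database name. */
    if (mysql_real_connect(conn, "localhost", "lruser", "lrpass",
                           "RTEWEB", 3306, NULL, 0) == NULL) {
        mysql_close(conn);
        return "ERROR: connect failed";
    }

    if (mysql_query(conn, sql) != 0) {      /* run the UPDATE/INSERT or SELECT */
        mysql_close(conn);
        return "ERROR: query failed";
    }

    if (qType == 2) {                       /* SELECT: return the requested column */
        res = mysql_store_result(conn);
        row = (res != NULL) ? mysql_fetch_row(res) : NULL;
        if (row != NULL && row[colNum] != NULL)
            strncpy(resultBuffer, row[colNum], sizeof(resultBuffer) - 1);
        else
            resultBuffer[0] = '\0';
        resultBuffer[sizeof(resultBuffer) - 1] = '\0';
        if (res != NULL)
            mysql_free_result(res);
    } else {                                /* UPDATE/INSERT: report affected rows */
        sprintf(resultBuffer, "%lu", (unsigned long) mysql_affected_rows(conn));
    }

    mysql_close(conn);
    return resultBuffer;
}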

Additionally, to facilitate interaction with MySQL through LoadRunner scripts, runQuery() is called from two functions stored in a standard header (*.h) file (see Figure 4 and Listing 1). The function insertAppID() in Listing 1 loads rteweb.dll into memory, and inserts a new tuple into RTEWEBDATA by issuing a call to runQuery().
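The header's bodies aren't printed beyond the prototypes in Listing 1 and the query strings shown below, so the following is a rough sketch of how the two functions could tie those pieces together. The DLL path, the retry interval, the loop structure and the assumption that a {VUID} parameter is already defined in each script are mine, not the article's.

/* Assumed contents of the shared header; names follow Listing 1. */

extern char * runQuery(char *sql, int qType, int colNum);

void insertAppID(char * appID, char * appType)
{
    lr_load_dll("rteweb.dll");                 /* make runQuery() callable */
    lr_save_string(appID, "appID");
    lr_save_string(appType, "appType");
    runQuery(lr_eval_string("INSERT INTO rtewebdata SET APP_ID = '{appID}', "
                            "APP_TYPE = '{appType}'"), 1, 0);
}

char * getAppID(char * appType)
{
    char * retVal = "";

    lr_load_dll("rteweb.dll");
    lr_save_string(appType, "appType");

    /* Poll until an unused application of the requested type can be claimed. */
    while (strlen(retVal) == 0) {
        runQuery(lr_eval_string("UPDATE rtewebdata SET USER_ID = '{VUID}', USED = 'N' "
            "WHERE ISNULL(USER_ID) AND ISNULL(USED) AND APP_TYPE = '{appType}' LIMIT 1"), 1, 0);
        retVal = runQuery(lr_eval_string("SELECT APP_ID FROM rtewebdata WHERE USER_ID = '{VUID}' "
            "AND APP_TYPE = '{appType}' AND USED = 'N' ORDER BY cindex DESC LIMIT 1"), 2, 0);
        if (strlen(retVal) == 0)
            lr_think_time(1);                  /* wait before the next attempt */
    }

    /* Mark the claimed row as consumed so no other VU picks it up. */
    runQuery(lr_eval_string("UPDATE rtewebdata SET USED = 'Y' WHERE USER_ID = '{VUID}' "
        "AND APP_TYPE = '{appType}' AND USED = 'N' LIMIT 1"), 1, 0);
    return retVal;
}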





LISTING 1: PUT IT IN MEMORY

/****************************************************************************/
/* RTE / Web VU Coordination Function                                        */
/* Inserts a new record into RTEWEBDATA with the Application ID and Status   */
/* Param: appID   The Application ID representing a credit application       */
/* Param: appType The Application Type/Status: 'Pending' or 'Approved'       */
/****************************************************************************/
void insertAppID(char * appID, char * appType)

/****************************************************************************/
/* RTE / Web VU Coordination Function                                        */
/* Enters a querying loop to retrieve an Application ID from RTEWEBDATA      */
/* Polling will stop after an unused application with matching type is found */
/* Param: appType The Application Type/Status to retrieve:                   */
/*                'Pending' or 'Approved'                                    */
/* Return: Application ID                                                    */
/****************************************************************************/
char * getAppID(char *appType)

LISTING 2: ACCESS THE DATABASE

/****************************************************************************/
/* Sample Web VU Script                                                      */
/* Inserts an application ID to MySQL                                        */
/****************************************************************************/
#include "as_web.h"

Action1()
{
    lr_start_transaction("Test MySQL Data Insertion");
    lr_save_string("1000000", "appID");

    /* MySQL Insert */
    insertAppID(lr_eval_string("{appID}"), "PENDING");

    lr_end_transaction("Test MySQL Data Insertion", LR_AUTO);
    return 0;
}

/****************************************************************************/
/* Sample RTE VU Script                                                      */
/* Retrieves an application ID from MySQL                                    */
/****************************************************************************/
#include <lrrte.h>

Action1()
{
    char * appID;

    lr_start_transaction("Test MySQL Data Retrieval");

    /* MySQL Retrieve */
    appID = getAppID("PENDING");
    lr_save_string(appID, "retrievedID");
    lr_output_message(lr_eval_string("Application ID Retrieved: {retrievedID}"));

    lr_end_transaction("Test MySQL Data Retrieval", LR_AUTO);
    return 0;
}

This call supplies the application type and sequence ID to MySQL as follows: returnValue = runQuery (lr_eval_string(“INSERT INTO rtewebdata SET APP_ID = ‘{appID}’, APP_TYPE = ‘{appType}’”), 1, 0);

Note that appID is a parameter representing the unique identifier used to tie an application from a Web interface to


QCAP, and appType is a parameter specifying the state of the application as either Pending or Approved.

The function getAppID() loads rteweb.dll into memory and queries RTEWEBDATA by issuing three separate calls to runQuery(). These calls (listed below) will lock a row, retrieve the first application ID for a given application type, and set the USED flag to Y upon completion. Therefore, they must be run in this sequence:

retVal = runQuery (lr_eval_string("UPDATE rtewebdata SET USER_ID = '{VUID}', USED = 'N' WHERE ISNULL(USER_ID) AND ISNULL(USED) AND APP_TYPE = '{appType}' LIMIT 1"), 1, 0);

retVal = runQuery (lr_eval_string("SELECT APP_ID FROM rtewebdata WHERE USER_ID = '{VUID}' AND APP_TYPE = '{appType}' AND USED = 'N' ORDER BY cindex DESC LIMIT 1"), 2, 0);

retVal = runQuery (lr_eval_string("UPDATE rtewebdata SET USED = 'Y' WHERE USER_ID = '{VUID}' AND APP_TYPE = '{appType}' AND USED = 'N' LIMIT 1"), 1, 0);

Note that appType is a parameter specifying the status type for the application ID to be retrieved, and VUID is a parameter uniquely identifying the virtual user executing this transaction. Now that virtual users can connect to MySQL at runtime, retrieving the sequence numbers that identify credit applications within QCAP becomes a simple task to emulate. The sample scripts in Listing 2 summarize the necessary steps to insert and retrieve these IDs from MySQL over HTTP and RTE.

Putting It All Together The final step is to set up the test scenario. This scenario should define two user groups that include the Web and QCAP scripts for concurrent (rather than sequential) execution, since polling logic on each script will be used to coordinate VU transactions between these users. Let’s start from the top. The Web user, VU#1 (see Table 1), submits HTTP requests from LoadRunner to the vendor interface in order to log in and process a credit application to a Pending state. This user proceeds to submit the necessary requests to retrieve and display the application while parsing its sequence ID from a server response. This ID, stored as a parameter in appID, is then inserted to MySQL through a call to insertAppID() as follows: insertAppID(lr_eval_string(“{appID}”), “PENDING”);

At this point, the Web user enters a polling state, refreshing the current page at a predetermined interval (1/2 to 1 sec.), until the application status changes from Pending to Approved. During this polling state, the RTE user, VU#2 (see Table 1), has already logged in to QCAP and started querying the MySQL database for a pending ID MARCH 2007



FIG. 4: LIBRARY WRAPPER (diagram: the Web and RTE VU scripts call the LoadRunner-side functions insertAppID and getAppID, which invoke the rteweb.dll wrapper; the wrapper calls libmysql.dll routines such as mysql_init(), mysql_connect(), mysql_query() and mysql_close(), declared in mysql.h and my_global.h, to reach the MySQL database, enabling end-to-end automation)


using the getAppID() function: appID = getAppID(“PENDING”);

As soon as a matching application is found in the database, its ID will be returned from the previous function call, providing the RTE user with the data required to perform an application lookup within QCAP. Subsequently, this user navigates to the search panel, retrieves the credit application and approves it, sending a response back to the vendor interface through middleware. Since the Web user (VU#1) is currently refreshing the application status, it will soon process a valid approval and exit this polling condition, successfully completing the emulation of a business workflow across interfaces implemented with unsupported multi-protocols.
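To make the Web side of that hand-off concrete, here is a hedged sketch of the polling step in the Web VU script. The URL, the status boundaries passed to web_reg_save_param() and the 120-iteration cap are invented for illustration; only insertAppID(), the parameter names and the Pending/Approved states come from the article.

/* Web VU sketch: publish the parsed sequence ID, then poll until approved. */
Action()
{
    int i;

    /* {appID} is assumed to have been captured from the submit response
       with web_reg_save_param() earlier in the script. */
    insertAppID(lr_eval_string("{appID}"), "PENDING");

    lr_start_transaction("Poll");
    for (i = 0; i < 120; i++) {
        web_reg_save_param("appStatus", "LB=Status: ", "RB=<", LAST);
        web_url("Display_App",
                "URL=http://vendor.example.com/credit/display?id={appID}",
                LAST);
        if (strcmp(lr_eval_string("{appStatus}"), "Approved") == 0)
            break;                      /* QCAP has flipped the status */
        lr_think_time(1);               /* the article suggests 1/2 to 1 sec. */
    }
    lr_end_transaction("Poll", LR_AUTO);
    return 0;
}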

Extending the Framework

This technique can also be used to build reports. For instance, you could extend this framework to store transaction timestamps, which could be used to calculate delta values between any two server components. To illustrate, let's use the QCAP scenario previously described. This time, in addition to storing a unique identifier in the database that ties business transactions between heterogeneous systems, we'll add a timestamp for each transaction. To facilitate subsequent data mining and reporting, transaction values will be added to a separate table in the database. This process will be repeated for each transaction, which lets us map out the entire workflow for a specific test.

To better understand this technique, take a look at the following example. A Web user submits the following transactions, in order:

1. Create New Application
2. Submit Application
3. Navigate to Search Page

What we'd like to report on are

delta response times between these transactions, in addition to common metrics provided by LoadRunner analysis, such as min, max and average response times. This can be accomplished by storing the following entries in the database:

• Transaction name
• Application ID (unique identifier)
• Timestamp

The application ID, in this case, can be used to map out response-time delays for each business transaction, should we need this level of granularity. Once results from the performance tests are stored in the database, a simple Extract-Transform-Load (ETL) interface can be implemented to calculate the response-time deltas between sub-transactions.
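As a sketch of how those entries might be captured, a small helper can reuse runQuery() to write each transaction's name, application ID and a timestamp to a separate results table. The table and column names below are assumptions for illustration, not from the article.

/* Hypothetical reporting helper: one row per completed transaction. */
void logTransaction(char * txName, char * appID)
{
    lr_save_string(txName, "txName");
    lr_save_string(appID,  "appID");
    runQuery(lr_eval_string("INSERT INTO rtewebresults SET TX_NAME = '{txName}', "
                            "APP_ID = '{appID}', TX_TIME = NOW()"), 1, 0);
}

An ETL job, or even a plain SELECT that differences the stored timestamps, can then compute the deltas between sub-transactions.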

Now you know how to use DLLs and a centralized database to extend off-the-shelf functionality in common performance-test products to provide an efficient VU data coordination solution. You can use this method's repeatable steps to completely automate end-to-end business transactions, even if end users must continue a workflow across several multi-protocol interfaces.

RESOURCES
• MySQL 5.0 Reference Manual: http://dev.mysql.com/doc/refman/5.0/en/
• MySQL 5.0 C API: http://dev.mysql.com/doc/refman/5.0/en/c.html
• LCC-Win32: http://www.cs.virginia.edu/~lcc-win32/

DEFINING THE DATABASE

A simple, single-table database can be used to provide inter-script communication functionality. This implementation applies to MySQL, but could be adapted to any open-source or commercial product. Creating a database is as easy as executing the following statements from the mysql prompt.

First, define the new database labeled RTEWEB:

CREATE DATABASE RTEWEB;

Next, connect to the new database:

CONNECT RTEWEB;

Then define a table RTEWEBDATA with five columns using the following DDL code:

CREATE TABLE RTEWEBDATA (
  CINDEX INT NOT NULL PRIMARY KEY AUTO_INCREMENT,
  APP_ID VARCHAR(50),
  USER_ID VARCHAR(10),
  USED VARCHAR(10),
  APP_TYPE VARCHAR(50)
);

This implementation hosts a table with the following columns, sufficient for inter-script communications:

APP_ID: Varchar type used to hold the unique identifiers that link applications between vendor interfaces and QCAP.
USED: Varchar type used to hold the processing status for these applications. For the purpose of this example, the status could be either "Y" or "N."
CINDEX: Integer primary key used for sorting.
USER_ID: Varchar type used to hold the virtual user ID processing a given record. This field is used to simplify the retrieval of an APP_ID, effectively querying a single record at a time, and to provide row-level locking to maintain system integrity under load.
APP_TYPE: Varchar type used to hold the status an application has reached at the time of insertion into the database. For the purpose of this discussion, valid statuses include "Pending" and "Approved."

Once created, the database schema for RTEWEB will look similar to Figure 3, which describes the RTEWEBDATA table. The MySQL API (http://dev.mysql.com/doc/refman/5.0/en/c.html) defines interfaces for several languages, including C. The API relies on libmysql.dll, installed with any standard setup in the bin directory.



Validate Your Functional Tests With These Code Coverage Tests For Java

By Alan M. Berg

T

o the functional test, software is a black box. Either the overall system performs as expected, or it does not. In theory, a functional test tool or an automated functional tester such as Selenium may spend a complete virtual life not needing to know the target application’s gruesome underlying details. When a system fails, you can’t just pop the lid and discover that no one put milk in your Java. Code coverage testing details the code that has run—or, more importantly—not run—during application execution, and can serve as a helpful enhancer to functional tests. Coverage complements the functional side by looking from the inside out, and reports on the pieces of code that have been actively run during a given test cycle. The coverage reports deliver a pragmatic sanity check to balance the use of function tests—a white box to balance the black-box approach. Coverage tests are not only practical tools, but a necessary way of understanding the value of specific tests. This is especially true when the functional test list is longer than a few pages, as the coverage


test will highlight areas of code that aren’t being exercised by the functional tests. This article will show you how to perform code coverage tests for Java Web applications (war files) within a Tomcat server via EMMA, an open-source code coverage tool for Java (see Resources).

Code Coverage Methodology Big systems may interact with the universe in many original and complex permutations. Functional tests try to distill the important features of a given system into a to-do list of observable and achievable actions. In this way, an action such as “can logon” is redirected to a MARCH 2007


Stadler Photograp h by Wayne

welcome message, or receives a shoe size from a back-end system when the form is filled in correctly. But the to-do-list approach may not exercise all the important combinations of features employed by end users after a system has been deployed. In themselves, functional tests may seem sane from a distance. However, as soon as you dump your application into an acidic production reality, unexpected parts may fail. As a QA manager, you may face the awkward situation of failing to test a significant code branch failure. One approach to debugging is to MARCH 2007

compare code coverage reports from sheltered parts of production against your functional tests and systematically measure where the differences lie. However, this approach generates a raft of information that needs to be filtered by highly experienced programmers; thus, it’s not a very energy-efficient methodology. A second, more digestible, approach is to perform your functional tests against a system that is code-covered enabled, and then observe via generated reports which parts of the code haven’t been exercised. If you’re lucky, you won’t see whole blocks of code missed, but rather

hopefully insignificant parts of if statements that are called only in low-probability, low-risk circumstances. Otherwise, yes—it’s time to update your tests. You could also see this scenario occurring after you debug a serious flaw—er, “interesting feature.” In that case, you’d compare where the code failed against coverage reports and see if the function-test safety net has the right degree of granularity to find the issue if it reappears. All three approaches have positive Alan M. Berg is the lead developer at the University of Amsterdam. www.stpmag.com •

19


POPPING THE LID

aspects. Skipping the first, energy-inefficient approach, the second is the easiest to deploy in your QA cycle, and the third is slightly more difficult, but does have the advantage of training the relevant programmers in the bug patterns that have affected your systems in reality. EMMA is a tool that sits in a jar file and can be applied in two ways. The first approach is a live mode wherein the class emmarun sits between the Java JVM and the application code. While this approach has the advantage of relative simplicity—no

modifications are required to the original code base—setting up such intercepts may affect application performance. The second, more intrusive mode covered here is a post compilation, wherein EMMA instruments the compiled classes of the target application so that the enabled classes with EMMA additions save coverage information. This technique isn’t restricted to the shallow confines of enabling war files, but also works with generic Java applications from 1.2 onward.

LISTING 1: FIND THE DEAD CODE 1 public void doGet(HttpServletRequest request, HttpServletResponse response) 2 throws ServletException, IOException { 3 if (true == true) { doPost(request, response); 4 } else { 5 // Dead code here 6 int irrelevant_int = 5; 7 8 } 9 } 10 11 public void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { 12 13 String message = getBackEndData(request); 14 response.setContentType(“text/html”); 15 PrintWriter out = response.getWriter(); 16 out.println(“<html><head><title>Code coverage demo Servlet</title></head>”); 17 out.println(“<body><h4>” + message + “</h4></body></html>”); 18 out.flush(); 19 out.close(); 20 } 21 22 private String getBackEndData(HttpServletRequest request) { 23 if (“size”.equalsIgnoreCase(request.getParameter(“action”))) { 24 return “Shoe size 15 found”; 25 } else if (“address”.equalsIgnoreCase(request.getParameter(“action”))) { 26 return “Address: 32 Blog Street, Blah 1234 XX”; 27 } else { 28 return “No idea what you want me to do here”; 29 } 30 }

LISTING 2: COVERAGE TRY-OUTS emma instr usage: emma instr [options], where options include: -ip, -cp, -instrpath <class directories and zip/jar files> {required} instrumentation path -d, -dir, -outdir <directory> instrumentation output directory (required for non-overwrite output modes) -out, -outfile <file> metadata output file (defaults to ‘coverage.em’) -merge (y[es]|n[o]) merge metadata into output file, if it exists -m, -outmode (copy|overwrite|fullcopy) output mode (defaults to ‘copy’) -ix, -filter <class name wildcard patterns> coverage inclusion/exclusion patterns {?,*} -v, -version display version and exit -p, -props, -properties <properties file> properties override file -h, -help display usage information {use ‘help’ option to see detailed usage help} [EMMA v2.1, build 5320 (stable)]

20

• Software Test & Performance

The Test Scenario Your organization is a famous shoe company. Its premium development team has built a Web application to help in the business of online selling that it hopes will cut transaction costs. The technical details: We’ll make the Web application as simple as possible. The war file contains one servlet class. The Java servlet receives a request for one of two possible actions. We name the first action shoe. Shoe forces the servlet to return shoe size. The second action returns the previously filled-in address of an end user. Like any wellwritten piece of software that encounters user input, the application also checks for the possibility of an unexpected input. Listing 1 partially lists this scenario’s Web application. The full source code is downloadable as part of an archive at http://www.stpmag.com/downloads /stp-0703_berg.zip. The code functions as follows: The Java method doGet (Listing 1, lines 1–10) is executed when a Web browser uses an HTTP GET action, and the Java method doPost when Web browser sends a POST action. If the servlet has to handle a GET method, it passes the necessary work on to the Java doPost method. Within the Java GET method, there is an if statement in which the else section is never reached. The unreachable code makes for an interesting test of the code coverage report’s validity. Under normal circumstances, the doPost (lines 11–21) method’s role is to return an appropriate message embedded in an HTML page. The returned message will depend on the sent parameter-named action. The code calls a private and hidden utility method, getBackEndData. getBackEndData (lines 22–30) pretends to receive data from the back end and sends an appropriate message back to doPost, which in turn merges the message with the HTML content that is returned. The private utility method contains stereotypical business logic in which an if statement makes decisions based on expected actions. If our functional test fails, it fails because it doesn’t exercise all the relevant branches. The dead code in the doGet method (lines 5–8) represents junk DNA left over by code that no longer has a purpose. Junk DNA tends to accumulate as programs evolve away from their origiMARCH 2007


POPPING THE LID

nal purpose and new features are added, subsuming the old. Most of this retired code does no harm. However, it usually has a minor negative impact on readability, code size and compilation, and potentially on runtimes. Under extreme situations, the dead code may interact to cause failures and force programmers to learn new bug patterns.

Installation Next, we’ll run EMMA from the command line to instrument the previously compiled classes of the Web application. This process involves the generation of metadata during instrumentation to help generate reports later. Once we’ve put the application through a series of functional tests, we’ll save state live—without stopping the application. While we’re saving state, we’ll reset it so that the application is clean and ready for a fresh start with a new set of problem-solving tests. The ability to save and reset state live is a highly relevant and useful feature. A programmer debugging his own code and unsure which piece of code is in the affected blocks must start clean, must perform the required functional to-do list, and then save and reset state. With live state-saving and resetting capabilities, the developer can generate a tool within seconds that is drilled specifically to the task.

For Reporters The following section is for those who intend to generate reports. To build and deploy a Web application in this context requires Tomcat 5.5, Ant and Java 5 installed (find the download location of the example code in the Resources list). You’ll find our extremely simple and straightforward application, coverage_demo.war, in the toplevel directory of the archive. An Ant script compiled the war; you don’t need to run the script yourself. However, for those who like to get their hands dirty, a basic README.txt in the downloadable archive mentions a couple of relevant details. When you’re building a war file fit for instrumentation, remember that compilation is best served with the Java compiler debug flag on. EMMA requires this if you want the final report to express extra details. With the flag off, you’ll get only a summaries page. If you were an Ant aficionado, you’d have realized that you must add an extra MARCH 2007

option in the build file under the make_war target javac task: <target name=”make_war” depends=”clean,init” description=”makes a war file” > <javac srcdir=”src” destdir=”build/expanded/WEB-INF/classes” classpath=”${servlet.jar.location}” debug=”on” /> <war warfile=”build/war/coverage_demo.war” webxml=”meta/web.xml”> <classes dir=”build/expanded/WEBINF/classes” /> </war> </target>

Three convenience script files exist with the example code: run_instrumentation.sh instruments the class files and produces a metadata file that is required for report generation get_info.sh generates the usage data report.sh generates the report

Next, we’ll explore how to generate reports from the command line by hand. However, if you want to automate, simply replace variables at the start of the convenience scripts to mirror your environment and run when required. First, let’s place the emma.jar in a place that both the Web application and the Java instance, which we’ll run from the command line to instrument the classes in the Web application. Place the emma.jar file in the Tomcat_home/common/lib directory. Also, place the file in the JAVA_HOME/jre/lib/ext directory of the Java 5 instance that runs from the command line. If all is well by running the command java emma instr –h, you’ll display the basic usage information. Note the version number mentioned at the end; if you have an earlier version that mentioned in listing 2, please upgrade. Next, we’ll deploy the war file so that we expose the class file ready for instrumentation. This normally requires placing the Web application in the Tomcat webapps directory. Start Tomcat and wait until the Tomcat instance expands the war file, and then stop. At this point, we have an expanded war file with the class files lying underTomcat_home/ webapps/coverage_demo/WEB-INF/ classes. Make a directory for the test data and report. Now we’ll instrument the code via the following command: java emma instr -ix - -ip Tomcat_home/webapps

CODE COVERAGE EXPERIMENTS

Want to get wild and experiment with code coverage? Try these hints to help you along the way.
• If you fail to obtain coverage information, please double-check that you've managed to instrument your Web application.
• If your application fails because of missing classes, make sure that you've placed the emma.jar file in the common/lib directory of your Tomcat server.
• Report generation with the source code included requires that the source code pointed to in generation is the same as the source code that was previously used to compile the classes. If there are even slight version differences, the generated report's color-coding may not make much sense.
• Always use full pathnames in commands to avoid ambiguity.
• The live collection of session information is a new feature in EMMA, so if you can't make the functionality fly, check your jar version number against Listing 2. Older versions will certainly not function as expected.

In this command, as briefly described in Listing 2:
• -ix is the instrumentation filter rule; as given here, no classes will be ignored for instrumentation
• -ip indicates the location of the classes
• -m (mode) overwrites the class files with their instrumented versions
• -out is the location of the generated metadata file
The notation Tomcat_home is the home of your Tomcat server, and Report_dir is the directory where you'll be creating reports. Please replace these variables with values relevant to your environment. You should now see the coverage_demo.em metadata file in your report directory.
Next, we need to run the Web application and perform a couple of functional tests. Start your Tomcat instance once more. If you're running on localhost port 8080, go to the following links via your Web browser:
http://localhost:8080/coverage_demo?action=size
You should see the following response: "Shoe size 15 found."
http://localhost:8080/coverage_demo
You'll see this message: "No idea what you want me to do here."
At this point, we've simulated a partial functional test suite—and, yes, it's full of gaps. Now, let's generate the state information required to create a report and clean the currently active application's state via a command:

java emma ctl -connect localhost:47653 -command coverage.get,Report_dir/coverage.ec -command coverage.reset

We assume that the local JVM of the Tomcat server is listening on localhost port 47653. We're running two commands: coverage.get, to get the coverage information and place it in the report directory in the file coverage.ec, and coverage.reset, which cleans the state information of the running Tomcat server. If all is successful, coverage.ec should now exist. For full usage information, run java emma ctl -h.
Report generation is just as simple as the previous two commands. Try:

java emma report -r html -sp Src_dir -in Report_dir/coverage_demo.em -in Report_dir/coverage.ec -Dreport.html.out.file=Report_dir/report.html

where Src_dir is the directory that contains the source code and -r is the report type (text, xml or html).
The drawback with using the text or XML versions of the report is that you lose the ability to view a color-enhanced source code listing. However, note that XML is the best format when extracting information automatically for high-level report generation.

What Does the Report Mean?
The generated report is divided into three parts: an overall summary for all classes, a summary section for a given

class (see Figure 1) and an optional, color-coded source page per class (see Figure 2). The report generator splits the summaries into the percentage of code covered per class, method, block and line. A block is a small lump of code—one or more commands that the JVM executes sequentially within the same time chunk (for example, the code contained within the condition of an if statement).

FIG. 1: CLASS SUMMARY

Figure 1 can help us work out a couple of interesting facts: After performing the two browser actions mentioned in the last section, we've covered the entire class, all of its methods and, amazingly, 61 out of 63 blocks. If we dive a little further, however, we see that we've covered only four out of the five lines of the getBackEndData method. Figure 2 makes it very clear which condition we've failed to enact—perhaps that one vital line stopped an SQL injection attack or purged an avalanching set of connections.

FIG. 2: COLOR-CODED SOURCE
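The sample application's source isn't reproduced here, but a hypothetical method in the same shape illustrates how a single unexercised branch shows up as the one missed block and line in the report. The method name echoes the article's example; the body is invented for illustration and is not the actual code from coverage_demo.war:

// Invented sketch: the two browser actions exercise the "size" branch and
// the default branch, so the rejection line is the kind of block EMMA
// would flag as uncovered.
class BackEnd {
    String getBackEndData(String action, String input) {
        if (input != null && input.contains("'")) {   // crude input-validation branch
            return "Rejected.";                        // never reached by the partial test suite
        }
        if ("size".equals(action)) {
            return "Shoe size 15 found.";
        }
        return "No idea what you want me to do here.";
    }
}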

The QA X-Ray
In sum, EMMA has allowed us to peek into the black box and turn it white. With three relatively straightforward commands, we've instrumented a Web application and generated a code coverage report live. We now have QA x-ray vision. Code coverage reporting enhances the value of functional tests by accurately measuring the code paths used during test execution. Beyond that passive support, code coverage measurements applied actively and in real time allow for efficient focusing in on the value of specific functional tests during the debugging cycle. With a fast, reliable, open-source product that allows for efficient instrumentation of previously compiled code, it took us only three commands to generate almost live reports. Code coverage reporting with EMMA is a useful tool that can enhance any QA cycle in which the product has a significant number of requirements. ý

RESOURCES
• Tomcat: http://tomcat.apache.org/download-55.cgi
• EMMA jar download (at time of writing): http://prdownloads.sourceforge.net/emma/emma-stable-2.1.5320-lib.zip?download
• EMMA: http://emma.sourceforge.net/
• EMMA property definitions: http://emma.sourceforge.net/reference/ch03s02.html#prop-ref.coverage.out.file
• Aston wizards plug-in: http://renaud91.free.fr/Plugins/index_en.html
• Selenium: http://www.openqa.org/selenium/
• This project: http://www.stpmag.com/downloads/stp-0703_berg.zip






Rout Out the Dangers Lurking in Program Boundaries With Exploratory Testing
By Rob Sabourin

Rob Sabourin is president of software consultancy Amibug.com.

On ancient navigation charts, dangerous or unexplored territories were often indicated with images of mythical creatures and warnings to mariners. The Lenox Globe from the early 1500s, the oldest known terrestrial globe, used the Latin phrase "HC SVNT DRACONES" (hic sunt dracones, "here are dragons") on the east coast of Asia. But the true dangers were not mythical creatures; they were some very real concerns about the risks of the unknown.
In many software testing projects, I've been challenged to discover a product's true boundaries. As explained in last month's installment, "On the Field of Finite Boundaries," a good many bugs can be found at the edges of your applications. Boundaries exist in all variables and variable combinations, which

influence the behavior of the software we test. Why look for boundaries? Their locations and sources can help us to gain confidence in the behavior of software, as well as help us focus testing. In my experience, whenever I identify a boundary in a variable, I can immediately define important tests of the application's behavior by attempting to process values on either side of the boundary, as well as on and around the boundary.

The Four Steps
I vary my approach to finding an application's boundaries depending on the blend of the project's technical and business contexts. Sometimes I have access to requirements; sometimes I have access to designs, and on occasion, I can even chat with developers. I've even had the good fortune of accessing database schemas and source code to help me find boundaries. While uncovering bugs as part of the process is certainly a useful side effect,

understanding boundaries can also help us to define related tests and discover strategies to assess correctness. Understanding boundaries also gives us key points at which to validate the behavior and requirements of the software being tested. The four basic steps for identifying boundaries are:
• Identify the testing objective
• Find variables related to the testing objective
• Determine which variables contribute to the processing or behavior of the application being tested
• Use experimentation and analysis to isolate boundary values of contributing variables

Identify Testing Objectives
When I test software, I don't begin by looking for boundaries. Instead, I make sure that I'm finding boundaries related to a specific testing objective. A testing objective could be a goal from a test plan, confirmation of a software


requirement, a charter of exploratory testing, or confirmation of a usage scenario or a system failure mode. Focus on one test objective at a time. Try to identify boundaries of the application to help fulfill the test objective. Test objectives concerned with software behavior invariably involve finding boundaries.

Find Variables

Try to identify variables that the requirements may not mention, but which affect the related software.

Variables in requirement-based testing. Requirement-based test objectives can be related to explicit, documented requirements; implicit, undocumented requirements; product constraints; product environments or statements of business rules. Boundaries can be found based on the requirement or the implementation of the requirement. Requirement-based test objectives are used to confirm that the software conforms to the requirements. Variables are generally the parameters, options and conditions that influence the application's behavior in fulfilling the requirement. All variables related to the requirement are considered. In addition, it's critical at times to identify variables not explicitly mentioned in the requirement document, but which would affect the behavior of the software related to the requirement. For example, a report-generation requirement may describe the data and structure required on a month-end report. Report generation is also influenced by the printer page setup, margins and paper type. These variables influence the implementation of the capability, but aren't explicitly referenced in the requirement.
Variables in exploratory testing. When implementing exploratory testing, I generally divide the project into a series of testing charters, each designed to concurrently explicate and test the product. Charters help to focus testing. In exploratory testing, boundaries related to the charter are discovered. Sometimes I even define the charter of an exploratory testing session to explicitly discover the variables and associated


boundaries related to some feature or characteristic of the application being tested. Discovering and identifying the variables depends on the tester's experience, subject-matter expertise and observation skills. Some variables, especially data entered on menus, dialogs and user interface controls, are obvious. Variables related to the system under test are also relatively straightforward to isolate. But the elusive ones are those that influence the behavior of the system but aren't readily visible to the tester. Experimental approaches can be used to confirm hypotheses about the nature of possible variables related to the testing objective. For example, in an insurance application, many attributes, including age, can define a customer, some of them influencing the rate computation. Experiments can be defined to confirm the hypothesis that age influences the rate computation. First, set all attributes to fixed values, and then compute the rate several times, varying the age value. In each trial, vary the age without changing other attributes. If the rate varies, then you've confirmed the hypothesis that age is a variable in the rate computation. The opposite is not true—an invariant rate tells us nothing about the relationship between rate computations and age. The principle of concomitant variations, conceived by 19th century English philosopher John Stuart Mill, can be used to discover variables that relate to the testing charter (see the "Mill's Methods" sidebar).
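The experiment is easy to automate. Here's a minimal sketch in Java; quoteRate() and the fixed attribute values are hypothetical stand-ins for however you actually drive the system under test and read back the computed rate—they are not part of the insurance application described above:

// Hold every attribute except age fixed across trials, then watch whether
// the computed rate moves with age (Mill's concomitant variation).
public class AgeRateExperiment {
    public static void main(String[] args) {
        int[] agesToTry = {18, 25, 40, 65, 80};

        // Fixed values for every other customer attribute (made up for illustration).
        boolean smoker = false;
        String postalCode = "H3A 1A1";
        int coverage = 100000;

        double baseline = quoteRate(agesToTry[0], smoker, postalCode, coverage);
        boolean varied = false;
        for (int age : agesToTry) {
            double rate = quoteRate(age, smoker, postalCode, coverage);
            if (rate != baseline) {
                varied = true;   // the rate moved with age: age contributes to the computation
            }
        }
        // An invariant rate proves nothing either way—only variation is informative.
        System.out.println(varied ? "Age influences the rate" : "No variation observed");
    }

    // Hypothetical hook into the system under test (UI automation, an API call, etc.).
    static double quoteRate(int age, boolean smoker, String postalCode, int coverage) {
        return age < 65 ? 100.0 : 150.0;   // stub so the sketch compiles and runs
    }
}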


Variables in usage scenarios. When testing a usage scenario, I'm concerned with the user's ability to perform a task using the software. Usage scenarios are defined in terms of the work the user must do to perform the task—not on the capabilities of the software under test. In usage scenarios, I generally walk through the scenario from beginning to end, once for each possible alternate path. For example, to test the purchase of a book at Amazon.com, I check paths for each alternate flow by varying the type of books purchased and the payment terms. In the walkthrough, I take note of every time a user enters data or makes a choice. Each instance noted is considered a variable, and only those variables related to the scenario qualify. If there are fields of dialogs and menu items the user hasn't employed to fulfill the scenario, they aren't considered relevant in my hunt for boundaries.
Variables in failure-mode testing. In failure-mode testing, we study the behavior of the system in response to invalid data, unexpected conditions and a harshly constrained environment. Variables are chosen in advance, and testing observes and analyzes the application's behavior as these variables are controlled in an invalid way. Generally, failure-mode testing offers an indication of the robustness or fault tolerance of the system under test. Often, the failure-mode test aims to discover at what value a variable will cause system behavior to degenerate.

Determine Contributing Variables
Variables can be classified into two broad categories: contributing variables are those that influence the behavior of the application, and independent variables are those that don't.

MILL'S METHODS

From www.wikipedia.org: "Whatever phenomenon varies in any manner whenever another phenomenon varies in some particular manner, is either a cause or an effect of that phenomenon, or is connected with it through some fact of causation." If, across a range of circumstances leading to a phenomenon, some property of the phenomenon varies in tandem with some factor existing in the circumstances, the phenomenon can be attributed to that factor. For instance, suppose that various samples of water, each containing both salt and lead, were found to be toxic. If the level of toxicity varied in tandem with the level of lead, one could attribute the toxicity to the presence of lead.


Although all variables influence the behavior of the software being tested in some way, I'm concerned with the variables that add optimal value to the process. If I were testing the capabilities of a clothes dryer, I'd identify several variables: material type, temperature, weight, humidity, speed, color and the light switch setting. To focus on the variables more directly influencing the ability to dry clothing, I'd probably define material color and light switch settings as independent variables. I'd be tempted to consider material type as independent as well, but I'd probably consult with subject-matter experts first. A textile specialist could help me better understand the relationship among materials, temperature and humidity in the drying process. Variables of temperature, weight, humidity and speed would certainly be contributing variables.

Finding Boundaries
Now that we've identified the relevant variables, it's time to roll up our sleeves and discover the boundaries. I've used many different approaches, but I've found that the most effective are black-, white- and gray-box testing.
Black-box testing is testing an application from the outside. We generally lack specific knowledge of the design or implementation of the software. We control the application by varying the environment and controlling external parameters. And we assess correctness by observing the results, outputs and outcomes of the processing done by the application. Very often, black-box approaches are the only ones readily available to testers and should be considered key skills.
If I have requirement documents available, I research the possible ranges of values for business logic or data processing. Each range of values defines boundaries. I then confirm that these boundaries really exist in the application by trying values above, below and immediately on each value. Next, I confirm processing—that persistent stored data and reports are consistent for each variable. In my experience, boundary bugs are often due to inconsistent development or design for different aspects of the application related to the variable.
To explore for boundaries of a variable, I often try to vary the data entered or

processed. If the data entered is a string, I may try to find boundaries related to the length of the string and the amount of the string that's processed. Testing guru James Bach's Web site, www.satisfice.com, offers Perlclip, a wonderful free tool that allows you to enter strings of different lengths into the Windows clipboard. These strings can then be pasted into the data fields of dialogs. Perlclip lets you create counterstrings of different lengths. If an application truncates a counterstring, you can easily determine the number of characters processed, and thus identify a boundary.
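If Perlclip isn't available on the machine at hand, it's easy to roll your own counterstrings. Here's a rough Java sketch of the idea—not Perlclip itself—in which each asterisk sits at the position spelled out by the digits before it, so whatever survives truncation tells you how many characters were accepted:

public class CounterString {
    // Build a counterstring such as "2*4*6*8*11*14*...": the digits before
    // each '*' give that asterisk's 1-based position in the string.
    static String counterString(int maxLength) {
        StringBuilder sb = new StringBuilder();
        while (true) {
            int pos = sb.length() + 2;   // first guess: one digit plus the '*'
            // Adjust until the digit count and the position agree.
            while (sb.length() + String.valueOf(pos).length() + 1 != pos) {
                pos = sb.length() + String.valueOf(pos).length() + 1;
            }
            if (pos > maxLength) {
                break;                   // the next marker would overshoot the target length
            }
            sb.append(pos).append('*');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Paste the output into a suspect field; the last complete marker that
        // survives truncation tells you how many characters were kept.
        System.out.println(counterString(256));
    }
}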

Table 1 illustrates how a binary search can be used to identify boundaries of a variable. The software being investigated computes the tax rate based on income level; different tax rates are used for different ranges of income. The system precision is assumed to be in units of $1.00.
Boundaries of numeric variables can be determined in many ways. For example, a binary search can be useful. Start by choosing two possible values: MIN and MAX. If the behavior observed is different at MIN and MAX, identify a NEW value that is halfway between them. If behavior at NEW is the same as at MIN, replace MIN with the NEW value. If behavior at NEW is the same as at MAX, replace MAX with the NEW value. Continue this process until the difference between MIN and MAX is within the precision of the variable being explored. Caveat: This approach has some risks, since you may not converge on all of the boundaries.

TABLE 1: BLACKBEARD'S BLACK BOX
STEP | MIN | MAX | NEW | Result | Comment
1 | — | — | 0 | 0 | Low Tax Rate
2 | 0 | — | 1,000,000 | 50 | High Tax Rate
3 | 0 | 1,000,000 | 500,000 | 50 | High Tax Rate
4 | 0 | 500,000 | 250,000 | 50 | High Tax Rate
5 | 0 | 250,000 | 125,000 | 50 | High Tax Rate
6 | 0 | 125,000 | 62,500 | 25 | Medium Tax Rate
7 | 62,500 | 125,000 | 93,750 | 25 | Medium Tax Rate
8 | 93,750 | 125,000 | 109,375 | 50 | High Tax Rate
9 | 93,750 | 109,375 | 101,562.5 | 50 | High Tax Rate
10 | 93,750 | 101,562 | 97,656 | 25 | Medium Tax Rate
11 | 97,656 | 101,562 | 99,609 | 25 | Medium Tax Rate
12 | 99,609 | 101,562 | 100,585.5 | 50 | High Tax Rate
13 | 99,609 | 100,585 | 100,097 | 50 | High Tax Rate
14 | 99,609 | 100,097 | 99,853 | 25 | Medium Tax Rate
15 | 99,853 | 100,097 | 99,975 | 25 | Medium Tax Rate
16 | 99,975 | 100,097 | 100,036 | 50 | High Tax Rate
17 | 99,975 | 100,036 | 100,005.5 | 50 | High Tax Rate
18 | 99,975 | 100,005 | 99,990 | 25 | Medium Tax Rate
19 | 99,990 | 100,005 | 99,997.5 | 25 | Medium Tax Rate
20 | 99,997 | 100,005 | 100,001 | 50 | High Tax Rate
21 | 99,997 | 100,001 | 99,999 | 25 | Medium Tax Rate
22 | 99,999 | 100,001 | 100,000 | 50 | High Tax Rate / Medium Tax Rate Boundary
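Here's a minimal Java sketch of the search shown in Table 1; rateFor() is a hypothetical oracle standing in for however you observe the application's computed rate, and the stubbed rates are invented so the sketch runs on its own. Notice that a value showing a third behavior simply gets lumped onto the MIN side—which is exactly why a single run can miss the other boundary (the low/medium one in this example):

public class BoundarySearch {
    // Converges on the smallest value that shows the same behavior as MAX,
    // to the stated precision ($1.00 in the Table 1 example).
    static long findUpperBoundary(long min, long max, long precision) {
        int maxBehavior = rateFor(max);              // e.g., 50 = high tax rate
        while (max - min > precision) {
            long candidate = (min + max) / 2;        // the NEW value
            if (rateFor(candidate) == maxBehavior) {
                max = candidate;                     // same behavior as MAX: pull MAX down
            } else {
                min = candidate;                     // anything else: push MIN up
            }
        }
        return max;
    }

    // Hypothetical oracle standing in for the system under test.
    static int rateFor(long income) {
        if (income < 20000) return 0;                // low rate
        if (income < 100000) return 25;              // medium rate
        return 50;                                   // high rate
    }

    public static void main(String[] args) {
        System.out.println(findUpperBoundary(0, 1000000, 1));   // prints 100000
    }
}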

You can also try some other analytic techniques, based on applied differential calculus, that can be useful when the application computes different values in a continuous range. I urge you to study the Runge-Kutta, Newton and Newton-Raphson numerical methods to identify boundaries of continuous functions. Detailed descriptions of these approaches can be found at www.wikipedia.com.
When you're identifying boundaries, the application's behavior may vary depending on the order or progression of values tested. The application may have a bug in which memory is accidentally overwritten by data. I've tried to find boundaries by approaching them from large to small, or independently from small to large. On a number line, this would be considered approaching the value from the left or the right. The application's behavior may be different depending on the approach. It's also important to make sure each trial of the application begins at a controlled starting point, since the application's behavior varies based on the sequence of events, not just the specific values used in testing. I've occasionally had to restart each trial from scratch to help isolate a buffer overflow–related boundary.



It's possible to simulate changes to environment variables, such as the amount of free disk space, memory or network resources. There are several commercial tools that allow you to simulate varying many system parameters without modifying the actual operating environment.
White-box testing is based on studying the detailed design, code and data structures used to implement the application. When I have access to such information, I use it to complement black-box approaches. I never rely exclusively on white-box approaches, since they're based on the code and tend to confirm that the code does only what it says it does. I prefer to confirm that the code does what it's expected or required to do. I use three approaches to white-box testing to help identify boundaries: structured walkthroughs, static analysis and data-flow analysis.
In the structured walkthrough, I ask the developer to walk through and explain to me all of the code used to implement the part of the application I'm testing. I focus on how data is processed and stored in memory. I don't generally follow all of the paths through the code. Instead, I walk through the normal cases and the exceptional paths that commonly occur. Whenever I identify processing or storage of data, I take note of the types of data and valid ranges. Decisions and case statements in the code are indicators of potential boundaries. Confirm boundaries found in the code by experimenting with a running application after the walkthrough.
Look at database schemas to understand the types and relationships among records kept in persistent storage. Ranges, rules and triggers associated with elements of the database are all potential boundary conditions. Experiment with the running application to explore behavior around these boundaries.
In my white-box toolkit, I also use static analysis tools to help identify potential boundary conditions in code. A static analysis tool helps identify type casting, the cases in which the data type is changed at runtime by storing data of one type in another. For example, an integer is commonly cast into a character variable or a byte variable. Casting exposes potential boundary conditions related to the ranges of values.
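A tiny example shows the kind of boundary a narrowing cast hides; the numbers here simply illustrate Java's wraparound behavior and aren't drawn from any application discussed in this article:

public class CastBoundary {
    public static void main(String[] args) {
        int quantity = 130;                // fits comfortably in an int
        byte stored = (byte) quantity;     // narrowing cast wraps past the byte boundary at 127
        System.out.println(stored);        // prints -126, not 130
    }
}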


I also identify boundaries related to the size of the buffers used in memory to process data. Buffer sizes may be computed dynamically or assigned statically. Studying buffers in code helps identify potential boundaries.
The third white-box testing approach I use is data-flow analysis. In this approach, you examine all the places where variables are created, modified or destroyed. Each point where a data element is processed may expose a potential boundary.


Gray-box testing is a combination of black- and white-box approaches. In this method, I study the design to understand how data is processed and stored. Based on an understanding of the algorithms and data structures used, I identify potential boundaries. I then explore the behavior of the application around these boundaries. I use interviews with developers, architects and technical analysts to understand how multiple variables can combine to influence the behavior of the application being tested. Boundaries may depend on the relationship between variables as well as on the specific values of each variable.
For example, consider the task of computing valid postal rates in Canada. The rate depends on the height, width and length of a package on an individual basis (for example, a maximum and minimum boundary for each), but the validation depends on the sum of all three values. If the sum of height, width and length exceeds some value, the postal rate computation changes.
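A sketch of that combined boundary might look like the following; the limits are invented for illustration and are not Canada Post's actual rules. Cases such as (200, 50, 50) and (200, 50, 51) pass every per-dimension check and differ only in the sum—exactly the kind of boundary that single-variable analysis misses:

public class ParcelValidator {
    // Each dimension has its own min/max boundary, but acceptance also
    // depends on a combined boundary over the sum of all three.
    static boolean acceptsParcel(double heightCm, double widthCm, double lengthCm) {
        double[] dims = {heightCm, widthCm, lengthCm};
        for (double d : dims) {
            if (d < 0.1 || d > 200.0) {
                return false;                          // per-dimension boundary
            }
        }
        return heightCm + widthCm + lengthCm <= 300.0;  // combined boundary on the sum
    }

    public static void main(String[] args) {
        System.out.println(acceptsParcel(200, 50, 50));  // true: exactly on the combined limit
        System.out.println(acceptsParcel(200, 50, 51));  // false: only the sum is out of range
    }
}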

Algorithms used to solve complex problems with numeric methods often have values that lead to unstable computations. Take, for example, a result that ends up being divided by zero. Dividing by zero leads to an invalid number; any number divided by zero is said to be infinite—a value that can't be represented by the floating-point numbers in a digital computer. By studying the numeric methods used in a computation, we can also identify boundaries. A computational boundary, therefore, is a value or combination of values that leads to unstable computations.

Any Means Justify the End
Different techniques can be used to identify boundaries in a software system. The critical task is to identify which variables influence the behavior of the application being tested. Once this is done, you can determine how they contribute to the processing of the application related to the test objective. For each contributing variable, different black-, white- or gray-box approaches can be used to explore boundaries and study the application's behavior as variables or combinations of variables approach their boundary conditions.
Next month, the concluding article in this series will investigate boundaries related to the behavior of the system and aspects of quality factors, as well as the limits of system behavior under load and in harshly constrained environments. I'll also examine boundaries of load, performance, capacity, environment and stress. Finally, I'll share some techniques to identify risks associated with multiple boundaries and experiments that push code to the limits. ý






Newbie: Can I Just Say One Thing?
By Prakash Sodhani

Prakash Sodhani is a quality control specialist at a large IT organization in Texas.

e’re all familiar with the experiences that accompany a new job, because each of us has been a newbie in a job at one

Photograph by Jason Lugo

W

time or another, and each company has its own unique processes and culture to learn. Of all the years you spend in an organization, the first few months are the most interesting and challenging. This is the time when you begin to understand your new employer, your responsibilities and your colleagues. As a tester at a new job, you face some unique challenges. Here's an example.
Katie wasn't satisfied with her previous job, so she joined company X as a test engineer. She wanted to learn a variety of new technologies. She likes training, preparing for certifications and facing challenges in her daily work. In other words, she hates monotony and looks for something stimulating every day.
On her first day at company X, she met her manager, Samuel. "Welcome, Katie," he smiled. "Since this is your first day, I wanted to meet with you and answer any questions you may have."
"Thanks," Katie replied. "I do have some questions, and I'd appreciate it if you can help me with them. I understand that I have many company benefits as a full-time employee, but I wanted to know the company's policies on individual development with respect to in-house training, certifications and study sponsorships."
"Glad you asked," Samuel replied. "I was going to discuss this with you myself. Our company does have good benefits for individual and


professional development. But they’re all paid by respective departments, and because we’re under a very strict budget, we’ve reduced some of them.” “I was looking forward to a lot of training and certification work,” Katie frowned, adding, “I thought I was eligible as a full-time employee.” “I understand. But these classes and sponsorships cost us a lot of money, so I doubt it’ll be possible this year, as budget is already approved. I’ll see what we can do next year.” Katie was disappointed that she wasn’t allowed to take advantage of her benefits and revealed that disappointment to her manager, on her very first day. Was she right in insisting on what she wanted? Or should she have handled the situation differently? Testers often face similar situations in the early stages of a new job. Let’s take a look at the top 10 challenges faced by newbie testers, and explore some useful solutions.

1. Adapting to the Culture
Every company (and, within each company, each team) has its unique culture. As a tester, you must understand both. Sometimes, team and company work cultures differ to allow flexibility, sometimes they work in sync, and sometimes it's a challenge to prioritize what may be conflicting macro- and micro-cultures within the same organization. You may be used to working from 7 am to 4 pm at your old company, but at your new job, most test meetings are scheduled after business hours. Or perhaps your old organization had a relaxed atmosphere, and your new company has a stuffier image.
It's imperative to understand the work cultures of both the company and the team. This will enable you to identify areas where adjustments are needed to improve the efficiency and productivity of the individual, as well as the entire team. You should discuss any issues and concerns with your supervisor as soon as possible. Katie was right to bring up her training concerns as soon as they arose—to lay out expectations up front—but she could have been more diplomatic about it. As a newbie, you must tread a fine line between your own interests and those of your new work culture—so get used to walking that balance beam.

2. Getting to Know Your Colleagues
Programmers and testers can differ, and often do. Hostile and antisocial behavior, while often tolerated when coming from a programmer, is usually out of bounds for a tester. Because testers often take abuse from outraged programmers, they must have better people skills. Thick skin and a sense of humor are a matter of survival. You have to be diplomatic when confronting a senior programmer with a fundamental goof—and on a deadline, to boot.
Talk to your team members—find out about them and how they do things. Communication is essential to testers; they need to be comfortable and adept at communicating with business analysts, developers and all other team members. The effective tester doesn't let ego obscure the way of getting the job done. As a new member, you must be patient and sensitive to the needs of others without becoming a doormat.


3. Learning a New Process
Different companies follow different processes. Sometimes, processes differ among teams within a company. As a new team member, you need to understand the details of the process, since it affects almost every test activity. A number of project processes are followed in the industry, with variations arriving every day. These processes can differ in ways that affect the manner in which the team carries out its activities. For example, while I worked at company X, I followed the waterfall methodology. When I moved to company Y, I found they were using a Scrum-based approach. Though the fundamental aspects of the processes remain largely the same, all new processes need some adjustment. While you may be comfortable with sending test status reports every couple of weeks in a waterfall methodology, a more agile approach may require daily status updates. A new team member should take time to go through the process and adjust their activities to conform. For example, a person used to doing things at the last minute might need to change, as the new methodology might require daily reports. Failure to understand the process may lead to poor-quality work, and hence may affect the quality of the product being shipped.

THE FRESHMAN'S TOP 10 CHALLENGES
1. Adapting to the culture
2. Getting to know your colleagues
3. Learning a new process
4. Understanding the project
5. Configuring your hardware
6. Delving into the software
7. Asking questions
8. Dealing with old habits
9. Playing it cool
10. Channeling your creativity

4. Understanding the Project
Some so-called "testers" test a system with little or no knowledge about the project's goals and requirements. They could be slackers who don't care much about work as long as they get their paychecks, but perhaps the company doesn't give them sufficient opportunity to learn the details of each project. For example, if you're assigned to work on multiple projects simultaneously, you may be unable to do justice to all of your assignments.
As a tester, your primary role is to uncover errors. But first you must understand the project you're working on as thoroughly as possible. Study all documentation in detail. Report inconsistencies, omissions and errors. You may find it a challenge, but once you get a better idea of the project's goals and requirements, it will be easier to get your job done more efficiently. Testing without understanding the project is, in reality, not testing.


5. Configuring Your Hardware
Every application being tested requires careful analysis of the hardware and software. You can't find valid defects in an application unless you're aware of the system inside and out. Once a newly hired tester is assigned a project, he usually needs to plunge into the test bed, ensuring that testing is conducted with hardware configured properly,



including (but not limited to) operating system, processor, memory, graphics card, network card and the other nuts and bolts that make up a system. All too often, testers use different hardware configurations than those noted in the test plan. Sometimes, a supervisor neglects to make sure that a new hire has the correct hardware configurations for a project. Be sure to identify the nature of the project and the hardware requirements associated with it. For example, for a project involving graphics-related activities, make sure that you're equipped with the appropriate graphics card. Read all related material.

6. Delving Into the Software
Now your skeletal foundation needs a spark: the software you'll use to test the system. Again, make sure this coincides with your project's specific needs. For example, before you commit to using an automated tool, make sure it supports testing of the application under test. Let's look at a conversation between Alex, a newly hired tester, and his manager, Yan:
"I wanted to talk about the project you've been assigned, and explain our expectations," Yan began.
"Sure. I've looked at the project documents already," Alex replied.
"Great. For this project, you'll be writing automated test scripts using the tool M. I've assigned another resource for this project to do the manual testing," explained Yan. "You can see your project manager for more information about the scenarios and other questions you might have about writing automated test scripts," he continued.
"From what I read in the technical specifications," Alex replied in a concerned tone, "this application uses third-party components in most of the functionalities. But as far as I know, the tool M doesn't support scripting on that company's components unless we install a third-party add-in."
"But we've already done all the planning and don't have much time—we have a strict deadline," said Yan in a worried tone.
"I understand," Alex responded. "I'll automate whatever I can. But we

need to let the team know that we can’t automate functionalities involving Infragistics components, and figure out an alternative plan.” In this conversation, Alex was able to identify the problem before diving



into the project, thus avoiding considerable waste of time and resources. As a new team member, a tester must make sure that all the software requirements are in place as per the test plan. If you start working on a project without thorough knowledge of the application and the associated testing tools, the project will suffer, especially if it's operating under strict deadlines.

7. Asking Questions
When you're the newbie, everything is unfamiliar at first. Even with many years of experience in the field, it takes time to adjust to tasks within a new organization. It's natural for new team members to be hesitant to ask questions, afraid to reveal their ignorance. But in fact, the reverse is often true—asking the right questions can be indicative of experience and knowledge. So don't try to make a good impression by not asking too much and pretending to understand things you don't entirely grasp. That strategy can often do just the opposite.
Talking to one of my colleagues who joined our company a few months ago, I learned that he fell victim to this "shy guy" syndrome. Unsure of most of what he was doing at the onset, he asked a few questions at first, but soon fell silent and did the best he could with what he understood. Because he's a talented tester, he did all right. But in hindsight, he admitted, he could have done the job much better if he had understood the application completely—even if that involved asking "too many" questions. He also told me that since he was hired in a specialist role, he didn't want to disturb other team members or risk his reputation by asking questions that might lead some to doubt his competence.
Don't be shy. No question is a stupid question. It's imperative that you understand as much as you can about a project before you begin to test it. Ask as many questions as you need to get a quality job done. Keep the questions succinct and relevant, and remember the answers. Remember, the desire to make a good impression by not asking relevant questions can backfire if the job is not completed as expected. Though you may be hesitant to disturb your fellow coworkers, remember that they too were once wet behind the ears, and most are glad to offer assistance.

8. Dealing With Old Habits
It's easy to carry over your previous company's values to your new job. Some elements may be relevant, but some may not be, so proceed with caution. For example, a process that worked beautifully in one company might be irrelevant in another. As a newbie, don't push for something until you settle in and understand your new company's entire process.
Let's eavesdrop on a discussion between a newly hired tester, Xavier, and his supervisor, Yolanda. Xavier has just joined the team as an automation specialist. He wasn't given a VPN connection, but, per his previous experience, he thought that he needed it because he had to run and monitor nightly scripts.
"Hi, Yolanda," Xavier began. "I just wanted to ask you if I'm going to get the VPN connection."
"Well, that needs director-level approval. Also, it has to come from our annual budget, and we don't have


any bandwidth right now,” Yolanda replied. “But I think I’ll need it because I have to monitor the scripts every night from home,” said Xavier. “I understand,” Yolanda responded. “But let’s make do with whatever we have right now. If we absolutely need it in the future, we’ll see what we can do.” Xavier grimaced. “I was hoping to get it as soon as possible. It’d also help me learn more, because I can try a few new features with the tools we’re using on my own time,” he added. “Let’s just wait and see how it goes,” Yolanda responded brusquely, ending the conversation. You can see that in this exchange, although Xavier was correct in airing his initial request, he ignored his supervisor’s obvious reluctant response and continued to push. He might have needed a VPN connection at his previous company, but he hadn’t spent enough time at his new position to really know what he needed. Instead, his insistence became unreasonable and alienated his supervisor. A little tact and patience would have gone a long way to smooth this situation. As a new tester, it’s important to pick

your battles. Before you push for any request, make sure that it’s essential to your new position.

9. Playing It Cool
A tester's job is one of the most interesting in our industry. However, it's frequently viewed with less respect than it deserves. We often hear it characterized as "easy" and "nontechnical." Though this might seem silly, testers who feel disrespected can lose their cool. In a new job, a tester's highest priority is to do good work. However, depending on workload, you may not be able to do justice to all your projects. Don't succumb to time pressure by rushing. One of a tester's challenges is to keep a cool head at all times.

10. Channeling Your Creativity
We all have ideas that we think can improve the team—and we all have various reasons that we don't share those ideas with the team. The team supervisor may not care much about the team members, or the team members may not be passionate about improving the team. As a new team member, you'll be full of fresh ideas based on your technical knowledge and past experiences.

However, new ideas are best conveyed through sensitive, relaxed communication rather than overweening insistence. When I started a job as a test automation specialist, I didn’t like the way things were being done. I tried raising this issue in numerous meetings before it finally dawned on me: No one at this company cared about automation. After doing the same job for ages, team members didn’t want to tackle something new and challenging. Biding my time, I waited for the right moment to demonstrate my ideas. The opportunity arose not long afterward, when I was asked to give a demo of the scripts I’d written. It was a perfect opportunity to show how automation could reduce manual labor and increase efficiency, and my audience was enthralled. However, had I persisted in lecturing them at all those meetings, no one would have paid attention. Channeled correctly, creativity counts! If you’re the newbie, keep these 10 tips in mind and you can ease even the toughest transition to a new company. For managers, be mindful of the challenges of being the new guy. You might even want to pass these hints along. ý



Best Practices

Change Is the Thing
By Geoff Koch

Developers looking to buy software development lifecycle tools have many choices. A veritable laundry list of commercial and open-source products exists to help with gathering requirements, automating testing, deployment and other SDLC phases from soup to nuts.
But Don Cunningham, who supports the various software development tools used by programmers at Milford, Mass.–based Waters Corp., says that a good change-management tool should be the top priority, trumping even configuration and requirements management. He points out that many teams wind up maintaining just one executable—hardly enough to justify a configuration management tool, no matter the price. And capturing requirements can be done on napkins, which, he says facetiously, can be managed with little more than a "napkin organizer."
"Change management really is the heart of your development process," says Cunningham, who's been doing embedded programming since he graduated from college in 1976, including the last 11 years at Waters, which provides liquid chromatography and mass spectrometry products to food and drug companies worldwide. "It tells me what my developers and evaluators are doing. It's the glue between the two ends."

Communication, Competition And Clean Code
Cunningham certainly isn't alone in singing the praises of change management. Other software types cite a long list of reasons to embrace a sound change-management process, which include fostering good communication and healthy competition among developers, complying with regulations, delivering clean code that meets customer expectations and avoiding the need for dreaded rework.
While the market is replete with various approaches and tools, you need consider only a handful of requirements when it comes to successful change-management implementations. Among the must-haves: getting support from senior management in automating the process that ties change requests to specific edits to the code base, and choosing a tool that can be easily molded to fit a given company or development team's specific nuances.
In 1995, Nina Godbole was in charge of a sales module for a start-up engineering-oriented manufacturing company. The company, staffed by a few bright programmers and hungry for new assignments, didn't think much about its process maturity. But when its programmers working onsite at a customer's office were forced to start changing the specifications, it wreaked havoc on the back-office team in charge of testing the code.
"We were in tears and hapless with testing," says Godbole, author of the book "Software Quality Assurance: Principles and Practice" (Alpha Science International, 2004). "The testers were at a loss; they were testing versions that were becoming obsolete almost every day, given the rapid and direct changes happening to the code onsite."
The solution was to take a firm stance with customers that the programmers couldn't make any changes unless they were discussed by the core team. It was the beginning, Godbole says, of the maturing company's change control board—some version of which is required in any firm that wants to take change management seriously.

Going Company-Wide
Jim Hendricks joined AutoZone in the seriously fun late 1990s, an era marked


by well-documented business exuberance and bountiful IT budgets.
"Change management was characterized by a user picking up the telephone and saying 'I need something' and a developer saying 'Oh yeah, I can figure that out' and... writing the code and throwing it on a machine," says Hendricks, who manages the testing and change management group at the national automotive parts retailer based in Memphis, Tenn. "That was the entire process."
Hendricks remembers an unwieldy list of about 500 change requests—maintained in Excel—that ranged from hefty demands for new functionality for internal systems to pleas from individual managers to build small macros for favorite spreadsheets.
Today, Hendricks uses tools from Serena Software to keep his eye on a list of 90 projects, all of which, he says, can be characterized as "large and strategic." Trimming the list was made possible, in part, by a new requirement that all change requests be tied to some agreed-upon and prioritized business objective. And using the tool, which created a common yardstick to measure deliverables of the various AutoZone development teams, served to spur some healthy internal competition.
Steve Odland, AutoZone CEO from 2001 to 2005, helped push this more clear-eyed approach to writing and maintaining code, and Hendricks says the top-down directive was essential in overhauling the company's change management. Previously, developer teams were sent to various classes and encouraged to embrace various industry best practices, though groups invariably developed their own ways of doing things. It was Odland's insistence on companywide processes and tools that finally made the difference.


One Size Doesn't Fit All
Cunningham, a few years into a similar effort at Waters, agrees about the importance of executive support. The company, which has pursued a growth-by-acquisition strategy, uses its unified approach to change management in part to stitch together the efforts of its 200-person, multi-country development organization.
"It was pretty much top-down," says Cunningham of the 2004 implementation of Telelogic software development products, including a change management tool. "Management basically asked, 'Can't we bring the development organization together without having everyone relocate?'"
The answer turned out to be yes, and Cunningham says the common platform is helping the organization share its geographically dispersed skills, such as database management, GUI development and C# coding. The best new capability, however, is the automatic cataloging of all the nitty-gritty changes to the code base

associated with change requests. Such a record is required when doing business in the highly regulated food and drug industries. And post-Sarbanes-Oxley (SOX), the change management system of any publicly traded company may conceivably be fair game for government regulators. This potential scrutiny is enough to cause any IT manager to think about hurriedly buying a change management product and then fitting company process to the off-the-shelf capabilities—an approach that both Cunningham and Hendricks frown on.
"When we started looking at tools, we moved away from tools that forced us into one-size-fits-all," says Hendricks. "Any macro-level change management software has to be flexible enough at the micro level to allow individual groups to use the software in a way that works best for them."

Process = Drudgery?
Like many large tool makers, Telelogic was willing to tweak its tools to Waters'

own business processes, though Cunningham adds wryly that such changes racked up on the bill eventually presented by Telelogic's professional services group. Still, such customization is essential to getting organizational buy-in, especially when you're trying to sell sometimes-freewheeling coders on that dreaded idea of adopting process, a word roundly associated with drudgery.
"What I have seen in the industry for more than a decade is this: Stupid implementation of badly written processes earns a bad name for the process-oriented way of working," says Godbole. "If processes are written with a practical perspective in a way that provides a suitable tailoring to a business context, people can be convinced."
And if they can't, they can keep looking for a good way to organize all those cocktail napkins covered in scribbles of coding genius. ý

Geoff Koch covers software and science from Lansing, Mich. Write to him at koch.geoff@gmail.com.



Future Test

Let's Talk Requirements
By Robin Goldsmith

Have you ever asked yourself why an article on requirements would appear in a testing magazine? If so, you're not alone. In fact, I was so impressed by a comment I'd overheard at a recent conference that I now use it for the title of a featured speech: "I Went to a Testing Conference, and All They Talked About Was Requirements."
To create tests that demonstrate that the delivered system meets requirements, testers must know what the requirements are. Too often, though, testers are expected to create tests without having a clue about the requirements.
Without a reliable requirements basis for determining what content to demonstrate, testers tend to be relegated to guessing what the system is supposed to do. The tester's experience with the organization and with testing in general can increase the chances of good guesses, but guessing is bound to miss things. Since a system can be only as good as its requirements, for years many authorities (and tool vendors) have touted requirements management, and by implication requirements management tools, as the biggest single determinant of effective system development. Certainly, both developers and testers have a better chance of delivering quality when the tool enables them to know what the requirements are.
Despite efforts such as exploratory testing, which characterize testing without requirements as a preferred practice, the fact that such testing often reveals errors is more an indictment of poor development than a testament to the testing itself. Of course, defects are especially likely when developers also proceed with inadequate understanding of requirements.

Format Over Function
A tester's ability to find out the requirements is similarly affected by format and content aspects. Format factors include organizational structure, politics and physical logistics. At conferences, much attention is devoted to lamenting organizational failure to provide testers with the necessary requirements information, but seldom provides assistance beyond sympathetic exhortations that testers should be treated better and that others should care more about quality.
Logistics—the physical ability to access documented requirements—are easier to address than politics, and can be automated. At a minimum, the requirements must be captured in some retrievable form: on paper, for example, in an electronic word processing document or spreadsheet, or carved in stone tablets.

GUI vs. Guts
The less the tester knows about the system's intended "guts"—the functionality that provides most of the system's value—the more that testing is likely to concentrate on graphical user interface format characteristics. While usability is surely important, it's relevant mainly within the functionality's context, which stems from the requirements.

Content in Context
Requirements management tools support administrative/clerical activities but have little to do with the more important content issue of whether the requirements are correct. Testers wishing to address issues of content generally do so from the perspective of their potential


role as reviewer of requirements that someone else has defined. To date, tools have mainly been incidental to reviews, primarily just using an existing requirements-management tool to capture review comments next to stored requirements. Some newer tools analyze requirements text to identify clarity and consistency problems, which is often characterized as content but is mainly format. That is, a requirement can be clear and consistent but wrong, and clarity and consistency are irrelevant for requirements that have been overlooked. Additionally, some tools are based on analyzing designs, which may be called requirements but are not. Again, the most commonly articulated challenge is often the reluctance of organizations to involve testers in requirements reviews. Such practices may reflect bigger issues that testers seldom recognize. For instance, testers who lack relevant subject area knowledge may be perceived (rightly) as not contributing to the review. Moreover, conventional testing-industry wisdom is that testability is the main requirements issue. Testability is a form of clarity. But others characterize the testers’ harping on testability as trivial nitpicking, which may warrant the exclusion of testers from reviews. However, I’ve noticed recent trends shifting emphasis toward defining requirements content. Testers increasingly seem to be deciding defensively that the only way to ensure having the requirements they need is to define them. This raises issues such as duplication of effort and whether testers have suitable skills, and impacts on testing because the time that was previously spent on testing is diverted to requirements definition. Also, several prominent tools aim at assisting requirements content definition, and I think that such tools will gain the attention that the format-oriented requirements tools never achieved. But the most important requirements tool is the one we carry between our ears. ý Robin F. Goldsmith, J.D., is president of Go Pro Management consultancy, creator of the Proactive Testing methodology, and an author. Reach him at robin@gopromanagement.com. MARCH 2007


