stp-2007-02


Publication

VOLUME 4 • ISSUE 2 • FEBRUARY 2007 • $8.95 • www.stpmag.com

Basics Of Sound Defect Tracking Building Blocks For Better Bug Hunting Winning Tactics For .NET Quality

Do Security Testing Boundary Defects: Data, Input and Requirements

BEST PRACTICES



“A conference with something for everyone.” — Scott L. Boudreau, QA Manager, Tribune Interactive

Attend The

SPRING April 17-19, 2007 San Mateo Marriott San Mateo, CA "This is absolutely the best conference I have attended. The instructors were extremely knowledgeable and helped me look at testing in a new way.” —Ann Schwerin QA Analyst, Sunrise Senior Living

A BZ Media Event

April 17-19, 2007 •


• OPTIMIZE Your Web Testing Strategies • LEARN How to Apply Proven Software Test Methodologies • NETWORK With Other Test/QA & Development Professionals • ACHIEVE a Better Return on Investment From Your Test Teams • GET the Highest Performance From Your Deployed Applications The Software Test & Performance Conference provides technical education for test/QA managers, software developers, software development managers and senior testers. Don’t miss your opportunity to take your skills to the next level. Take a look at what your colleagues had to say about the last two sold-out conferences:

Register By Feb. 23 To Take Advantage Of The Super Early Bird Rates!

“I learned things I didn’t know existed! I met people from all ranges of QA, all of whom were brimming with information they were willing to share. 25-50% of the things I learned were in between classes.”

“This conference is great for developers, their managers, as well as business-side people.”

— Rene Howard, Quality Assurance Analyst IA System

“This conference is a wonderful tool to gain insights into the QA world. A must-attend conference!”

“This was the most practical conference I have been to in 18 years.”

— Ginamarie Gaughan, Software Consultant Distinctive Solutions

— Mary Schafrik, Fifth Third Bank B2B Manager/QA & Defect Mgmt.

—Steve Margenau, Systems Analyst Great Lakes Educational Loan Services

“The Conference was quite useful. If you get one impact idea from a conference, it pays for itself. I got several at the ST&P Conference.”

“Excellent conference — provided a wide range of topics for a variety of experience levels. It provided tools and techniques that I could apply when I got back, as well as many additional sources of information.”

— Patrick Higgins, Sr. Software Test Engineer SSAI

— Carol Rusch, Systems Analyst Associated Bank

For Sponsorship and Exhibitor Information Contact David Karp at 631-421-4158 x102 or dkarp@bzmedia.com

San Mateo Marriott • San Mateo, CA

Register Online at

www.stpcon.com



The days of

‘Play with it until it breaks’ are over!

Introducing: ®

TestTrack TCM The ultimate tool for test case planning, execution, and tracking. How can you ship with confidence if you don’t have the tools in place to document, repeat, and quantify your testing effort? The fact is, you can’t. TestTrack TCM can help. In TestTrack TCM you have the tool you need to write and manage thousands of test cases, select sets of tests to run against builds, and process the pass/fail results using your development workflow. With TestTrack TCM driving your QA process, you’ll know what has been tested, what hasn't, and how much effort remains to ship a quality product. Deliver with the confidence only achieved with a well-planned testing effort.

• Ensure all steps are executed, and in the same order, for more consistent testing.
• Know instantly which test cases have been executed, what your coverage is, and how much testing remains.
• Track test case execution times to calculate how much time is required to test your applications.
• Streamline the QA > Fix > Re-test cycle by pushing test failures immediately into the defect management workflow.
• Cost-effectively implement an auditable quality assurance program.

Download your FREE fully functional evaluation software now at www.seapine.com/stptcmr or call 1-888-683-6456. ©2007 Seapine Software, Inc. Seapine TestTrack and the Seapine logo are trademarks of Seapine Software, Inc. All Rights Reserved.


VOLUME 4 • ISSUE 2 • FEBRUARY 2007

Contents

14

A

Publication

COVER STORY

The ABCs of Defect Tracking

A common-sense approach to finding and tracking bugs can streamline and simplify this fundamental and critical aspect of application life-cycle management. By Pat Burma

20

Conquer The World of .NET Testing Throughout testing, a little .NET-specific knowledge can help unearth problems in enterprise Windows and Web apps, as well as XML-based Web services. By Dan Koloski

26

Depar t ments

On the Field Of Finite Boundaries

7 • Editorial

This first of three articles on boundary testing, risks and bug sources takes a close look at detecting errors based on software requirements, input and data processing. By Rob Sabourin

On football, baseball—and boundaries.

8 • Contributors Get to know this month’s experts and the best practices they preach.

37

‘Testers Don’t Do Security Testing’

10 • Out of the Box

At first a skeptic, now I’m a convert: Though it was tough to admit, I know now that the full life-cycle approach is the only way we can achieve truly secure software. By Elfriede Dustin

43 • Best Practices

New products for developers and testers.

Let’s all move beyond make.

By Geoff Koch

46 • Future Test The future demands teamwork—so try these tips to boost collaboration. By Murtada Elfahal



Perforce

Fast Software Configuration Management

Introducing Time-lapse View, a productivity feature of Perforce SCM. Time-lapse View lets developers see every edit ever made to a file in a dynamic, annotated display. At long last, developers can quickly find answers to questions such as: ‘Who wrote this code, and when?’ and ‘What content got changed, and why?’ Time-lapse View features a graphical timeline that visually recreates the evolution of a file, change by change, in one fluid display. Color gradations mark the aging of file contents, and the display’s timeline can be configured to show changes by revision number, date, or

changeset number. Time-lapse View is just one of the many productivity tools that come with the Perforce SCM System.

Download a free copy of Perforce, no questions asked, from www.perforce.com. Free technical support is available throughout your evaluation.


Ed Notes VOLUME 4 • ISSUE 2 • FEBRUARY 2007 Editor Edward J. Correia +1-631-421-4158 x100 ecorreia@bzmedia.com

EDITORIAL Editorial Director Alan Zeichick +1-650-359-4763 alan@bzmedia.com

Copy Editor Laurie O’Connell loconnell@bzmedia.com

Contributing Editor Geoff Koch koch.geoff@gmail.com

ART & PRODUCTION Art Director LuAnn T. Palazzo lpalazzo@bzmedia.com

Art /Production Assistant Erin Broadhurst ebroadhurst@bzmedia.com

SALES & MARKETING Publisher Ted Bahr +1-631-421-4158 x101 ted@bzmedia.com

Associate Publisher David Karp +1-631-421-4158 x102 dkarp@bzmedia.com

Advertising Traffic Phyllis Oakes +1-631-421-4158 x115 poakes@bzmedia.com

Marketing Manager Marilyn Daly +1-631-421-4158 x118 mdaly@bzmedia.com

List Services Nyla Moshlak +1-631-421-4158 x124 nmoshlak@bzmedia.com

Reprints Lisa Abelson +1-516-379-7097 labelson@bzmedia.com

Accounting Viena Isaray +1-631-421-4158 x110 visaray@bzmedia.com

READER SERVICE Director of Circulation Agnes Vanek +1-631-421-4158 x111 avanek@bzmedia.com

Customer Service/Subscriptions +1-847-763-9692 stpmag@halldata.com

Cover Photograph by Gino Santa Maria

President Ted Bahr Executive Vice President Alan Zeichick

BZ Media LLC 7 High Street, Suite 407 Huntington, NY 11743 +1-631-421-4158 fax +1-631-421-4130 www.bzmedia.com info@bzmedia.com

Software Test & Performance (ISSN- #1548-3460) is published monthly by BZ Media LLC, 7 High St. Suite 407, Huntington, NY, 11743. Periodicals postage paid at Huntington, NY and additional offices. Software Test & Performance is a registered trademark of BZ Media LLC. All contents copyrighted 2007 BZ Media LLC. All rights reserved. The price of a one year subscription is US $49.95, $69.95 in Canada, $99.95 elsewhere. POSTMASTER: Send changes of address to Software Test & Performance, PO Box 2169, Skokie, IL 60076. Software Test & Performance Subscribers Services may be reached at stpmag@halldata.com or by calling 1-847-763-9692.


How Software Is Like Football

Edward J. Correia

Now that football season is over, developers can start looking forward to the really important stuff—like baseball.

It’s easy to see the relationship between football and software development. Football, like software, is all about boundaries. Teams begin on their side of a line, usually in their own territory. They try to move into their opponent’s territory, in increments of at least 10 yards. Every gain of 10 yards or more is rewarded.

Software applications begin within the walls of a company and try to encroach on a marketplace occupied by competitors. Developers try to increase market share, and in doing so gain pride, profits and other rewards.

Football players must stay within the bounds of the playing field. They must not cross into the opponent’s territory before play begins. They must begin play within a certain amount of time, and must reach the end zone or otherwise score more than their opponent before the game runs out.

Software applications must do what’s expected of them. They must open, edit and save documents, if that’s their mission, and must not be released prematurely and in unstable condition. However, they must also be released on time, so they can compete at critical times of the year without letting opponents gain an advantage.

Finally, football players must not commit fouls against the other team. They must comply with regulatory constraints that, if broken, distance them from their goal or put them at risk of being thrown out of the game by the officials.

Likewise, applications must not crash. They must not be difficult to use, must not operate slowly, and must never interfere with other applications. And the companies developing them must sometimes be accountable to the government for their practices.

Closing the Door—And the Mouth

Aside from the methodology of the gridiron, Rob Sabourin’s fine article on boundary testing got me thinking about other limits. If you were designing a radio tuner, you’d make sure it tried to receive only frequencies within the commercial bands. And an inventory program shouldn’t allow negative values.

There are physical boundaries, too. We try to stay within the bounds of the roads each day as we travel to and from work, school or the coffee shop. We set up personal boundaries. Don’t stand too close to the person in front of you in line. We close the bathroom door as we shower; the bedroom door as we dress. We’re mindful of our checking account balance, credit card limits and how much milk is in the fridge.

Boundaries also define our interactions in the workplace. It’s ill-advised to tell off-color jokes, criticize someone’s apparel or correct a subordinate in front of others. It’s sometimes against company policy to discuss salary, post political cartoons or display religious symbols in public areas. And it’s always a bad idea to drink too much at company parties—trust me on that one.

But while I understand football, my real passion is the Great American Pastime, a game with far fewer boundaries than football. In fact, one of the things I like most about baseball is a boundary that does not exist—that of time. A baseball game can last all day and all night. And to me, the more baseball, the better. No wonder it’s played on a diamond. ý



Contributors

February’s lead story takes a broad look at defect tracking, a critical part of any application development project. Beginning on page 14, the article covers best practices from the basics to the advanced. It was written by PATRICK BURMA, a product specialist at Seapine Software, which develops and markets issue and configuration management tools. A seven-year software industry veteran, Burma specializes in change management and application lifecycle management, and has lectured at several software development conferences. Read his blog at blogs.seapine.com/pat.

Performance Issues? Get the cure for those bad software blues. Don't fret about design defects, out-of-tune device drivers, off-key databases or flat response time. Software Test & Performance is your ticket to runtime rock and roll.

We can help... Subscribe Online at www.stpmag.com

While .NET simplifies memory management and other aspects of application development, Microsoft’s managed environment also introduces challenges for the tester unlike those of testing native apps. In our second article, which begins on page 20, DAN KOLOSKI presents strategies for functional-, regression- and load-testing .NET applications. Koloski has extensive experience in CRM, content management and e-commerce application design, testing and deployment. He’s currently the director of strategy and business development for the Web business unit of Empirix, which offers testing and monitoring solutions.

ROBERT SABOURIN, a 25-year software development industry veteran, begins a three-part series on boundary testing with a look at risks based on software requirements, input and data processing. The article begins on page 26. Sabourin is president of AmiBug.com, a Montrealbased consultancy that helps companies implement effective processes for successful commercial software development. An adjunct professor of software engineering at McGill University, he has published numerous articles as well as a children’s book on software testing, and is a sought-after speaker at conferences.

You’re doing memory leak detection, guarding against SQL injection and testing user roles/permissions. Your cross-site scripting tests are second to none. And you’re researching new and upcoming security tests all the time. While you may think you’re doing enough, some experts say you’re just scratching the surface. ELFRIEDE DUSTIN presents a six-phased approach to deploying secure applications. Author of “Effective Software Testing” (Symantec Press, 2006) and a number of other books on software security, Dustin is an independent software testing and QA consultant in the Washington D.C. area. TO CONTACT AN AUTHOR, please send e-mail to feedback@bzmedia.com.



Parasoft SOAtest (TM)
Verifies Web services interoperability and security compliance. SOAtest was awarded “Best SOA Testing Tool” by Sys-Con Media readers.

Parasoft Jtest (TM)
Verifies Java security and performance compliance. Judged InfoWorld’s Technology of the Year pick for automated Java unit testing.

Parasoft WebKing (TM)
Verifies HTML links, accessibility and brand usage, and manages the key areas of security and analysis in a single integrated test suite.

(Diagram labels: Web Services, Application Server, Database Server, Application Logic, Presentation Layer, Legacy, Thin Client, Website.)

Improving productivity can sometimes be a little sketchy.

Let Parasoft fill in the blanks with our Web productivity suite. Parasoft products have been helping software developers improve productivity for years. Jtest, WebKing and SOAtest work together to give you a comprehensive look at the code you’ve written, so you can be sure you’re building to spec, the new code doesn’t break working product, and any problems can be fixed immediately. Which means you’ll be writing better code, faster. So make Parasoft part of how you work today. And draw on our expertise.

Go to www.parasoft.com/STPmag. © Parasoft Corporation. All other company and/or product names mentioned are trademarks of their respective owners.


Out of the Box

Infragistics NetAdvantage No Longer a Facelet Enemy

Infragistics now includes support for Portlets and Facelets in NetAdvantage for JSF 2006 Vol. 2, the latest version of its tool set for building user interfaces for Java EE applications with JavaServer Faces components. JavaServer Faces (JSF) is a Web application framework intended to simplify the development of user interfaces for Java EE apps. Using JavaServer Pages by default, JSF includes a default set of UI components and tag libraries for defining interactions between JSP pages and the JavaServer Faces within them. JSF incorporates an API for representing user interface components and handling their state, and specifies a server-side event model. Facelets offer an alternative method for building JSF applications that doesn’t rely on JSP (or any other) Web containers, nor require XML configuration files or learning a new XML schema. NetAdvantage includes a set of AJAXenabled interface controls for editing, navigation, menus, trees, tabs, explorer bar and grids, the latter of which also now supports hierarchical data views. All com-


NetAdvantage for JSF 2006 V2 uses JSF to help build UIs for Java EE apps.

ponents are customizable. According to documents that accompanied the news, the AJAX-enabled grid can now present more complex datasets of unlimited depth and width. Nested grids may be collapsed and expanded, with data available for paging and sorting. Grid columns and rows may now be locked à la Microsoft Excel, permitting grids to be split into sections with scrolling capabilities. The company also released TestAdvantage for Windows Forms 2006 Vol. 3, an update to its libraries for testing the presentation layer of applications built with NetAdvantage for Windows Forms controls. According to Infragistics, Vol. 3 is the first tool that permits the creation of automated data-driven user interface tests for Windows Forms applications built with NetAdvantage. The tool supports Hewlett-Packard (formerly Mercury) QuickTest Professional versions 8.2 through 9.1.

Master and Commander Of Back End Build automation company Electric Cloud has released ElectricCommander, a system for automating build, package, testing and deployment—the back-end phases of enterprise software development. ElectricCommander centers around a multithreaded Java server engine for Linux and Windows that provides developers with synchronization services and production resources. This permits the project team to build and test prior to check-in. The system’s AJAX-enabled, browser-based access allows different teams on disparate platforms and projects to collaborate and share project components. The system allows procedures to be nested, permitting one procedure to invoke another rather than being restricted to calling up a command. Steps, jobs, procedures and other project items are stored in ElectricCommander inside virtual projects, along with metadata to facilitate asset identification and reuse, according to the company. Following the completion of each process step, an analytics engine records information from the step’s log file for later reference, diagnostics and regulatory compliance. Information such as number of compilations, test runs and failures is available in real time or for later trend analysis and reporting. Existing build scripts can be migrated into ElectricCommander, the company claims, or used as-is to activate ElectricCommander capabilities from a command line. Included integrations with a company’s existing SCM systems can be customized and extended. The ElectricCommander system also makes use of client agents for Linux, Unix and Windows; IE 6 and Firefox 1.5 browsers are supported. FEBRUARY 2007


Quality Center 9, the Perforce Is With You SCM system developer Perforce in January introduced the Perforce Defect Tracking Gateway, a set of components for integrating the company’s Fast Software Configuration Management system with the issue-tracking products of third parties. The initial release will include support for HP Quality Center 9.0 (formerly known as Mercury Quality Center). Defect Tracking Gateway includes a

graphical editor for creating and managing field maps between Fast SCM and the defect management system. A replication engine moves data between the two systems and keeps everything in sync. Synchronization can be either one-way or bidirectional. Changelists, created and used by the Perforce system, describe all changes made to files, and also can be replicated.

According to the company, demand was highest for integration with Quality Center, but other defect-tracking systems will follow. The company did not specify which would be next. Defect Tracking Gateway is included in Perforce Server 2006.2 for Windows XP, available now; pricing starts at US$800 per seat. A fully functional 45-day evaluation version is also available now.

Lattix LDM 3.0 Plots Dependencies Across Multiple Application Domains If you’re developing complex applications and tracking dependencies manually, you’re spending more time than necessary to manage application changes. Addressing this issue is Lattix, with its Lightweight Dependency Model. The company in January began shipping LDM 3.0, which broadens the tool’s capability to analyze dependencies of a single application to include multiple domains, including databases, services and other apps. According to Lattix president and founder Neeraj Sangal, the new capabili-

ty lets developers assess how changes to one part of a system will affect the remaining parts of the system. “For example, it is now possible to answer the questions such as ‘Which of my applications will be affected by changing a particular stored procedure in my database?’” he said. LDM analyzes and illustrates complex systems using Dependency Structure Matrix (DSM), a decades-old technology that came to prominence in the 1990s when MIT used it to model complex processes at Boeing, General Motors and

LDM 3.0 analyzes dependencies of multiple domains, including databases, services and other apps. FEBRUARY 2007

Intel. Lattix claims its product to be the first to apply DSM to software systems. Also new in version 3.0 is an enhanced dependency model that now supports the creation of rules, enabling developers to better visualize and manage database architectures. The models can now display and set relationship rules between such database elements as schemas, tables, views, stored procedures, packages, sequences, synonyms and triggers. This new capability also supports system dependencies stored in configuration files, such as those used by Hibernate, the o/r mapping framework for Java and .NET. LDM 3.0 now includes these mappings in its high-level dependency views. A new import feature facilitates the introduction of a company’s systems, configurations and processes to LDM. Called Lattix Data Import (LDI), it’s an XML specification that “allows users to load dependency information from different languages, configuration files and proprietary tools,” according to a company document. This enables the development team to get an end-to-end view of its development process and its dependencies. Lattix offers LDM 3.0 starting at US$495 per seat, including Web-based reporting of metrics, violations and incremental changes. Modules are available for Java, C/C++, .NET, Oracle, Hibernate and LDI. Send product announcements to stpnews@bzmedia.com www.stpmag.com •



Advertisement

SECURITY

Hackistan leader shakes confidence of I.T. world. Conventional firewalls unable to withstand expected onslaught.

The conclusions of the Hackistan Study Group (HSG) offer an alarming assessment of the computer hacking threats posed by this rogue nation. (The dangers from Hackistan’s pursuit of weaponsgrade body odor were not covered in this report.) Ever since Zorkul, the self-appointed Lifetime Despot of Hackistan, refused entry to United Nations virus inspectors in 2004, security experts have been concerned about efforts to topple the global financial system via a grand plan to switch identities between the world’s richest and poorest people. Now, the potential for catastrophe looms larger, as the report cites “an alarming investment in Hackistan’s elite Bot Army.” It noted that “the growing sophistication of their logic bombs, Trojans and SQL injection techniques is gravely disturbing. It threatens the fabric of modern society, particularly polyester.” Many are banking on California-based Fortify Software, a leader in software security, to neutralize these threats. Commenting on Fortify’s groundbreaking approach, the report said that “protecting applications at the code level is increasingly being viewed as the only viable path to creating confidence in a very dangerous world.”

One expert, who did not give his name out of concern that the muchfeared Hackistan secret police would sell his Social Security number on eBay, spoke favorably about the company: “Fortify’s products offer a radical improvement over the patch-and-pray, surround-and-surrender approaches that have been unable to stop the cross-site scripting and other advanced attack techniques that come out of Hackistan Institute of Technology (HIT).”

Lifetime Despot Zorkul of Hackistan

“The study group warned against pro-Hackistan propaganda that appears on such Web sites as www.discoverhackistan.com.” Meanwhile, personal tensions are also rising, as the enmity between Fortify’s CEO John M. Jack and Zorkul, who is reportedly obsessed with creating underage fake friends for his mySpace page, is reaching a new level. Contacted at Fortify headquarters, Jack said, “Hackistan’s threats are real. I don’t want to minimize them, but faced with our powerful technology, Hackistan will be toast, if their pathetic country even had a toaster.”

Leading the fight against Hackistan is an innovative high-tech company called Fortify Software. The company said it will not rest until Hackistan is turned into a Club Med vacation spot.

REPRINTED FROM GLOBAL SECURITY UPDATE, JANUARY 2007 • JOIN THE FIGHT AGAINST HACKISTAN • GO TO WWW.FORTIFYSOFTWARE.COM.



HACKISTAN Gross national product: From legal activities, $5MM. From illegal activities, $167 billion Per capita income: 99% live on less than $10/week; 1% cavort like Donald Trump


Main industries: Key logging, yak jerky production, phishing Counterfeit ATM cards per capita: 17.3 Chief exports: V1a@GRA and Ciali s National bird: Roasted vulture National anthem: “I Sing of Proud Hackistan, Land of My Mother’s Facial Hair”

© 2007 Fortify Software Inc.


By Pat Burma

Defect tracking is a fundamental and critical part of application life-cycle management, and the defect-tracking system is the central collaboration hub. However, its features are often underused by software development and QA teams, with much functionality remaining untapped. This can be remedied by implementing a few simple practices throughout the defect-tracking process.



Contrary to common wisdom, the implementation of best practices will not commit you to one inflexible, unchanging set of processes. Instead, think of best practices as a management approach based on continuous learning and continual improvement.

The Fundamentals Whether you use a pencil and paper, a spreadsheet or a full-fledged defecttracking system, the basics of effective defect tracking are the same. Implementation of these practices alone can provide a solid foundation

steps, features or capabilities, users either try to use every bell and whistle, or avoid it altogether. This can frustrate users, particularly non-technical ones who simply need to add and track defects. Configure your system for ease-of-use with automation and workflow definition whenever possible. Display information at the decision point. While it’s important to capture essential information during the defect-tracking process, it’s also crucial to keep the input screens clean and easy to understand. Context-sensitive screens greatly

don’t help developers track and fix defects, or help testers understand the nature of a problem, remove them or make them optional. Capture report data. Users and managers will want to generate reports from the data captured by the defecttracking system. When the application is set up, fields should be added to capture information that helps with reporting. Information such as date reported, user data and organizational data (for example, the department that reported the defect) may not be useful when fixing a defect, but will

A Common-Sense Approach to Finding Bugs Can Streamline and Simplify the Process

for a successful defect-tracking project. Once you understand the fundamentals, you can then begin to integrate additional practices and fine-tune them for your organization.

Unify tracking methods. Make sure that developers and testers follow the same defect-tracking methodology and use the same tools to manage change. Be sure to involve and get feedback from all the groups involved in the defect-tracking process, including development, quality assurance, customer service, field representatives, project managers, partners, managers and end users. By asking for input, you’re making each group a stakeholder in the system, boosting your project’s odds.

Keep it simple. The need for simplicity is often overlooked when implementing a defect-tracking system. Even the most sophisticated applications do little good if they’re sitting on the shelf. The process should make it easy for users to report defects. They should be able to interact with the system easily and with as little frustration and overhead as possible. A system that is easy to use also encourages users to participate more in the defect-reporting process. It may seem like common sense, but when a system is laden with numerous extra

improve ease of use. Don’t overwhelm your users with 100 fields on a single screen. Instead, add information as the issue moves through its life cycle. For example, instead of displaying fix information when a defect is first entered, display those fields only when fix information is being entered. Some users may never need to enter this type of data, so make these details visible only to relevant users. And while simple is always better when it comes to the user interface, don’t omit information needed to reproduce a defect and track its change history over time in the process of creating a simple input form. Use relevant terminology. When creating an input screen to capture defect information, remember that each field’s description must be abundantly clear. System administrators should spend as much time as necessary on the initial configuration of the system, working with project managers and team members to ensure the use of terminology that is familiar to the organization. If users add defects and see fields that are unclear or that they don’t understand, the defect will lack essential data. Try to capture only the most relevant information. If you see fields that

provide valuable reporting data later. Users can create trend reports that track when defects are spiking, which users are reporting the most defects, which teams have the most issues, and so on. Project managers will be able to allocate and adjust resources based on these trends. For example, if the server application team is generating more defects than the client application team, development and QA resources can be shifted to help the server team and prevent delays from gumming up the project schedule. Write clear, reproducible defects. As I was growing up, Mom’s favorite saying when she thought I was watching too much television was “garbage in,




garbage out.” The same phrase can be applied to defect tracking.

It’s vital to collect relevant, useful information because a system’s benefits are only as good as the information put into it. When incorrect or incomplete information is entered, development and QA teams waste time tracking down defects or requesting clarification about poorly documented problems.

Information accuracy can be hard to enforce. How do project managers make sure users provide good information? Most defect-tracking tools include features to make defect reporting more accurate and to assist in capturing the information needed to find and fix a defect. Use required fields to ensure the correct level of detail is captured, configure e-mail to notify customers about problems with their defect reports, or use alerting mechanisms to notify users when the defects they reported are fixed.

FIG. 1: DEFECT WORKFLOW (diagram). Open (not assigned; can auto-assign) > Assign (assigned to a team member to fix) > Estimate (the assigned team member estimates the effort to complete the task) > Fix (the assigned team member fixes the defect) > Needs Verification (the assigned team member must verify the defect) or Needs Customer To Verify (the customer verifies the defect) > Release to Testing (the assigned team member tests the defect) or Release to Customer Testing (the customer tests the defect) > Save & Assign (once fixed, the defect is assigned to another team member to verify) or Customer Verify Defect (the customer verifies the defect is fixed) > Verify (the team member verifies the fixed defect and chooses either “Pass” or “Fail”) > on Pass, Closed (the defect passes the verification process and is closed); on Fail, Reopen (the team member reopens a defect and can assign it). A defect may also simply be fixed and closed. Key: can auto-assign; optional before fix; optional save & assign; optional life cycle.

Reduce ambiguity with screenshots. Software defects often have a visual component that can’t be adequately described. Attach screenshots of failures to reduce ambiguity and confusion. A screenshot can simplify the defect-reporting process so dramatically that some organizations require them. Screenshots can be especially useful when team members don’t speak the same language. One caveat to attaching screenshots: High-definition image files, such as BMP or TIFF, can use a large amount of disk space. Compressed file formats, such as PNG or GIF, are good alternatives.

Avoid defect duplication. One defect report is helpful—10 reports about the same problem are not. To minimize duplication, users should query the database prior to submitting a defect. Make sure to run a few simple queries in the database to determine if the defect already exists there. An uncluttered defect database makes it easier to manage projects and provides accurate statistical representations of product quality.

By avoiding duplication, users won’t spend unnecessary time researching defects that have already been fixed. Although defects can be reported multiple times, they should be added to the same defect report. If not, they could be assigned to multiple programmers to correct without realizing the issues are duplicates. When a project manager generates statistical reports about the health of a project, the number of reported defects, the criticality of defects, or the rate at


which defects are reported and fixed, the data may be skewed by duplicates. For example, a project with 15 critical defects may have only five critical defects if one issue was reported multiple times by 10 different users. Merge duplicate defects if the defect-tracking tool supports record merging. Merging doesn’t delete information from the database and still allows users to track reported issues. Users can also easily identify which issues customers and other users have reported most frequently. This information is useful when prioritizing issues and requirements. For instance, a critical defect that 10 customers reported may have a higher priority than an equally critical defect that was reported by only one customer. Match the team’s workflow. The purpose of a defect-tracking workflow is to move issues from initial reporting to resolution. When a defect is reported, an organization may require the following to occur: Verify the defect. Is it really a defect? Is it reproducible? Allocate resources to fix the defect. How much time will development and QA need? How much will it cost? How long will it take? Release the fixed defect. When will it be released? Who approves releasing the change into a build? How are code changes moved into new builds? These questions, which affect project management, software development, QA and release management, can all be enforced through a defecttracking workflow. For example, when a defect is added, it must be reviewed by a QA team member to ensure its accuracy and authenticity. Once QA determines that a defect exists, a project manager must prioritize and assign the defect to a developer to fix. After the defect is fixed, QA must test and verify the fix. A build manager must then ensure that the fixed defect is released to the next build. A customer may even perform an acceptFEBRUARY 2007



ance test on the issue and verify the fix. Finally, the defect is closed after the fix is verified in the latest build or release. A defect-tracking workflow can help define and enforce changemanagement policies while providing traceability and accountability throughout the defect-tracking process. The workflow defines states for the life cycle of the defect being tracked, with each state indicating a change to the defect. Workflow configuration also determines what type of information is captured as the defect moves from open to closed. Some information, such as who reported it, the due date, what changed for a fix and what version to release the defect fixes into, may also change. This information isn’t captured when the defect is added, but is gathered as the defect moves through the workflow. It’s imperative to configure an enforceable workflow that captures all the information needed to manage the defect throughout the life cycle, from being reported to merging in code changes to releasing the next build containing the fix. When implemented correctly, the workflow ensures that all team members know their roles and assignments. After logging into the defect-tracking system, developers can be presented with a list of their assigned defects that need to be fixed, and QA staff can be presented with a list of fixed defects that are pending testing and verification for specific builds or releases. Roles will become better defined, information will be exchanged more effortlessly and with better accuracy, and teams in different departments will learn to work together to find and fix defects. Instead of countless e-mails, properly documented records that register dates, times, user effort and user interaction will all be easily accessible. This keeps project members up-to-date on exactly who’s doing what at any given time, which tasks have already been completed and which still need to be done. Comply with regulations. Many organizations are implementing com-

BASICS OF BUG BUSTING

Information capture: Guidelines should be established that determine the minimum amount of information necessary to report a defect. For example, determine what information is needed on descriptions, products and screenshots. Defect tracking must be simple enough so that people will use it, but must be adequate to capture vital information about the problem.

Reproduction: To verify that a defect has been fixed, a user must first be able to reproduce it. While you can fix a non-reproducible defect, it’s a very difficult and time-consuming task.

Prioritize and schedule: After a defect is found, it must be prioritized and scheduled to be fixed. Often this is subjectively determined by the severity of the problem as seen by the project manager.

Communication: An open line of communication can facilitate constructive dialog between the person who reported the defect and the person who’s responsible for fixing it.

Environment: Some defects exist only in specific environments. To provide thorough testing, the QA team must identify and test all possible hardware/software combinations.
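The environment point above lends itself to a quick illustration: enumerating the hardware/software combinations to schedule is a small cross-product. A minimal sketch follows; the values are purely illustrative and are not from the article.

```python
# Minimal sketch: enumerate environment combinations to schedule for testing.
# The values are illustrative examples, not a recommended support matrix.
from itertools import product

operating_systems = ["Windows 2000", "Windows XP"]
browsers = ["Internet Explorer 6", "Firefox 1.5"]
processors = ["Intel", "AMD"]

for n, combo in enumerate(product(operating_systems, browsers, processors), start=1):
    print(f"{n:2d}. " + " / ".join(combo))
```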

pliance measures, such as those required by Sarbanes-Oxley (SOX), which puts demands on information gathering, data integrity, process definition and policy enforceability. A defect-tracking tool with workflow capabilities provides a system that meets SOX requirements by capturing relevant change information throughout the defect life cycle. For example, when a defect is

unaltered change history with a reasonable description of the change. This makes an auditable record of activity available to ensure that the process requirements and guidelines are being fully satisfied.

Integrate with change management. A popular feature in many defect-tracking tools is the ability to integrate what’s being done on the development side in code changes with what’s being done in the defect-tracking system. The ability to link code changes to defects does several things:

• Provides a better description of the change
• Provides a new approach to release management
• Simplifies QA tasks
• Provides better accountability
• Provides better overall project management

Commonly, developers document changes by commenting on changes that are being committed or checked into the source control system. Comments generally describe the nature of the change made to the source code and why the change was made. For example, a developer may comment about new functions that were added to a source code file during a check-in and also specify a defect number that correlates with that check-in. The toughest problem with comments? You have no way to ensure

Create an enforceable workflow that captures all the data needed to manage defects throughout the life cycle.
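The enforceable workflow described in this article behaves like a small state machine. The sketch below expresses that idea in code; the state names follow Figure 1, but the transition table and the Defect class are assumptions for illustration, not any particular tool’s model.

```python
# Minimal sketch (not the article's tool): the defect life cycle from
# Figure 1 expressed as a state machine with enforced transitions.
ALLOWED = {
    "open":                {"assigned"},
    "assigned":            {"estimated", "fixed"},
    "estimated":           {"fixed"},
    "fixed":               {"needs verification", "released to testing"},
    "needs verification":  {"verified", "reopened"},
    "released to testing": {"verified", "reopened"},
    "verified":            {"closed"},
    "reopened":            {"assigned"},
    "closed":              set(),
}

class Defect:
    def __init__(self, title: str) -> None:
        self.title = title
        self.state = "open"
        self.history = ["open"]

    def move_to(self, new_state: str, who: str) -> None:
        # Enforce the workflow: reject transitions the process doesn't allow.
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"{self.state} -> {new_state} is not allowed")
        self.state = new_state
        self.history.append(f"{new_state} (by {who})")

d = Defect("Crash when saving an empty document")
d.move_to("assigned", "project manager")
d.move_to("fixed", "developer")
d.move_to("released to testing", "build manager")
d.move_to("verified", "QA")
d.move_to("closed", "QA")
print(d.history)
```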


fixed, information such as the user who fixed it, when the fix occurred, and why it was made are details that describe the change itself. These types of change descriptions, and the ability to lock down the history of these changes so they can’t be altered or corrupted at a later time, help meet compliance requirements and guidelines that often demand an




their usefulness. A check-in comment may include statements that don’t provide relevant information about the change, such as “I made a change.” When the change is reviewed, it won’t be easy to determine what changed and why. The ability to associate a defect with a check-in or commit action provides an additional description of the change that was made. If a change explanation is needed for a modification, instead of relying solely on comments, users can simply read the defect details. This information is expanded to include input from the users who initially reported the problem and authored all the details and other data, such as the version initially reported in, products, components, hardware/software variables, comments about the issue and other corollary information. This information goes much further to explain and justify certain types of changes than a mere comment. The integration of defect tracking and configuration management can greatly simplify release management. Change is introduced into builds in specific ways, such as into one version of a modified file. With integration between defect tracking and configuration management, a release engineer can generate a query or report to identify the specific files and versions that belong to approved defect fixes for a given release. In this way, you can develop a system that ensures the exact components for a build are properly included. By linking source code changes with a corresponding defect, a tester can directly access information that constitutes the defect fix. Information pertaining to specific defect fixes that exist in specific builds is readily available to QA team members, allowing test plans to be executed more efficiently. For example, instead of getting a new copy of an entire Web site to verify a defect, a tester can easily obtain

and test the individual files that represent the fix. When a page loads, the change may or may not be immediately obvious. Instead of spending time looking for the change, the tester can open the source code file and view the changes to the HTML. The tester can then use a differencing utility, common in most software configurationmanagement tools, to focus on specific changes, no matter how small. Involve customers. When possible, release the application on a limited basis to customers and users willing to act as beta testers. This increases the range of QA testers well beyond what most companies can support. The more users testing the software, the more defects that will be captured—and the sooner they’re caught, the better. A defect-tracking tool should offer a way to get information from users—internal or external— without necessarily allowing them to access confidential data. Working with customers to identify and resolve problems is a far more attractive method than the alternative of waiting for complaints about bugs in your new release. Often, when defects are reported, something gets lost in the translation between the customer and engineer. To avoid making a code change that results in “this isn’t what I was really looking for,” include customers in the process. Solicit additional input when the defect is reported and ask them to verify bug fixes in pre-release builds to make sure they’re getting the proper results.
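One lightweight way to get the check-in-to-defect linkage described above, independent of any particular configuration-management tool, is to adopt a comment convention and mine it. A hedged sketch follows; the “DEFECT-n” convention, the record format and the data are invented for illustration.

```python
# Hypothetical sketch: associate source-control check-ins with defect IDs
# by scanning commit comments for a "DEFECT-<n>" convention, then list the
# files touched by the fixes approved for a release.
import re

DEFECT_REF = re.compile(r"DEFECT-(\d+)")

checkins = [  # invented export: (changelist, comment, files)
    (101, "Add null check for empty documents. Fixes DEFECT-42.", ["editor/save.c"]),
    (102, "I made a change.", ["ui/menu.c"]),            # the kind of comment to avoid
    (103, "Escape user input in search box (DEFECT-57).", ["web/search.aspx"]),
]

approved_for_release = {42, 57}

def files_for_release(records, approved):
    """Collect the files belonging to check-ins that reference approved defects."""
    files = set()
    for _changelist, comment, touched in records:
        ids = {int(m) for m in DEFECT_REF.findall(comment)}
        if ids & approved:
            files.update(touched)
    return sorted(files)

print(files_for_release(checkins, approved_for_release))
# ['editor/save.c', 'web/search.aspx']
```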

Working with customers to resolve problems is far better than waiting for complaints about your new release.


Make Best Practices Your Own The implementation of best practices helps an organization evolve to higher levels of productivity while maintaining the highest-quality product possible. By implementing some of the defect-tracking best practices described here and adapting them to your particular organization, you’ll maximize the efficiency and effective-

ness of your company’s product development cycle. As your development needs change, your practices should follow suit, so that the final result is a finely tuned set of techniques, methods, processes, activities, incentives and rewards that are most effective at delivering your desired outcome. ý

CAPTURE ESSENTIALS

The following fields should be required for every defect:

Title: The title should be clearly written to increase the defect database’s “searchability.” Think of how others might describe and look for the problem.

Summary: A paragraph or two that describes the defect.

System configuration: Capture the exact configuration of the system where the defect was found, including operating system, memory, processor, browser, etc. A defect may be found on Intel but not AMD, Internet Explorer but not Firefox, or Windows 2000 but not Windows XP.

Steps to reproduce: Explain how to reproduce the defect. This critical piece of information is often inadequately described, wasting the time of developers and testers as they attempt to reproduce it.

Expected results: Describe how the application should work. If submitting a cosmetic defect, attach a screenshot. If words alone don’t suffice, attach a mock-up or a sketch showing how the application should look.

Notes: Address anything not covered in the previous categories, such as team members to contact for additional information.

Classification or severity: When you’re classifying defects, you must be careful. An improperly classified defect can get more attention than it deserves, or an important defect might be overlooked because the severity is set too low.
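A tracker can enforce a list like this mechanically. Here is a minimal sketch of such a check; the field names are drawn from the sidebar, while the function and the sample report are assumptions for illustration.

```python
# Minimal sketch: reject defect reports that omit the fields the sidebar
# calls essential. Field names mirror the sidebar; the check itself is an
# illustration, not any particular tool's behavior.
REQUIRED_FIELDS = (
    "title",
    "summary",
    "system_configuration",
    "steps_to_reproduce",
    "expected_results",
    "classification",
)

def validate_report(report: dict) -> list:
    """Return the list of required fields that are missing or blank."""
    return [f for f in REQUIRED_FIELDS if not str(report.get(f, "")).strip()]

report = {
    "title": "Save dialog truncates long file names",
    "summary": "Names longer than 64 characters are cut off in the Save As dialog.",
    "system_configuration": "Windows XP SP2, 512MB RAM, IE 6",
    "steps_to_reproduce": "",           # blank: should be flagged
    "classification": "major",
}

missing = validate_report(report)
if missing:
    print("Report rejected; missing:", ", ".join(missing))
```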





A Fundamental Strategy: Conquering the World Of .NET App Testing
How to Arm Your Team With A Solid Risk-Aversion Plan
By Dan Koloski

An execution environment such as Microsoft’s Common Language Runtime is like an application

fortress; it builds walls around your programs, inside which it talks to the operating system, manages memory and offers the security of a “sandbox” execution runtime. But the same behind-the-walls management that aids the developer also can serve to complicate the job of the tester. It obscures functionality and behaves differently than the native application code they’re accustomed to. But it doesn’t have to be that way. A peek inside the walls of the .NET runtime can arm testers with .NET-centric knowledge. Add some .NET-aware tools and principles, and QA professionals should be much better equipped to improve the quality of .NET applications and provide valuable feedback to the developers who build them. In the functional-, regression- and load-testing phases, a little .NET-specific knowledge can go a long way in diagnosing problems. Let’s investigate how to apply this knowledge to enterprise Web and Windows applications, and to XML-based Web services.

.NET Web Applications Aside from the ASP part of their names, ASP and ASP.NET have little in common. But their fundamental differences give clues to the testing nuances that arise in ASP.NET Web applications. The Active Server Pages (ASP) technology uses a linear programming model with server-side scripting to create dynamic elements in a Web page. www.stpmag.com •


Photographs by The Design Diva, NY




Its successor, ASP.NET—and specifically the Web Forms class—uses a set of server controls along with an event model that lets developers use a traditional object-oriented programming style to build more complex Web applications using far less code than with classic ASP (Figure 2). Developers using Visual Studio .NET can create Web applications using Web Server controls in a similar fashion to how Windows programmers use the Windows Forms controls. These Web Server controls allow programmers to set properties and create events against the various UI elements. The ASP.NET server processes these controls at runtime to render the appropriate HTML, DHTML and JavaScript that can be understood by client browsers. While the resulting application is formally called a “thin” client, the client-side code can be complex. In theory, ASP.NET simplifies Web application development through the

use of server controls with properties and events familiar to most developers, and with behind-the-scenes state management. But from a testing per-

ASP.NET sweeps many details under the rug—details that testers need to extract and understand.

spective, ASP.NET sweeps many details under the rug—details that testers need to extract and understand.

Testing Challenges QA professionals testing ASP.NET applications first seek to create parameterized scripts to automate functional regression and load testing.

ASP.NET presents challenges to the QA professional charged with creating and maintaining effective scripts. The challenges facing the tester of enterprise Web applications built using .NET can be summarized in two general areas: complex user interface code and state management issues. Complex user interface code. The complex user interface code generated by ASP.NET, along with the dynamic nature of Web application development, can overcome some testing tools. Many test tools feature event recording, and some approaches are better than others. These differences are magnified when creating scripts against .NET applications. Given that the server renders the client user interface dynamically, the guarantees found in static pages with simple script don’t always apply. For tools that merely look for the first link in a page, ASP.NET can change the game in a hurry. Also, some tools will use standard x-, y- or indexed-based navigation to create their automated

FIG. 1: INSURANCE PREMIUM APP DATA, IN THE MATRIX




scripts, and a simple change of user interface can render existing scripts useless. Better test tools can recognize the actual objects rendered within an ASP.NET page to ensure that whenever elements change, the script still works correctly.

The challenges become even more complicated for applications that employ client-side WebControls, formerly known as HTMLControls. Many companies have started to use these client-side objects to improve the user experience; examples include the TabStrip and TreeView controls. Once again, the test tool must recognize these objects and get inside them so scripts can be generated that use the real functionality intended by the developer and the control used. For example, you might wish to create an automated test that simulates a user clicking on a TreeView node. This is best done through explicit object recognition or through customizable scripts that can be created for any new client-side object.

State management. HTTP is a stateless protocol. And since most interesting applications require that state be kept across page requests, Web application developers must deal with state management themselves. ASP.NET provides a convenient mechanism, called view state, to keep state each time the browser posts a page back to the server. View state automatically preserves page and control property values so that information associated with the page and the controls on the page aren’t lost with each round trip. The view state information is hashed into a string and saved in the page as a hidden field. When the page is posted back to the server, the server parses the view state string at page initialization and restores property information in the page.

The view state field can quickly grow to several kilobytes, a size that will degrade the performance of your Web application. Additionally, using view state on complicated controls like the DataGrid can increase server processing time substantially. So although view state is expedient for the developer, it can easily lead to performance bottlenecks in an ASP.NET Web application. When state information isn’t required on a control, developers should disable view state by setting the EnableViewState property to False. Test tools can perform simple tests to check for large view-state fields.

Finally, ASP.NET decides when to do postbacks based on the default behavior of specific controls and the AutoPostBack property. For example, a button-click event will trigger a postback to the application server, but a TextChanged event on a TextBox by default would not. Developers can override the default behavior by changing the AutoPostBack property to True. Overzealous or inexperienced developers can abuse this in an attempt to create more interactive applications that respond to any user action. Testers again should consider creating custom scripts that report excessive usage of these “onchange” attributes.

FIG. 2: THE MICROSOFT .NET STACK (diagram): Web Forms, Windows Forms and XML Web Services sit above ASP.NET, the Data and XML Classes, the Basic Framework Classes and the Common Language Runtime.

XML Web Services
XML provides a language- and platform-neutral protocol for communication of Web services between companies. Application-to-application integration has always been difficult and expensive, and Web services remove many of these cost and complexity barriers.

Most Web services have an associated Web Services Definition Language (WSDL) file that describes its capabilities, inputs and outputs. As a result, any client or user who wishes to communicate with a Web service has the option to read the WSDL file of the Web service to understand what information can be exchanged with it.

There are three common approaches to using Web services in enterprise applications. The first method is using Web services as a standardized application adapter for connecting back-end systems. Second, Web services can be invoked by an ASP.NET server and the response used as part of the information returned to end users. Finally, Web services can be embedded within client Web pages and invoked by the browser (using XMLHttp or SOAPInvoke) to access content or services. Regardless of the technique used, Web services present some common challenges to automated testing.

Web services testing challenges. Testing Web services presents a unique challenge, since, by definition, they have no inherent user interface. Traditional automated testing solutions rely on recorded end-user transactions that are scaled up for functional regression and performance testing. Without a UI to record against, Web services testers have either not tested or relied on manual testing for functionality and built performance tests from scratch.

The ultimate goal is to expose the
can abuse this in an by definition, they have The view state information is hashed attempt to create more no inherent user interinto a string and saved in the page as a interactive applications face. Traditional autohidden field. When the page is posted that respond to any user mated testing solutions back to the server, the server parses action. Testers again rely on recorded endthe view state string at page initializashould consider creating user transactions that are scaled up for tion and restores property informacustom scripts that report excessive functional regression and performtion in the page. usage of these “onchange” attributes. ance testing. Without a UI to record The view state field can quickly against, Web services testers have grow to several kilobytes, a size that XML Web Services either not tested or relied on manual will degrade the performance of your XML provides a language- and plattesting for functionality and built perWeb application. Additionally, using form-neutral protocol for communicaformance tests from scratch. view state on complicated controls like tion of Web services between compaThe ultimate goal is to expose the the DataGrid can increase server pronies. Application-to-application inte-

Web services remove many of the cost and complexity barriers to integration.

FEBRUARY 2007

www.stpmag.com •

23


.NET RISK AVERSION

Web services to a tester so he can use the same familiar tools to perform functional regression and load testing against standard Web applications. The test tool should present an abstraction layer between the tester and the Web service. Test tools can address this problem by presenting visual interfaces into the WSDL descriptions published by many Web services and allowing the tester to

based (thin) clients or Windows (thick) clients. Thin clients worked for any user with a browser but lacked the interactivity, fast response times and visualization flexibility available with Windows applications. Although dynamic scripting, ActiveX controls and Java applets increased the power of a thin-client application, there’s only so much you can do when most of the logic and the

.NET Windows applications from a Web server. Windows application testing challenges. For both load testing and functional regression testing, a tool must understand the standards-based communication techniques used by these Windows applications. Test tools that instrument the Common can Language Runtime (CLR) provide a visibility into Windows Forms unavailable to GUI-based test tools. The resulting validation of these Windows Forms is much more sophisticated, and test tools can use a proxy service residing on the Windows client or elsewhere to capture and expose the structured messages sent between the client and the server. The test tool must capture the structures and parameterize both the key header and payload data, so that testers can view this information and easily create automated test scripts from the fields contained within the messages. Finally, testers can leverage these scripts to perform both functional regression and load tests.

Some Best Practices Are Not Technology-Specific

parameterize and automate testing based on the structured information found within these WSDLs. If the WSDL is unavailable, test tools can capture actual SOAP requests through proxies and then once again make this information available in a structured way to testers. In either event, the scripts can then be used to perform load and performance tests, confirm functionality, and validate XML response data expected for various parameterized requests. In addition, test tools must allow the tester to chain Web services together, taking the response from one Web service and using it as part of a request to another.

Testing .NET Enterprise Windows Applications Before .NET, companies faced a clear set of trade-offs when choosing whether to create and deploy browser-

24

• Software Test & Performance

data reside on the server. Thick clients give the developer more options to create a great user experience, download and manipulate data more easily, and exercise more complex client-side logic. However, these applications are difficult to deploy and update, especially in application scenarios in which the company had no captive audience. The .NET architecture and its Windows Forms classes improve Windows application deployment in two ways. First, Windows applications run within the .NET framework as managed code, facing none of the COM interaction issues that caused so many problems in the past. (With COM-based applications, it was common to have DLL conflicts with other installed software. This was debilitating, especially when the user was outside the company with an uncontrolled set of applications.) Second, companies can deploy and update

In addition to applying technologyspecific expertise, test planning is a key ingredient to the success of test efforts—no matter what technology is involved. As a general best practice, test teams should have active representation during requirements definition and initial design and development phases, regardless of the development process used by the development team (waterfall, agile or other). Through active engagement in the requirements phase of the development process, test teams can: • Account for the technology-specific testing requirements, which are numerous in .NET • Point out requirements that are “doomed to fail” based on experience • Gain a solid understanding of the application that can be used to analyze the functional specification and test planning while the application is being developed • Understand the business drivers for the application and focus testing on the “critical few” during the inevitable timetable trade-offs at the end. What’s more, test managers can start the resource acquisition process FEBRUARY 2007


.NET RISK AVERSION

(staffing, test labs, tools and other requirements) long before those resources are needed, to ensure that they’re available on time. In applications where validating application quality requires testing extensive paths through the business logic, you should begin by using the application’s functional specification to create a matrix of all possible expected user paths through the application. These matrices serve as guides for positive testing, and when combined with automation, are useful for testing unexpected user input (negative testing, boundary conditions, etc.). A realworld sample data set is provided in Figure 1 on page 22. It’s important to understand that the creation of these data sets is a tremendous amount of work—and is application-specific. Often, test teams fail to understand the importance, or estimate the time required to create these data sets. They can be created, long before the application is built, with an understanding of the functional specification and business logic. Armed with the matrices of test data, test teams can understand the user scenarios that need to be tested, and can then focus on planning the technology-specific tests that also need to be run.

YOU CAN AUTOMATE .NET TESTING
The CUMIS Group, a Canadian financial services and insurance provider, recently developed a Web application that would allow customers to calculate insurance premiums and sign up for coverage online. Called iCLIC, it would replace an older, distributed client/server application that had been in use since 1997. Insurance premium calculations are complex, and include about a dozen parameters, according to the CUMIS Group, including the member's age, the type of coverage selected, the insurance term and payment frequency. Calculation errors are costly, both in terms of lost revenue and lost customer goodwill. So thorough testing of the new iCLIC application was crucial. But timely deployment also was important, which meant that some tests would need to be automated.

The CUMIS Group, a .NET shop, placed a greater emphasis on completing the testing in as little time as possible than on, say, automating some percentage of their test plan. They focused on automating tedious and labor-intensive test tasks—such as validating all possible expected and unexpected user interactions with .NET controls and Web services—by using their test matrices. The company found that automation worked best when the types of transactions and UI components corresponded to the strengths of the tools employed. For example, both .NET WebForms and WinForms (the desktop analog of .NET controls) are hosted through the .NET runtime (much as Java Applets and Swing/AWT components are hosted through a JVM). This makes a .NET-aware automation tool essential.

The company now automates more than 80 percent of its iCLIC testing—almost 3,000 different scenarios. The most important part of the CUMIS Group's testing regimen—testing the insurance premium calculations—now takes six hours, versus two to three weeks when it was done manually.

Boosting .NET Awareness
Use common sense when thinking about automation in the context of testing. In the sidebar on the CUMIS Group, you'll read about an organization that has, over time, automated over 80 percent of its test cases. That phenomenal achievement reflects a high-functioning team with several years of experience on an enterprise-class tool set. This level of achievement isn't available for newcomers to test automation and may never be consistently feasible, given the nature of a given application. The often-maligned negative impact of automated testing is usually the result of attempting to automate where automation isn't the right solution. While black-box testing is important, QA testers can be more effective and help developers more by having .NET-specific knowledge and using .NET-aware testing tools, as well as solid test planning, to provide better automation and more useful testing results. ý



Knowing the Limits of Your Applications Will Give Your Project a Winning Season



By Rob Sabourin

Bugs—I love to find bugs! In my testing adventures, I've found a lot of great bugs. Many of the most important ones, I call boundary bugs. I've found many boundary bugs that are of great value and potential consequence.

When I teach software testing, which I do with thousands of students and conference attendees every year, I often take a simple survey. I first ask, "How many of you use equivalence partitioning to develop tests?" Only about 5 percent raise their hands. My next question is "How many of you use boundary analysis to develop tests?" Invariably, a large percentage—generally about 80 percent—respond with a resounding "Yes, I know boundary analysis and practice it frequently."

I’ve found that many of my students know what boundary testing is, but are able to discover, explore and expose only a small number of potential boundary risks. So let’s start to remedy that shortcoming right now. I’ll examine three sources of boundary risks: those found in requirements, those related to data input, and those exposed by the processing, storage and manipulation of data. But first, I’ll need to lay some groundwork.

Traditional Gridiron Testing
Equivalence partitioning is a basic test-design technique I first learned from Glenford Myers in his 1979 book "The Art of Software Testing" (Wiley released a second edition in 2004). It involves three simple steps. First, identify variables that influence the behavior of the application under test. Next, define subsets of all possible values each variable can take that the application will handle the same way. These are called equivalence classes. Finally, for each equivalence class, choose a representative sample value and use it to test the application.

Equivalence partitioning applies a blend of common sense and set theory to testing problems. The basic assertion of this approach is that the information you get from testing any one member of an equivalence class is the same, so you need not try more than one representative sample to test the entire class.

Traditional boundary analysis derives from Myers' notion of an equivalence class. Many equivalence classes have extreme values, minimum values, maximum values or edge conditions. We can define these extrema as boundary conditions, and use them to develop test cases. I call this traditional equivalence-class boundary testing. In my basic approach, for each boundary of the class, I choose three values to test the application:
1. A value exactly on the boundary
2. A value immediately within the boundary
3. A value immediately outside the boundary
So for each equivalence class, I can choose up to six tests

to study the behavior of the application at the variable’s boundary. Some classes have no boundaries, and some classes have only one. For example, the class of positive numbers has no upper boundary, and the class of negative numbers has no lower one. Classes about membership comprise lists of objects that do have boundaries, but in an insurance application, the class of vehicles with four-wheel drive is an equivalence class without boundaries. I found many bugs at the boundaries of equivalence classes. These are due to the nature of requirements, design, programming and testing. Requirements that correctly delimit ranges of values are hard to elicit and unambiguously describe. Boundary bugs will continue to exist in software projects, but there are specific techniques you can use to tame them.
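To make the six-value recipe concrete, here is a minimal sketch in C. The range, the bounds and the check_order_total() routine are hypothetical stand-ins for whatever application rule is under test; they are not taken from any particular system described in this article.

    #include <stdio.h>

    /* Hypothetical business rule: an order total is valid from 1 to 999 units. */
    #define LOWER_BOUND 1
    #define UPPER_BOUND 999

    /* Stand-in for the application logic under test. */
    static int check_order_total(int total)
    {
        return total >= LOWER_BOUND && total <= UPPER_BOUND;
    }

    int main(void)
    {
        /* Three probes per boundary: just outside, exactly on, just inside. */
        int probes[6] = {
            LOWER_BOUND - 1, LOWER_BOUND, LOWER_BOUND + 1,
            UPPER_BOUND - 1, UPPER_BOUND, UPPER_BOUND + 1
        };

        for (int i = 0; i < 6; i++)
            printf("total=%4d -> %s\n", probes[i],
                   check_order_total(probes[i]) ? "accepted" : "rejected");
        return 0;
    }

The point of the sketch is only that the six probes can be derived mechanically once the class's extremes are known; which of them expose a bug depends on the rule being tested.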

Study the Rule Book
Some software engineering techniques can help reduce the injection of boundary bugs into software. Software inspections can help validate our requirements, and decision tables can help clarify business rules. However, any two software development professionals referencing the requirements can interpret them in different ways. Ambiguous requirements are full of boundary risks. Is a range inclusive or exclusive? Do ranges of values overlap? What does it mean for a value to be in range? How many digits


of precision do we consider in defining continuous numeric ranges?

A classic boundary-requirement bug occurred in a communications analysis project I managed a couple of years ago. We had a very small team and pretty clear requirements. Table 1 depicts a decision table that is typical of an application's requirement statements. This decision table was used as part of the rate-management software requirements. The table indicates the discount rate, invoicing frequency and reporting frequency as a function of the type and volume of traffic. Volume bands are designated up to the amount indicated in the row labeled Monthly Volume; the assumption is volume up to and including the value indicated in the table. So when the table indicates 100,000, it implicitly refers to traffic up to and including 100,000. The volume band labeled 1,000,000 implies traffic greater than 100,000 and up to and including 1,000,000 and so on. Although this might be an implicit ambiguity, it was common practice on the project.

Bugs started showing up when we realized that developers were making different assumptions about the lower bound of the range. Different developers were responsible for the business logic behind rate computations, invoicing and report generation. Furthermore, different testers were responsible for testing the reporting and invoicing software. Testers responsible for invoicing also validated discount-rate computations. Requirements were elicited from different stakeholders depending on traffic type. Different product managers were responsible for the voice and data business segments, creating several possible points of misinterpretation of the requirements.

Each potential misinterpretation is a potential source of a requirement-based boundary risk. Requirement analysis could have misinterpreted the client needs. Requirement analysis of voice could have interpreted needs differently than a data analysis. Business logic, reporting and invoicing developers could have interpreted stated requirements differently, and invoice testers could have interpreted stated requirements differently from reporting testers.

I like to draw a tree of potential boundary misinterpretations, using a simple mind map as in Figure 1 (page 30). The tree has many branches: The requirement may be wrong, the developer may misinterpret the requirement, the tester may misinterpret the requirement, or the test results may be misinterpreted. Only one of the 32 paths through the mind map represents a correctly implemented, validated and verified boundary condition. Some cases are particularly difficult, and all derive from the nature of equivalence-related boundaries.

TABLE 1: PLAYBOOK FOR COMMUNICATION TRAFFIC

  Condition
    Monthly Volume  100,000    1,000,000  10,000,000  >10,000,000  100,000  1,000,000  10,000,000  >10,000,000
    Traffic Type    Voice      Voice      Voice       Voice        Data     Data       Data        Data
  Action
    Discount Rate   0%         10%        15%         20%          0%       12.5%      15%         17.5%
    Invoice         Quarterly  Monthly    Monthly     Monthly      Monthly  Weekly     Weekly      Weekly
    Report          Monthly    Monthly    Monthly     Weekly       Weekly   Weekly     Weekly      Weekly
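The lower-bound ambiguity the team tripped over is easy to see in code. The sketch below is hypothetical (it is not the project's rate engine); it simply shows how two developers can implement the same voice bands from Table 1 and disagree only at the band edges, such as exactly 100,000 minutes.

    #include <stdio.h>

    /* Voice discount bands, read as "up to and including" the listed volume. */
    static double discount_inclusive(long volume)
    {
        if (volume <= 100000L)   return 0.0;
        if (volume <= 1000000L)  return 10.0;
        if (volume <= 10000000L) return 15.0;
        return 20.0;
    }

    /* Same table read as "strictly less than": differs only on the boundaries. */
    static double discount_exclusive(long volume)
    {
        if (volume < 100000L)    return 0.0;
        if (volume < 1000000L)   return 10.0;
        if (volume < 10000000L)  return 15.0;
        return 20.0;
    }

    int main(void)
    {
        long probes[] = {99999L, 100000L, 100001L, 1000000L, 10000000L};
        for (int i = 0; i < 5; i++)
            printf("%9ld: inclusive=%.1f%%  exclusive=%.1f%%\n",
                   probes[i], discount_inclusive(probes[i]),
                   discount_exclusive(probes[i]));
        return 0;
    }

Only the on-boundary probes expose the disagreement, which is exactly why the three-value recipe insists on testing the boundary value itself.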

To avoid misinterpretation, we must develop effective boundary tests for equivalence-class boundaries that validate requirements, confirm the development team's understanding of requirements, confirm the testing team's understanding of requirements and use testing oracles—all strategies that help to avoid misinterpretation. To validate requirements, I generally use software inspections to ensure consistency across the sources used to elicit them as well as consistency in the way the requirement is described. To confirm development understanding of requirements, I often use peer reviews and unit-testing approaches that challenge assumptions developers may make about boundaries. I ensure that testing includes cases exactly on boundaries, as well as immediately within and outside of boundary conditions. To confirm testing understanding of requirements, I review tests against requirements and against design, challenge validation of requirements, perform an ambiguity review and use peer reviews of test cases developed. To help avoid misinterpretation of

TABLE 2: FIRST STRING

  Minimum Length String
  Minimum -1 Length String
  Minimum +1 Length String
  Maximum Length String
  Maximum -1 Length String
  Maximum +1 Length String

test results, I try to find testing oracles, the strategies that help to avoid misinterpretation, based on requirements. I use multiple approaches to verify results, and different sources of information to validate correctness. Requirement-based variables can take on many different forms. Each type of variable has a different type of business logic–related classes, which in turn have different types of boundaries.

First String Variables


String variables are quite common in applications and generally represent the names of objects or some sort of text associated with objects. String variables generally have boundaries



represented by their length, with minimum and maximum lengths defining boundary test ranges. Testing boundaries of string variables generally involves the six boundary conditions listed in Table 2.

Date and time variables often have basic components of day, month, year, hour, minute, second and fractional second. Precision is an important element of a date requirement that helps to identify boundary test cases. For example, if I had to test a date variable defined with a precision of days, I would test a day before and a day after the minimum date, and a day before and a day after the maximum date. If the precision were a millisecond, the equivalent boundary tests would be on the boundary and a millisecond before or a millisecond after the boundary date. Sub-second computations are common in real-time systems related to event synchronization and telecommunication or network switching. Table 3 summarizes common date boundaries to consider when the date is a variable influencing the behavior or processing of an application. Testers involved in the infamous Y2K testing initiative undoubtedly will be able to identify dozens of other relevant date and time boundaries based on a single variable.

With international Web-based applications, transactions that originate in one time zone can be processed in another. The same instant in time can be represented by different hours, minutes and days, depending on the originating and processing time zone. Boundaries can exist in dates when transactions take place in different time zones. Boundary conditions can include cases in which transaction timing occurs at different times.

TABLE 3: FIRST-ROUND DATE BOUNDARIES

  Date Component   Low Boundary                Upper Boundary
  Day              First day of month          Last day of month
  Month            First month of year         Last month of year
  Year             First year of century       Last year of century
  Year             First year of millennium    Last year of millennium
  Year             Leap year
  Year             Leap century
  Hour             First hour of day           Last hour of day
  Hour             First hour of morning       Last hour of morning
  Hour             First hour of afternoon     Last hour of afternoon
  Hour             Leap hour forward
  Hour             Leap hour backward
  Day              Leap day forward
  Day              Leap day backward
  Minute           First minute of hour        Last minute of hour
  Second           First second of minute      Last second of minute

Transaction timing also can "cross the top" of an hour; for example, the Newfoundland time zone is offset by 30 minutes from the Atlantic time zone. Transactions can span morning to afternoon, through days, weeks, months, years, or even centuries.

Game-Day Boundaries
Date and time ranges can be defined either with a starting point and an ending point, or with a starting point and a duration.

Starting and ending point. Boundaries exist in the absolute definition of the starting and ending points, as well as in the relationship between starting and ending points. For boundary conditions in these cases, I like to look at starting date equal to ending date, starting date just before ending date, and ending date just before starting date.

Starting point and duration. Boundaries exist in the absolute definition of the starting point as well as the absolute definition of the duration. Requirements should indicate minimum and maximum values for the duration.

Generally, date ranges run into boundaries when they cross day, hour, morning, afternoon, year, century or millennium boundaries.

If the starting and ending times or dates cross time-related boundaries, there is a risk that business logic and computations that aggregate transactional data may allocate transactions to different time periods.
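One way to generate the day-before/on/after probes for a date boundary is to let the C library normalize the calendar arithmetic. This is a minimal sketch; the coverage start date is a made-up example, not from any requirement discussed here.

    #include <stdio.h>
    #include <time.h>

    /* Print a date offset from a boundary date by a given number of days.
       mktime() normalizes out-of-range fields, so day arithmetic is safe
       across month, year and leap-day boundaries. */
    static void print_probe(struct tm boundary, int day_offset, const char *label)
    {
        boundary.tm_mday += day_offset;
        mktime(&boundary);
        char buf[32];
        strftime(buf, sizeof buf, "%Y-%m-%d", &boundary);
        printf("%-14s %s\n", label, buf);
    }

    int main(void)
    {
        /* Hypothetical boundary: coverage starts 2007-03-01. */
        struct tm start = { .tm_year = 2007 - 1900, .tm_mon = 2, .tm_mday = 1,
                            .tm_hour = 12, .tm_isdst = -1 };

        print_probe(start, -1, "day before:");
        print_probe(start,  0, "on boundary:");
        print_probe(start, +1, "day after:");
        return 0;
    }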

Compound Fractures
Compound variables are those that influence an application's behaviors in combination. For example, take a look at the dimensions and weight of an envelope used to help compute postage. Let's say the size of the envelope is comprised of length, width, thickness and weight. I can identify boundaries based on the extreme values allowed for each component of the envelope's size, but I can also define the extremes that are the maximum of all components or the minimum of all components. Look at the object of the testing: the envelope. What's the smallest envelope? What's the largest? These define the boundaries of an envelope. In some cases, the business logic is defined in such a way as to necessitate testing the boundaries of each component individually and in combination. But when I test compound variables, I

TABLE 4: PRESSURE IN THE POCKET

  Test identifier   Length        Width         Thickness     Weight
  Env001            Minimum       Minimum       Minimum       Minimum
  Env002            Minimum - E   Minimum - E   Minimum - E   Minimum - E
  Env003            Minimum + E   Minimum + E   Minimum + E   Minimum + E
  Env004            Maximum       Maximum       Maximum       Maximum
  Env005            Maximum - E   Maximum - E   Maximum - E   Maximum - E
  Env006            Maximum + E   Maximum + E   Maximum + E   Maximum + E



focus on the boundaries of the objects being tested. I test boundaries with six test envelopes, as shown in Table 4. E represents the smallest unit that depends on the requirement for precision.
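The six envelope probes in Table 4 can be produced mechanically once the extremes and the precision unit E are known. The sketch below uses made-up postal limits and shows only two of the four dimensions; it is an illustration, not the postage application's actual rules.

    #include <stdio.h>

    /* Hypothetical envelope limits; E is the smallest unit of precision. */
    #define E        0.1
    #define LEN_MIN  14.0
    #define LEN_MAX  24.5   /* cm */
    #define WID_MIN   9.0
    #define WID_MAX  16.2   /* cm */

    static void emit(const char *id, double len, double wid)
    {
        printf("%s  length=%.1f  width=%.1f\n", id, len, wid);
    }

    int main(void)
    {
        /* All dimensions pushed to the same extreme, as in Table 4
           (thickness and weight would be handled the same way). */
        emit("Env001", LEN_MIN,     WID_MIN);
        emit("Env002", LEN_MIN - E, WID_MIN - E);
        emit("Env003", LEN_MIN + E, WID_MIN + E);
        emit("Env004", LEN_MAX,     WID_MAX);
        emit("Env005", LEN_MAX - E, WID_MAX - E);
        emit("Env006", LEN_MAX + E, WID_MAX + E);
        return 0;
    }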

Discrete and Continuous Offense
Boundaries exist for some equivalence classes of variables. If a variable is represented by a whole number or an integer value, or can be mapped to a whole number, it's known as a discrete variable. An example of a discrete variable is the number of items purchased in a shopping cart application.

On a recent project, I performed a series of tests to confirm the business logic for order processing as a function of inventory in stock. To test the variable for order entry, I had business rules defined for a minimum and maximum order quantity as well as these rules related to the amount of inventory:
1. Order quantity must be greater than zero
2. Order quantity must be less than 256
3. If order quantity is less than or equal to inventory, the order is processed
4. If order quantity is more than or equal to inventory, a partial order is processed, sufficient inventory is reordered and completion is suspended until inventory arrives
Implementing these four rules put several interesting boundaries into play, as depicted in Table 5.
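A literal transcription of the four rules makes the interesting boundaries (0, 1, 255, 256 and the inventory level itself) easy to probe. The sketch below is illustrative only; the function name and return codes are invented, not the project's actual order-entry code.

    #include <stdio.h>

    enum result { REJECTED, PROCESSED, PARTIAL_BACKORDER };

    /* Hypothetical implementation of rules 1-4. Note that, as written, rules 3
       and 4 both claim the case quantity == inventory; this code resolves the
       overlap in rule 3's favor, which is itself a boundary worth testing. */
    static enum result place_order(int quantity, int inventory)
    {
        if (quantity <= 0)          return REJECTED;           /* rule 1 */
        if (quantity >= 256)        return REJECTED;           /* rule 2 */
        if (quantity <= inventory)  return PROCESSED;          /* rule 3 */
        return PARTIAL_BACKORDER;                              /* rule 4 */
    }

    int main(void)
    {
        const int inventory = 10;
        int probes[] = {-1, 0, 1, inventory - 1, inventory, inventory + 1,
                        255, 256, 257};

        for (int i = 0; i < 9; i++)
            printf("qty=%4d -> %d\n", probes[i], place_order(probes[i], inventory));
        return 0;
    }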

FIG. 1: ONE PENALTY-FREE PATH TO THE END-ZONE (mind map of boundary misinterpretation risk: each path branches on Requirement Right/Wrong, Development Right/Wrong, Test Right/Wrong, Test Pass/Fail and Correctly/Incorrectly Interpreted Results; only one of the 32 paths avoids a boundary risk through misinterpretation)






TABLE 5: INVOKING PENALTIES

  Test Identifier   Condition                                               Below Boundary   On Boundary    Above Boundary
  Tid001            Rule 1 (INV>1)                                          -1               0              1
  Tid002            Rule 2 (INV>257)                                        255              256            257
  Tid003            Rule 3 and 4 (INV a specific value between 1 and 256)   INV AMOUNT -1    INV AMOUNT     INV AMOUNT +1
  Tid004            Rule 3 and 4 (INV=0)                                    -1               0              1
  Tid005            Rule 3 and 4 (INV=1)                                    0                1              2
  Tid006            Rule 3 and 4 (INV=255)                                  254              255            256
  Tid007            Rule 3 and 4 (INV=256)                                  255              256            257
  Tid008            Rule 3 and 4 (INV=257)                                  256              257            258

Note that in determining the inventory quantity for testing, the boundary related to inventory varies depending on the amount of inventory. For rules 1 and 2, I would test the boundaries by ensuring the associated inventory was greater than the order quantity. However, for rules 3 and 4, I would explicitly vary the inventory and the order quantity.

Continuous variables are those that can have any possible real number in some range. Continuous variables have the interesting characteristic that there always exists a third value between any two possible values. There are an infinite number of possible values. If you look at the Windows Calculator and try to test the square root function, you'll observe that any real number can be an input value. The valid range of values for the square root function is positive real numbers including zero.

When you're working with systems that have a continuous range of values, it's easy to identify the boundary values but it becomes challenging to identify values a little above and a little below boundaries. This is where the notion of precision is critical. An application's precision may be indicated in the requirement documentation or may be part of the system design. To unearth the relevant information for boundary testing, you must identify the smallest value that the application can process such that if you take a continuous variable, the following is true: Let EPSILON be the smallest represented value such that

VARIABLE + EPSILON > VARIABLE
VARIABLE - EPSILON < VARIABLE

If the variable can have different magnitudes, the EPSILON value is generally defined to be a function of the magnitude of the value. For example, if the value is a very large number such as 10**100 (1 followed by 100 zeros, a.k.a. a googol), the value of EPSILON may well be a value of 10**30.


However, if the same variable has a value with a smaller number like 1,000, EPSILON may well be defined as 0.000001. Therefore, boundaries of continuous variables depend on the magnitude of the variable, and different values of EPSILON should be used for different orders of magnitude. I run into the issue of different EPSILON values when involved in scientific calculations.
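The machine-level analog of this magnitude-dependent EPSILON can be probed directly in C: nextafter() returns the closest representable neighbor of a double, so the smallest effective step at 1,000 is vastly smaller, in absolute terms, than the one at 10**100. A minimal sketch:

    #include <stdio.h>
    #include <math.h>

    static void show_epsilon(double value)
    {
        /* Smallest step that still changes the value at this magnitude. */
        double eps = nextafter(value, INFINITY) - value;
        printf("value=%.3e  smallest effective epsilon=%.3e\n", value, eps);
    }

    int main(void)
    {
        show_epsilon(1000.0);   /* small magnitude: tiny step            */
        show_epsilon(1e100);    /* googol-sized value: enormous step     */
        return 0;
    }

Boundary probes chosen without regard to magnitude (say, always adding 0.000001) may therefore not change the stored value at all, which is itself a useful thing to test.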

Calling Plays From The Sideline

So far we’ve explored boundary risks related to variables derived from software requirements and business rules. These boundaries are critical and can be identified and explored based on discovering and understanding what the application should do. In parallel, I generally identify a series of boundary risks that relate to input—how variables get populated by data before processing begins. Variables get data from many different sources. Users can enter it though user interfaces, forms, menus, dialogs and controls. Messages can be exchanged between processes and applications, and also between the operating system and the application. As well, variables may be populated from data sources, persistent storage, files, databases and registries. The way variables get populated leads to a series of potential boundary

risks that are different from those related to business rules. The input boundaries that I find in testing products are often the most critical. Whenever a form is populated on a screen or dialog, operators are asked to enter data before processing can occur. The fields on the display may have different restrictions or rules based on user interface requirements. Some of the common constraints I run into in discovering input boundaries include the following: For string length, what is the acceptable length of a field? For white space, how much leading and trailing white space is allowed? Are there restrictions in the number of digits? What is the number of significant digits or precision; for example, how many values after the decimal point are computed?

For each of these constraints, I consider identifying boundaries related to what the rules are and what I can do. For example, a field may expect a user name of between six and 12 characters. There may be user interface constraints that restrict the field length to this range. I also ask myself what I can possibly enter into the field; for example, can I enter a string of length 0 (null string)? Can I enter a long string longer than the acceptable range? Can I enter an extremely large string (perhaps a BLOB [Binary Large Object]) into the field? The boundaries related to these extremes could expose potential failure modes of the application that can lead to discovering security breaches.

Numeric precision in input fields relates to the number of digits that will be actually processed. Interesting boundaries show up based on the number of significant digits. For example, if an application processes values of up to four decimal places, I would identify input-based test cases related to entry of data with three, four or five decimal places. I might also see if the application attempts to validate, process or ignore these values. If the values are validated, I'd expect the application to warn the



user that an attempt was made to exceed the maximum precision supported. If the application processes the values, I’d expect it to round them up or down to the nearest value of the appropriate precision. For example, 10.01256 with four digits of precision rounds up to 10.0126. When the application does the rounding, it becomes interesting to test and ensure that processing is of the rounded value and not the entered value. Processing for 10.01256 and 10.01259 both round up to 10.0126 and thus should generate the same result. If not, you may have identified a precision-related boundary bug.
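For a field specified as six to 12 characters, the input-boundary probes can be generated rather than hand-typed. A minimal, self-contained sketch; the field limits and the very long probe are the example's own, not from any particular application.

    #include <stdio.h>
    #include <string.h>

    #define FIELD_MIN 6
    #define FIELD_MAX 12

    /* Build a probe string of n repeated characters. */
    static void probe(char *buf, size_t n, char fill)
    {
        memset(buf, fill, n);
        buf[n] = '\0';
    }

    int main(void)
    {
        char buf[4096];
        size_t lengths[] = {0, FIELD_MIN - 1, FIELD_MIN,
                            FIELD_MAX, FIELD_MAX + 1, 4000};

        for (int i = 0; i < 6; i++) {
            probe(buf, lengths[i], 'a');
            printf("length %4zu: \"%.16s%s\"\n", lengths[i], buf,
                   lengths[i] > 16 ? "..." : "");
        }
        return 0;
    }

Each generated probe would then be pushed through the field under test; the zero-length and oversized cases are the ones most likely to expose the security-relevant failure modes mentioned above.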

Memory, Storage and Processing: A Head Coach's Headache
The way data is stored in computer memory or databases can lead to interesting boundary conditions that are independent of the other two types. Table 6 lists valid ranges of values for common data types in the ANSI-C programming language. If we can find out how an integer is stored in memory, it may be interesting to explore how the application behaves when the boundaries are determined by the range of the variable type. We can learn about the types of variables for any string, numeric, currency, date or other compound data type by consulting with the software developers, checking database schemas or actually reviewing the code. If a quantity value is stored as an unsigned short integer, I'd want to explore how it processes values around the lower possible boundary, 0, and the higher possible boundary, 65,535; perhaps computational overflow or underflow bugs would occur.

TABLE 6: ZONE DEFENSE

  Type             Min               Max
  unsigned char    0                 255
  short int        -32,767           32,767
  unsigned short   0                 65,535
  int              -32,767           32,767
  unsigned int     0                 65,535
  long int         -2,147,483,647    2,147,483,647
  unsigned long    0                 4,294,967,295

Boundaries related to data type are also significant when programming languages attempt to convert data from one type to another. In C, an integer variable may be cast into a char type variable, thus dramatically impacting the range. An unsigned short has a maximum value of 65,535, whereas an unsigned char has a maximum value of 255.

Code process review also leads to some interesting boundaries. For example, imagine an algorithm that takes values A and B and multiplies them, placing the result in a variable C:

C = A * B

The variables A, B and C are of the type unsigned short. Each variable can store values between 0 and 65,535. There are many combinations of data stored in A and B whose product would be above the maximum value stored in C. A serious bug could occur if an attempt is made to overflow the variable C. Such problems could result in incorrect computations, corruptions of memory or both. The boundaries of interest in testing are combinations of A and B that result in values on the line depicted in Figure 2. I generally do the type of "white box" analysis required to discover processing boundaries in collaboration with the software development staff.

I then explore application behavior at the memory-related processing boundaries to make sure the application considers the possibility that we overflow an internal boundary condition as an internal intermediate step in processing. Any data type has different boundaries, and occasional restrictions and ranges vary across tool vendors, technologies and operating environments. To find and test these memory and processing variables, you'll have to get your hands dirty, but you'll be helping to ensure that the code is healthy. These boundaries are sometimes considered hidden since they can't be discovered by merely looking at the application from the outside, studying the requirements or exploring how data can be input into the application being tested.
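The C = A * B risk is easy to demonstrate and to guard against. A minimal sketch, using the limits from Table 6 via <limits.h>; the specific values 300 and 400 are arbitrary examples.

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        unsigned short a = 300, b = 400;

        /* The arithmetic is done in int and then truncated on assignment:
           300 * 400 = 120,000, but an unsigned short keeps only 120,000 % 65,536. */
        unsigned short c = (unsigned short)(a * b);
        printf("truncated result: %u\n", (unsigned)c);

        /* Boundary check before multiplying: does a * b still fit in C? */
        if (a != 0 && b > USHRT_MAX / a)
            printf("overflow: %u * %u exceeds USHRT_MAX (%u)\n",
                   (unsigned)a, (unsigned)b, (unsigned)USHRT_MAX);
        return 0;
    }

Test data that lands just below, on and just above that overflow line is what distinguishes a boundary-aware test set from one that merely samples "typical" values of A and B.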

Highlight Reel
Here, I've reviewed software boundaries from traditional sources based on data requirements, input and processing. Next month, in part 2, I'll relate further tales of exposing and discovering boundary-related bugs, and share some systematic and exploratory testing techniques to unearth them. ý

FIG. 2: AN UNSIGNED (SHORT) FREE AGENT (plot of the combinations of A values, 1 through 10, and B values, up to 80,000, whose product lands on the unsigned short overflow boundary)





The Security Zone

Guarding Against Data Vulnerability

'Testers Don't Do Security Testing'—But We Should
By Elfriede Dustin

When we met, we both worked at the same Fortune 500 security company. Unknown faces to each other in a large corporation, we reported to the same senior VP. That day, we both attended an all-day company technical exchange meeting. Chris was one of the project managers. His PowerPoint presentation contained the statement "Testers don't do security testing." It stopped me in my tracks.

As the QA manager responsible for all of the development groups in the room, and with my QA presentation still looming, my red flags started flying high. "But we are indeed conducting security testing!" I thought. "Who is this guy? I need to tell him about the type of security testing we do," I grumbled inwardly, continuing to ponder as his presentation continued. Was he referring to a recent security bug that slipped through into production? No, I didn't recall any bugs slipping through at all. My mind was racing. "He must be referring to those 'other' testing groups in those 'other' companies," I pacified myself.

After his presentation ended and the applause had subsided, I walked up and stood in line to talk to Chris for clarification about the little bomb he'd dropped. I started to set him straight, telling him that we QA/testers are indeed doing just that, explaining the type of security testing we do. I was proud of the memory leak detection, SQL injection and the user roles/permission tests we ran. Lately we also had been testing for cross-site scripting and other new and upcoming security tests. With a slightly condescending smile, Chris countered that the security testing we were doing was just scratching the surface, and intoned, "A full life-cycle approach is the only way to achieve secure software."

And that's how I met Chris, the co-author (with Lucas Nelson and Dino Dai Zovi) of my latest book, "The Art of Software Security Testing" (Symantec Press, 2006). Yes, now


I'm a convert: The full life-cycle approach to security testing is what Chris and I preach today. It turns out that Chris Wysopal is one of the most recognized security researchers in the country. He's now cofounder and CTO of Veracode, where he's responsible for the software security analysis capabilities of Veracode's technology.

Unfortunately, Chris is correct: Testers don't do effective security testing. Many testers today still neglect basic security testing techniques, such as fuzzing, and companies still farm out security testing to expensive security researchers after the product has been released into production.

Testers have abundant knowledge, techniques and technology at their disposal. For example, testers, more so than developers, get a view of the entire system. While a developer might focus on her component alone, testers get to test the system integration and often focus on end-to-end testing. Testers generally have the entire system view. While a developer usually performs unit tests to determine if the feature positively does what it is supposed to do, testers also conduct "negative" testing: They test for how a feature



behaves when it receives an input it's not supposed to receive, for example. Additionally, by redirecting testing tools and techniques toward an attacker's perspective, backed up with threat modeling and secure development techniques, testers make a major contribution to product security.

When I present the Secure Software Development Life Cycle at conferences, attendees love the concept, but complain about the additional time that security testing requires. Indeed, security testing does require more time, but security must parallel the software development life cycle to be effectively built into a product. The sooner a security defect is uncovered in the product life cycle, the cheaper it is to fix. Imagine the implications of a security breach from a bug that wasn't uncovered before going live!
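Redirecting ordinary test tooling toward the attacker's view can be as simple as pushing hostile and boundary inputs through the same entry points a functional test would use. The sketch below is illustrative only; parse_request() is an invented stand-in for whatever input-handling routine the application actually exposes.

    #include <stdio.h>
    #include <string.h>

    /* Stand-in for the application's input-handling routine. */
    static int parse_request(const char *input)
    {
        /* Deliberately naive check: real code would be the unit under test. */
        return input != NULL && strlen(input) < 64;
    }

    int main(void)
    {
        /* Negative tests: inputs the feature is NOT supposed to receive. */
        const char *hostile[] = {
            "",                                    /* empty input          */
            "<script>alert(1)</script>",           /* markup injection     */
            "' OR '1'='1",                         /* SQL metacharacters   */
            "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"
            "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA",    /* oversized input      */
            "%s%s%s%n",                            /* format-string tokens */
        };

        for (unsigned i = 0; i < sizeof hostile / sizeof hostile[0]; i++)
            printf("input %u -> %s\n", i,
                   parse_request(hostile[i]) ? "accepted" : "rejected");
        return 0;
    }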

Adding Security Testing to the SDLC
In the SSDL, security issues are evaluated and addressed early in the system's life cycle, during business analysis, throughout the requirements phase, and during design and development of each software build. This early involvement allows the security team to provide a quality review of the security requirements specification, attack use cases and software design. The team will also gain a complete understanding of business needs and requirements and their associated risks. Finally, the team can design and architect the most appropriate system environment using secure development methods, threat modeling efforts and other techniques to generate a more secure design.

Early involvement is significant because requirements or attack use cases comprise the foundation or reference point from which security requirements are defined, the system is designed and implemented, and test cases are written and executed, and by which success is measured. The security team needs to review the system or application's functional specification.

Once it's been determined that a vulnerability has a high level of exploitability, the respective mitigation strategies must be evaluated and implemented. In addition, a process must be in place that allows for deploying the application securely. Secure deployment means that the software is installed with secure defaults. File permissions need to be set appropriately, and the secure settings of the application's configuration are used. After the software has been deployed securely, its security must be maintained throughout its existence. An all-encompassing software patch management process needs to be in place. Emerging threats must be evaluated, and vulnerabilities prioritized and managed. Infrastructure security, such as firewall, DMZ and IDS management, is assumed to be in place. Backup/recoverability and availability plans must also be present. And the team's roles and responsibilities in relation to security must be understood by all.

The SSDL outlines secure development processes.


SAMPLE SECURITY REQUIREMENTS
There are many examples of security requirements; a few are listed here. A tester who relies solely on requirements for testing and who usually would miss any type of security testing is now armed with this set of security requirements. From here, you can start developing the security test cases.
• The application stores sensitive user information that must be protected for HIPAA compliance. Therefore, strong encryption must be used to protect all sensitive user information wherever it is stored.
• The application transmits sensitive user information across potentially untrusted or unsecured networks. To protect the data, communication channels must be encrypted to prevent snooping, and mutual cryptographic authentication must be employed to prevent man-in-the-middle attacks.
• The application sends private data over the network. Therefore, communication encryption is a requirement.
• The application must remain available to legitimate users. Resource utilization by remote users must be monitored and limited to prevent or mitigate denial-of-service attacks.
• The application supports multiple users with different levels of privilege. The application assigns each user to an appropriate privilege level and defines the actions each privilege level is authorized to perform. The various levels need to be defined and tested. Mitigations for authorization bypass attacks need to be defined.
• The application takes user input and uses SQL. SQL injection mitigations are a requirement.
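The last requirement in the sidebar, SQL injection mitigation, translates directly into a testable coding rule: user input must be bound as a parameter, never concatenated into the statement. A minimal sketch using SQLite's C API; SQLite is just a convenient stand-in here, and the table and column names are invented.

    #include <stdio.h>
    #include <sqlite3.h>

    /* Look up a user with a bound parameter instead of string concatenation,
       so input such as "x' OR '1'='1" is treated as data, not SQL. */
    static int find_user(sqlite3 *db, const char *name)
    {
        sqlite3_stmt *stmt;
        int found = 0;

        if (sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?1;",
                               -1, &stmt, NULL) != SQLITE_OK)
            return -1;

        sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
        if (sqlite3_step(stmt) == SQLITE_ROW)
            found = 1;

        sqlite3_finalize(stmt);
        return found;
    }

    int main(void)
    {
        sqlite3 *db;
        if (sqlite3_open(":memory:", &db) != SQLITE_OK)
            return 1;
        sqlite3_exec(db, "CREATE TABLE users(id INTEGER, name TEXT);"
                         "INSERT INTO users VALUES(1, 'alice');",
                     NULL, NULL, NULL);

        printf("alice: %d\n", find_user(db, "alice"));
        printf("injection attempt: %d\n", find_user(db, "x' OR '1'='1"));

        sqlite3_close(db);
        return 0;
    }

A security test derived from this requirement would feed classic injection strings through every input that reaches SQL and confirm they are handled as literal data.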

No matter how strong your firewall rule sets are, or how diligent your infrastructure patching mechanism is, if your Web application developers haven't followed secure coding practices, attackers can walk right into your systems through port 80. To help make sure that won't happen, let's explore the phases of the SSDL.

Phase 1: Define Your Security Guidelines, Rules and Compliance Regulations
First, create a system-wide specification that defines the security requirements that apply to the system; it can be based on specific government regulations. One such company-wide regulation could be the Sarbanes-Oxley Act of 2002, which contains specific security requirements. For example, Section 404 of SOX states "Various internal controls must be in place to curtail fraud and abuse." This can serve as a baseline for creating a company-wide security policy that covers this requirement. Role-based permission levels, access-level controls and password standards and controls are just some of the elements that need to be


implemented and tested to meet the requirements of this specific SOX section.

The Open Web Application Security Project (OWASP) lists a few security standards such as the ISO 17799, the International Standard for Information Security Management, a well-adopted and well-understood standard published by the International Organization for Standardization. However, it has rarely been applied specifically to those concerned with managing a secure Web site. When you implement a secure Web application, information security management is unavoidable. ISO 17799 does an excellent job of identifying policies and procedures you should consider. But it doesn't explain how they should be implemented, nor does it give you the tools to implement them. It's simply a guide of which policies and procedures you should consider, and doesn't mandate that you should implement them all.

OWASP recommends the Web Application Security Standards (WASS) project, which proposes a set of minimum requirements a Web application must exhibit if it processes credit card information. This project aims to develop specific, testable criteria that can stand alone or be integrated into existing security standards such as the Cardholder Information Security Program (CISP), which is vendor- and technology-neutral. By testing against this standard, you should be able to determine that minimal security procedures and best practices have been followed in the development of a Web-based application.

Another such company-wide security regulation could state, for example, "The system needs to consider the HIPAA privacy and security regulations and be compliant," "The system will meet the FISMA standards," "The system will be BASEL II–compatible," "The system needs to meet the Payment Card Industry Data Security Standard" or "We must abide by the Gramm-Leach-Bliley (Financial Modernization) Act," to name just a few.

Phase 2: Document Security Requirements, Develop Attack Use Cases
A common mistake is to omit security requirements from any type of requirements documentation. Not only do security requirements aid in software design, implementation and test case development, they also can help determine technology choices and areas of risk. The security engineer should insist that associated security requirements be described and documented along with each functional requirement. Each functional requirement description should contain a section that documents the specific security needs of that particular requirement that deviate from the system-wide security policy or specification.

Guidelines for requirement development and documentation must be defined at the project's outset. In all but the smallest programs, careful analysis is required to ensure that the system is developed properly. Attack use cases are one way to document security requirements. They can lead to more thorough secure system designs and test procedures. See the sidebar on this page for examples.

Defining a requirement's specific quality measure helps reduce fuzzy requirements. For example, everyone would agree with a statement such as "The system must be highly secure," but each person may have a different interpretation of what that means. Security requirements don't endow the system with specific functions. Instead, they constrain or further define how the system will handle any function that shouldn't be allowed. This is where the analysts should look at the system from an attacker's point of view. Attack use cases can be developed that show behavioral flows that aren't allowed or are unauthorized. They can help you understand and analyze security implications of pre- and post-conditions. "Includes" relationships can illustrate many protection mechanisms, such as the logon process. "Extends" relationships can

ATTACK PATTERNS TO APPLY THROUGHOUT THE SSDL
• Define security/software development roles and responsibilities.
• Understand the security regulations your system must abide by, as applicable.
• Request a security policy if none exists.
• Document security requirements and/or attack use cases.
• Develop and execute test cases for adherence to umbrella security regulations, if applicable. Develop and execute test cases for the security requirements/attack use cases described in this article.
• Request secure coding guidelines and train software developers and testers on them.
• Test for adherence to secure coding practices.
• Participate in threat modeling walkthroughs and prioritize security tests.
• Understand and practice secure deployment practices.
• Maintain a secure system by having a patch management process in place, including evaluating exploitability.



illustrate many detection mechanisms, such as audit logging. Attack use cases list ways in which the system could possibly be attacked.

Security defect prevention involves the use of techniques and processes that can help detect and avoid security errors before they propagate to later development phases. Defect prevention is most effective during the requirements phase, when the impact of a change required to fix a defect is low. If security is in everyone's minds from the beginning of the development life cycle, they can help recognize omissions, discrepancies, ambiguities and other problems that may affect the project's security.

Requirements traceability ensures that each security requirement is identified in such a way that it can be associated with all parts of the system where it is used. For any change to requirements, is it possible to identify all parts of the system where this change has an effect? Traceability also lets you collect information about individual requirements and other parts of the system that could be affected by requirement changes, such as designs, code or tests. When informed of requirement changes, security testers should make sure that all affected areas are adjusted accordingly.

the program from operating securely no matter how perfectly it’s implemented by the coders. Implementation vulnerabilities are caused by security bugs in the actual coding of the software. Static analysis tools can detect many implementation errors by scanning the source code or the binary executable. These tools are useful in finding issues such as buffer over-

R

OLES AND RESPONSIBILITIES It’s often unclear whose responsibility security really is. Is it the sole domain of the infrastructure group who sets up and monitors the networks? Is it the architect’s responsibility to design security into the software? For effective security testing to take place, roles and responsibilities must be clarified. In the SSDL, security is the responsibility of many. It’s a mistake to rely on infrastructure or the network group to simply set up the IDSes and firewalls and have them run a few network tools for security. Instead, roles and responsibilities must be defined so that everyone understands who’s testing what and an application testing team, for example, doesn’t assume that a network testing tool will also catch application vulnerabilities.

Phase 3: Perform Architectural and Design Reviews; Identify and Define Threat Models Security practitioners need a solid understanding of the product’s architecture and design so that they can devise better and more complete security strategies, plans, designs, procedures and techniques. Early security team involvement can prevent insecure architectures and low-security designs, as well as help eliminate confusion about the application’s behavior later in the project life cycle. In addition, early involvement allows the security expert to learn which aspects of the application are the most critical and which are the highest-risk elements from a security perspective. This knowledge enables security practitioners to focus on the most important parts of the application first and helps testers avoid over-testing low-risk areas and under-testing the high-risk ones. With threat modeling, the various methods of attack are identified and defined. With threat modeling, you can find security problems early, before coding them into products. This helps determine the “highest-risk” parts of application—those that need the most scrutiny throughout the software development effort. Another valuable benefit of the threat model is that it can give you a sense of completeness. Saying “Every data input is contained within this drawing” is a powerful statement that can’t be made at any other point.

Phase 4: Use Secure Coding Guidelines; Differentiate Between Design and Implementation Vulnerabilities
To understand how vulnerabilities get into software, the developer must learn how to prevent them from sneaking into programs and must be able to differentiate design from implementation vulnerabilities. A design vulnerability is a flaw in the design that precludes the program from operating securely no matter how perfectly it's implemented by the coders. Implementation vulnerabilities are caused by security bugs in the actual coding of the software. Static analysis tools can detect many implementation errors by scanning the source code or the binary executable. These tools are useful in finding issues such as buffer overflows, and their output can help developers learn to prevent the errors in the first place.
You can also send your code to a third party for defect analysis to validate your code's security for compliance reasons or customer requirements. Because this usually doesn't take place until the code has already been developed, initial standards should be devised and followed; the third party can then verify adherence and uncover other security issues.
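By way of example, the snippet below shows an implementation vulnerability and its fix side by side. The classic case named above is the C buffer overflow; this Python sketch substitutes SQL injection as an equivalent coding-level flaw that source scanners routinely flag, and the table and column names are invented.

# Implementation-level vulnerability vs. its fix, in Python.
# (SQL injection stands in here for a coding-level bug such as a buffer
#  overflow; table and column names are invented for illustration.)
import sqlite3

def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # BAD: untrusted input is concatenated into the SQL statement, so
    # input such as  "x' OR '1'='1"  changes the query's meaning.
    query = "SELECT id, name FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_fixed(conn: sqlite3.Connection, username: str):
    # GOOD: a parameterized query keeps data out of the SQL grammar,
    # removing the injection path regardless of what the caller passes in.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")
    print(find_user_fixed(conn, "alice"))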



Phase 5: The Colorful World of Black-, White- and Gray-Box Testing
Black-, white- and gray-box testing refer to the perspective of the tester when designing test cases: black-box testing from outside, with no visibility into the application under test; white-box testing from inside, with total source code visibility; and gray-box testing with access to the source code plus the ability to seed target data and build specific tests for specific results.
The test environment setup is an essential part of security test planning. It requires planning, tracking and managing test environment setup activities, where material procurements may have long lead times. The test team must schedule and track environment setup activities; install test environment hardware, software and network resources; integrate and install test environment resources; obtain and refine test databases; and develop environment setup scripts and test bed scripts. Setup also includes developing security test scripts based on the attack use cases described in SSDL Phase 2; executing and refining those scripts; conducting evaluation activities to avoid false positives and false negatives; documenting security problems via system problem reports; helping developers understand and reproduce the issues; performing regression and other tests; and tracking problems to closure.
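Here is a minimal sketch of what an attack-use-case-driven, black-box security test script might look like. The target URL, parameter name and pass/fail heuristics are hypothetical, and the example assumes the widely used third-party requests library is available in the test environment.

# Black-box security test sketch driven by an attack use case:
# "attacker submits malicious input to a search field." The URL, parameter
# and pass/fail criteria are hypothetical placeholders for illustration.
import requests

ATTACK_INPUTS = [
    "<script>alert(1)</script>",   # reflected XSS probe
    "' OR '1'='1",                 # SQL injection probe
    "../../etc/passwd",            # path traversal probe
]

def probe_search_endpoint(base_url: str) -> list[str]:
    findings = []
    for payload in ATTACK_INPUTS:
        resp = requests.get(f"{base_url}/search", params={"q": payload}, timeout=10)
        # A 5xx response or the payload echoed back unencoded both warrant
        # a closer (gray- or white-box) look; record them as findings.
        if resp.status_code >= 500 or payload in resp.text:
            findings.append(f"{payload!r} -> HTTP {resp.status_code}")
    return findings

if __name__ == "__main__":
    for finding in probe_search_endpoint("http://test-env.example.local"):
        print("Potential issue:", finding)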

Phase 6: Determine the Exploitability of Your Vulnerabilities
Ideally, every vulnerability discovered in the testing phase of the SSDL can be easily fixed. But the effort required to address a vulnerability can vary widely, depending on whether its cause is a design or an implementation error.

A vulnerability's exploitability is an important factor in gauging the risk it presents. You can use this information to prioritize the vulnerability's remediation among other development requirements, such as implementing new features and addressing other security concerns. Determining a vulnerability's exploitability involves weighing five factors (a rough scoring sketch follows at the end of this section):
• The access or positioning required by the attacker to attempt exploitation
• The level of access or privilege yielded by successful exploitation
• The time or work factor required to exploit the vulnerability
• The exploit's potential reliability
• The repeatability of exploit attempts
Use each vulnerability's risk to prioritize it against the others; then, based on risk and priority, address the vulnerabilities alongside other development tasks (such as new features). This is also the phase in which you can manage external vulnerability reports. Exploitability needs to be reevaluated regularly because exploitation always gets easier over time: Crypto gets weaker, people figure out new techniques and so on.
Once the six phases of the Secure Software Development Lifecycle have been carried out, the secure defaults are set and understood, and testers have verified the settings, the application is ready to be deployed.
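To illustrate how those five factors might be turned into a working prioritization, the sketch below averages per-factor scores into a single exploitability number. The 1-to-5 scales, equal weighting and vulnerability names are invented assumptions rather than a standard formula.

# Rough exploitability-scoring sketch over the five factors listed above.
# Scales (1 = favors the defender, 5 = favors the attacker) and the equal
# weighting are invented assumptions, not a standard formula.
FACTORS = (
    "access_required",   # positioning the attacker needs to attempt exploitation
    "privilege_gained",  # level of access yielded by success
    "work_factor",       # time/effort needed to exploit (5 = trivial)
    "reliability",       # how dependably the exploit works
    "repeatability",     # how freely attempts can be repeated
)

def exploitability(scores: dict[str, int]) -> float:
    """Average the five factor scores; higher means easier to exploit."""
    return sum(scores[f] for f in FACTORS) / len(FACTORS)

vulns = {
    "VULN-101 unauthenticated admin page": dict(zip(FACTORS, (5, 5, 4, 5, 5))),
    "VULN-102 timing side channel":        dict(zip(FACTORS, (2, 3, 1, 2, 3))),
}

# Re-run this ranking periodically: exploitability tends to rise over time.
for name, scores in sorted(vulns.items(), key=lambda kv: exploitability(kv[1]), reverse=True):
    print(f"{exploitability(scores):.1f}  {name}")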

BZ Research presents:
• Sixth Java Use and Awareness Study (with comparisons to the previous studies), December 2006, Study #7672. Available January 18, 2007!
• Fifth Microsoft .NET Adoption Study (with comparisons to the previous studies), August 2006, Study #3556
• Third Eclipse Adoption Study (with comparisons to the previous studies), November 2006, Study #2352
• Third SCM, Defect Tracking and Build Management Study (with comparisons to the previous studies), September 2006, Study #4806
• Third Database, Data Access, Integration and Reporting Study (with comparisons to the previous studies), July 2006, Study #6604
• First AJAX Use and Buying Plans Study, July 2006, Study #6100

www.bzresearch.com




Best Practices

Moving Beyond Make
Geoff Koch

Listening to audience questions about build management at various conference sessions and webinars, Laurent Brack has started to notice something. "The questions are always confused about what build management is all about," said Brack, a software manager at LSI Logic in Milpitas, Calif. "People always think it's about the make utility; it's not about make."
Make is the avuncular figure among utilities for automatically building large applications. Created in the late 1970s at Bell Labs and incorporated into Unix, make is still celebrated for its ability to handle much of the complexity associated with dependency tracking and archive handling.
Yet make and its many progeny—examples include dmake, for Sun Microsystems software; Speedy Make, an XML-based utility; and Rake, a Ruby-based build tool—may be a shrinking part of overall build management activities. Build managers today dabble in everything from systems integration to full-fledged IT management. And some, like Brack, get riled up on reading quotes like this one, from an Aug. 15, 2005, special report I wrote for SD Times:
"'A build tool is not rocket science,'" said Robert 'Uncle Bob' Martin, CEO of software consulting company Object Mentor and co-author of the Agile Manifesto (www.agilemanifesto.org). 'Any team that can build a software project can build a tool to build that project. The tool does not need to be complicated, nor does it need to solve world peace. All it has to do is build the system, run the tests and report status.'"
"I am wondering if we are complete idiots," Brack wrote in an e-mail exchange leading up to our interview. "Our system is pretty complicated."
LSI mostly sells silicon for embedded systems, hardly a one-size-fits-all market. Brack's team must build working software for what soon will be four generations of silicon and 15 different products, each with unique drivers and memory configurations—and all from a single code base.

Busting BuildForge
The complexity presented enough of a challenge to bring LSI's previous build-management tool, BuildForge, to its knees. This happened despite several personal interventions and debug efforts from Joe Senner and Michael O'Rourke, at the time BuildForge's chief technology officer and vice president of development, respectively.
BuildForge has since been acquired by IBM. Meanwhile, Brack has taken LSI's business elsewhere, while filing away at least a few lessons about the importance of up-front work with build management vendors.
One of the problems in dealing with BuildForge—not his decision, Brack insists—was deciding to buy the product after watching it successfully execute various LSI build tasks on just a handful of machines. The complete build system at LSI includes dozens of machines, many of which must work in parallel. BuildForge ground to a halt when deployed in this broader environment. Eight months and three releases later, the tool was finally working more or less as advertised.
"Establish lots of benchmarks with vendors," Brack said, rattling off his takeaways and adding that for all the nifty features and sales promises, any system must be able to reliably build the software day in and day out.
Today, Brack seems mostly pleased with the tools LSI is using from another

company that specializes in parallel, distributed builds. Much of our conversation, however, focused on decidedly non-software topics.

DIY: From Tinkerer to Specialist It turns out Brack has created his own build and test lab from the ground up. Cool! Could I see a picture or two, maybe to run with the column? “Sorry, we’re not even supposed to bring cameras into the building,” Brack demurred, invoking a Silicon Valley obsession with secrecy, especially when it comes to the world of semiconductors. He did, however, give a fairly detailed inventory of what sounded like a positively Rube Goldberg–type setup. But for a few moderately high-powered Dell servers, the lab he described is mostly full of rack upon rack of cheap “pizza box” systems and various mounted boards. As he has moved from the ranks of tinkerer to serious systems specialist, Brack has learned how to distinguish between a build or test failure caused by a hard drive gone bad versus clunky code, how to push out software updates to the entire unwieldy cluster, and how to support remote use of the cluster by LSI development teams in Italy, India and China. Despite the learning curve, the payoff is an intimate feel for a system designed to be scalable and handle an emerging continuous-build, continuous-test approach to development at LSI.

Software as Service
This approach, in fact, is taking root through much of the software world, which is morphing to respond to new software-as-service business models, shorter development cycles and, as the case of QuickBooks illustrates, an increasing pursuit of niche markets and tailored features.
A few years ago when software configuration manager Jon Burt joined Mountain View, Calif.–based Intuit, the company had just two or three annually released versions of QuickBooks. Today Burt's team has doubled in size, but it's tasked with supporting more than a dozen QuickBooks versions, all of which are released multiple times each year. "It's basically 50 to 75 releases per year," said Burt. "We're doing lots more in parallel than when I joined the company."
Like Brack, Burt spent much time talking about the hardware underpinnings of faster, more parallel builds. His team's builds run on clusters of $800 single-CPU Dell computers. If one breaks, they just throw it away. "It's the Google approach," Burt said, referring to the search giant's often-described massively distributed, commodity-based server farms.

Overall Awareness
With all this talk of hardware and the conspicuous absence of any mention of make, it seemed reasonable to ask just what skills are required of today's build managers. Burt, as it happened, had an open position at the time of our interview, along with plans to start work on a large-scale build and test infrastructure that would make it easier to catalog, manage and share software components across the company.
"When you say 'build management,' it sounds like someone sitting around watching a program run and pushing buttons," Burt said. "In fact, only 10 percent of the time here is spent on the day-to-day build grind. I consider this a software engineering position, and the majority of the job will entail integrating systems."
"It's really broad rather than deep technical skills that are required," agreed Kevin Lee, a U.K.-based IBM solution architect and author of "IBM Rational ClearCase, Ant, and CruiseControl: The Java Developer's Guide to Accelerating and Automating the Build Process" (IBM Press, 2006). "In J2EE, for example, there is actually quite a long list of build-test-deploy-related technologies, including database technology. To put together a successful build, you really need to understand the overall process."
Which brings us back to Brack, the software engineer who has gone from buying 24-port consumer switches to business-grade gigabit Ethernet hardware—all in the name of improving his company's build and test process. "We're either freaks or idiots," he said. "I still haven't figured that one out."
I have. I'm going with freaks—in the best, ambitious-techie sense of the word. Idiots is too strong an indictment for my taste, but those who believe effective build management stops with make are hardly treading the leading edge of test-related techniques.

Geoff Koch invites stories from hands-on developers for his column. Write to him at koch.geoff@gmail.com.

Index to Advertisers

Advertiser                                      URL                              Page Number
BZ Research                                     www.bzresearch.com               41
EclipseCon 07                                   www.eclipsecon.org               35
Fortify                                         www.fortifysoftware.com          12-13
Klocwork                                        www.klocwork.com                 47
Hewlett-Packard                                 www.optimizetheoutcome.com       31-32, 48
Parasoft                                        www.parasoft.com/STPmag          9
Perforce                                        www.perforce.com                 6
Seapine Software Inc.                           www.seapine.com/stptcmr          4
Software Test & Performance                     www.stpmag.com                   8
Software Test & Performance Conference 2006     www.stpcon.com                   2-3
SPI Dynamics                                    www.spidynamics.com/QA2          36
Software Security Summit                        www.S-3con.com                   42, 45
TechExcel                                       www.techexcel.com/alm            19




Future Test

Teamwork Is The Future Of Testing
Murtada Elfahal

Teamwork is a critical part of any successful software development project, and nowhere is this more true than in the testing process. As software applications grow larger and more complex, they tend to involve larger, more diverse and geographically dispersed teams. All this makes teamwork an important factor in the success of the software development process.
Effective teamwork is important for building higher-quality products. For one, being part of a team can force members to focus on the bigger picture, and therefore on the success of the entire product.
It's important to involve team members in the decision process as often as is possible or practical. The synergy of group discussion frequently results in the best decision. Motivation also is improved when the team is involved in decision-making because the achievement is shared among the whole team.
Lone-wolf developers (programmers or testers) can learn to improve their development methods, techniques and pace. Being a team player does not impair independent thought; it actually increases the independence of team members by expanding their knowledge, creativity and experience.
Make a good team better. Try looking for more ways to involve greater numbers of team members in areas that might not ordinarily be part of the team process, such as planning, execution, problem solving and designing new features.
A good process is created for a reason, and it should exist in a continuous state of improvement. Test engineers can use processes to ease work, encourage team contribution and solicit help. It's also helpful to inform your team members of the different stages of their product's development; explain what's expected from them at each stage, and how they may be able to contribute to the success of each stage.
Two people working to solve one problem are likely to reach a better, more creative solution more quickly than one person trying to solve the same problem alone. Use brainstorming sessions to help solve big problems. Involve as many team members as possible in these sessions, and remember: No idea is a bad one. Sometimes even the silliest-sounding idea gives birth to a great one after the contributions of multiple brains.
Don't play the blame game. Being a team player is a skill that can be learned, but it can also be unlearned. For example, at times of crisis during a project, people often point fingers and tear into the team fabric. To combat this, management should encourage teams to view problems as team problems and solutions as a team responsibility.
Some people view their team members as competitors, doing their best to climb on their shoulders and leave others in the dust. And though this attitude might help one member become the team "star" and even help to foster individual advancement, it harbors hidden dangers for both the individual and the team. The greatest damage will be to the product and therefore to the organization—so the damage might come back to haunt them.


Some people are natural team players, able to recognize the value of teamwork in the success of their development process. For those who are less team oriented, management has a responsibility to observe and educate these individuals, and consistently remind team members of the benefits of cooperation in the big picture. You can learn a lot while teaching others, and helping someone encourages him to help you.
Tools of the trade. There are numerous tools for creating, automating and managing tests and handling other aspects of the process. Finding the one that will best serve your team takes some work. The first step is to inventory the technologies currently in use and list which benefits they provide. Not necessarily commercial testing tools, these can include applications or scripts written within the organization to perform or automate certain tasks and ease the testing process for the team. Automation will encourage and ease teamwork if it can be used to execute a difficult part of the application.
However, when evaluating a commercial testing tool, be mindful of the benefits that the tool will provide to the entire team. For example, a test management tool—used for tracking test-case execution—if used properly, also can be useful in dividing tasks and responsibilities. Then, every tester in the project will be able to follow up on the test status just by logging into different sections of the test. This empowerment often can lead to increased volunteerism and contribution. Efficient use of technology will ease teamwork for tasks like documentation, which helps team members by making all information available all the time.
Though your team may be complex, unwieldy or scattered throughout the globe, the tools and techniques outlined above will help to kick-start the community spirit that will spur your project toward a successful outcome.

Murtada Elfahal has been in development and QA for the past six years. He is currently a software test engineer in the systems integration division of an international electronics manufacturer. You can reach him through feedback@bzmedia.com.



HP software is turning I.T. on its head, by pairing sixteen years of Mercury’s QA experience with the full range of HP solutions and support. Now you can have the best of both worlds. Learn more at OptimizeTheOutcome.com

©2006 Hewlett-Packard Development Company, L.P.

There’s a new way to look at QA software.

