stp-2007-09

Page 1

A BZ Media Publication

Best Practices: Web Load Testing

VOLUME 4 • ISSUE 9 • SEPTEMBER 2007 • $8.95 • www.stpmag.com

Clean Your Room! And Get Your QA Act Together

A White-Box Approach To SOA Testing

How (And Why) To Justify Your Existence

A Set of Techniques With Plenty of Pop


October 2– 4, 2007 • Hyatt Regency

"You'll find information outside of your daily activities, and options/alternatives to think about new approaches to testing." —Alex Kang, Staff Engineer, Tellabs

"Good for a change in mindset, process improvement and better coding and testing." —Yiping Zhang, Software Engineer, Intrado

Fall Testing Secrets Revealed!

register today at


Cambridge • Boston, MA

Register by Sept. 14 and get the Early Bird Discount! Save $200!

TERRIFIC TOPICS! Managing Test/QA Teams • Testing SOA and Web Services • C# and Java Testing • Agile Testing • Locating Performance Bottlenecks • Just-in-Time Testing • Effective Metrics • Requirements Gathering • Improving Java Performance • Risk-Based Testing Strategies • Security Testing

TOTAL IMMERSION! Choose From 70+ Classes Pick From 8 In-Depth Tutorials Hands-on Tool Demo Sessions Network With Colleagues Ice Cream Social Reception in Demonstration Hall Pose Your Questions to 25+ Exhibitors Mingle With More Than 40 Speakers

SUPER SPEAKERS! Scott Barber • Rex Black

www.stpcon.com

Michael Bolton • James Bach • Robert Galen • Linda Hayes • Jeff Feldstein • Robin Goldsmith • Mary Sweeney • Rob Sabourin • Robert Walsh and dozens more!


The days of

‘Play with it until it breaks’ are over!

Learn how to thoroughly test your applications in less time. Read our newest white paper: “All-pairs Testing and TestTrack TCM.” Download it today at www.seapine.com/allpairs4

Seapine ®

TestTrack TCM

Software for test case planning, execution, and tracking You can’t ship with confidence if you don’t have the tools in place to document, repeat, and quantify your testing effort. TestTrack TCM can help you thoroughly test your applications in less time. Seapine ALM Solutions: TestTrack Pro Issue & Defect Management

TestTrack TCM Test Case Planning & Tracking

Surround SCM Configuration Management

QA Wizard Pro

Automated Functional Testing

In TestTrack TCM you have the tool you need to write and manage thousands of test cases, select sets of tests to run against builds, and process the pass/fail results using your development workflow. With TestTrack TCM driving your QA process, you’ll know what has been tested, what hasn't, and how much effort remains to ship a quality product. Deliver with the confidence only achieved with a well-planned testing effort.

• Ensure all steps are executed and in the same order, for more consistent testing.
• Know instantly which test cases have been executed, what your coverage is and how much testing remains.
• Track test case execution times to calculate how much time is required to test your applications.
• Streamline the QA-Fix-Re-test cycle by pushing test failures immediately into the defect management workflow.
• Cost-effectively implement an auditable quality assurance program.

Download your fully functional evaluation software now at www.seapine.com/stptcm or call 1-888-683-6456. ©2007 Seapine Software, Inc. Seapine, the Seapine logo and TestTrack TCM are trademarks of Seapine Software, Inc. All Rights Reserved.


VOLUME 4 • ISSUE 9 • SEPTEMBER 2007

Contents

12

A BZ Media Publication

COVER STORY How to Keep Quality Assurance From Being a Juggling Act

With a sweet three-part strategy—quality planning, quality assurance and quality control—techniques such as weekly meetings, status reports and collocation can help you keep all the QA balls in the air. By Paul Joseph

18

Clean Up Your Room! (And Your QA)

When you establish your Quality Engineering Maturity Level in both technology and process fields, you can tidy your methodology while spiffing up your project—as well as your entire organization. By Roger Nessier

26

SOAs Find Clarity With White-Box Testing

Group SOA testing according to level—service, security, orchestration, governance and integration—and you'll get a clear view to see your way through complexity. By David Linthicum

Departments

7 • Editorial
Hackers messing with Eclipse Plugin Central are the lowest of the low. Get a life!

8 • Contributors
Get to know this month's experts and the best practices they preach.

32

How Much Is Testing Really Worth? Testers are all too often misunderstood and underappreciated—so what is it going to take to broadcast your value and let your QA efforts be known? By Theresa Lanowitz and Dan Koloski

9 • Feedback It’s your chance to tell us where to go.

10 • Out of the Box New products for testers.

36 • Best Practices Web 2.0 is more than hype, but it presents a load-testing challenge. By Geoff Koch

38 • Future Test Adopting agility can be tough for testers and PMs—so cultivate it! By Shaun Bradshaw

SEPTEMBER 2007

www.stpmag.com •

5


Take the handcuffs off quality assurance

Empirix gives you the freedom to test your way. Tired of being held captive by proprietary scripting? Empirix offers a suite of testing solutions that allow you to take your QA initiatives wherever you like. Download our white paper, “Lowering Switching Costs for Load Testing Software,” and let Empirix set you free.

www.empirix.com/freedom


Ed Notes VOLUME 4 • ISSUE 9 • SEPTEMBER 2007 Editor Edward J. Correia +1-631-421-4158 x100 ecorreia@bzmedia.com

EDITORIAL Editorial Director Alan Zeichick +1-650-359-4763 alan@bzmedia.com

Copy Editor Laurie O’Connell loconnell@bzmedia.com

Contributing Editor Geoff Koch koch.geoff@gmail.com

ART & PRODUCTION Art Director LuAnn T. Palazzo lpalazzo@bzmedia.com

Art /Production Assistant Erin Broadhurst ebroadhurst@bzmedia.com

SALES & MARKETING Publisher

Ted Bahr +1-631-421-4158 x101 ted@bzmedia.com Associate Publisher

List Services

David Karp +1-631-421-4158 x102 dkarp@bzmedia.com

Agnes Vanek +1-631-443-4158 avanek@bzmedia.com

Advertising Traffic

Reprints

Phyllis Oakes +1-631-421-4158 x115 poakes@bzmedia.com

Lisa Abelson +1-516-379-7097 labelson@bzmedia.com

Director of Marketing

Accounting

Marilyn Daly +1-631-421-4158 x118 mdaly@bzmedia.com

Viena Isaray +1-631-421-4158 x110 visaray@bzmedia.com

READER SERVICE Director of Circulation

Agnes Vanek +1-631-421-4158 x111 avanek@bzmedia.com

Customer Service/ Subscriptions

+1-847-763-9692 stpmag@halldata.com

Cover Art by The Design Diva, NY

President Ted Bahr Executive Vice President Alan Zeichick

BZ Media LLC 7 High Street, Suite 407 Huntington, NY 11743 +1-631-421-4158 fax +1-631-421-4130 www.bzmedia.com info@bzmedia.com

Software Test & Performance (ISSN- #1548-3460) is published monthly by BZ Media LLC, 7 High St. Suite 407, Huntington, NY, 11743. Periodicals postage paid at Huntington, NY and additional offices. Software Test & Performance is a registered trademark of BZ Media LLC. All contents copyrighted 2007 BZ Media LLC. All rights reserved. The price of a one year subscription is US $49.95, $69.95 in Canada, $99.95 elsewhere. POSTMASTER: Send changes of address to Software Test & Performance, PO Box 2169, Skokie, IL 60076. Software Test & Performance Subscribers Services may be reached at stpmag@halldata.com or by calling 1-847-763-9692.

SEPTEMBER 2007

Authentic Battle, Epic Proportions
By Edward J. Correia

It was a lesson learned by developers at EPIC, a Web site popular among Eclipse developers and testers also known as Eclipse Plugin Central.

Just days after the launch of the newly redesigned www.eclipseplugincentral.com Web site, someone stole or guessed an administrator password and made mischief with sendmail, broadcasting unsolicited messages to all the site's registered users. They reacted immediately and with obvious concern.

"That's life in the connected age," said Bjorn Freeman-Benson, technical director of process and infrastructure at the Eclipse Foundation. "You can't restrict [distribution] to upstanding people." He said the breach was likely due to a good guess of a weak administrator password on the fledgling system, which is based on PHP Nuke.

Embarrassing, he said, but not dangerous. "I don't think anybody got infected by viruses, but we don't want anybody to be afraid to open e-mail from Eclipse," he said. Service was disrupted only for a short time.

The EPIC Web site's testers have by now certainly asked themselves how this could have been prevented. What types of vulnerability scans were performed before the site went live? Did the tests include multiple levels of authentication? Were the default passwords changed?

The incident occurred the same day that the launch was covered in the EclipseSource e-mail newsletter, also produced by ST&P parent company BZ Media, and written by yours truly. So I couldn't help thinking that the person responsible for this might also be a subscriber to my newsletter. Ouch.

There's no logic behind malicious hacking. As with terrorism, innocent people are negatively affected by the fanaticism of a single person or group. What possible reason would someone have to destroy what someone else has worked hard to build? Perhaps it's jealousy—some sick joy in killing something that you yourself are incapable of creating.

Or perhaps it's like the virus makers, who some say gain a demented sense of pride when their disease makes national headlines. To them, success means that enough Internet traffic was interrupted, business productivity sapped and IT departments sent into turmoil to put the name of their malware onto the teleprompters of Katie Couric and Charlie Gibson.

So, to the hacker: If you're reading this right now, you've probably got a knack for coding. Why not put your talents to better use and help build something good, instead of tearing it down. There are thousands of high-paying jobs for talented programmers out there. Be productive, not destructive.

And to the EPIC testers, this month's issue is for you; it's dedicated to quality assurance. And as you probably already know, there are dozens of free high-quality security scanning tools available that you can use to test your Web site at every stage of development. Use them (after you've read and implemented the QA processes in this issue, of course)!

Ensuring Quality
For our focus on quality assurance this month, we present two industry experts who describe their methods for ensuring that released software is of the highest possible quality. See the Contributors section (page 8) for more on them.

Also notable is a feature on how to justify the existence of your QA department by Theresa Lanowitz and Dan Koloski.

7


Contributors The author of this month’s lead article is PAUL JOSEPH, a 12-year veteran of QA testing and consulting in areas of insurance services, health care, human resources applications, Web profiling, content management, and research and development. Prior to transitioning to IT, Paul spent 10 years as a civil engineer. Beginning on page 12, Paul breaks QA into its component parts of quality planning, quality assurance and quality control, offering guidance on how to define and ensure each. Paul’s experience includes developer management, QA and performance testing, end-user interaction, requirements collection and cost estimation. ROGER NESSIER is vice president of Symphony Services, a global product engineering and outsourcing consultancy. With years of experience in agile and Scrum methodologies, Roger counsels clients on management of product features, function and technology deliverables. Roger, who has managed teams as large as 120 people spread across the U.S., Canada and India, advises testers to clean up their act by attacking code a little at a time, rather than performing QA at the end of the development process. His quality engineering feature begins on page 18.

Successful testing of remote services requires an understanding of the architecture and interdependencies of their components. With the SOA wave showing no sign of flattening, we present the third and concluding installment in the SOA testing series by DAVID LINTHICUM, CEO of the SOA consulting and advisory firm Linthicum Group. David shows you how to ride the wave beginning on page 26, as he explains how to apply white-box testing to SOA systems and explores common misconceptions about what’s inside—and not inside— the box.

Does management appreciate your testing and QA efforts, or is your department considered an expensive bottleneck and delay to product deployment? If your answer is similar to the latter, you’ll need to turn directly to page 32. THERESA LANOWITZ, a voke technology analyst, and DAN KOLOSKI, director of strategy and business development at testing and monitoring tools maker Empirix, team up to bring you their sage advice on selling the value of your efforts to folks on the top floor. Theresa was an analyst with Gartner for seven years. Dan has extensive experience in CRM, content management and e-commerce application design, testing and deployment. TO CONTACT AN AUTHOR, please send e-mail to feedback@bzmedia.com.

8

• Software Test & Performance

SEPTEMBER 2007


Feedback TESTING SOFTWARE WITHOUT TESTING SOFTWARE I thought that Edward J. Correia’s “Testing Without Requirements? Impossible?” (Test and QA Report, June 29, 2007) was a great article. The company I work for is actually a business forms distributor, with programmers on staff who continually modify the program. The company is not willing to invest in testing software at this time. I use Word and Excel for writing and tracking test cases. To document system problems and design changes, I use a Lotus Notes Database that the developers also use to make their corrections, and the company has a development/testing environment. I would like to see more articles about testing solutions for companies like mine. Sharon Heike Glenwood, MN

SANS REQUIREMENTS IS THE ONLY WAY TO TEST Testing without requirements or documentation is the only way I use testers. Often the requirements are inaccurate, and the users come up with some new way of entering or retrieving data that I never thought of. This way, I pick up new user needs. Linda Ewen Forest Hills, NY

VARYING QA Regarding "QA Teams—Underappreciated, but Seldom Understood" by Edward J. Correia (T&QA Report, July 17, 2007), I actually don't agree 100 percent with what is written about QA teams in this article. I think it varies from management to management, corporation to corporation. The management that does not understand the meaning of QA will at first not favor or recognize a QA department; hence practically no QA for their products. But this sort of business/management does not survive for long in today's tough, competitive and quality-conscious environment. The management that values quality forms a separate QA department; provides them sufficient space and air; values their time, quality and inputs in improving their products, and succeeds in the long run. A place where QA people aren't respected or valued does not deserve QA people. On the other hand, at those places QA may not be fully focused, motivated or have the right direction and objectives. Jaideep Khanduja Haryana, India

POSITIVE, FUNCTIONAL AND PRODUCTIVE In response to "QA Teams—Underappreciated, but Seldom Understood," one of the more successful QA organizations with which I was associated was organized more like a newspaper than many software shops. Key components were:
1. Only the best became part of QA, and were compensated accordingly.
2. QA was the buck stop for every release (think editors). In other words, when errors were discovered post release, it was QA that was held accountable—not development (which helped QA maintain focus). Accordingly, development was appreciative of this safety net.
3. Project management was often facilitated from QA.
4. Senior management had QA and development reporting at same level, not as a sub-function of development.
Yes, there was the typical conflict of late deliveries, with teams pushing hard to have their code tested and released NOW. But all in all, it was positive, functional and very productive. Mark Stieg Highlands Ranch, CO

JUICE YOUR TEAM WITH CROSS-SILO INTERACTION Your July 17th article by Edward J. Correia is such a breath of fresh air! One reason I highly value my current solo work is the accurate description of inside political scenarios he described. You got my creative juices stirred up for possible ways to overcome the barriers. I think this is a function (encouraging cross-communication) HR reps can and should have as a part of their strategic job description. Ultimately, the QA has to do it, though. Testers and QA teams are at the heart of successful new product development and ongoing improvement. Some might say, "That goes without saying," but it doesn't seem that way when you are the one doing the job. (I am a good documenter, but a poor tester, but that may have been different had there been more cross-silo interaction.) Jeannie Welsch Atlanta, GA

DISTILLATION HICCUP From the editor: Thanks to those of you who alerted us to the error in the PDF of Software Test & Performance August issue. Our sincerest apologies for any inconvenience this may have caused. Apparently there was a hiccup in our PDF-making process that caused that issue's lead article to veer off course. A corrected version is available now at www.stpmag.com /retrieve/stp-0708.htm. FEEDBACK: Letters should include the writer’s name, city, state, company affiliation, e-mail address and daytime phone number. Send your thoughts to feedback@bzmedia.com. Letters become the property of BZ Media and may be edited for space and style. www.stpmag.com •

9


Out of the Box

An Oracle Original Also Aids Migration Claiming an industry first, Original Software in early August added capabilities to its flagship TestBench automation tool that allow it to test applications visually and at the database level, and to help manage migration to Oracle databases. The so-called intelligent data extraction capability also “automates the creation and extraction of test data subsets from live databases,” according to company documents, and reportedly works with local and remote data stores and without affecting the integrity of references. Visual test capabilities include viewing live table updates, data validation rules, environmental rollback and pairwise input scenarios.

The tool offers control over test data by taking checkpoints of data while testing, with the capability to roll back to any point, the company said. "This eliminates the need to build complex algorithms and external checks to make allowances for changes in the live data."

"TestBench… is the only software testing solution to allow total management and manipulation of the underlying database as well as the visual layer of the application," said Original CEO Colin Armitage in a statement accompanying the release.

(Photo caption: TestBench PC displays Oracle data cases visually, and simplifies testing and migration, the company claims.)

Adding Mac OS X to the Fold, VMware Now Covers Virtually All Platforms VMware last month began shipping VMware Fusion, a version of its desktop virtualization tool that allows Intel-based Macs to run Windows applications and any version of Windows from 3.1 to Longhorn x64 alongside Mac OS X, as well as Linux, Netware and Solaris. The US$79 tool has been in beta since December and began shipping on Aug. 6. VMware Fusion supports 32- and 64-bit versions of Windows, according to the company, addresses two processor cores simultaneously and can drive numerous USB 2.0 devices.

Virtual Benchmark

VMware Fusion runs virtually any version of Windows on Mac OS X, allowing regression testing back to the stone age.

10

• Software Test & Performance

The company in July gave testers using its virtualization tools their first means of measuring application performance. VMmark (www.vmware.com/go/vmmark) is a free utility that "measures the scalability of heterogeneous virtualized workloads and provides a consistent methodology so benchmark results can be compared across different virtualization platforms," according to a company document. The tool can help measure the scalability of virtualized workloads and give testers a consistent benchmark across different virtualization platforms, the company said.


Unleash the Hounds On Hackers Black Lab Security Systems recently unveiled Shadow “Insider Threat” Network Sensor, a tool that automates the detection and blocking of network intrusions for users of its Shadow Pocket Monitor. The US$149 tool identifies would-be attackers by IP address, logs their activities into a database and can send daily, weekly and monthly reports. It runs on Windows 2000 or later (including Vista) and works with any network switch that supports port mirroring.

Change Manager 4: Trio Sync-o Embarcadero Technologies in late July began shipping Change Manager 4.0, adding schema, data and configuration management capabilities to the database management tool. New features are implemented as three new modules: CM/Data, CM/Config and CM/schema; pricing starts at US$795 per user. CM/Data compares, validates and synchronizes data within a single database or across different database platforms, including DB2, Oracle, SQL Server, Sybase on Linux, Unix and Windows. The high-speed tool can validate replications, identify and manage differences, and ensure accuracy of reference data across applications, according to company documents. The database configuration tool CM/Config “compares and monitors database configuration attributed across hundreds or thousands of database instances” and can identify settings that have changed or are not in compliance with corporate norms or regulations. CM/Schema compares, synchronizes, reports differences and can reconcile database schemas. The tool also can generate synchronization scripts for database reconciliation.

BrowserCam Added To Gomez Family With its acquisition in late June of BrowserCam, Gomez adds screen capture and remote access services to its

series of on-demand solutions for testing and managing the Web-application user experiences. BrowserCam’s remote access service lets Web designers, developers and QA engineers “validate user experiences across hundreds of possible combinations of operating systems, browser types and screen resolutions,” and allows for testing of dynamic technologies and techniques such as JavaScript, Flash and AJAX, according to company documents. The screen capture service can create a record of renderings for compatibility analysis and study. The service (www.browsercam.com/) is available now as a Gomez service.

WebInspect 7.5 Gives Apps the Once-Over SPI Dynamics, the security tools developer soon to be part of Hewlett-Packard, in August released WebInspect 7.5, sporting a new profiler that scans Web apps and suggests configuration settings for the most effective testing. A new traffic monitor reports HTTP activity during a scan in real time. A results window displays requests and responses sent by WebInspect during crawls and security audits. Completely rewritten in January to improve performance and compatibility with modern technologies and techniques, WebInspect 7.5 reportedly further improves auditing capabilities for spotting vulnerabilities in AJAX-based applications and better supports Vista.

CollabNet Hard At Work on SourceForge CollabNet in late July released SourceForge Enterprise Edition 4.4 with major improvements to Tracker, new project categorization capabilities and better integration with third-party SCM tools. It’s the platform’s first update since CollabNet acquired the software in April. The Tracker issue-management tool now permits artifacts to be broken down into an unlimited number of parent-child relationships, allows changes to the artifact field order, the creation of cross-project dependencies, and can “easily identi-

fy and manage blocking issues,” according to company documents. Tracker administration is made easier with pop-up calendars for issue, defect and request management, and flex fields for adding multiple user selections for artifacts. For users, the tool now can save filters, searches and customized columns. The new AJAX-based Tracker simplifies data management and other customizations. Browsing projects is simplified with a new categorization feature. This allows projects to be perused by taxonomy, making associated downloads, resources and assets easier to find and helping facilitate reusability and cross-team collaboration. Version 4.4 now integrates with IBM ClearCase and supports SSL-based server communication and CVS LDAP authentication. Also added is support for Subversion Multisite replication in real time regardless of network speed, with “local commit and access times while maintaining full security and source code coherency,” the company said.

Keep Web Services On Your SOAPSonar It’s not a new hockey league. Crosscheck Networks is a SOA testing tools company. In mid-July the company released SOAPSonar 3.0, an update to its flagship product that it says simplifies the creation of test cases for Web services deployments that are in an almost constant state of change. SOAPSonar includes tools for testing function and performance, debugging services and assessing their vulnerabilities, and for interoperability. Version 3 also extends the SOAPSonar testing framework, the company says, and includes facilities for test scheduling, reporting and e-mail, support for REST test cases and a command-line interface for integration with test management and build environments. Pricing starts at US$799 per year; the upgrade is free to licensees of version 2.6. Send product announcements to stpnews@bzmedia.com www.stpmag.com •

11


How to Keep Quality Assurance From Being A Juggling Act

12

• Software Test & Performance


By Paul Joseph

Most companies that develop software have a QA team. When asked what this team does,

the standard answer is, “Why, they do QA, of course!” The tone invariably implies a “duh.” But in reality, most companies don’t recognize that for software quality assurance to be truly effective, it should be just one component of a three-part strategy: quality planning (QP), quality assurance (QA) and quality control (QC). What’s the difference among the three? These important concepts govern the whole structure of your QA team as well as your company’s approach to software quality. You may also find that once you’re aware of the differences, your approach to software quality will change, quality will improve, and you can help save your company money by being more efficient.

The Quality Triumvirate
Quality planning is a QA group's general approach toward the management of product quality. QP includes a set of plans that are specific to each project. Quality assurance is just that—assurance that the software will have the specified functionality and will be up to a specified set of quality standards. Quality control is a set of rules that ascertain whether successive software builds continue to meet the specified functionality and quality.

I recall a relevant example from my days working in a civil engineering laboratory. Our main task was to provide QA/QC for large construction projects. The QP was a set of documents based on generic templates that were customized for each project. The plan laid down the overall framework for quality testing that we would use on a given project. QA was the approach we took to evaluate, for example, a steel bar that the project was going to use, and which we were seeing for the first time. It involved a lot of manual handling of the sample, determining which tests would apply, running these tests, and discussions with the design engineers about the results as compared to the desired values specified in the design documents. When this initial QA appraisal was over, the next task would be to determine the tests necessary to run as part of QC: the subset of QA tests to run on every new batch of steel bars to ensure that they were of the same quality as the sample. Overall, we used the QA process with its associated manual handling and extensive testing to obtain a feel for the item in question. Then, based on this feel, we developed the QC, the subset of tests to be run on every single such item that the project would use, to make sure the items had the specified functionality and quality. Two sub-teams were part of the quality testing: a QA team that was mainly based at the lab, and a QC team that was mainly based at the project site.

Paul Joseph is a developer and IT consultant living in Massachusetts.

Software, Steel, Whatever Conceptually, software QA is much the same as materials QA. Here too, the approach consists of quality planning, assurance and control. Quality planning is generally done for the project by the QA manager. The QA team consists of two subgroups: a QA group and a QC group, each with its own lead, and tasked with doing quality assurance and quality control, respectively. Figure 1 (see page 14) shows the breakdown of a quality team and its activities and responsibilities.

The Quality Plan Quality planning involves the creation of project plans that specify how to test the software to determine if it meets the required functionality, performance, scalability and reliability, and that it integrates correctly with other applications and any legacy software. In companies that have a project man-

Some Tasty New Steps for Your QA Team to Improve Quality Right Now

13


QUALITY ACT

FIG. 1: THE QUALITY CREW

Project Quality Management

Quality Plans 1. Test Strategy Plan 2. System Functionality Plan 3. Performance, Scalability and Reliability Plan 4. Integration Test Plan 5.User Acceptance Test Plan

Quality Planning (QA Manager)

Quality Assurance

Quality Control

QA Team 1. QA Lead+ Individual Contributors 2. Collocated with developers

QC Team 1. QC Lead+ Individual Contributors 2. Collocation optional

1. Manual inspection of new features as listed in the Release Notes for the build 2. Create new test cases for this new functionality 3. Manual validation of defects fixed in the build 4. Interface with developers, Product Management and Quality Control

1. Create and execute test scripts for selected use cases written by the QA team 2. Create and execute “deep QA” test programs 3. Performance, scaling and reliability testing 4. Interface with Release Engineering and Quality Assurance

agement’s responsibilities and so on. The system functionality plan continues the test strategy plan by addressing generic system issues such as platforms and environments to be used for the testing and for deployment. The performance scalability and durability plan describes how to test the application against performance, scalability and durability requirements. The integration test plan describes integration of the new product with existing products and systems. The user acceptance test plan describes acceptance testing and criteria. The project risks analysis plan identifies issues that could threaten, delay or derail the quality testing. Each plan should have a sign-off sheet on which agreed-upon representatives from senior management approve each plan. In my last decade of software development, I’ve seen the sign-off sheet actually used only once. However, the very presence of such a form helps set the tone and expectations for a project, resulting in senior management reading the plans in detail, even if they don’t physically sign off on them.

You Have My Assurance agement office, historical plans or generic templates of plans are maintained, and can be used by QA managers as a starting point. Such quality plans can set the tone with upper management and development and QA/QC teams about the approach that they can expect. Sometimes called artifacts, the plans also can serve as a (sometimes invisible) backbone of the QA/QC process. Personally, I wince at the thought that my spanking-new plans immediately qualify as an archeological specimen. Typically, my quality planning involves creating the following: • Test strategy plan • System functionality test plan • Performance, scalability and reliability test plan • Integration test plan • User acceptance test plan • Project risk analysis plan written from a QA perspective These plans don’t get into specifics about the testing, but rather are plans about plans, and tend to make for pretty dull reading. Their purpose is to clearly describe the procedures that will be followed during the detailed testing

14

• Software Test & Performance

of the product. Let’s look at each in more detail. The test strategy plan provides an overview of how the testing process will work for the newly proposed project. It describes the reference documents relevant to the QA process (such as the formal business requirements), the QA team, and the tools, techniques and methods to be used. Additionally, it specifies the deliverables to QA, smoke testing, defect tracking, bug triage (how priorities are assigned to bugs), new staff straining, handoffs from consultants, change control, senior man-

Quality assurance confirms that the application can meet the desired functionality and standards put in place in advance as specified by the business requirements and the quality plan. The QA group’s skill set should include the ability to comfortably meet with and talk to people face-toface. Its members interface with the business requirements group and the development team to understand the requirements. The QA team also writes test cases that address the requirements. They manually test functionality as it

TABLE 1: SAMPLE PROJECT FRAMEWORK Rumba Release 1.5 Builds 633, 643, 653, 663 Scripts

Manual

Bugs

John

NA

url test, Release Notes3

Queue3

Rita

NA

LDAP3, Release Notes3

Queue

Lana

NA

Release Notes3

Queue3

Joel

Client Script3

NA

NA

Monique

ETL Program

NA

NA

Administrator Script3

NA

NA

Mark

SEPTEMBER 2007


QUALITY ACT

becomes progressively available in the builds. In my experience, when builds are done on a weekly basis, the new build often has the same amount of new functionality and defects to validate as that of the previous build, so the time required to cover the new features manually and to validate the bugs remains roughly constant from build to build. Hence, one strong requirement of this process is that once the new application can be compiled and run, release engineering must provide a formal build to QA no less often than once a week. The first builds may have only five percent of the total anticipated functionality, but it starts the routine of examining new builds and turning them around within the week. It’s beneficial to the project to have a team member (preferably, the QA lead) sit in on product-management team meetings and listen as the team fleshes out the requirements. There is often pushback to this idea, with some fiscally conservative managers viewing this as premature and a waste of time and money. In my own experience, the reverse is often true, and a QA representative’s understanding of the requirements and their basis pays significant dividends down the road. Often, requirements get lost in translation, and what the development team delivers to QA doesn’t match what the QA lead knows that product management had in mind. Being present at the conceptual and requirements development meetings now gives the QA lead the confidence to bubble this up the chain of command and to quickly prevent the development team from wasting time and effort developing the wrong feature(s). I also find it useful for the QA team to have access to the code repository and the development tool used by the development group. At a minimum, I train the QA team lead to build the application and deploy it on a local desktop, using the head of the

checked-in code. This is useful for identifying new features in advance and clarifying any discrepancies between these features and the requirements. Another technique I find useful, especially in the early stages of development, is to have the lead developer and the developers who own modules of the code do periodic presentations to both the QA group and the product management group. This offers several benefits. First, the developer in question is usually pleased to be in the limelight and show off his work. Second, it gives the quality team a preview of the new functionality and access to its developer. Finally and importantly, it gives product management an opportunity to see how their requirements appear in reality.

Controlling Quality

Quality control ensures that successive builds delivered to the QA team continue to meet the functionality requirements and standards. The QC group’s skills lean toward scripting, automation and coding. They implement those test cases that lend themselves to scripting. They also work with the development team to discover opportunities for unit testing and release engineering to determine how to couple test scripts with the build process so that they run automatically after a build. Builds occur every night. If the build is successful, an automated build-management process can deploy it automatically. If that deployment is successful, the build management process can fire off the test scripts to test the new deployment. On completion of the test scripts, the build management process collects the results, including error details, if any, formats them into an e-mail, and distributes it to a list defined in the test strategy plan. Successful builds are then available to the QA team for manual testing. It’s hard to find a tool that creates scripts that you can set up to be automatically triggered to run, so the scripts

The QA team should have access to the code

repository and development tool used by the development group.

SEPTEMBER 2007

QA

BODY OF KNOWLEDGE

If you search the Web for QA terminology, you'll find various definitions of QP, QA and QC. These terms are dry as dust and somewhat hard to keep straight. For the purposes of this article, I've adopted definitions taken from the Project Management Body of Knowledge, the bible of the Project Management Institute:

Quality planning – Identifying which quality standards are relevant to the project and determining how to satisfy them.
Perform quality assurance – Applying the planned, systematic quality activities to ensure that the project employs all processes needed to meet requirements.
Perform quality control – Monitoring specific project results to determine whether they comply with relevant quality standards and identifying ways to eliminate causes of unsatisfactory performance.

often must be run manually. Given this manual constraint, the QC team frequently runs its scripts only on the formal weekly builds, not in an every-night build. In this situation, the QC team first runs a subset of the scripts on a build that is deployed to a development environment—a smoke test. If (and only if) this build passes this smoke test, it’s then promoted to the formal QA testing environment. The QC group, given its facility with scripting and coding, is often also responsible for testing application performance, scalability and reliability. Performance testing involves working with the QA group and product management to identify common use cases, the frequency with which they will occur, and the growth of use anticipated over time. That time is usually dependent on the company’s hardware upgrade cycles. Another responsibility of the QC group is to do what I call “deep QA/QC.” This involves validating parts of the program that aren’t testable by scripts, but which can be tested programmatically. It validates not just functionality, but also data. In one example where deep QA/QC www.stpmag.com •

15


QUALITY ACT

is required, ETL is part of creating and maintaining the application. To perform deep QA/QC, the QC subgroup includes (or has access to) a developer who will write the necessary test programs.
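The nightly build-deploy-test-and-mail flow described earlier in this section can be sketched in a few dozen lines. The Python below is illustrative only, not the author's tooling: the build and deploy commands, the two test-runner scripts and the mail addresses are hypothetical placeholders that a QC team would replace with its own.

import smtplib
import subprocess
from email.message import EmailMessage

# Hypothetical commands; substitute your own build, deploy and test scripts.
BUILD_CMD = ["./build.sh"]
DEPLOY_DEV_CMD = ["./deploy.sh", "dev"]
DEPLOY_QA_CMD = ["./deploy.sh", "qa"]
SMOKE_CMD = ["python", "run_smoke.py"]             # subset of scripts (smoke test)
FULL_SUITE_CMD = ["python", "run_all_scripts.py"]  # full scripted regression
RECIPIENTS = ["qa-team@example.com"]               # list defined in the test strategy plan


def run(step, cmd):
    """Run one pipeline step and capture its output for the report."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return {"step": step, "ok": result.returncode == 0,
            "details": (result.stdout + result.stderr)[-2000:]}


def nightly_build():
    report = [run("build", BUILD_CMD)]
    if report[-1]["ok"]:
        report.append(run("deploy to dev", DEPLOY_DEV_CMD))
    if report[-1]["ok"]:
        report.append(run("smoke test", SMOKE_CMD))
    if report[-1]["ok"]:
        # Only a build that passes the smoke test is promoted to the QA environment.
        report.append(run("promote to QA", DEPLOY_QA_CMD))
        report.append(run("full script suite", FULL_SUITE_CMD))
    return report


def mail_report(report):
    """Format the pass/fail results into an e-mail and send them to the team."""
    failed = [r["step"] for r in report if not r["ok"]]
    msg = EmailMessage()
    msg["Subject"] = "Nightly build: " + ("PASSED" if not failed else "FAILED at " + failed[0])
    msg["From"] = "buildbot@example.com"
    msg["To"] = ", ".join(RECIPIENTS)
    msg.set_content("\n\n".join(
        f"[{r['step']}] {'OK' if r['ok'] else 'FAILED'}\n{r['details']}" for r in report))
    with smtplib.SMTP("localhost") as smtp:    # assumes a local mail relay
        smtp.send_message(msg)


if __name__ == "__main__":
    mail_report(nightly_build())

In practice something like this would be triggered by cron or the build server after each nightly build; the point is simply that the smoke test gates promotion to the formal QA environment, as the article describes.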

Outsourcing Quality The QC role sometimes lends itself to outsourcing. But while the practice can often work well from a technical point of view, the process has the potential to roil the internal organization unless handled correctly. With the division of quality-checking activities into two groups, it’s possible to outsource or offshore QC with a justification that employees will find reasonable. This justification rests on the premise that what’s good for the company is also good for its existing employees, and that none of the existing employees will be let go because of outsourcing. It’s also a good idea to keep QA activities that involve a lot of “face time” internal to the company as much as possible. This permits the company to retain an independent measure of quality and minimize disruption to the team should the need arise to switch outsourcing companies. With your QP, QA and QC teams in place, you’re more than halfway toward an efficient testing process. The rest of this article suggests techniques that I’ve found useful—even necessary—to create an efficient team and testing process.

tasks that can help create a “cut and dried” system suitable to your needs that will clearly allocate responsibilities and provide a good project framework. One technique that works for me is a simple whiteboard, about 6 feet by 4 feet. Table 1 (see page 14) shows an example of what I draw on this whiteboard. I create one row per team member and one set of columns per product release being qualified. At the top of the columns is the release number and a list of builds that QA has received for testing to date. Each release column is further subdivided into three smaller columns headed Scripts, Manual and Bugs.

release notes to the appropriate members of the QA team. Each member of the team is now aware of what he’s responsible for. When they accomplish their tasks, I have my team members go to the whiteboard and check off with a red marker what they’ve done. At any time of the day, by simply glancing at the white board, I have a good idea of how far the build has been qualified. When I find that every item in the matrix has been checked or marked as NA (not applicable), the build has been qualified, and I put a check mark next to the build number. I then carefully erase the check marks by the team members next to their individual items as we prepare for the next build. This process is repeated over and over with every build, including the final candidate build. In no time at all, team members become conversant with the system and use it with minimal monitoring or supervision. Such autonomy empowers people to create, succeed, to make mistakes, to learn from their mistakes and to grow. It helps create a mature team that knows what it needs to do.
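The whiteboard matrix lends itself to a very simple data structure. The sketch below is mine, not the author's; the statuses are invented, and the member and task names only echo Table 1. It records each member's Scripts, Manual and Bugs items for a build and reports whether the build is fully qualified, meaning every item is either checked off or marked NA.

# One row per team member; each cell is a dict of task -> status
# ("done", "open", or "NA"), mirroring the Scripts / Manual / Bugs columns.
board = {
    "John": {"Scripts": {"(none)": "NA"},
             "Manual":  {"url test": "done", "Release Notes": "open"},
             "Bugs":    {"Queue": "open"}},
    "Joel": {"Scripts": {"Client Script": "done"},
             "Manual":  {"(none)": "NA"},
             "Bugs":    {"(none)": "NA"}},
}


def build_qualified(board):
    """The build is qualified when every item is checked off or NA."""
    return all(status in ("done", "NA")
               for member in board.values()
               for column in member.values()
               for status in column.values())


def remaining_work(board):
    """List what is still open, per member, for the daily glance at the board."""
    return {name: [task for column in columns.values()
                   for task, status in column.items() if status == "open"]
            for name, columns in board.items()}


print("Build qualified:", build_qualified(board))
print("Still open:", remaining_work(board))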

Devote weekly team meetings to project updates, and identifying and analyzing project risks.

Self-Managing Teams A good manager will mentor a team in the art of self-management. This isn’t rocket science—there are some simple

• Each team member’s row shows the scripts they own, their manual tasks including writing of (release notes) and any additional manual testing tasks. The Queue entry listed under the Bugs column is a placeholder to remind team members to check their queue in the bug-tracking system to make sure that they’ve validated the bugs assigned to them for that release. The team lead works with the development lead to make sure that every build has a set of release notes accompanying it. The lead then allocates the

TABLE 2: TRACKING OPEN AND CLOSED BUGS Open

One-on-Ones Another simple tool I find useful is the “one-on-one.” I meet privately with each team member at least once a week for about an hour. This allows me to really get to know each member, to find out interesting things about them and their personalities, and to help manage and mentor them. Key pieces of information I give and take include areas of interest, challenges, current workload and schedule, and bottlenecks. While all of this could come in status reports, such reports leave out the key information that’s available only in face-to-face meetings and which is sometimes conveyed only by body language.

Assigned to Dev./ Marketing

Assigned to QA or Testers

Total Open

Total Closed

Critical

0 (0)

0 (0)

0 (0)

21

Major

3 (0)

1 (0)

4 (0)

263

Average

201 (7)

6 (0)

207 (7)

838

Weekly Meetings

Minor/Cosmetic/ Enhancement

44 (1)

0 (0)

44 (1)

132

Total

248 (8)

7 (0)

255 (8)

1,254

Weekly team meetings should be devoted to company and project updates, and identifying and analyzing project risks. Drive all weekly meetings with a printed agenda written in advance and made available at the meeting. Never present project status reports

Total closed bugs (since start of Rumba) as of 05/07: 1,229 Total closed bugs (since start of Rumba) as of 04/30: 1,217 Total bugs since start of Rumba: 1,517

16

• Software Test & Performance

SEPTEMBER 2007


QUALITY ACT

at a meeting. Instead, e-mail status reports beforehand, and at the meeting, ask only for clarification questions. Meetings should always begin on time, ideally at the same time of day, day of the week, and in the same meeting room. This is of particular importance if the meeting is with an offshore team, where ideas of punctuality and regularity can differ markedly from those in the U.S. Meetings should finish early or on time.

Status Reports The status report is an important tool, and should be among every team member’s weekly task list. The report should list accomplishments of the previous week, status of their planned activities for the current week, and what they plan to do next week. Also included should be any risks or bottlenecks, and any time off required over the next 30 days. I consolidate individual status reports into a weekly status report for the whole group, keeping the contributing team members’ names next to their corresponding items to ensure traceability, transparency and ownership. I then add a summary of the bug status. For example, Table 2 lists open and closed bugs, and the status of open bugs along with their priority. I distribute the status report to the list defined in the test strategy plan.
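The bug summary attached to the weekly status report (Table 2) is easy to generate from a tracker export. Below is a rough sketch that assumes a CSV export with priority, status and assigned_group columns; those field names are placeholders, not the schema of any particular bug-tracking system.

import csv
from collections import Counter


def summarize(csv_path):
    """Tally open bugs by priority and assignee group, plus the total closed."""
    open_by_priority = Counter()
    assigned_to_dev = Counter()
    assigned_to_qa = Counter()
    closed = 0
    with open(csv_path, newline="") as f:
        for bug in csv.DictReader(f):   # expects columns: priority, status, assigned_group
            if bug["status"].lower() == "closed":
                closed += 1
                continue
            prio = bug["priority"]
            open_by_priority[prio] += 1
            if bug["assigned_group"] == "dev":
                assigned_to_dev[prio] += 1
            else:
                assigned_to_qa[prio] += 1
    return open_by_priority, assigned_to_dev, assigned_to_qa, closed


if __name__ == "__main__":
    opened, dev, qa, closed = summarize("bugs_export.csv")
    for prio in ("Critical", "Major", "Average", "Minor"):
        print(f"{prio:10} open={opened[prio]:4}  dev={dev[prio]:4}  qa={qa[prio]:4}")
    print("Total closed:", closed)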

Bug M*A*S*H It’s important to perform regular bug triages. I like to have at least one triage session per week. Typically, a bug triage team will include a representative each from product management, development and QA/QC teams, and looks only at new bugs and assigns priorities. Gradually becoming familiar with the needs of product management, the QA lead is often able to prioritize defects in line with product management and can sometimes take over the triage role.

Location, Location, Location Ideally, the QA team should be collocated with the developers. Collocation is a tried-and-true technique much recommended by seasoned project managers.

SCRIPTING'S UNIQUE PROBLEM
The tools you choose for test scripting should offer more than record and playback capabilities. After creating a recorded script, it's helpful to break the script into modules. This allows you to save common function tests into libraries and reuse them whenever you encounter a similar need in the future. Nowadays, fairly comprehensive, prewritten frameworks and libraries can be found on the Web. They're usually free, and often of surprisingly good quality. I've found that scripts built using the record and playback technique tend to be brittle. Navigation from field to field is often "tab-driven" and susceptible to changes in field order. In the early stages of development, when GUIs are often not stable, fields may disappear or be moved, causing scripts to break. A less brittle approach is to use a library-based method that includes object identifiers for navigation. This way, as long as the object name doesn't change, the script will work regardless of where the object may have moved in subsequent builds. The key here is for developers to tag their objects with unique identifiers, so be sure to advise the development team about this requirement.
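A small illustration of the difference the sidebar describes. The first helper "navigates" by tab order, so it breaks as soon as a field is added or moved; the second looks each field up by a stable identifier that developers have attached. The page model and field IDs here are hypothetical, and the lookup is stubbed out so the idea stands on its own rather than depending on any particular GUI or Web test tool.

# A stand-in for one screen of the application under test.
# Keys are the unique identifiers developers have tagged their objects with.
page = {"username": "", "password": "", "remember_me": False, "login": None}
tab_order = ["username", "remember_me", "password", "login"]  # field order changed in the new build!


def fill_by_tab_order(values):
    """Brittle record/playback style: the script only knows 'nth field from the top'."""
    for position, value in enumerate(values):
        page[tab_order[position]] = value          # silently types into the wrong field


def fill_by_object_id(values):
    """Library style: locate each field by its stable identifier, wherever it moved."""
    for object_id, value in values.items():
        if object_id not in page:
            raise KeyError(f"object '{object_id}' not found - flag this to development")
        page[object_id] = value


fill_by_object_id({"username": "qa_lead", "password": "secret"})
assert page["username"] == "qa_lead" and page["password"] == "secret"

fill_by_tab_order(["qa_lead", "secret"])           # puts the password into 'remember_me'
print("tab-order result:", page)

With a real tool the same idea means wrapping the tool's object-ID lookup once in a shared library and reusing it across scripts, so a moved field changes nothing as long as its identifier survives.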

The benefits are numerous, but the main advantages are that the QA team members quickly know the person who’s coding the module or functionality that they’re responsible for. Usually this results in greater degrees of ad-hoc communication between them. For example, if the QA person finds that something isn’t working as expected, he simply walks over to the developer who owns this code and describes the observed behavior. The two work together and quickly deter-

• For collocation to work, teams must be located cheek-by-jowl before synergy occurs.

• mine whether this is the way it’s really supposed to work, or whether they have a bug. In my experience, collocated teams tend to have higher efficiencies of problem discovery and resolution. Strangely, though, collocation is not often practiced. Even when developers and QA personnel are located on the same floor but separated by another

group, communication tends to be minimal. For collocation to work, teams have to be located cheek-by-jowl before synergy occurs.

Size Matters For optimum team size, the QA/QC personnel should number slightly more than a third of the developers working on the project. The QA manager should have a QA lead with a QA team and a QC lead with a QC team for each project. Using the processes outlined here, a well-trained team of this size should be able to complete manual testing of new features, validate fixed bugs, write new tests and scripts for the new functionality and bugs, all within five days of build availability. To implement this concept of quality assurance/control, the QA manager must put in place the processes and techniques I’ve outlined with support from key stakeholders. Once such a system is in place, the process of qualifying a build becomes simple—a question of the QA team manually testing new features and bug fixes as specified in the release notes for that build, and the QC team creating and running scripts and reporting errors. In this way and with suitable staffing, it should be possible to turn around a build in three to five days. ý www.stpmag.com •

17


Employing QA Methods By Roger Nessier

The software development approach taken by many companies is a lot like a child's method

of cleaning a bedroom. Instead of simply picking up a little bit each day, children allow their rooms to deteriorate to the point where the floor is unrecognizable. Engineers do that very same thing by placing their QA processes at the end of the development life cycle. To improve software quality, ISVs and enterprises must forego their usual pre-release quality-control approaches and instill quality principles into every aspect of product development. Users no longer accept the trade-off of whiz-bang technology for buggy, hard-to-use software. Companies that develop software must focus on high-quality, secure and reliable deliverables. The question is, how? The answer is simple: placing a greater emphasis on quality

18

• Software Test & Performance

engineering. QE imbues quality principles into every aspect of the software development life cycle, and into technology and architecture decisions. The impact of the QE approach to software development can be huge. As everyone knows, catching process or design flaws early reduces the impact and cost of resolving those issues.

What’s Your QEML? The first step in adopting a QE-centric approach to development is to determine your Quality Engineering Maturity Level (see Figure 1, page 22). QEML is a framework developed to help organizations better understand where they are in QE adoption, and what level of QE adoption they want to achieve. Once you understand your company’s QEML, you can set incremental QE goals over time that are supported by business benefits. SEPTEMBER 2007


Can Help Avoid Punishment Development organizations should assess their QEML from the perspectives of both process and technology; it’s possible to rank QEML differently from each of those standpoints. For example, you may have excellent development processes with strong stakeholder involvement, but are limited because your applications are built on an inflexible architecture. Conversely, your application may be built on a modern Web framework, but your immature development processes could cause you to score differently in these fields. Once an organization understands its technology and process QEML, it should set goals over time for increasing them. QEML goals should be supported by business cases that have strong linkages to financial goals. For example, if “no Roger Nessier is vice president of Symphony Services, an agile development and testing consultancy. SEPTEMBER 2007

severity 3 defects in releases” doesn’t have a business justification because the application isn’t used that often and users are tolerant of severity 3 defects, it shouldn’t be a process QE goal. Organizations need to develop separate technology and process QE plans for each product developed using different technology and development processes. It‘s best to complete process QE improvements before making technology improvements because those process improvements will help when making the technology changes. Don’t attempt both technology and process changes concurrently or move too rapidly, since the shock of these changes could impair your organization’s ability to keep the lights on.

Process QE Let's start by examining process QE. In the example shown in Figure 2 (see page 23), a development organization has decid-

19




QA CLEANUP

FIG. 1: THE QEML AND THE PYRAMID

QEML5 Feedback mechanism from stakeholders to refine design/process on an ongoing basis. Strong reliance on metrics, surveys and interviews to identify quality trends. Use of trend data to take action to optimize technology, design and process decisions to meet stakeholders’ changing needs. Leverage predictive mechanisms to better understand weak links. QEML4 Recognition that quality extends beyond catching defects early to include QE as part of design and development processes. Every aspect of development is scrutinized to ensure optimal alignment with quality goals. Technology, architecture and design decisions are made with an eye on quality. Stakeholders are tightly integrated into design and process decisions to ensure their needs are met.

QEML3 QE oversight to processes ranging from requirements gathering to long-term support. Empower QE to “stop the line” should defect trends show a need to reassess design, process or technology decisions. Extensive use of metrics to understand quality trends. Culture of quality is instituted.

QEML2 QA and development activities are quite coordinated through the PDLC. No large backlog of bugs. Test cases derived from use cases and user stories early in development cycle. Developers responsible for unit testing. Limited use of metrics to better understand problem areas.

QEML1 “Throw it over the wall” pre-release QA, little emphasis on unit testing. Test cases developed late in development cycle. Releases regularly go out with several S1/S2 defects. Heavy reliance on point releases to improve product quality.

ed to revamp the development process for Product A first (see Step 1). Once the process changes are completed, their value will be assessed and tied to business benefit before proceeding with Step 2, improving the technology platform. Development organizations stand to derive the greatest benefit from an integrated, holistic QE process. If a “big bang” approach to process QE is too daunting or stakeholders need to be convinced of process QE’s business benefit, incremental or point adoption can also be leveraged. The following are process QE adoptions you may wish to consider.

Test regime. Use cases/test cases should be developed at inception, applied early in the development cycle and comprehensively documented. With the aid of use cases, comprehensive unit tests should be created and leveraged to ensure developers understand the business issue being addressed and catch bugs or misunderstandings early in the development cycle. When possible, peer reviews should be used to help identify issues. Automated testing reduces testing overhead and may help uncover more issues. A heavy reliance on frequent early testing eliminates issues before they become serious and expensive to fix.
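As a concrete illustration of deriving a test from a use case early in the cycle, here is a minimal example using Python's standard unittest module. The discount rule and function are invented for the example; the point is simply that the test encodes the business issue (the use case) before much code exists, so a misunderstanding is caught at the first build rather than at the end.

import unittest


def order_discount(total, loyalty_years):
    """Use case: returning customers (3+ years) get 10% off orders of $100 or more."""
    if loyalty_years >= 3 and total >= 100:
        return round(total * 0.10, 2)
    return 0.0


class DiscountUseCaseTest(unittest.TestCase):
    def test_long_time_customer_large_order(self):
        self.assertEqual(order_discount(200, 5), 20.0)

    def test_new_customer_gets_no_discount(self):
        self.assertEqual(order_discount(200, 1), 0.0)

    def test_small_order_boundary(self):
        # Boundary taken from the use case: exactly $100 still qualifies.
        self.assertEqual(order_discount(100, 3), 10.0)


if __name__ == "__main__":
    unittest.main()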

TABLE 1: PROCESS OASIS (Sample QEML process goals and their business/financial benefit)
• Emphasis on requirements traceability → Higher customer satisfaction = greater user adoption
• Make application easier for QA; catch bugs early → Higher customer satisfaction = greater user adoption
• Institute "culture of quality;" analyze every development process to ensure quality goals are met → Lower cost of development
• Adopt an incremental, iterative development process with frequent stakeholder feedback → Greater agility for meeting market needs

Requirements traceability. Trace completed requirements back to business needs to ensure the need is met. Iterative and complex development sometimes leads to the original product mission getting changed or lost. If requirements evolve, it's important to update requirements documents to reflect this evolution. The completed software product and its subsequent releases must always be in sync with the business requirements document.

Use defect analysis to identify root causes. Finding clusters of defects may point to a particular root cause, such as a design flaw. Developers should investigate issue clusters and not focus solely on fixing observable defects. Defect analysis should be conducted to track the number and source of defects throughout the development cycle, to identify trends and to take early action if an endemic root cause is uncovered. Advanced techniques such as the Taguchi method can identify areas that are prone to defects and help focus testing activities on those areas. This results in fewer test cases to achieve comprehensive application coverage.

Adopt agile processes. Adopting an agile development approach can help you test-drive a QE approach. Agility, by its nature, uses smaller teams and requires close collaboration across all disciplines. Each agile team, almost by definition, results in a QE approach, although it's restricted to that team and what's being accomplished in a given iteration. Applying any agile approach (be it Scrum, XP or Feature-Driven Development) helps you build on early successes and validate design assumptions. Instead of trying to release multifaceted, complex functionality, start simply and collect market and stakeholder feedback to continually refine and enhance the functionality needed. By building it simply the first time, you eliminate unneeded functionality that can detract from the user experience and introduce more issues. Add extra capabilities only when you have a strong justification to do so.

Increase focus on PSR testing. Comprehensive performance, scalability and reliability (PSR) testing reveals your product's weak links. An effective PSR test covers all possible uses of the product, using different configurations, hardware/software environments and data sets, and helps drive coding standards to ensure that added functionality doesn't degrade product performance, scalability or reliability.
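As a minimal illustration of the defect-analysis tip above, the sketch below tallies defects by component and by root cause to expose the clusters worth a design review. The Defect record, the component names and the root causes are hypothetical; a real team would pull this data from its defect tracker.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DefectClusterReport {

    // Hypothetical defect record: the component touched and the root cause assigned at triage.
    record Defect(String component, String rootCause) {}

    public static void main(String[] args) {
        List<Defect> defects = List.of(
                new Defect("billing", "design flaw"),
                new Defect("billing", "design flaw"),
                new Defect("billing", "missed requirement"),
                new Defect("reporting", "coding error"),
                new Defect("login", "coding error"));

        // A disproportionate share of defects in one component is the "cluster"
        // worth a design review, not just a round of individual bug fixes.
        Map<String, Integer> byComponent = new LinkedHashMap<>();
        Map<String, Integer> byRootCause = new LinkedHashMap<>();
        for (Defect d : defects) {
            byComponent.merge(d.component(), 1, Integer::sum);
            byRootCause.merge(d.rootCause(), 1, Integer::sum);
        }

        System.out.println("Defects by component:  " + byComponent);
        System.out.println("Defects by root cause: " + byRootCause);
    }
}
```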


FIG. 2: QA'S TWO HUMPS (a grid plotting Technology QE maturity against Process QE maturity, each on a 1-to-5 scale; Step 1 advances process QE, Step 2 advances technology QE)

Document processes to enhance repeatability and consistency. Great software products aren't created unless development processes are well documented and followed. Whether you follow a classic waterfall approach, an agile methodology or something in between, it's important to have total process clarity from the requirements stage to the end of life of a software product. This is an area where many development organizations fall down. Good process understanding is especially important if you have distributed development teams. Moving to a global development model often resolves this, because you end up documenting the process to help extend it outside the existing four walls. Always look for opportunities to improve the process, and be sure to update process documents with those improvements.

Compile metrics to understand how well the team is tracking against expectations for productivity, quality and on-time delivery. Report on those metrics using a real-time dashboard that provides total transparency into the state of a release.

Deciding on QEML process goals (see Table 1) depends on a variety of factors: Are you rewriting all your applications or taking an incremental approach? What are the training and business challenges to making technology changes? What are the perceived benefits (the business case)? Do your customers really care whether your application is written in Java or C++? What is your expected ROI? How will you measure success?

Technology QE

Development organizations are often faced with supporting and/or extending several generations of applications, from COBOL or RPG to Java or .NET. Disparate technologies, designs and skill requirements all present challenges to a QE-centric development organization. One of the biggest questions many organizations face today is, "Can I afford to rewrite legacy applications to leverage the latest thinking in architecture/technology while continuing to meet the needs of the marketplace?" Any technology QE strategy must be balanced against market and business realities.

Like process QE, many organizations take an incremental approach to technology QE, which can sometimes feel like changing the engine in your car while driving down the freeway. You don't make a change simply for the sake of making a change; the investment required must be balanced against the business benefits. For example, a development organization doesn't adopt SOA because it seems like the savvy thing to do or because developers like to have SOA expertise on their resumes. SOA adoption should be justified by a strong business case. In some cases, SOA may not make sense for your organization. The following are technology QE adoption strategies you may wish to consider.

Reduce design complexity. Eliminating complexity and encouraging reusability can help simplify large system design, thus reducing quality challenges. Designs incorporating SOA, object orientation and partitioning are more extensible and better able to adapt as the product evolves. Avoiding a hard design commitment is preferable if an abstraction serves the same need.
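A minimal sketch of that last point: code the rest of the system against an abstraction rather than a hard commitment to one implementation. The TaxRateProvider interface and both classes here are hypothetical, invented only to show how encapsulating the point of variation keeps callers stable when the underlying choice changes.

```java
// Callers depend only on this abstraction, not on where rates actually come from.
interface TaxRateProvider {
    double rateFor(String region);
}

// One possible implementation: a fixed in-memory table, useful for early releases and tests.
class StaticTableTaxRateProvider implements TaxRateProvider {
    @Override
    public double rateFor(String region) {
        return "CA".equals(region) ? 0.0725 : 0.05;
    }
}

// The calculator never changes if a later release swaps in a rating-service implementation.
class InvoiceCalculator {
    private final TaxRateProvider rates;

    InvoiceCalculator(TaxRateProvider rates) {
        this.rates = rates;
    }

    double taxOn(double amount, String region) {
        return amount * rates.rateFor(region);
    }
}

public class AbstractionDemo {
    public static void main(String[] args) {
        InvoiceCalculator calculator = new InvoiceCalculator(new StaticTableTaxRateProvider());
        // Roughly 7250.0 for a $100,000 California invoice under the hypothetical table.
        System.out.println(calculator.taxOn(100_000.00, "CA"));
    }
}
```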

FIG. 3: A DAY AT THE QEML RACES (an iterative planning loop: assess the development technology and process QEML; determine the QEML target and the steps/timeline to achieve it; manage the plan for attaining the QEML goals; review the goals attained; determine whether the goals are sufficient and correct, refining them based on business value; define the next phase; and use metrics and data to support that next phase)



TABLE 2: TECHNOLOGY OASIS

Sample QEML Technology Goals | Business/Financial Benefit
Improve application response time | Higher customer satisfaction = greater user adoption
Make application easier to integrate with third-party applications | Higher value from application; broader audience
Fix security holes | Lower liability; lower risk of customer defection
Make it easier to add new functionality | Higher value from application; broader audience

Refactoring code early in the development process will reduce complexity and help uncover flaws. Designing for ease of integration with other products/systems by encapsulating variation reduces third-party software integration/testing challenges. Keep external interfaces as stable as possible when changing internal application functionality.

Improve usability/interactivity. Poor usability/interactivity should be considered a design defect. A rushed or poorly researched approach to UI design will result in user issues. In many cases, UI defects have a greater impact on customer satisfaction and adoption than other types of defects. Therefore, these issues can't be ignored just because the product was "built to spec," even if the spec calls for an unintuitive user interface. Study use cases and storyboards to learn how the product is used and to validate the final design. Take into consideration the different needs of casual users and power users when analyzing usability/interactivity, and make sure UI designs are intuitive, self-explanatory and self-documenting.

Design for maintainability. If fixing parts of the application causes an undue amount of collateral damage to other parts, the design may have issues. Engineers need to look at design, memory usage and data access to understand whether there are fundamental flaws causing the issues. If porting the application to different platforms or environments causes an undue amount of functionality to break, there may be issues with the design's portability that need to be addressed.

Poor maintainability is sometimes caused by developers who feel they can walk away from a software project after it's released. Developers should realize that their team, or a subset of the team, is responsible for bug fixes and improvements in future releases.

If new features cause existing features to break, the design also may be suspect. As the product evolves, revisit the architecture to ensure that it meets the requirements. Adding new functionality can be constrained by an architecture that doesn't anticipate product evolution. Poor extensibility is sometimes caused by a "project view" of software development rather than a long-term, multi-release view.

Take security seriously. Proactively identify security holes and vulnerabilities before users stumble on them. Conducting a thorough security audit as part of the release process not only uncovers security flaws, but also helps refine coding standards so future designs are more secure. Make sure your security goals are aligned with the type of application you're developing/supporting and how exposed the application is to hackers.

Just as with process goals, deciding on QEML technology goals (see Table 2) depends on a familiar range of factors: Are you rewriting all your applications or taking an incremental approach? What are the training and business challenges of making changes in technology? What are the perceived business benefits? Do your customers really care whether your application is written in Java or C++? What is the expected ROI? How will you measure success?


The QEML planning process should be sequential and iterative. Checkpoints and metrics are used to ensure that value is being generated. Figure 3 shows the steps your organization can use to set your QEML technology and process goals.

Instituting a "culture of quality" is critical across the entire company, not just within the R&D organization. Training programs should be instituted to ensure that all stakeholders throughout the company understand the QE mandate of the development organization and its role in the business.

THE KING OF QES

For businesses in which architects and developers are the power players, QE can be a big cultural shift. In some organizations, QA is a place where those who aspire to development roles attempt to prove themselves, and those who can't cut it as developers are relegated to lesser roles. In a QE-centric organization, the quality engineer is equal to the architect or developer, because teamwork is the hallmark of a QE organization. In communicating this change, quality should be regarded as the mutual goal of both the developer and the quality engineer. The quality engineer works to help development engineers preemptively identify issues and address them early in the development cycle, and thus needs to be a highly capable engineer who combines the problem-solving skills of a developer with the eye for quality of a QA engineer.

For example, sales or management should not over-commit to development efforts that can result in "death march" releases and degradation of quality. Professional services teams should create customizations or configuration changes that observe the same QE principles that development uses. Creating a culture of quality that permeates the organization will return dividends in the form of higher customer satisfaction, increased ability to meet market needs, greater customer retention and, ultimately, a healthier company.




How White-Box Testing Can Make Opaque Systems Crystal Clear

Seeing Through the Walls of SOA Systems

By David Linthicum

David Linthicum is CEO of Linthicum Group, an SOA corporate consultancy.

It's always surprising to hear how little people understand the concept of SOA, which also means they're poorly equipped to test it. Conflicting information means that mistakes will be made during SOA implementations. How do you avoid them?

Divide and Conquer

No matter how you got it wrong during deployment, you need to get it right before testing. The SOA testing domains can be grouped according to their level: service, security, orchestration, governance and integration.

For the purposes of an SOA, white-box testing uses an internal perspective of the system to design test cases based on internal structure. This requires the SOA tester to choose test-case inputs that exercise paths through the SOA code and to determine the appropriate outputs. However, the difficulty with SOA is the number of subsystems a request passes through, which requires different testing approaches, technologies and skills. Since the tests are based on the actual SOA implementation, if the implementation changes, so usually do the tests. Thus, white-box testing of an SOA is domain/solution-specific.

While white-box testing is applicable at the unit, integration and system levels of the software testing process, it's typically applied to the unit, or in this case, the domain (governance). So while it normally tests paths within a unit, it can also test paths between units during integration, and between subsystems during a system-level test.


Service-Level Testing

Services should be white-boxed like an application, but dealing with three major subsystems: the interface, the data and the behavior. This is the core of your SOA, the resources, so you'll probably spend more time white-box-testing services than any of the other components described in this article.

Dealing with the interface is similar to dealing with an API. While it's typically SOAP interaction as defined by WSDL, in some instances you're using REST or proprietary enabling technology, but the approach is just the same. It's really an input/output interaction, so the programmer needs to build in breakpoints or output calls that display the information and parameters as consumed by the service during execution. Thus, if you invoke Calculate_Tax_Rate and input $100,000, when you white-box-test the service, you should see the input figure match the figure that was processed, and the result of the processing match the number produced. You're indeed doing some data validation, but you're validating that the interface isn't altering information that's consumed or produced out of the data flows inside the service.

White-box testing of the data flowing through a service is almost identical to white-box testing a traditional application.





FIG. 1: SERVICE TYPES AND APPROACHES (services and data services, with data abstraction and mapping over the underlying data stores)

Once again, monitoring points are leveraged to watch the data as it flows through the service. For example, you watch $100,000 go through a validation routine, a tax-rate lookup, the use of a rate (say, .30) and the final result ($30,000). Once again, you must ensure that the data and the results match up with the test cases applied to the service.

Finally, you must focus on the behavior, or the functionality, of the service. This means that the logic of the service is monitored and determined to be sound, and that the information flowing into the service is processed correctly according to the design of the service. Moreover, if the service is designed to provide different behavior via context, that must be monitored throughout the processing of the service as well. Use the same approach: Set up monitoring points and watch the behavior through execution.

In Figure 1, you can see that there are indeed two different categories of services, data services and transactional services, and they may require a different approach to white-box testing. Data services, as the name implies, typically provide data abstraction representing a physical database or databases, and thus are more data-oriented and should be tested as such. You need to monitor the consumption of information from the physical data stores, the remapping of the data into the new schema, and the new structure provided to the other services or orchestration layer that's interacting with the data service.

Transactional services also deal with data, and are really the core services providing application behavior and functionality. Thus, while you do need to monitor information flowing through these services and the changes made to that information, pay attention to the processing and white-box-test those features as well.
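Here is a minimal sketch of the monitoring-point idea applied to the Calculate_Tax_Rate example above. The service class, the recorded log entries and the checks are hypothetical; the point is simply that the test observes the input as consumed, the intermediate rate lookup and the produced output, rather than treating the service as a black box.

```java
import java.util.ArrayList;
import java.util.List;

public class TaxServiceWhiteBoxTest {

    // Hypothetical service with monitoring points that record each internal step.
    static class CalculateTaxRateService {
        final List<String> monitor = new ArrayList<>();

        double calculate(double income) {
            monitor.add("input=" + income);   // interface: what the service actually consumed
            double rate = income >= 100_000 ? 0.30 : 0.20;
            monitor.add("rate=" + rate);      // data: intermediate value in the flow
            double tax = income * rate;
            monitor.add("output=" + tax);     // behavior: the produced result
            return tax;
        }
    }

    public static void main(String[] args) {
        CalculateTaxRateService service = new CalculateTaxRateService();
        double tax = service.calculate(100_000);

        // White-box checks: the consumed input, the looked-up rate and the produced
        // output all match the test case, not just the value returned to the caller.
        if (!service.monitor.get(0).equals("input=100000.0")) throw new AssertionError("input altered");
        if (!service.monitor.get(1).equals("rate=0.3"))       throw new AssertionError("unexpected rate");
        if (Math.abs(tax - 30_000.0) > 0.01)                  throw new AssertionError("unexpected output");
        System.out.println(service.monitor);
    }
}
```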

Security-Level Testing

Security strategy, technology and implementation should be systemic to an SOA, and even bring along new concepts such as identity management. When testing your SOA for security issues, the best approach is to first understand the security requirements and then design a test plan around them, pointing at specific vulnerabilities. Most people find that black-box testing is the best way to test for SOA security issues, including penetration testing and vulnerability testing using existing techniques and tools.

However, you can white-box-test security, and this actually exposes a problem with security. In most cases, security is provided through integration with one or a few SOA security products, and typically leverages the following use cases and their validation: human identity, service consumer identity and service provider identity.

Human identity and validation refers to the fact that human beings are interacting with the system and thus must be identified and validated. We do this by their ability to answer questions, such as a user ID and password. More sophisticated systems may use biological identifiers, such as retina and fingerprint scanners. Once identified to the security system, that identity should and can be maintained as the user interacts with applications and the services that serve those applications. You white-box-test this by making sure that the identity is maintained through the flow of the applications and/or services and does not change, nor can it be altered. In essence, this is integration testing with the other security components.

Service consumer identity and validation means that, as in the human identity and validation scenario, you're validating that the service looking to consume other services is both authorized to do so and has the appropriate security information, and that the information is flowing correctly between security systems.

Service provider identity and validation means that, as with the service consumer, you're validating that the service looking to provide services to other services, applications or orchestrations is both authorized to do so and has the security credentials. Again, you monitor the passing of the credentials from one subsystem to another to make sure the information does not change, isn't visible where it shouldn't be, and provides the proper exception-handling and failure-recovery capabilities.

Many SOAs allow services to be consumed outside the enterprise, creating a new set of vulnerabilities, including information security issues and denial-of-service attacks. Moreover, many SOAs also make the reverse trip, allowing for the consumption of services from outside the firewall into the SOA. This opens the door for other types of attacks; vulnerabilities here include malicious services.
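A minimal sketch of the identity-propagation check described above, assuming a hypothetical chain of two services that pass a caller identity along with each request. Everything here (the Identity record, the services, the final check) is illustrative; real SOA stacks would carry the identity in WS-Security or HTTP headers, but the white-box idea is the same: observe the identity at each hop and confirm it arrives unaltered.

```java
public class IdentityPropagationTest {

    // Hypothetical identity token passed from hop to hop.
    record Identity(String principal) {}

    // Two hypothetical services; each records the identity it received.
    static class OrderService {
        Identity seen;
        String placeOrder(Identity caller, BillingService billing) {
            seen = caller;                 // monitoring point at hop 1
            return billing.charge(caller); // identity forwarded unchanged
        }
    }

    static class BillingService {
        Identity seen;
        String charge(Identity caller) {
            seen = caller;                 // monitoring point at hop 2
            return "charged:" + caller.principal();
        }
    }

    public static void main(String[] args) {
        Identity alice = new Identity("alice");
        OrderService orders = new OrderService();
        BillingService billing = new BillingService();

        orders.placeOrder(alice, billing);

        // White-box assertion: the same, unaltered identity was observed at every hop.
        if (!alice.equals(orders.seen) || !alice.equals(billing.seen)) {
            throw new AssertionError("identity was altered or lost between services");
        }
        System.out.println("identity propagated intact: " + billing.seen.principal());
    }
}
```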

Orchestration-Level Testing

Orchestration is a standards-based mechanism that defines how Web services work together, including business logic, sequencing, exception handling and process decomposition, as well as service and process reuse.



Orchestrations may span a few internal systems, systems between organizations, or both. Moreover, orchestrations are long-running, multi-step transactions; they are almost always controlled by one business party, and are loosely coupled and asynchronous in nature.

Consider orchestration as another complete layer over and above more traditional application-integration approaches, including information- and service-oriented integration. Orchestration encapsulates these integration points, binding them together to form higher-level processes and composite services (see Figure 2).

White-box testing is the best approach for testing an orchestration layer since, in most cases, the ability to externalize data flows is built into the orchestration technology's enabling language, such as BPEL. In many respects, you white-box-test orchestrations the same way you test services. Indeed, in most cases, orchestrations can become services. Thus, again you focus on the interface, the data and the behavior. Commonly, you'll set up monitoring points within the execution of the BPEL or other enabling language to monitor how the data flows and changes, as well as the behavior and performance of the entire layer. Most orchestration tools support white-box testing, including the ability to create these monitoring points for white-box testing or for ongoing monitoring during runtime execution.

However, the types of monitoring provided by the vendors vary greatly. As time moves forward, better tools should become available.

Governance-Level Testing

SOA governance is an emerging discipline that enables organizations to provide guidance and control of their service-oriented architecture initiatives and programs (see Figure 2). As you may recall, SOA governance is, in essence, a services life-cycle and policy-management layer, and the best way to test an SOA governance system is by matching the policies the governance system is supposed to manage against the way it actually manages them.

In the world of white-box testing, you monitor the creation of a policy and the actual use of that policy within a real SOA. For instance, set up a policy that establishes a service-level agreement (SLA) for a particular service, perhaps no more than a 1-second delay in responding. Set up the policy in the governance system, then exercise the service at runtime and watch how the governance system monitors it, to validate that the SLA is being enforced.

Like orchestration, the way you white-box-test a governance system depends on the white-box testing tools provided or supported by your governance vendor. Most governance vendors provide the ability to monitor the execution of a policy at runtime, which is where the rubber meets the road when white-box-testing a governance system.

FIG. 2: TIES THAT BIND (monitoring/event management, security and governance layered over process/orchestration, services, data services/messaging and data abstraction, which in turn tie together legacy systems, Internet-based services and new services)

However, you also should consider integration with security, service versioning and the ability to spot and block rogue services.
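As a minimal sketch of the SLA-policy example above, the following check measures a service call's response time against the 1-second policy. The service stub and the threshold are hypothetical; in a real deployment the governance product would record the violation at runtime, and the white-box step is comparing your own measurement with what the governance system reports.

```java
public class SlaPolicyCheck {

    // Hypothetical SLA policy taken from the governance system: respond within 1 second.
    static final long MAX_RESPONSE_MILLIS = 1_000;

    // Stand-in for the governed service; a real test would invoke the deployed endpoint.
    static String lookupCustomer(String id) throws InterruptedException {
        Thread.sleep(150); // simulated processing time
        return "customer:" + id;
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        lookupCustomer("42");
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

        // The same check the governance layer should be making at runtime;
        // comparing the two observations is the white-box part of the test.
        if (elapsedMillis > MAX_RESPONSE_MILLIS) {
            throw new AssertionError("SLA violated: " + elapsedMillis + " ms");
        }
        System.out.println("within SLA: " + elapsedMillis + " ms");
    }
}
```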

Integration-Level Testing

The purpose of this step is to figure out whether all the interfaces, including behavior and information sharing between the services, are working correctly. This can also be approached using black-box testing techniques. White-box integration testing should work through the layers of communications, exposing each one during execution. Work up through the network to the protocols and inter-process communications, testing the REST or SOAP interfaces to the services, or whatever communication mechanism is employed by the services you're deploying. How you expose the information flowing from one system or service into the other depends on the technology. In most cases, data is sent to a logging mechanism and compared later for validity and correctness.
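A minimal sketch of that log-and-compare idea, assuming the payload is captured on both sides of a service boundary and checked afterward. The in-memory "logs" and the two methods are hypothetical stand-ins; in practice the entries would come from whatever logging mechanism the deployment already uses.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class IntegrationLogComparison {

    // Hypothetical capture of what each side of the boundary saw, keyed by message ID.
    static final Map<String, String> sentPayloads = new LinkedHashMap<>();
    static final Map<String, String> receivedPayloads = new LinkedHashMap<>();

    static void sendOrder(String messageId, String payload) {
        sentPayloads.put(messageId, payload);     // logged by the consumer side
        receiveOrder(messageId, payload);         // stands in for the real transport
    }

    static void receiveOrder(String messageId, String payload) {
        receivedPayloads.put(messageId, payload); // logged by the provider side
    }

    public static void main(String[] args) {
        sendOrder("msg-1", "{\"orderId\":42,\"total\":100000}");

        // Post-run comparison: every message sent arrived intact, and nothing extra appeared.
        if (!sentPayloads.equals(receivedPayloads)) {
            throw new AssertionError("payload mismatch across the service boundary");
        }
        System.out.println("all " + sentPayloads.size() + " message(s) matched");
    }
}
```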

White-Box-Testing Your SOA

Testing an architecture is much different from testing a single application, and the approaches, tools and enabling technology are eclectic. Your ability to white-box-test an SOA depends on what the technology is able to do, and on your ability to expose data and behavior out of the SOA for the purposes of validation.

The core issue here is how you test the services, which are the foundation of the SOA. Fortunately, these are typically programmatic, so building in white-box-testing mechanisms is just a matter of programming. Keep in mind that there isn't much change in how you white-box-test services, sets of services or composites versus traditional applications. However, the interaction, interfaces and contextual behaviors need to be checked and double-checked, much more than with traditional applications, as does the interaction with other components such as security and SOA governance.

There are no established ways of holistically testing an SOA. However, as time goes on and SOA implementations get more sophisticated, so will the approaches and testing technology. Otherwise, SOA won't be able to deliver on its expected value.




Ka-ching! Don't Sell Your Test Team Short

Ring Up Your Value to Management By Letting Your QA Efforts Be Known

By Theresa Lanowitz and Dan Koloski

Theresa Lanowitz is a consultant with analyst firm voke; Dan Koloski is director of strategy and business development at Empirix, which develops voice testing and monitoring solutions.

In today's outsourcing-crazed environment, it might be natural for QA practitioners to be concerned about their future career prospects. As organizations strive for greater efficiency, even QA teams shielded from outside pressure are constantly asked to do more work with less. It's alarming to hear QA practitioners repeat complaints like "Nobody understands or even cares about what we're doing," or "I can't seem to get the attention of management." Failure to engage with constituents outside the QA department threatens the future security of the personnel involved and the team's ability to execute.

How Are You Perceived?

In many organizations, communicating value also means changing perceptions.


Start by taking a hard look at yourself today. How do you think you and your organization are viewed? Do you believe you're seen more as customer advocates or as bottlenecks, or perhaps something in between?

In a recent survey, the voke analyst firm asked QA professionals to identify their ideal role in an organization. Not surprisingly, a majority indicated roles that emphasized customer advocacy and strategic thinking, with 35 percent of those polled stating they believed their ideal role to be that of a customer advocate, and another 35 percent indicating that QA professionals should ideally serve as strategic managers. An additional 15 percent believed that the ideal role for quality assurance professionals was that of facilitator, and 10 percent said that the ideal role was as an enforcer of standards. Only five percent said they thought a quality assurance position was a tactical or execution-only function.

In spite of these desired states, as we wrote above, we also hear that QA professionals at most organizations are viewed from the outside primarily in terms of their tactical and execution capabilities. Your efforts to optimize communication within your organization will work to change this perception.

To be fair, ensuring that others recognize and acknowledge the value of individual and/or team contributions to an organization can be a challenge in any functional area. But QA organizations find this process particularly frustrating due to long-held perceptions of QA as simply "software testers" or tacticians, rather than as teams that play an important strategic role in an organization's success. That lack of visibility and understanding about the QA organization's contributions creates the need to continuously justify the work being done, and most often leads to restricted budgets, fewer resources, tighter timelines and, ultimately, lower group productivity.


The good news? The IT model as a whole is changing, aligning more closely with the business. That makes now an ideal time for QA organizations and test teams to begin to change these mythical perceptions and communicate their value to management and the organization as a whole. For your career, and your mental health, here's a strategy for communicating the value of your QA team to the organization, and how the application of your principles can benefit the entire company.

As QA practitioners, how can you communicate what you do in a way that is meaningful to the business? What steps can you take to ensure that your internal customers and constituents understand and value the strategic role you play in the organization and in the success of every project?

How can you ensure that you and your team are viewed as advocates for your organization's customers rather than as simply tacticians or testers of software or, worse, bottlenecks?

Optimize Communication

To begin the process of communicating the value of testing, your organization must be optimized to facilitate it. It starts with effective and concise communication.

As a QA practitioner, you can't rewrite the corporate organization chart, but it is possible to influence the attitudes of those who are already trying to build an IT organization that is clearly aligned with the business. You need to help break down silos, which requires a clear understanding of the application "ecosystem" (the tools, the processes, the skills and the people) that exists beyond individual roles and responsibilities.

Testing and validation is just one component of a larger application life cycle in which your work exists: a life cycle that spans the line of business, the IT/development organization and IT operations. Figure 1 puts test and validation in the context of the broader life cycle. At the same time, test and validation as a functional area has any number of sub-activities, ranging from the strategic (test planning and results analysis) to the more tactical (test execution, environment management), as shown in Figure 2.




FIG. 1: YOUR PLACE IN THE SHOWCASE (the application life cycle: business storyboarding, technical requirements, architecture, development, test and validation, production and business value; IT manages the technology effort, while the line of business and IT together manage the results)

A subset of information from all of these activities is highly relevant to people throughout the enterprise, but only if they can be made to consume it. Your job is to ensure that they can.

Against that, in many traditional organizations silos exist between teams, and these silos include silly-but-common communication protocols, such as prohibitions against communicating directly with peers in other silos. The organizational chart in Figure 3 shows how information is usually shared in a typical enterprise. For your career, however, it's critical to communicate beyond traditional silos, reaching across the IT organization and up the organization, communicating directly with application users. The organization chart, as well as force of habit, will discourage such reaching out, but if you make the effort, you might be surprised at the willingness of your peers in other silos to talk. After all, they're dealing with the same issues.

But you've got to do more than just make the effort. You also must understand the specific challenges and requirements of each of your audiences and know how they want you to communicate with them. A typical organization is loaded with information. Part of your job is ensuring that the right information is collected and communicated to the right people at the right time.


For example, to communicate across the IT organization, between the quality assurance, system architect, development and operations teams, it's usually best to use an approach that emphasizes teamwork. The issue is not your to-do list versus theirs; your mutual goal is to ship or deploy a good product, on time and on budget. On the other hand, to communicate up the organization, where too much information can be time-consuming and decisions must be made quickly, you must deliver communications that are clear, concise and free of unrelated material.


Finally, to communicate with the customer or users of the application, you must first acknowledge and understand their business goals, and communicate your need to strike a balance between quality and customer value.

Simplify the Complex

For many IT organizations, a model used by traditional "product line management" staff at independent software companies can be helpful. In a software company, a product line manager has overall responsibility for the product, application or solution being put into production, as well as accountability for facilitating communications, ensuring compliance with government and internal requirements, and responding to competitive pressures. Product line managers therefore spend a good deal of time learning how to most effectively talk to their customers so they can better understand and meet their needs. Ensuring customer satisfaction by first understanding and meeting the needs of your customers is one of the best ways to combat competitive threats of any type.

At many organizations, testing teams are distributed, test requirements are varied, and execution and run results are stored in different places. But communication requires a number of key elements: centralized information, customized reporting and complete collaboration. While the idea of centralizing all of your information may seem overwhelming, it's critical to your efforts to ensure concise communication. One way to facilitate communication is through a centralized repository. A test management repository can be your first "communications vehicle," enabling you to move from distributed, file-sharing scenarios to a centralized repository from which communication will become more efficient. Far too often, organizations maintain a loose assortment of text files, spreadsheets and informal knowledge-sharing that's inefficient and non-contextual, so that no single member of the team has a total understanding of the big picture. If you can't describe that big picture to other team members, you can't communicate it outside the team.

Avoid Default Reports

When communicating through reports, one-size-fits-all just doesn't cut it, because such reports are unlikely to meet all the specific requirements of your internal customers. Think beyond the default reports built into software and make use of the flexibility included to customize reports. Include data that is actionable and tailor it for those who will be using it. For example, design your reports by asking yourself these questions:
• What information does an internal customer or constituent need to mitigate the risk of putting an application into production?
• What are the key metrics that internal customers will use to judge the success of their application or initiative?
• What can I do to quickly and effectively communicate the key performance indicators that managers, constituents, fellow developers or IT staff need to understand?
Ask your stakeholders what they need to know and in what format they'd like to see their data. They'll probably be shocked that you bothered to ask, and will usually reveal what they need with a bit of prodding. Bear in mind that your internal customers aren't a monolithic group, and answers will be different depending on who you ask. Your job as a "product line manager" is to find the answers. Then, demonstrate your effectiveness and the value of the information you can deliver by formatting it in the way that's most meaningful to your customer. Ask yourself what will have the greatest impact in a world of limited time and short attention spans, dust off that reporting module you've been ignoring and make it happen. Most good tools allow you to build reporting templates that can then be applied against each round of updated data.
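A minimal sketch of the tailored-report idea: compute a few key metrics once, then format them differently for different internal customers. The TestRun data, the metric choices and the two audiences are hypothetical; a real team would pull the numbers from its test management repository and use its tool's reporting templates rather than printf.

```java
import java.util.List;

public class TailoredReport {

    // Hypothetical test-run record pulled from a test management repository.
    record TestRun(String area, int executed, int passed, int openDefects) {}

    public static void main(String[] args) {
        List<TestRun> runs = List.of(
                new TestRun("checkout", 120, 108, 6),
                new TestRun("search",    80,  79, 1),
                new TestRun("reporting", 45,  30, 9));

        int executed = runs.stream().mapToInt(TestRun::executed).sum();
        int passed   = runs.stream().mapToInt(TestRun::passed).sum();
        int defects  = runs.stream().mapToInt(TestRun::openDefects).sum();

        // For management: one line, focused on release risk.
        System.out.printf("Release readiness: %.0f%% of %d tests passing, %d open defects%n",
                100.0 * passed / executed, executed, defects);

        // For the development team: the same data, broken out by area so it's actionable.
        runs.forEach(r -> System.out.printf("  %-10s %3d/%3d passed, %d open defect(s)%n",
                r.area(), r.passed(), r.executed(), r.openDefects()));
    }
}
```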



Collaborate During Testing

How many people sat in on your last load test? At most organizations, different components of the system are owned by different constituents, who may include network owners, systems owners or managed service providers. It's rare today to find a system of any meaningful scale that's owned by a single group. This disparate ownership makes collaboration and communication during testing essential.

A perfect example of where collaboration yields a quantum leap in efficiency is performance testing, which exercises an entire system (including all of the separately owned sub-components) and measures its performance at scale. For maximum efficiency, invite all owners to your load test, and collaborate and communicate with individual component owners during the testing itself, not afterward. Obviously, it's especially important that your test strategy (including your tools, scheduling and communication) allows for collaboration.

FIG. 2: LOCATION VS. ADVERTISING (the strategic-to-tactical range of test and validation sub-activities: test planning, requirements verification, design verification, application security risk analysis, quality management, reporting, test results analysis, test development, test environment preparation and test execution)

You and each component owner should be able to view the same load test at the same time, manipulate the data in a way that makes the most sense for each of you, and then work together on diagnostics and tuning. Collaboration and communication in real time will deliver better results and higher value in less time.

Testing Is a Strategic Activity

With the elevated importance of the IT organization to business strategies today, the QA department has the perfect opportunity to communicate the value of quality assurance and testing within the organization. Engaging with constituents outside the QA department solidifies the job security of the people involved (out of sight, out of mind) and the ability of the team to execute. The key to boosting the perception of QA is not just making the effort, but making the right effort.

Work to replace the notion that quality assurance is a bottleneck, and to spread the understanding that you're part of a strategic organization focused on delivering high-quality solutions to customers. Three primary take-aways are to:
• Understand the broader application life cycle and where your team adds value.
• Communicate beyond traditional silos, to your peers, to upper management and to customers or application users.
• Move beyond the default-report mentality; understand the specific information relevant to each of your unique internal customers and deliver it based on their needs.
In the end, effective communication will yield higher-quality results, elevate your QA and testing organization to a strategic level, deliver greater business value and help you advance your career.

FIG. 3: WEEKLY CIRCULAR (a typical enterprise organization chart spanning IT, development, the application architect and application security architect, operations, QA and application quality development, the business analyst, and test types such as functional, load, system, user acceptance and application security; challenges listed include communication with constituents, time to market, and increased competition, both internal and external)


Best Practices

Living Up to the Web 2.0 Load-Testing Challenge

Geoff Koch

A wave of delay would sweep through the application, then dissipate. Sean Molloy and his colleagues were struggling to make sense of this troubling trait. They were performance-testing part of the compliance management suite offered by ControlPath, where Molloy is director of software engineering.

Such buggy behavior would have been no big deal, except for two facts. First, even after exhaustive testing, Molloy's engineers had never seen an exception quite like it. Second, the hiccup was occurring in front of a team from a potential suitor.

"The tech guys across the table wanted to see a live performance-test 'off the cuff' [of] a test that we could perform in a pinch," says Molloy, a 15-year software industry veteran. "We were on edge."

ControlPath's application is Web-based and makes heavy use of AJAX. And so, at least broadly speaking, it can be tagged with the Web 2.0 label. Yes, we all agree that Web 2.0 is already well worn and impossibly over-hyped. But anyone who roots around the Web for even a few hours each day will invariably see several examples of browser-as-platform, Web site-as-application approaches to computing.

It's not at all hard to find analysts to provide context for these anecdotal observations. In an Oct. 3, 2006, report, Evans Data said that 1.7 million developers are already using AJAX while some three million more are evaluating the methodology. A Gartner research publication released the same month declared there to be an 80 percent probability that "by 2008, the Web 2.0 vision will be adopted as the mainstream Web and will disappear as a separate category."

The idea of Web load-testing itself is undergoing something of a shift. For now, the term still evokes the image of running tests to make sure that a hot new application, cobbled together in the wee hours of the morning by a would-be entrepreneur, can at least theoretically scale to support a worldwide audience.

Scaling to support the masses remains important, a point made vividly by Facebook's chief operating officer Owen Van Natta in a May article in Fast Company magazine. Here's an excerpt of the article, written by reporter Ellen McGirt:

"We were trying to predict how many new users we'd get, how they would use the site, and what we'd need to serve that," [Van Natta] says. There weren't enough people to do all the analysis. "We were just trying to keep the wheels on the wagon." When he went to check the data center, he was horrified. "There were little fans like this big," holding up his hands to indicate the size of a grapefruit, "tucked between the servers. It was over 110 degrees in some aisles." And the data-center guys were plugging in more servers and screwing them into racks, trying to keep up with the rapidly scaling site. The Plexiglas sides of the server racks were warping from the heat. "I was, like, Mayday!" he recalls. "We need to get on top of this!"

In reality, far fewer sites will ever deal with explosive, Facebook-type growth than will start to experiment with AJAX, Adobe Flex, RSS feeds or blogs. For these and other technologies and features, it's less about user clicks and page refreshes, and more about an explosion of browser-to-server HTTP calls made in the background, of which few but the savviest users are aware.


Subtle changes in such traffic can swamp servers and networks, which is why it's increasingly important for vendors of hosted applications to be able to troubleshoot load issues at runtime. In other words, it was perfectly reasonable for Molloy and his engineers to suffer through their own Mayday! moment, however minor.

"You can't go running to the code to performance-tune an already deployed service," acknowledges Molloy, whose developers use a load-testing tool provided by France-based Neotys. "If you find that there are aspects of the application that can affect performance and can be changed, like block sizes on streams, SQL query text or numeric values, these should be abstracted to configurations so that server administrators can work to squeeze the max performance out of the app on the running hardware."

Web 2.0 is bringing other changes to load testing, which until recently was the specialty of a select few testers and developers, most of whom were usually kept far away from actual customers or users. Now, however, everything is loosely coupled and asynchronous, from applications and services to workflows and e-commerce transactions. This means everyone from customer service reps to coders has to pitch in when it comes to understanding performance.

Improvements On Board

Molloy says that this all-hands-on-deck approach is made easier by improvements in Web load-testing tools during the last several years. Today, many load-testing products automatically create complex usage patterns and scenarios.

At Centric Software in New Hampshire, for instance, QA engineers use a load-testing tool not only to gauge performance when a new implementation is complete, but also to reproduce and better understand specific customer use cases.


"The reports and results we receive at the end of each test yield precious information, such as time to run every request, as well as what type of error resulted from the server," says Jean-Paul Durand, the QA manager at Centric, which also uses Neotys tools.

Features like this, increasingly available in even moderately priced load-test tools, address one of the more inalienable truths in all of software, best summarized by Molloy: "The human user base is often not 100 percent predictable," he says.

Of course, architects, developers and testers are human, too. So all the nuanced load and performance information provided by the newest tools occasionally gets ignored.

"From a personal point of view, load test analysis and software support is as important as load testing itself," says Porter, whose firm uses Borland's SilkPerformer. "If your Web site is an important part of your company's revenue stream, then load testing and analysis should be a part of your deployment strategy."

Don't Forget the Database

It's not enough to stop analyzing once you reach the edge of your Web-based application, especially if it'll be regularly querying a database. Indeed, as the compute model evolves away from client/server, writing code that can scale well with massive amounts of data may be just as important as scaling to support massive numbers of users.

Not that this is easy. Molloy points out that understanding database loads means getting a handle on both the queries and the execution plans. "It's more of a style choice than a best practice, but we believe that the data from the database should be sorted, paginated and have only the columns required before it hits the application's processing," he says. "This helps ensure that, even if the data stored is immense, the network need only scale to handle the data required by end users."
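A minimal sketch of that approach, assuming a hypothetical orders table and a JDBC connection supplied by the caller: the query selects only the columns the page needs, sorts in the database and fetches one page at a time, so only the data the end user will actually see crosses the network. The LIMIT/OFFSET syntax shown is the MySQL/PostgreSQL style; other databases phrase paging differently.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PagedOrderQuery {

    // Only the columns the page displays, sorted and paged by the database itself.
    private static final String PAGE_QUERY =
            "SELECT order_id, customer_name, total " +
            "FROM orders ORDER BY order_date DESC LIMIT ? OFFSET ?";

    static void printPage(Connection connection, int pageSize, int pageNumber) throws Exception {
        try (PreparedStatement statement = connection.prepareStatement(PAGE_QUERY)) {
            statement.setInt(1, pageSize);
            statement.setInt(2, pageNumber * pageSize);
            try (ResultSet rows = statement.executeQuery()) {
                while (rows.next()) {
                    System.out.printf("%d  %s  %.2f%n",
                            rows.getLong("order_id"),
                            rows.getString("customer_name"),
                            rows.getDouble("total"));
                }
            }
        }
    }
}
```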

There's no word on the potential suitor's identity or whether its representatives ever made a proposal, indecent or otherwise, to ControlPath. But Molloy is happy to report that his team did figure out that wave-of-delay problem in time to salvage the demo.

"Since the 'wave' pattern is often associated in Java with garbage collection or memory usage, we looked at the memory paging on the three machines being tested," Molloy recalls. "Sure enough, the Tomcat configuration had the memory set to what it is on a development laptop, not a production server. One change to a config file, and the load test line resolved to a nice, flat response time given static load."

Problem solved, for now. But expect waves of new load-test issues as Web 2.0 rolls toward the mainstream in the months and years ahead.

Best Practices columnist Geoff Koch can be reached at koch.geoff@gmail.com.



Future Test

Agility Needs Some Cultivation

Shaun Bradshaw

Over the past couple of years, I've noticed a significant increase in IT organizations adopting the agile development methodologies. Typically, organizations recognize the need to integrate these methods slowly, first by introducing them in pilot projects and later expanding to other projects. This slow adoption allows the various groups involved in software development an opportunity to adjust their culture, skills, processes and expectations to the new methodology. But among my clients that have made or are preparing to make this transition, certain groups were far more comfortable with the changes than others. Developers, architects and business analysts seemed more open to the shift and were excited to get started. But perhaps because of the way agility is perceived, project managers and testers appeared less enthused about adopting agile methods.

The Misperception

For testers, I believe the negative perception is caused by implementations that don't truly follow agile tenets. The implementation teams (mostly the developers) try to customize certain aspects of agility to apply to the projects, often leaving out practices that are more QA- and test-focused.

In my experience, three common implementation problems cause the misperception. First and most common is the idea that agility is developer-centric. Supporting this myth are the numerous articles, books, presentations and classes that cover how developers should conduct themselves in these methodologies. There is less information on how testers should integrate and collaborate within the agile teams.

Another problem is that management and development teams confuse true agile methods with the concept of "code fast and release often," with no real thought about documentation or testing. This is, of course, a complete misreading of what agile tenets truly are, which include people over process, working code over documentation, and testing early and often.

In the third problem I've seen, the project or organization doesn't allow the testers to integrate into the agile teams (usually due to initial resource constraints) because new agile projects are started without first completing iterative or waterfall projects. Testers often get stuck doing the testing at the end of the earlier projects and don't have the resources to fully integrate into the new agile projects. This initiates a chain reaction in which testers continue to lag behind.

Another problem is simply the cultural mindset of the testing community. For example, many (maybe most) testers believe that requirements are sacred, that testers must have thoroughly documented requirements to test an application, and that tests must be thoroughly analyzed and documented before test execution begins. Agile development just isn't that rigid. Developers are usually more comfortable creating code based on the ideas presented to them without detailed documentation; they like prototyping, trying to develop features first and seeing if they work and fit the users' needs. But testers state, "We have to know how it's supposed to work before we can test it; otherwise, how will we know if the developers coded it correctly?"


The Embrace

Testers need to start embracing agility, but how? The first step is to let go of preconceived notions. Second, learn more: Read books on the subject; Ken Schwaber's "Agile Project Management With Scrum" (Microsoft Press, 2004) and Craig Larman's "Agile and Iterative Development" (Addison-Wesley, 2003) are two good ones to start. Attend an agile presentation or seminar, and focus on finding ones geared toward testers.

Another strategy for embracing agility is to expand your skill set, especially in the areas of open-source automation tools and development technologies. Take a class in JavaScript, Ruby and/or Perl. Download Selenium, FitNesse or Watir and give them a whirl. Embrace the communication and collaborative aspects of agility that allow you, as a tester, to have direct, immediate and meaningful input into the project as a member of the agile team.

If your organization starts down the path of implementing agility without tester involvement, either due to the "code fast" mentality or the idea of "bolting" testing onto agility, protest. Let management know that there's a better way to implement agility, and volunteer to be the tester on the agile project team. Tightly collaborate with the developers. Partner with the program office and/or customer to ensure their concerns are voiced in the projects. Enthusiastically assume the role of quality expert on the team, defining the quality criteria for the team's work and ensuring that solid stories are crafted, that requirements are clarified and, most importantly, that acceptance testing meets the needs of the project.

Even if your organization hasn't fully committed to agility, look for ways to open up collaboration with the rest of the project team so you'll be there to fill the role of quality expert when agility is finally implemented. Above all, keep an open mind, and if you have the opportunity, try it.

Shaun Bradshaw is director of quality solutions at Questcon Technologies, a test and QA process consultancy based in Stamford, CT.


