Software Test & Performance
A BZ Media Publication
VOLUME 3 • ISSUE 8 • AUGUST 2006 • $8.95 • www.stpmag.com

Best Practices: Testing With Eclipse
Load-Test Anatomy: Making All The Parts Fit Together
Predict Testing Delivery Dates, Cost Overruns
Security Zone: Keep Personal Information Safe
Bring Logic Into Play: Deductive Reasoning Boosts Test Efficiency
VOLUME 3 • ISSUE 8 • AUGUST 2006
Contents

COVER STORY
Bringing Logic Into Play
A sound understanding of deductive reasoning can yield benefits in software testing. By Yuri Chernak

Anatomy of A Load Test
This dissection helps explain all the parts and how they fit together. By Thomas O'Mara

Predict QC Delivery Dates And Cost Overruns
With the help of advanced scheduling tools, QC managers can calculate the earned value of their projects. By Andrew Makar

Confidential—Handle With Care
In the Security Zone: Here are some practices to protect sensitive data from internal and external threats. By Michael Cooper

Departments
Editorial: Soup up your system testing with some formal logic. By Lindsey Vereen
Out of the Box: New products for developers and testers. Compiled by Alex Handy
Peak Performance: Performance testing moves toward maturity. By Scott Barber
Best Practices: Don't reinvent the wheel. By Geoff Koch
Future Test: How to make security more than an afterthought. By Ed Adams
A MESSAGE FROM THE EDITOR
VOLUME 3 • ISSUE 8 • AUGUST 2006
Publisher Ted Bahr +1-631-421-4158 x101 ted@bzmedia.com
Editor Lindsey Vereen +1-415-412-4314 lvereen@bzmedia.com
Managing Editor Patricia Sarica psarica@bzmedia.com
Director of Events Donna Esposito +1-415-785-3419 desposito@bzmedia.com
Senior Editor Alex Handy ahandy@bzmedia.com
Copy Editor Laurie O'Connell loconnell@bzmedia.com
Art Director LuAnn T. Palazzo lpalazzo@bzmedia.com
Art/Production Assistant Erin Broadhurst ebroadhurst@bzmedia.com
Contributing Editors Scott Barber sbarber@perftestplus.com; Geoff Koch koch.geoff@gmail.com
Contributing Writers Ed Adams, Yuri Chernak, Michael Cooper, Andrew Makar, Thomas O'Mara
Editorial Director Alan Zeichick alan@bzmedia.com
Director of Editorial Operations David Rubinstein drubinstein@bzmedia.com
Special Projects Editor George Walsh gwalsh@bzmedia.com
Director of Circulation Agnes Vanek +1-631-421-4158 x111 avanek@bzmedia.com
Circulation Assistant Nyla Moshlak +1-631-421-4158 x124 nmoshlak@bzmedia.com
Ad Traffic Manager Phyllis Oakes +1-631-421-4158 x115 poakes@bzmedia.com
Office Manager/Marketing Cathy Zimmermann czimmermann@bzmedia.com
Web Developer Craig Reino creino@bzmedia.com
Web/HTML Producer Nicole Schnatz nschnatz@bzmedia.com
Controller Viena Isaray visaray@bzmedia.com
Article Reprints Lisa Abelson, Lisa Abelson & Co. +1-516-379-7097 labelson@bzmedia.com
Customer Service/Subscriptions +1-847-763-9692 stpmag@halldata.com
Cover Photograph by The Design Diva, NY
Advertising Sales Manager David Karp +1-631-421-4158 x102 dkarp@bzmedia.com
President Ted Bahr Executive Vice President Alan Zeichick
BZ Media LLC 7 High Street, Suite 407 Huntington, NY 11743 +1-631-421-4158 fax +1-631-421-4130 www.bzmedia.com info@bzmedia.com
Software Test & Performance (ISSN #1548-3460, USPS #78) is published 12 times a year by BZ Media LLC, 7 High Street, Suite 407, Huntington, NY 11743. Periodicals privileges pending at Huntington, NY and additional offices. POSTMASTER: Send address changes to BZ Media, 7 High Street, Suite 407, Huntington, NY 11743. Ride along is included. ©2006 BZ Media LLC. All rights reserved. Software Test & Performance is a registered trademark of BZ Media LLC.
Mr. Spock To The Rescue
Lindsey Vereen, Editor

Have you stopped beating your wife, husband or significant other? While you've paused to compose your answer to this question, consider the following syllogism:

All horses wear collars.
My Uncle Charlie wears a collar.
Therefore, my Uncle Charlie is a horse.

If you're unable to answer the question above without sounding a little guilty, or if the syllogism looks vaguely problematical but you can't quite put your finger on what's wrong, take a look at this month's lead story.

Logical fallacies—errors in reasoning—can be found everywhere, and in election years they're even more prevalent than usual. Ad hominem arguments—poisoning the well, appeal to ridicule, guilt by association and such—are the order of the day in the political sphere. But these aren't the only logical fallacies; there are many more.

Logic is as important in software test as it is in any other discipline, and it must be approached correctly. The negative proof fallacy ("Have you stopped beating your . . .?") has considerable relevance to testers: It's the one that says you can't prove a negative. That means testers can never prove the absence of bugs in a software product; they can prove only the presence of bugs. So don't fall for it when someone asks you if you can prove that there are no bugs in the application you're testing, because you can't. Sometimes just proving that a bug is present can be challenge enough. Most of you have encountered situations as testers or even as consumers in which you've tried unsuccessfully to convince someone that the bug you found is, in fact, a bug.

"Bringing Logic Into Play," the lead article of the month, addresses formal logic in the context of system testing. It deals with the need for quality assurance personnel to understand the logic of software testing and to present sound evidence that supports their conclusions about the quality of the product they're testing. The article may seem like heavy going, but it presents some useful ideas.

To Boldly Go…

But don't stop with the article. Logic can be quite absorbing (consider the number of games based on it), and you can find plenty of sources for information about logic and logical fallacies on the Web. Webopedia is a good place to start; then sample other sources, such as logictutorial.com, www.philosophypages.com/lg and www.logicalfallacies.info.

Even though many of you have had some exposure to formal logic in school, unless you've been thinking about it in relation to NAND and NOR functions in integrated circuits or in writing automation scripts, you may not have done much with it recently. And would you want to?

Mr. Spock may make logic seem a little forbidding and alien, but nothing is more human: Formal logic has been around since at least the time of the ancient Greeks. A better understanding of this discipline may very well help testers improve their testing skills. And a familiarity with logic may help testers to navigate around a perennial fallacy developers foist on them in response to the bug reports they submit: "Hey, that's not a bug—it's a feature!"
Out of the Box
Compiled by Alex Handy
IBM Integrates BuildForge

IBM ramped up its Rational product line to version 7 in June with numerous updates to its current offerings, most of which focus on the integration with BuildForge, IBM Rational's latest acquisition. The updates extend to much of the product line, including ClearQuest 7, ClearCase 7, Functional Tester Plus, Rational Portfolio Manager 7, PurifyPlus, RequisitePro 7, BuildForge 7, the IBM Rational Team Unifying Platform and the Tivoli Provisioning Manager.

Chief among the updates are BuildForge and IBM Rational ClearQuest 7. Roger Oberg, vice president of marketing and strategy at IBM Rational, said, "[We've added] enhancements to Rational ClearQuest and integrations with RequisitePro and BuildForge that allow end-to-end visibility of all the artifacts within the development cycle. We've added test management to ClearQuest: Now, with RequisitePro, you have the ability in ClearQuest to view requirements and the requests that lead to those requirements, the defects associated with the tests for those requirements, and, with BuildForge, the actual build records to point to the builds that contain those requirements. In a single repository, you now have the ability to create enforceable processes or workflows from requirements all the way through to builds."

The company also announced that its Rational suite has been translated into some new languages. In particular, the Visual Studio plug-ins have been translated so that menu items no longer show up only in English.

Rational (www.ibm.com/rational) will also move into new territory, thanks to some offerings from IBM's partners. These include a tool from Ring-Zero that gives Rational Eclipse–based tools the ability to manage Mercury QuickTest Pro and WinRunner assets. All updates are available now, though pricing and compatibility vary.

[Photo caption: With full-scale build and life-cycle management support, the Rational suite now communicates more information.]
mValent Rolls Out Integrity 4.0 mValent (www.mvalent.com) has announced the completion and availability of Integrity 4.0, its change management and control system. The software is aimed at helping organizational change-management teams to grasp the many software modifications, changes and patches that can exist within an enterprise. “This latest release of our product represents a significant step forward for the industry and further addresses our customers’ growing requirements for more efficient management of their application infrastructure and compliance initiatives,” said Joe Forgione, mValent’s president and CEO. “Configuration changes in IT environments are the number one cause of instability, performance issues and outages. The ability to holistically view and control these changes drastically reduces risk and creates greater confidence in the event of internal or external audits.” Integrity 4.0 offers an enhanced dashboard and a new round of automation features that help to assess and catalog changes without the need for user intervention. More information and pricing are available at www.mvalent.com.
Compuware Presents DevPartner Studio 8.1 Microsoft’s Tech-Ed conference in Boston was not only teeming with geeky soda enthusiasts and Windows gurus—it also played host to a preview of Compuware’s DevPartner Studio 8.1. This new version of the code analysis suite works hand in hand with Microsoft Visual Studio Team Foundation, and points out coding errors and best practices violations to users as they type. That includes automatic enforcement of .NET coding
standards and best practices, but users can also define their own standards for internal use. DevPartner Studio 8.1 also includes new customizable analysis sessions that can whip up charts and graphs detailing memory usage and leaks, among other things. Seeing these firsthand can take much of the mystery out of tracking down show-stopping bugs before they make it to the QA team.

In addition to its dynamic features, the tool has also expanded its automated test features in code review. That means coders can whip up their own automated tests and run them whenever they feel like it, saving time and money for everyone on the team. Compuware (www.compuware.com) says that DevPartner Studio 8.1 should be available as you read this. The company does not discuss pricing outside of a sales environment.
SPI Dynamics Integrates With Rational

With the release of an updated Rational line, SPI Dynamics (www.spidynamics.com) announced that its QAInspect Enterprise security testing suite can work alongside Rational 7 tools, synching with the platform's new BuildForge and life-cycle management capabilities.

"By incorporating the ability to configure, execute and manage application security testing within the IBM Rational Software Development platform, QA professionals are empowered to accurately test for security vulnerabilities," said Brian Cohen, president and CEO, SPI Dynamics. "The QAInspect integration with the IBM Rational Software Development Platform is a significant step forward to simplify application security within QA so that it doesn't require months of training."

QA professionals can use QAInspect to analyze applications for a prioritized list of vulnerabilities. QAInspect applies techniques to identify application security vulnerabilities from the perspective of a hacker and then reports on those vulnerabilities. Security vulnerabilities are software defects and should be treated the same way, Cohen advised, but stated that this is a fairly new concept for most development organizations. "SPI Dynamics is excited to work with IBM to bring to market a joint solution that helps the QA professional deliver more secure, higher-quality applications. We look forward to a strong ongoing relationship."

Fanfare Tests Devices

Device testing isn't just a cut-and-dried process. With modern devices frequently basing their operations on advanced software rather than embedded hardware, it's becoming important to perform standard functional, performance, load and quality tests on devices and their software.

FanfareSVT offers its own automated testing suite for devices. It includes an IDE for developing tests and allows users to build said tests without the need for any scripting or coding. FanfareSVT integrates with existing product development and testing infrastructure. Testers and developers can define and execute tests to verify device functionality, performance and behavior. As defects appear, they can be reported via the existing defect tracking system. Fanfare test cases and test reports can be attached to defect reports. The test cases execute automatically inside any existing regression-testing facility and continue to run against future builds.

Kingston Duffy, CEO of Fanfare (www.fnfr.com), said, "Classically, the kinds of products in the test automation space are what we'd call application testing; they have one point of access. So in the world we live in, things look like systems. Cisco won't test a router in isolation; it'll test it in the context of several other routers and some PCs, but many of those don't have a GUI. They may use a command line, a Web interface or a proprietary protocol. Some use telnet or ssh, and you'll also see the big test-equipment vendors that make boxes that have special APIs. We talk on all those different interfaces."

FanfareSVT costs $13,000 and runs under Windows. The test engine is compatible with both Linux and Windows.

[Photo caption: Fanfare can automatically test devices through nonstandard interfaces such as command lines, Web administration panels and remote connection clients.]
Cenzic Offers Free Security Test

Cenzic, the makers of the Hailstorm application security scanner, is offering a free Web application security scan as part of its "No Website Left Behind" program. The free Web application audit can be had by heading over to www.cenzic.com/c/val. But don't dawdle—the validation scan is available only until September 30.

"With Web applications constantly evolving, finding vulnerabilities is challenging, costly and time-consuming," said Mandeep Khera, vice president of marketing for Cenzic. "Despite significant investments in testing solutions, many companies can't keep pace with change in the threat environment due to deficiencies in their existing security analysis solutions, which suffer from a high number of false positives and negatives. The 'Free Validation' program offers these companies an easy and non-cost-prohibitive way to ensure regulatory compliance and reduce the financial risks associated with attacks."

Cenzic created the program to promote its ClickToSecure remote scanning program.

Send product announcements to stpnews@bzmedia.com
Peak Performance
Performance Testing Moves Toward Maturity

In last month's column, I began sharing some of the key points that I took from WOPR6, the sixth meeting of the Workshop on Performance and Reliability (WOPR). In case you missed the column, WOPR is an ongoing series of invitation-only, minimal-cost peer workshops for experienced performance testers and related professionals that emphasizes mutual learning, sharing hands-on experiences, and solving practical problems. WOPR6 (www.performance-workshop.org) was specifically organized to explore evolving perceptions of performance testing. We accomplished this through reports of relevant experiences from past projects and current initiatives that demonstrate or contradict the view that performance testing is undergoing a period of significant, rapid and positive change.

While it's neither possible nor productive to summarize the six days of conversations among recognized experts and experienced practitioners, there is value in mentioning some of the key points that I took from the workshop. Specifically, I'd like to continue sharing with you points of view that either corroborate or oppose some of the positions I've promulgated in this column and other venues recently.

What About Open Source?

As some of you may know, I'm a big fan of open-source software. That's not to say that I oppose pay software; rather, I have tremendous respect for those who dedicate their time to building and maintaining open-source software, and I make a point to try out open-source software when it appears to be a viable alternative to pay software. Sometimes, I find that the available open-source product meets my needs; other times, it doesn't. When it comes to performance testing, I've built up a significant library of open-source software that I use for everything from load generation to resource monitoring to file parsing when I'm working on projects that use technologies this software supports.

Typically, I've found that "household name" organizations are resistant to using open-source software. Their reasons include limited support and training, small user communities, thin documentation and bad previous experiences. That's why my ears perked up when Goranka Bjedov, a senior software engineer in testing at Google, casually mentioned the following while sharing a recent performance-testing experience. "Google uses open source and they build it out. They figure that they can pay for [a tool] or not pay for [a tool], but it's still [a tool]—and at least with open source, we can modify the code when we need it to do [something that isn't natively supported]."

This is an incredibly insightful statement. While few organizations have Google's pool of talented developers, most of the tweaks that open-source tools need to meet a user's specific needs don't require any rocket science.
More About Open Source...

A day or two after Bjedov shared her experience with us, Antony Marcano, a performance testing consultant in the U.K. and manager of TestingReflections.com, which aggregates some of the best software development and testing blogs on the Web, shared an experience with a related message. On a recent project, the team made the same open-source decision as Google, but his client didn't make the tweaks to the tool themselves. In their case, it was cheaper and more efficient to pay someone to build out the open-source tool than to pay for the licenses of the pay load-generation tool. Interestingly, they found that the members of the open-source community for this tool charged quite reasonable rates in exchange for permission to put the tweak into the open-source project at the conclusion of the development effort.

Marcano's story gave me a fresh perspective. With some of the top pay load-generation tools running six or seven figures to purchase and five or six figures in annual maintenance fees, why not simply hire a member of the open-source development team to enhance or customize the tool (and potentially provide some training in the process)? That could be a whole lot more cost-effective than purchasing a proprietary tool that may or may not meet your current or future needs any better than the open-source solution.
Moving to IDEs

One area of consensus about the testing evolution was evident in the increasing partnership between performance testers and developers. Just a few years ago, performance testers were seen as merely additional members of the "quality police" that many developers considered the bane of their existence. At WOPR6, some performance testers related their experiences of working closely with the
development team. Marcano and Neill McCarthy, another top performance tester from the U.K., described finding a high degree of success in working side by side with developers in agile environments. Coupling these experiences with the tool vendor penchant for piling load generation and other performance-testing tools into the developer’s IDEs, the tipping point has apparently been reached. Performance testing as an activity in isolation from development, using tools unfamiliar to developers, will become a thing of the past. In fact, most of the new performance testers I’ve met at conferences or taught in recent training classes are converted developers. In further evidence of the shift in the perception of performance testing from an independent task to a job most efficiently accomplished in close collaboration with developers, Bill Barnett, a developer working on the performance-testing component of Microsoft’s Visual Studio Team System, shared his experience using VSTS to performance-test a new Microsoft application. Working with developers, he was able to conduct unit performance testing, stress testing, functional testing under load and other varieties of performance testing without switching tools. Whether you’re a fan of this trend or not, both Microsoft and IBM are taking it seriously, and it does present a powerful message.
Maturation, not Evolution

Possibly the most eye-opening realization crystallized during a presentation by Mike Pearl, a friend and senior performance and reliability tester at Intuit. In fact, I was so intrigued by my epiphany that I forgot to record the technical details of his experience, being too busy jotting notes about the light bulb that had just illuminated my brain.

The point of Mike's presentation—and something that I'd subconsciously known for some time—is that maybe rather than evolving, paradigm shifting or undergoing revolution, performance testing is instead maturing—adapting, merging, collecting, injecting and infusing lessons from other fields. Of all the recent advances I've witnessed in performance testing, virtually none are true innovations; rather, they're new applications of well-known and documented research and practices from fields such as human psychology, statistical analysis, information modeling and operational research. If I could influence just one aspect of the field of performance testing, I'd like to encourage us all to reach out to other disciplines to learn what they have to teach us.

Predicting Performance

Over the years, I've been vocal about several pet peeves. Prominent among them: The practice of extrapolating results from a test environment that isn't an accurate replication of production, for the purposes of predicting or even estimating an application's performance, is unwise. In fact, it ranks somewhere between risky and crazy, unless you hire an expert in application performance modeling and then confirm both the production environment and the predictions.

Craig Rapin, a performance test manager for one of the world's largest providers of financial services, came to WOPR6 hoping that the experts in attendance could help him meet his most common challenge: predicting production performance based on results from non-production environments. Unfortunately for Craig, he received exactly two pieces of advice. First, "Don't do it; it doesn't work," and second, "Find a way to convince your superiors to allow you to run at least a couple of tests in production; it's the only reliable way to gain insight into what production performance will be like." I guess that shows that no matter how much we seem to be evolving in some areas, in others, the state of the practice has remained essentially stagnant for at least the last six years.

An Indicator of Optimism

I started this two-part column with the intent of sharing some points of view from other respected members of the performance testing community that either corroborate or oppose some of the positions I've shared in this column and other venues recently. In general, I feel as though most of the leaders in this community who were able to attend this workshop agree on the potential of the advancements that are happening in our field, but I may have been a bit optimistic about how far we've come in applying those advancements to their full advantage.

But I don't mind being an optimist sometimes. To tell the truth, with the predominance of pessimism surrounding performance testing over the last five or so years, perhaps a bit of vocal optimism on the part of respected members of the community is an advancement in itself.

Scott Barber is the CTO at PerfTestPlus. His specialty is context-driven performance testing and analysis for distributed multiuser systems. Contact him at sbarber@perftestplus.com.

End Note: The portions of WOPR6 discussed in this column were attended, all or in part, by A.J. Alhait, Henry Amistadi, Charlie Audritsh, Marini Ballard, Scott Barber, Bill Barnett, Goranka Bjedov, Angela Cavesina, Ross Collard, Dan Gold, Corey Goldberg, Linda Hamm, Julian Harty, Dawn Haynes, Gabe Heininger, Doug Hoffman, Andy Hohenner, Paul Holland, Pam Holt, Mike Kelly, James Lyndsay, Shelton Mar, Antony Marcano, Neill McCarthy, Michael Pearl, Timmons Player, Craig Rapin, Harry Robinson, Robert Sabourin, Roland Stens, Tony Watson, Brian Warren, Donna Williamson, Steven Woody and Nick Wolf.
Bringing Logic Into Play
An Understanding of Deductive Reasoning Can Yield Benefits In Software Testing
By Yuri Chernak

Even though the term test case is basic to software testing, and several published
sources define the word itself, the techniques to design test cases and the templates to document them, testers still find the definitions confusing and inconsistent. Misunderstanding test cases can be symptomatic of a failure to understand the logic of software testing. Software testing is the exploration of a software product to derive and report valid conclusions about its quality and suitability for use. In this regard, software testing is similar to mathematics—they both need proofs for their conclusions. However, mathematicians surpass software testers in deriving and proving their conclusions, thanks to their skill in using a powerful tool: deductive reasoning. To construct valid arguments, logicians have developed proof strategies and techniques based on the concepts of symbolic logic and proof theory. On critical software projects, testers have always been required to present valid evidence supporting their conclusions about the product’s quality. The Sarbanes-Oxley Act makes this requirement much more important going forward. In this article, I discuss the logic of one of the conventional levels of testing—system test—and propose a formal approach to constructing valid arguments supporting testers’ conclusions. Finally, understanding system test logic can help testers better understand the meaning of test cases. Yuri Chernak, Ph.D., is president and principal consultant of Valley Forge Consulting, Inc. As a consultant, he has worked for major financial firms, helping management improve the software testing process. Currently, his research focuses on aspect-oriented requirements engineering, use case–driven testing and test process assessment and improvement. Chernak is a member of the Institute of Electrical and Electronics Engineers (IEEE) Computer Society. He has been a speaker at several international conferences and has published papers on software testing in IEEE publications and other professional journals.
Proofs and Software Testing Software testers have always dealt with proofs on their projects. One example can be concluding that a system passed testing. As testers can never prove the absence of bugs in a software product, their claim that a system passed testing is conditional upon the evidence and arguments supporting such a claim. On critical projects, either the project’s manager, end users or a compliance department commonly require documented test cases and test execution logs to be used as grounds for supporting testers’ conclusion that a software product passed testing. Another example is reporting a system failure. Whether it is formal testing or unscripted exploratory testing, testers are required to document and report the defects they find. By reporting a defect, a tester first claims that a certain system feature failed testing and then presents an argument in the form of a defect description to support the claim. Such an argument should be logically valid to be sufficiently convincing for developers. Deriving conclusions and presenting valid proofs, also known in mathematics as logical arguments, is frequently not a trivial matter. That is why mathematicians use deductive reasoning as a foundation for their strategies and techniques to derive conclusions and present valid arguments. Deductive reasoning is the type of reasoning used when deriving a conclusion from other sentences that are accepted as true. As I discuss in this article, software testers can also benefit from using deductive reasoning. First, they can better understand the logic of software testing and second, they can construct valid proofs better supporting their conclusions about product quality.
Applying Deductive Reasoning
In mathematics, the process of deriving a conclusion results in presenting a deductive argument or proof that is defined as a convincing argument that starts from the premises and logically deduces the desired conclusion. The proof theory discusses various logical patterns for deriving conclusions, called rules of inference, that are used as a basis for constructing valid arguments. An argument is said to be valid if the conclusion necessarily follows from its premise. Hence, an argument consists of two parts: a conclusion and the premises offered in its support. Premises, in turn, are composed of an implication and evidence. Implications are usually presented as conditional propositions; for example, (if A, then B). They serve as a bridge between the conclusion and the evidence from which the conclusion is derived. Thus, the implication is very important for constructing a logical argument, as it sets the argument’s structure and meaning. In the following, I will apply this concept to software testing and identify the argument components that can be used in testing to construct valid proofs. In software testing, we derive and report conclusions about the quality of a product under test. In particular, in system testing, a common unit of a tester’s work is testing a software feature (also known in Rational Unified Process as test requirement); the objective is to derive a conclusion about the feature’s testing status. Hence, the feature status, commonly captured as pass or fail, can be considered a conclusion of the logical argument. To derive such a conclusion, test cases are designed and executed. By executing test cases, information is gained; i.e., evidence is acquired that will support the conclusion. To derive a valid conclusion, also needed are implications that in system testing are known as a feature’s pass/fail criteria. Finally, both the feature’s pass/fail criteria and the test-case execution results are the premises from which a tester derives a conclusion. The lack of understanding of how system testing logic works can lead to various issues, some of the most common of which I will discuss next. www.stpmag.com •
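To make these components concrete before turning to the common problems, here is a minimal Python sketch. It is my own illustration rather than anything from the article; the names and structure are invented, but the logic follows the pass/fail criteria described above (a feature fails if any test case fails, and passes only when all executed test cases pass):

from dataclasses import dataclass
from typing import List

@dataclass
class TestCaseResult:
    name: str
    passed: bool            # evidence gathered by executing the test case

def feature_conclusion(results: List[TestCaseResult]) -> str:
    """Derive a feature's testing status from its executed test cases.

    Implication (fail criterion): if any test case fails, the feature fails.
    Implication (pass criterion): the feature passes only if all test cases pass,
    and that conclusion is drawn only after all of them have been executed.
    """
    if any(not r.passed for r in results):
        return "fail"       # a single failure is sufficient evidence
    return "pass"           # no failures among the executed test cases

# Example: one failed test case is enough to conclude that the feature failed.
print(feature_conclusion([TestCaseResult("boundary value", True),
                          TestCaseResult("invalid input", False)]))   # fail

The conclusion returned by such a function is only as strong as the criteria encoded in it, which is exactly what the issues discussed next illustrate.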
WHAT DO WE CALL A TEST CASE? Most of the published sources defining the term test case follow the definitions given in the Institute of Electrical and Electronics Engineers (IEEE) Standard 610 (IEEE Std. 610): a. Test Case: A set of test inputs, execution conditions, and expected results developed for a particular objective such as to exercise a particular program path or to verify compliance with a specific requirement. b. Test Case: Documentation specifying inputs, predicted results, and a set of execution conditions for a test item. Despite the fact that this standard was published many years ago, testers in the field still do not have a consistent understanding of the meaning of test cases. To better understand this meaning, we can use the concept of deductive reasoning. Following this concept, system testing can be viewed as a process of deducing valid conclusions about the testing status of system features based on evidence acquired by executing test cases. Hence, the main purpose of executing test cases is to gain information about the system implementation. This information can be used together with the feature’s pass/fail criteria to derive and support conclusions about the status of feature testing. The feature’s pass/fail criteria are important implications in the testing argument that determine the meaning of testers’ conclusions. These criteria are a link between a tester’s conclusion about the feature status and the test cases used to support the conclusion. Hence, the meaning of test cases follows from the definition of the feature’s pass/fail criteria. In system testing, the mission is finding and reporting software defects; the feature fail criterion is commonly defined as “If any of the feature’s test cases fails, then the feature fails testing.” What follows from this implication is that a test case is information that is sufficient to identify a software defect by causing a system feature to fail. The feature’s pass criterion can further explain the meaning of test cases. It is commonly defined as “The feature passes the test only if all of its test cases pass testing.” According to this definition, we imply that the system feature passed testing only if the whole group of its test cases passed testing. Hence, test cases are used as collective evidence to support the feature’s pass status. It should be noted, however, that this interpretation of the test case meaning refers to the system test only. In contrast, in acceptance testing, a testing mission and pass/fail criteria can be defined differently from the system testing. Correspondingly, the meaning of test cases can be different as well. If the IEEE definitions of test cases are examined again, we can see that these definitions are not specific to a particular testing mission, nor are they explicit about which testing conclusion—pass or fail—a test case is intended to support. Instead, they focus primarily on the test case structure: test inputs and expected results. As a result, these definitions alone and without the feature’s pass/fail criteria lack clarity about the test case purpose and meaning.
Issues With Testing Logic Three of the most common issues related to testing logic are: disagreeing about the meaning of test cases, presenting an argument without a conclusion, and presenting an argument without an implication. Issue 1: Disagreeing About the Meaning of Test Cases. Software testers frequently disagree about the meaning of test cases. Many testers would define a test case as the whole set of information designed for testing the same software feature and presented as a test case specification. Their argument is that all test inputs and expected results are designed for the same objective; i.e., testing the same feature, and they all are used as supporting evidence that the feature passed testing. For other testers, a test case consists of each pair—input and its expected result— in the same test case specification. In their view, such a test case specification presents a set of test cases. To support their point, they refer to various textbooks on test design, for example, that teach how to design test cases for boundary conditions, valid and invalid domains, and so on. Commonly, these textbooks focus their dis-
cussion on designing test cases that can be effective in finding bugs. Therefore, they call each pair of test input and its expected output a test case because, assuming a bug is in the code, such a pair provides sufficient information to find the bug and conclude that the feature failed testing. Despite these different views, both groups actually imply the same meaning of the term test case: information that provides grounds for deriving a conclusion about the feature’s testing status. However, there is an important difference: The first group calls test case the information that supports the feature’s pass status, while the second group calls test case the information that supports the feature’s fail status. Such confusion apparently stems from the fact that all known definitions of the term test case do not relate it to a feature’s pass/fail criteria. However, as the discussion in the sidebar shows, these criteria are key to understanding the meaning of test cases. Issue 2: Presenting an Argument Without a Conclusion. This issue is also very common. As discussed earlier, an important part of a logical argument is its conclusion.
However, a lack of understanding of this concept can lead to presenting arguments without conclusions. On a number of projects, I have seen testers produce test case documentation in the form of huge tables or Excel spreadsheets listing their test cases. In such tables, each row shows a test case represented by a few columns such as test case number, test input, expected result and test case execution (pass/fail) status. What is missing in this documentation is a description of what features testers intend to evaluate using these test cases. As a result, it is difficult to judge the validity and verify the completeness of such test cases, as the underlying purpose for which they were designed is not known. Such documentation suggests that the testers who designed it do not completely understand the logic of software testing. Issue 3: Presenting an Argument Without an Implication. This issue also stems from a lack of understanding of the structure of a logical argument, specifically that having an implication is necessary for deriving a valid conclusion. In software testing, such implications are a feature’s pass/fail criteria. The issue arises when such criteria are either forgotten or not clearly defined and understood by testers. This can lead to a situation where testers lose sight of what kind of conclusions they need to report. As a result, instead of deriving a conclusion about the feature and then reporting its testing status, they report the status of each executed test case. This situation presents an issue illustrated in the following example. Let’s assume a tester needs to test 10 software features, and she designed 10 test cases for each of the features under test. Thus, the entire testing requires executing 100 test cases. Now, while executing test cases, the tester found that one test case failed for each of the features. In our example, she did not define and did not think about the feature pass/fail criteria. Instead, she reported to a project manager the testing status for each executed test case. Thus, at the end of the testing cycle, the results show that 90 percent of testing was successful. When seeing such results, a manager would be fairly satisfied and could even make a decision about releasing the system. The project manager would see a completely different picture if the features’ pass/fail criteria hadn’t been forgotten. In this case, the tester would report the testing status for each feature as opposed to each test case. If the feature fail criterion had been defined as “If any of the AUGUST 2006
feature’s test cases fails, then the feature fails testing,” the testing end-result in our example would have been quite the opposite and shown that none of the software features passed testing; they all should be retested when the bugs are fixed.
Constructing a Valid Proof

Constructing a valid proof in system testing is a four-step procedure. The following sections discuss each step in detail and explain how to construct a valid argument to support a testing conclusion.

Step 1: Define a Conclusion of the Argument. In constructing a proof, always begin by defining what needs to be proven; i.e., the conclusion. In system testing, the ultimate goal is to evaluate a software product. This is achieved by decomposing the entire functional domain into a set of functional features, where each feature to be tested is a unit of a tester's work that results in one of the two possible conclusions about a feature's testing status: pass or fail. At any given time, only one of the two conclusions is valid.

The term software feature is defined in the IEEE Std. 610 as:
• A distinguished characteristic of a software item.
• A software characteristic specified or implemented by requirements documentation.

From a tester's perspective, a software feature means any characteristic of a software product that the tester believes might not work as expected and, therefore, should be tested. Deciding what features should be in the scope of testing is done in the test planning phase and documented in a test plan document. Later, in the test design phase, the list of features is refined and enhanced based on a better understanding of the product's functionality and its quality risks. In this phase, each feature and its testing logic are described in more detail. This information is presented either in test design specifications and/or test case specifications. The test design specification commonly covers a set of related features, whereas the test case specification commonly addresses testing of a single feature. At this point, a tester should already know which quality risks to focus on in the feature testing. Understanding the feature's quality risks—i.e., how the feature can fail—is important for designing effective test cases that a tester executes to evaluate the feature's implementation and derive a conclusion about its testing status. Performing Step 1 can help testers avoid Issue 2, as discussed earlier.

Step 2: Define an Implication of the Argument. The next important step is to define an implication of the argument. An implication of a logical argument defines an important relation between the conclusion and the premises given in its support. Correspondingly, the feature's pass/fail criteria define the relation between the results of test case execution and the conclusion about the feature's evaluation status. According to the IEEE Std. 829, the feature's pass/fail criteria should be defined in the test design specification; this standard provides an example of such a specification. However, it does not provide any guidance on how to define these criteria, apparently assuming this is an obvious matter that testers know how to handle. Neither do the textbooks on software testing methodology and test design techniques. Contrary to this view, I feel that defining these criteria is one of the critical steps in test design that deserves special consideration. As I discussed earlier and illustrated as Issue 3, the lack of understanding of the role and meaning of a feature's pass/fail criteria can lead to logically invalid testing conclusions in system testing. Also, as discussed in the sidebar, from the well-defined implications—i.e., the features' pass/fail criteria—testers can better understand the meaning of test cases and avoid the confusion discussed earlier as Issue 1.

The rationale for defining the feature's pass/fail criteria stems from the system test mission, which can be defined as critically examining the software system under the full range of use to expose its defects and to report conditions under which the system is not compliant with its requirements. As such, the system test mission is driven by the assumption that a system is not yet stable and has bugs; the tester's job is to identify conditions where the system fails. Hence, our primary goal in system testing is to prove that a feature fails the test. To do that, testers develop ideas about how the feature can fail. Then, based on these ideas, they design various test cases for a feature and execute them to expose defects in the feature implementation. If this happens, each test case failure provides sufficient grounds to conclude that the feature failed testing. Based on this logic, the feature's fail criterion can be defined as "If any of the feature's test cases fail, then the feature fails testing." In logic, this is known as a sufficient condition (if A, then B). The validity of this implication can also be formally proved using the truth-table technique; however, this goes beyond the scope of this article.

Defining the feature's pass criterion is a separate task. In system testing, testers can never prove that a system has no bugs, nor can they test the system forever. However, at some point and under certain conditions, they have to make a claim that a feature passed testing. Hence, the supporting evidence—i.e., the test case execution results—can only be a necessary (C, only if D), but not a sufficient, condition of the feature's pass status. Based on this logic, the feature's pass criterion can be defined as "The feature passes the test only if all of its test cases pass testing." In this case, the feature's pass criterion means two things:
• The feature pass conclusion is conditional upon the test execution results presented in its support.
• Another condition may exist that could cause the feature to fail.

TABLE 1: DERIVING A FEATURE FAIL CONCLUSION
Modus Ponens Form          | Testing Argument Form
1. If A, then B – means    | 1. If any test case fails, then a feature fails (implication).
2. A is true – means       | 2. We know that at least one test case failed (evidence).
3. Then B is true – means  | 3. Then the feature fails the test (conclusion).

Step 3: Select a Technique to Derive a Conclusion. Once we have defined all components of a testing argument, the next step is to select a technique that can be used to derive a valid conclusion from the premises. The word valid is very important at this point, as we are concerned with deducing the conclusion that logically follows from its premises. In the proof theory, such techniques are known as rules of inference. By using these rules, a valid argument can be constructed and its conclusion deduced through a sequence of statements, each of which is known to be true and valid. In system testing, there are two types of conclusion: a feature fail status and a feature pass status. Correspondingly, for each of these conclusions, a technique to construct a valid argument is discussed. On software projects, testers should discuss and define the
logic of constructing valid proofs before they begin their test design. For example, they can present this logic in the Test Approach section of a test plan document. Deriving a Feature Fail Conclusion: I defined the feature’s fail criterion as a conditional proposition in the form (if A, then B), which means that if any of the test cases fail, then testers can conclude that the feature fails as well. This also means that each failed test case can provide sufficient evidence for the conclusion. In this case, a valid argument can be presented based on the rule of inference known as modus ponens. This rule is defined as a sequence of three statements (see Table 1). The first two statements are premises known to be
true and lead to the third statement, which is a valid conclusion.

Deriving a Feature Pass Conclusion: The feature's pass criterion was defined as a conditional proposition in the form (C, only if D), which means a feature passes the test only if all of its test cases pass testing. This also means that such a conclusion is derived only when all of the feature's test cases have been executed. At this point, the feature status can be either pass or fail, but nothing else. Hence, the rule of inference known as disjunctive syllogism can be used, which is presented as three consecutive statements that comprise a valid argument (see Table 2).

TABLE 2: DERIVING A FEATURE PASS CONCLUSION
Disjunctive Syllogism Form        | Testing Argument Form
1. Either P or Q is true – means  | 1. If any test case fails, then a feature fails (implication).
2. P is not true – means          | 2. We know that the feature did not fail the test for all of its test cases (evidence).
3. Then Q is true – means         | 3. Then the feature passes the test (conclusion).

Step 4: Present an Argument for a Conclusion. At this point, there is a clear plan on how to construct valid arguments in system testing. The actual process of deriving a testing conclusion begins with executing test cases. By executing test cases, testers can learn the system's behavior and analyze the feature implementation by comparing it to its requirements, captured by the expected results of test cases. As a result, testers can acquire evidence from which they can derive and report a valid testing conclusion; i.e., a feature pass or fail testing status.

Concluding a Feature Fail Status

The feature fail criterion is defined as "If any of the test cases fails, then the feature fails testing." According to the modus ponens rule, this means that each failed test case provides grounds for the valid conclusion that the feature has failed testing. As a feature can fail on more than one of its test cases, after finding the first defect, a tester should continue feature testing and execute all of its test cases. After that, the tester should report all instances of the feature failure by submitting defect reports, where each report should be a valid argument that includes the evidence supporting the feature fail status. On the other hand, if a given test case passed testing, the modus ponens rule does not apply and there are no grounds for any conclusion at this point; i.e., the feature has neither passed nor failed testing. Finally, only when all of the feature's test cases have been executed should it be decided whether there are grounds for the feature pass status, as discussed in the next section.

Concluding a Feature Pass Status

Obviously, if the feature has already failed, the pass status cannot have grounds. However, if none of the test cases failed, then the disjunctive syllogism rule can be applied. According to this rule, the fact that none of the test cases failed provides grounds for a valid conclusion: The feature passed testing. To support this claim, evidence is provided: the test case execution results. However, this conclusion should not be confused with the claim that the feature implementation has no bugs, which we know is impossible to prove. The conclusion means only that the feature did not fail on the executed test cases that were presented as evidence supporting the conclusion.

In a slightly different form, this article was originally published in CrossTalk: The Journal of Defense Software Engineering (March 2006).
This Simple Dissection Helps Explain All The Parts And How They Work Together
By Thomas O’Mara
Photographs by L&D Palazzo

The best application in the world isn't going to gain adherents if its performance doesn't meet its users'
expectations. Developers realize this fact, of course, and as business applications and their underlying systems grow more complex and sophisticated, their concern about performance of the delivered product increases. One of the best ways to ensure a successful implementation is to execute a load-testing strategy as soon as the application and underlying system are at a point that can produce meaningful results. For the implementation strategy to succeed, the testing staff must have a thorough understanding of the fundamentals of load testing. Such a grounding in the basics will provide a stabilizing factor for the team as they put solutions into action based on what they’ve learned through their research and training. Thomas O’Mara is president of TEO Innovations, Inc., a consulting service that uses advanced technology to provide solutions to the business enterprise. He has been working with computing-based systems for the past 25 years. You can reach him at articles@teoinnovations.com.
In this article, I’ll examine the fundamental architecture and the critical components that make up a load test. I’ll also explore the basic elements, components such as ramp-up and rampdown, steady-state operation, think time and iteration delay. Even though load tests have only a few components, it takes some thought to fully grasp their meaning and understand how to tailor the load test to leverage them. I’ll also explore the interaction between the virtual user and the inherent randomness of the transactional data that results in request surges— something that should interest every tester.
CVU vs. TPS Before we start digging into the load-test material, let’s clarify the term Concurrent Virtual Users (CVU). Sometimes, test results will be presented with, say, 300 virtual users. These results can cause a slew of misinterpretations, leading you to miss the test’s overall meaning. You can run a test with 1,000 virtual users www.stpmag.com •
with a throughput of one transaction per second (TPS), or you can run a test with 30 users with a throughput of three TPS. The difference between the two tests is the overall duration of the application under load. Here's a case in point: When I worked with the senior technical advisor at one of my assignments, every time we talked in terms of virtual users, he would adamantly correct us. So, to alleviate any potential pain and miscommunication as your project moves forward, make sure you have the proper nomenclature in place when presenting your results.
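One way to keep the nomenclature straight is to remember that throughput is driven by how many virtual users are active and how quickly each one completes an iteration. The Python sketch below is my own back-of-the-envelope illustration, not a formula from this article; it assumes one reported transaction per iteration, and the pacing values are hypothetical, chosen only to show how both of the scenarios above can be true at once:

def steady_state_tps(active_vusers: int, seconds_per_iteration: float) -> float:
    # Assumes each virtual user completes one transaction per iteration.
    return active_vusers / seconds_per_iteration

print(steady_state_tps(1000, 1000.0))   # 1,000 slowly pacing VUsers -> 1.0 TPS
print(steady_state_tps(30, 10.0))       # 30 quickly pacing VUsers   -> 3.0 TPS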
Load Testing, Soft And Hard

To implement a successful load-testing process, the proper load-testing software and system hardware are essential. First, you must ensure that the system hardware closely matches your production system targets. If it doesn't, your results won't accurately convey the application's level of service and will cause issues on rollout. Also, choosing the right load-testing software can be challenging. Without the proper management sponsorship and funding, you may end up with an inferior testing suite that will cause stumbling blocks and dead ends throughout your project's testing life-cycle. Remember the adage "You get what you pay for."

Now that we've examined some basics of load-testing systems and parameters, let's get started.

Performance First!

Every load-testing initiative has an overall service level to meet, which can be translated into performance objectives as follows:

Response Time. The amount of time taken for the system server to respond to a client's request. Typically in a Web-based application, the response time is the time from the first byte to the last byte. In some load-test systems, you group the individual request/response transactions into higher-level transactions that display the summation of all the constituent transactions in real time. This is useful in business workflow analysis.

Throughput (TPS). The number of requests serviced by the server per unit time; hence, transactions per second. You must utilize this parameter effectively in the performance analysis of the system under load.

Resource Utilization. The measurement of the resources used as the system runs under a specified load in real time. Some of the basic feedback performance counters are CPU usage, I/O reads and writes, physical and virtual memory usage, and network bandwidth. This feedback is the information you need to ascertain the system's scalability and usability as a function of load.

Workload. The number of concurrent active virtual users used in the load test.

Once these objectives are defined—the business workflows have been identified, the test scripts have been generated and the testing system has been calibrated—the actual load-test initiative can begin. After all of these tasks are completed, the parameters to run the load test at a predefined TPS for a given length of time must be calculated.

[TABLE 1: ALIGNMENT OF VUSERS]
Setting Parameters
To achieve the performance objectives, load-testing parameters must be set in order to run a system under load at a certain throughput for a predefined length of time. There are a number of load-test parameters you should consider. Here are the basics:
Virtual Users (VUsers). VUsers are the simulated users of the application that provide the workload on the system under load. Note: You can have a large number of VUsers iterating very slowly, yielding a low TPS for a long duration, or a small number of VUsers iterating very quickly, yielding a high TPS for a short duration—or any point in between.
Iterations. The number of iterations each VUser will complete for the entire load test. The value is used in conjunction with the iteration delay. When using a large number of users and long test durations, make sure you've mined enough data to satisfy each of the VUsers' iterations, or the system will start serving cached data and your resulting data will be skewed.
Ramp-up. Throttles the number of virtual-user activations as a function of time (VUsers/second). For example, the test ramp-up might activate five users every 10 seconds. Just to note: The ramp-down can be calculated and implemented, but in most cases we're interested only in the steady-state area of the load-testing profile, so we won't calculate the ramp-down. Typically, if the system is well-behaved, the ramp-down has the same profile as the ramp-up.
Think Time. The pause the user initiates when going through the steps of a given workflow in the application to be tested. For example, when you go to an e-commerce Web site and purchase your favorite book, at each step in the process, you pause to think about what to do next. Typically in a load test, each request has an associated think time.
Iteration Delay. The duration the test is paused after it completes the designated transaction workflow. This is a very important parameter to understand, because it throttles the TPS when the VUsers, number of iterations and think time are held constant.
Enterprise-level software testing tools provide more parameters, such as bandwidth throttling, but we won't discuss those in this basic exercise.
Keeping It Steady
As our test gets going and our VUsers are being activated, to correlate the results from the test to the performance objectives, the test's profile characteristics must be understood and defined. Typically, the testing profile will have an initial ramp-up section that is linear with a positive slope, as the VUsers are activated as a function of time. Then the steady-state area, or the part of the test where all the VUsers are activated and performing their respective functionality, will be flat in nature. Finally, the ramp-down should have the same profile as the ramp-up, but with a negative slope.
Now that we've examined the testing profile of the load test, let's look at the section we're most interested in: steady state. This is the region of the test where the rubber meets the road, providing us with data as the system under load settles down and iterates through the workflows. All of the VUsers are ramped up, activated and running, the system is servicing the requests, and the load-testing software is controlling the operation. Again, from a graphical standpoint, the TPS values should be constant, and the line fairly flat.
We've explored performance objectives and the testing parameters to be set to meet those objectives. Now we need to define our objectives in a sample walkthrough. Our TPS performance objective for the walkthrough will be 6.0 TPS. Our example test will have the following parameter values:
Total VUsers = 240
Total iterations = 50
Think time = 1 second (four .250-second pauses)
Iteration delay = 35 seconds
Ramp-up = 5 users every 10 seconds
The performance objective that we want to calculate is the TPS. The general form of this calculation is shown in Equation 1.
EQ. 1: TPS = (IterationsSteadyState * VUsers) / TSteadyState
IterationsSteadyState = the number of iterations in the steady-state region of the test.
VUsers = the number of virtual users used in the test.
TSteadyState = the duration of the steady-state region of the test.
To calculate the desired TPS, the first value we need is the total transaction time. This is an approximation due to the random nature of the response-time results from the system under load. As a first-order starter, you can get an average response-time value of the workflow with a single user. Granted, the response times change with more users, but you can tweak the iteration delay down the road to get the number you're looking for. The general form for the total transaction time is shown in Equation 2.
EQ. 2: TTransTotal = TRequestResponse + sum(TThink) + TIterationDelay
TTransTotal = the total time for the request/response workflow, with think time and the iteration delay included.
TRequestResponse = the overall request/response workflow time.
TThink = the pause between each request, summed over all the requests in the workflow.
TIterationDelay = the pause after the workflow completes, before the next iteration begins.
Example: TTransTotal = 4.5 + (.250 + .250 + .250 + .250) + 35 = 40.5 seconds
Since we're looking for TPS and the steady-state duration, the ramp-up time must be calculated because it will be needed later in our calculations. The general form for the ramp-up time is depicted in Equation 3.
EQ. 3: TRampUp = (tRate / NRampUsers) * NUsers
TRampUp = the duration of time over which the VUsers are activated.
NUsers = the number of virtual users used in the test.
NRampUsers = the number of users activated in each ramp-up group.
tRate = the per-unit time at which each group of ramp-up users is activated.
Example: Using NUsers = 240 and a ramp-up rate of 5 users every 10 seconds (5 NRampUsers / 10 seconds) yields TRampUp = (10 / 5)(240) = 480 seconds.
Next, we need to find the number of iterations in the ramp-up to subtract from the total iterations, which yields the number of steady-state iterations required in our TPS calculation. Note: As the VUsers are activated per specified unit of time, a number of iterations will be completed by the end of that duration. The general form is shown in Equation 4.
EQ. 4: IterationsRamp = TRampUp / TTransTotal
IterationsRamp = the number of iterations completed during the ramp-up.
Example: IterationsRamp = 480 / 40.5 = 11.85
Now that we have the number of iterations in the ramp-up, we can calculate the number of iterations in the test's steady-state section. The general form is depicted in Equation 5.
EQ. 5: IterationsSteadyState = IterationsTotal - IterationsRamp
IterationsSteadyState = the number of iterations in the steady-state region.
Example: IterationsSteadyState = 50 - 11.85 = 38.15
Remember, our overall goal is to ascertain the steady-state duration as a function of VUsers along with our test parameters. The general form for the steady-state duration is depicted in Equation 6.
EQ. 6: TSteadyState = IterationsSteadyState * TTransTotal
TSteadyState = the duration of the steady-state region as a function of time.
Example: TSteadyState = 38.15(40.5) = 1545.08 seconds
We have the number of iterations in the steady-state section, the number of virtual users and the duration of the steady state. We can now calculate the TPS for the load test using Equation 1 and plugging in the calculated and given values: TPS = (38.15)(240) / 1545.08 = 5.93. The TPS we calculated is close to 6.0 and can be tweaked a little with the iteration delay. Note: As the system runs in real time, the TPS value will fluctuate as a function of the behavioral characteristics of the overall system due to any system resource issues, such as CPU usage, I/O bottlenecks, data storage concurrency and contention problems. Therefore, an average value is what you would expect at the completion of the load test.
Now that we have the ramp-up time and steady-state duration, we can calculate the total time for the test. The general form is depicted in Equation 7.
EQ. 7: TTestTotal = TRampUp + TSteadyState + TRampDown
TTestTotal = the total duration of the load test from start to finish.
Example: TTestTotal = 480 + 1545.08 + 480 = 2505.08 seconds
Converting to hours, the test will be approximately .70 hours. This value is first order due to the randomness of the response-time data of the requests, and is used only as an approximation. Note: This number is useful to back into the TPS by starting with an overall test duration (for example, an eight-hour work day), along with your ramp times and test parameters, and solving for the throughput. Sounds like fun indeed!
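To make the walkthrough easy to repeat against your own numbers, Equations 1 through 7 can be dropped into a short program. The sketch below is illustrative only; the class and variable names are invented, and it is not tied to any load-testing tool. It uses the example values: 240 VUsers, 50 iterations, a 4.5-second workflow, four .250-second think times, a 35-second iteration delay and a ramp-up of 5 users every 10 seconds.

// A minimal sketch of Equations 1-7; class and variable names are illustrative only.
public class LoadTestPlan {
    public static void main(String[] args) {
        double vUsers = 240;              // total virtual users
        double totalIterations = 50;      // iterations per VUser
        double requestResponse = 4.5;     // measured single-user workflow time (seconds)
        double thinkTime = 4 * 0.250;     // four .250-second pauses
        double iterationDelay = 35;       // pause after each workflow (seconds)
        double rampUsers = 5;             // users activated per ramp step
        double rampStep = 10;             // seconds between ramp steps

        // EQ. 2: total transaction time
        double transTotal = requestResponse + thinkTime + iterationDelay;        // 40.5
        // EQ. 3: ramp-up duration
        double rampUp = (rampStep / rampUsers) * vUsers;                         // 480
        // EQ. 4 and 5: iterations consumed by the ramp-up, and what is left for steady state
        double rampIterations = rampUp / transTotal;                             // 11.85
        double steadyIterations = totalIterations - rampIterations;              // 38.15
        // EQ. 6: steady-state duration
        double steadyState = steadyIterations * transTotal;                      // 1545.08
        // EQ. 1: throughput in the steady-state region
        double tps = (steadyIterations * vUsers) / steadyState;                  // about 5.93
        // EQ. 7: total test duration (ramp-down assumed equal to ramp-up)
        double testTotal = rampUp + steadyState + rampUp;                        // 2505.08

        System.out.printf("TPS=%.2f, steady state=%.0f s, total test=%.0f s (%.2f hours)%n",
                tps, steadyState, testTotal, testTotal / 3600);
    }
}

Running it prints roughly 5.93 TPS and a total test time of about .70 hours, matching the hand calculations above.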
Summing Up
Now, we have a test that will run approximately 42 minutes from end to end with a steady-state duration of approximately 26 minutes and a workload of 240 VUsers, running at a throughput of approximately 6.0 TPS. So from our original objectives, we have a pretty good start setting up our load-test parameters. Taking our calculations further, we can graph the relationship between TPS and the iteration delay parameter to get a better idea of how to tweak the parameter to meet the testing performance objective. Figure 1 depicts the relationship schematically. Upon inspection, you can see the curve is nonlinear, and the slope decreases as the iteration delay increases. As an interesting note, look at the TPS with a 35-second iteration delay: 6.0.
FIG. 1: TPS VS. ITERATION DELAY (240 VUsers, 50 Iterations, 5 VUsers every 10 seconds). The curve plots TPS (0 to 10) against iteration delay (20 to 100 seconds).
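To reproduce the shape of Figure 1 without graphing software, you can sweep the iteration-delay parameter in a small loop. This is only an illustration of the relationship implied by Equations 1 through 6 with the other walkthrough parameters held constant; it is not output from a load-testing tool.

// Sweep the iteration delay and report the resulting steady-state TPS (illustrative only).
public class IterationDelaySweep {
    public static void main(String[] args) {
        double vUsers = 240, iterations = 50;
        double requestResponse = 4.5, thinkTime = 1.0;                 // seconds
        for (int delay = 20; delay <= 100; delay += 10) {
            double transTotal = requestResponse + thinkTime + delay;   // EQ. 2
            double rampUp = (10.0 / 5.0) * vUsers;                     // EQ. 3: 5 users every 10 seconds
            double steadyIterations = iterations - rampUp / transTotal; // EQ. 4 and 5
            // EQ. 1 with EQ. 6 substituted; the steady-state iterations cancel,
            // so TPS is effectively vUsers / transTotal.
            double tps = (steadyIterations * vUsers) / (steadyIterations * transTotal);
            System.out.printf("delay=%3d s -> TPS=%.2f%n", delay, tps);
        }
    }
}

At a 35-second delay the loop reports roughly 5.9 TPS, the value used in the walkthrough.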
Analyze Virtual Users And Randomized Data
Now that we have a fundamental understanding of the load-testing parameters and their relationships, let's examine another part of the load test: the interaction between the running VUsers and the response-time results of the workflow transaction.
This interaction is important because it can be a red herring in the results data, as it's accumulated in real time. Table 1 depicts the alignment of the binary characteristic of the VUsers as they're activated and deactivated as a function of time. Just to get the point across, the transaction time and the iteration delay are equal. The six VUsers reach steady state as the last VUser starts (VUser 6–Iteration 6). Looking up or down at the sixth iteration, all of the boxes align vertically, and you see the maximum value of three VUsers running simultaneously. If the system behaved in this way, you could calculate with reasonable certainty the steady-state TPS and the length of the test. If we randomize the green section and look up or down at the sixth iteration and align all of the boxes vertically, we see that the maximum value is four VUsers running simultaneously. Horizontally down to the ninth iteration, we see a maximum of six. This is depicted in Table 2.
TABLE 1: ALIGNMENT OF VUSERS
TABLE 2: THE MAXIMUM NUMBER OF VUSERS
If we start to look a little deeper, the implications of an unstable system become readily apparent from the tables. As the response times start to overlap, the number of activated VUsers increases, thus increasing the TPS and, in turn, the requests per second. This creates a negative impact on the performance objectives, because we calculated the parameters accordingly, anticipating some reasonable levels as the system runs under load in real time.
"Stuff Happens…"
Setting up and running load tests can be both entertaining and interesting. However, you should always keep in mind the performance objectives and how they relate to the success of the application as it's implemented into service. In this article, we first looked at the application throughput (TPS) as a function of workload as the performance goal. Then we gained an understanding of the relationships among the various parameters and how to derive the values needed to meet the performance objectives, such as TPS. Finally, we visually analyzed how the VUsers interact with each other as the test moves forward in time. Gaining an intuitive understanding of the load test and the various components that make up this complex concept will not only reduce the amount of time spent in the initial time frames, but will aid you in sorting out various issues that arise when the system is tested in real time. As we all know, "stuff happens" whether we like it or not. So make sure that you anticipate and are prepared for the "stuff" that happens to your load test. ý
Predict QC Delivery Dates And Cost Overruns
With The Help Of Advanced Scheduling Tools, QC Managers Can Calculate The Earned Value Of Their Projects
By Andrew Makar
As organizations adopt independent quality control teams for software quality assurance, it's important that the QC manager act as a project manager as well as a QC methodologist. Much like the software project it supports, an independent QC project has its own defined scope, time and budget. The savvy QC manager keeps these factors in synch throughout the project's stint in quality control, juggling cost, quality and deadline to create a high-quality product on time and within budget. But how do you keep all the balls in the air?
Andrew Makar is a PMI-certified Project Management Professional (PMP) and has worked in project management, design, implementation and maintenance of IT solutions since 1996. Contact him at andy@amakar.com.
Starting Small
On small projects, the QC manager builds a project schedule with a calendar and a test case checklist. On complex projects and programs, he should develop a QC testing schedule that supports the software project's schedule and timeline, projected across multiple releases and milestones. Acting as project manager, the QC manager can use advanced scheduling tools like Microsoft Project or Niku's Open Workbench to support the project. Other resources, such as Elfriede Dustin's recent article in this magazine, "Best Laid Process and Budget Problems" (June 2006), emphasize the importance of developing a realistic QC schedule and actively tracking your progress against it. Finally, implementing an earned value management system (EVMS) is an excellent approach to objectively track schedule performance and budget variances. The challenges with an EVMS implementation? Wrestling with the terminology and achieving consistent implementation across the project's life cycle.
As a QC manager, you may be familiar with earned value analysis (EVA) and earned value management systems. Since project management is just one of the many tasks QC managers perform, EVA is a concept most quality managers (and even project managers) should understand and implement—but often fail to. However, if quality managers had a tool to forecast QC testing delivery dates and estimated cost overruns, corrective action could occur earlier. EVA is this tool: Used correctly, it's a valuable technique that can help you identify schedule and budget variances early in the project life cycle, enabling QC managers to communicate an objective status to project stakeholders.
According to the Project Management Body of Knowledge (PMBOK, the collection of processes and knowledge areas generally accepted as project management best practices and an internationally recognized standard [IEEE Std 1490-2003]), earned value analysis is an "objective method to measure project performance in terms of scope, time and cost." For a new QC manager playing a project management role, this may seem complex, but if you can remember a few formulas and apply some third-grade math, EVA isn't difficult to calculate. Because EVA is a point-in-time evaluation of the project schedule, the EVA metrics change as the project timeline progresses. You can calculate those metrics by asking three simple questions:
1. How much work did you plan to complete? (Planned value)
2. How much work did you actually complete? (Earned value)
3. How much did it cost to complete the work? (Actual cost)
Planned value (PV) represents the budgeted cost of all the tasks planned to start and finish at a given point in time. It's simply a measure of how much of the project budget the project manager planned to spend at any given point in time. Earned value (EV) represents the sum of all the budgeted cost of completed work at that point in time. Actual cost (AC) is the cost incurred to produce the work, and is also referred to as the actual cost of work produced (ACWP). Because EVA novices often confuse the actual cost with the earned value amount, it's important to remember that EV is derived from the original project budget, not the actual cost.
EVA in Action
To investigate EVA and QC management in action, let's go on a test drive with a fictional project. It's now the second week of your four-week, $10,000 QC testing project. You planned to have spent $5,000 worth of work at this point in time because the project would be 50 percent complete. The $5,000 figure represents the project's planned value. But at the end of the second week, you review the project schedule and determine that only 30 percent of the work is complete and that the project has actually spent $7,000. Let's apply EVA calculations to determine your project's health.
Based on the project schedule, your team should have been 50 percent complete. The planned value is calculated by multiplying the planned percentage completed by the project budget.
PV = Planned % Completed * Project Budget = 50% * $10,000 = $5,000
The earned value is determined by multiplying the actual percentage completed by the project budget. The earned value determines the amount of value delivered to the project to date.
EV = Actual % Completed * Project Budget = 30% * $10,000 = $3,000
Finally, the actual cost to deliver 30 percent of the project was $7,000. The actual cost is calculated by tracking the actual amount spent against the project budget.
AC = $7,000
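If you track more than one project, it can be handy to script the arithmetic. The following sketch simply restates the planned value, earned value and actual cost figures from the walkthrough in Java; the class and variable names are invented for illustration, and nothing here comes from a scheduling tool.

// Planned value, earned value and actual cost for the sample QC project.
public class EvaBasics {
    public static void main(String[] args) {
        double budget = 10000;           // total project budget
        double plannedPercent = 0.50;    // work planned to be complete at the status date
        double actualPercent = 0.30;     // work actually complete
        double actualCost = 7000;        // what has been spent so far (AC)

        double pv = plannedPercent * budget;   // PV = 50% * $10,000 = $5,000
        double ev = actualPercent * budget;    // EV = 30% * $10,000 = $3,000

        System.out.printf("PV=$%,.0f  EV=$%,.0f  AC=$%,.0f%n", pv, ev, actualCost);
    }
}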
By applying these calculations, you can calculate the cost and schedule variances. The cost variance (CV) measures the difference between the earned value of the work performed and the actual cost incurred. The schedule variance (SV) measures the difference between the earned value and the planned value—that is, actual progress against the project schedule. Two simple equations describe the variances.
CV = EV – AC
SV = EV – PV
The cost variance for the QC project is $3,000 - $7,000 = -$4,000. The schedule variance for this project is $3,000 - $5,000 = -$2,000. When reviewing cost and schedule variances, the ideal variance is 0 or greater. Positive variances indicate a cost savings or schedule efficiency; however, you should examine the schedule to confirm that this data is correct. In this example, the project has negative cost and schedule variances. By reviewing these calculations, you can quickly determine that your project had spent 70 percent of its budget to deliver just 30 percent of its work. The project is clearly behind schedule and over budget. If it continues at this performance level, it will significantly exceed its budget by the project's end. You'll likely need to reduce scope, extend the project schedule, or obtain more funding to complete the testing.
Cost and schedule performance indices are also helpful in communicating an objective assessment of project health to your stakeholders. Each index represents a performance ratio to either budget or schedule. The cost performance index (CPI) is a measure of a project's earned value compared to the actual costs incurred. The schedule performance index (SPI) is a measure of actual progress against the project's schedule. When reviewing these calculations, the indices should be close to 1 or greater. If the number is equal to 1, the project is on schedule. If the number is greater than 1, the project is ahead of schedule.
CPI = EV / AC
SPI = EV / PV
In your sample project, the CPI is .42 (CPI = $3,000 / $7,000) and the SPI is .6 (SPI = $3,000 / $5,000). Both numbers are less than 1, indicating that the project schedule and budget need to be examined and rectified. In the status report, your stakeholders will want to know the end costs to complete the project. If it continues at this rate, your project, originally budgeted for $10,000, will cost a total of $23,809 to complete. To calculate the estimate at completion (EAC), divide the original budget, also known as the budget at complete (BAC), by the cost performance index.
EAC = BAC / CPI
The EAC, SPI, CPI, SV and CV metrics are gauges of QC project schedule performance, providing an objective assessment of project health based on the project plan.
EVA 101
If EVA is new to the organization or the project team, your stakeholders will need an orientation to interpret the results. In EVA terms, a QC manager's job is to ensure that the SPI and CPI numbers are as close to 1 as possible. If they aren't equal to 1 or greater, you must explain the gaps and communicate the plan to bring the project back on track.
You may wonder why you can't just report the number of completed test cases, remaining test cases and defect count. These metrics are useful, but they don't provide any forecasting for project cost and schedule variances—different test cases require different levels of testing resources. While a simple metric count treats all test cases equally, EVA considers the effort required to deliver each test case. It's a lot easier to test a role-based security menu than a payroll interface with multiple business rules and expected results.
If you can answer the three simple questions, subtract and divide, EVA can easily be calculated to provide an objective assessment of your project's health. Calculating EVA is straightforward, but it's no replacement for good project and QC management. EVA requires you to track project actual start and finish dates while comparing actual costs to the project baseline. If the original project schedule is unrealistic and poorly planned, EVA won't help you. But, used
correctly, EVA is a project control technique that enforces good project management across the QC testing life-cycle.
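Continuing the sketch begun earlier, the variances, indices and estimate at completion are just as mechanical. Again, this is illustrative Java with invented names, not output from MS Project.

// Variances, indices and estimate at completion for the sample QC project.
public class EvaIndices {
    public static void main(String[] args) {
        double bac = 10000;   // budget at complete
        double pv = 5000, ev = 3000, ac = 7000;

        double cv  = ev - ac;   // cost variance: -$4,000
        double sv  = ev - pv;   // schedule variance: -$2,000
        double cpi = ev / ac;   // cost performance index: about .43
        double spi = ev / pv;   // schedule performance index: .60
        // Estimate at completion: about $23,300 unrounded; the article's $23,809
        // comes from rounding CPI to .42 before dividing.
        double eac = bac / cpi;

        System.out.printf("CV=$%,.0f SV=$%,.0f CPI=%.2f SPI=%.2f EAC=$%,.0f%n",
                cv, sv, cpi, spi, eac);
    }
}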
Step by Step: QC, EVA And Microsoft Project
In the IT project/portfolio management marketplace, a variety of vendors offer an array of EVA and EVMS tools. Without wading through reams of RFQs and vendor lunches, I've included a step-by-step approach to calculate EVA, using Microsoft Project as your EVMS tool. You can browse multiple books and online resources for instruction on Microsoft Project and schedule development. MS Project is a powerful scheduling tool offering multiple methods to accomplish the same task. For more information on MS Project schedule development, read Eric Uyttewaal's "Dynamic Scheduling with Microsoft Office Project 2003: The Book by and for Professionals" (J. Ross Publishing, 2004).
So let's get started. To determine your QC project's EVA, take these six steps in MS Project:
1. Enter resources with hourly bill rates in the Resource Sheet view
2. Develop the project schedule and assign resources to tasks in the Gantt Entry view
3. Save the project baseline
4. Record actual start/finish and actual work each week in the project plan
5. Review EVA metrics using the Earned Value views and reports
6. Calculate schedule performance and cost performance indices
Enter resources: The Resource Sheet view allows project managers to define the different types of resources available to a project. MS Project tracks work resources and material resources. Work resources are the people who consume time to accomplish project tasks. Material resources are the consumable supplies, such as steel, concrete and other construction materials, that are used to complete project tasks. Software testing projects are work comprised of different roles and cost rates. The resource sheet in Figure 1 depicts all the team members assigned to a project, with their respective hourly cost rates.
FIG. 1: RESOURCE SHEET
To calculate your project's planned value (PV) or budgeted cost of work scheduled (BCWS), you must enter your resource costs. If you're managing an IT project without defined cost rates, assign a $1.00 hourly rate to each resource to calculate the future EVA metrics. The $1.00 rate eliminates any budget forecasting, but the SPI and CPI numbers can provide a direction for budget and schedule performance.
Develop a schedule and assign resources: In the next step, you'll assign resources to all your tasks in the Gantt Entry view. Once the work breakdown structure is complete, your next step is to assign resources. To do this, press Alt-F10 and use the Assign Resources dialog box to add resources to a task. Figure 2 identifies the Assign Resource icon and the Assign Resources dialog box. By assigning resources, MS Project calculates the task's cost and work (see Figure 3).
FIG. 2: ASSIGN RESOURCES
To view the cost per task:
1. Click on the Gantt Chart view
2. Select View > Table and select the Cost table
Cost is calculated by the assigned resources' hourly rates and the expected amount of work per task. Determining the actual cost to deliver a specific test case provides a better awareness of your project's major testing deliverables and their complexity, enabling you to manage each task as a budget item in a portfolio instead of just a random list of tasks. Determining the cost of executing and managing your QC project will also help you prioritize test cases, the number of iterations, and the scope of load and volume testing.
Once you've reviewed your total project cost, verify that your total project amount does not exceed your total project budget. The QC manager often inherits the QC testing budget without the benefit of a detailed bottom-up estimate. If the project's cost is greater than the original budget, a funding supplement or scope change may be required.
Save the project baseline: After you've developed the schedule, assigned resources and confirmed the budgeted costs, next you'll baseline the project plan. This important step helps to measure and manage your project's performance. To save the project baseline:
1. Select Tools > Tracking > Save Baseline
2. The Save Baseline dialog box appears. Select Save Baseline
3. Select the Entire Project button to baseline the entire project plan, or select Selected Tasks to baseline any selected tasks
4. Click OK
The Baseline Start, Baseline Finish, Baseline Work, Baseline Cost and Earned Value fields are populated with the schedule and budget estimates. These fields will be compared against the actual start, finish and work fields during project execution.
FIG. 3: PROJECT COST
To view the baseline information:
1. Select View > Table > More Tables
2. Select the Baseline Table
3. Click Apply
Record actual start, finish and work information: Once the baseline is established and the team begins executing the testing schedule, tracking the actual cost and start and finish dates will help you calculate your project's earned value. By tracking the actual start and actual finish dates and comparing them to the baseline start and baseline finish dates, you can identify schedule variances. Tracking the actual work and comparing it to the baseline work identifies cost variances.
MS Project provides multiple methods to update the project schedule. One technique is to use the Update Tasks dialog box by selecting Tools > Tracking > Update Tasks from the Tools menu. Figure 4 displays the Update Tasks dialog box. You can update the actual work using the Tracking table. After you enter the project actual dates and hours, the earned value information is a few mouse clicks away.
Review the EVA view and reports: Setting up a project schedule for earned value takes a little effort, but once you've got it under your belt, MS Project calculates the earned value information automatically. MS Project uses the Earned Value, Earned Value Cost Indicators and Earned Value Schedule Indicators tables to display earned value information.
Before viewing the earned value information, make sure you set the project status date. Remember: EVA is a point-in-time calculation, and MS Project must know the project status date to arrive at a correct result. To set the project status date:
1. Select Project > Project Information
2. Enter the status date
3. Click OK
4. Remember to change the project status date each time you report EVA metrics
To view the Earned Value table:
1. Select View > Table > More Tables
2. The More Tables dialog box will appear, and you can select from a variety of tables
3. Select the Earned Value table and click OK
4. The current earned value information will be displayed for each task
Figure 5 displays the Earned Value table for each task in the QC schedule. Based on its project status date, our four-week sample project planned to complete $9,600 worth of work. (MS Project represents PV as BCWS.) The project completed only $5,300 worth of work, based on the percentage completed for all tasks scheduled. The actual cost to deliver $5,300 worth of work was $8,000.
The SV (Schedule Variance) and CV (Cost Variance) columns determine if the project is on schedule in project plan performance and budget. If the SV and CV columns are zero, the project is on track and on budget. In the sample project, the schedule variance is -$4,300, and the cost variance is -$2,700. The QC manager can conclude the project is behind schedule and over budget. If the project continues to perform at the same level, its estimated costs at the end are $156,981. The EAC (Estimate at Complete) column automatically calculates the estimated end costs, and the VAC (Variance at Complete) column calculates the total budget overrun.
The Earned Value report in MS Project also provides earned value data for each task in the plan. To run the report:
1. Select View > Reports
2. Click on the Costs icon and click Select
3. Click on the Earned Value report and click Select
The Earned Value report and table provide the objective metrics you need to assess the project's health. By reviewing the SV column, you can quickly identify the tasks that are running behind schedule.
FIG. 4: UPDATE TASKS
FIG. 5: EARNED VALUE
The next step is to calculate the schedule and cost performance indices for the QC status report.
Calculate schedule performance and cost performance indices: The schedule performance index (SPI) is a measure of the actual project progress against the baselined schedule. The cost performance index (CPI) is a measure of the project's earned value compared to the actual costs incurred. The indices can be calculated using the equations provided earlier, but MS Project already does the calculations.
To view the schedule performance index:
1. Select View > Table > More Tables
2. Select the Earned Value Schedule Indicators table
To view the cost performance index:
1. Select View > Table > More Tables
2. Select the Earned Value Cost Indicators table
In our sample project, the SPI is .55 and the CPI is .66. Since both SPI and CPI are less than 1, you can quickly conclude that your testing schedule is significantly behind schedule and over budget, in need of corrective action. With two simple numbers, you can get an accurate report of your project's schedule and financial health.
Ensuring Success
In a schedule crunch, project teams often overlook or reduce quality management services, but the QA manager who can quantify the cost of quality with
the EVA is at a significant competitive advantage. With EVA, in just a few easy steps, you can quickly calculate your project’s earned value. While the actual earned value calculations require only a few clicks of the mouse, the proven project management process of tracking actual performance against the project baseline demands a degree of discipline. Because EVA is an advanced project management technique that requires repeatable processes to implement successfully, organizations lacking project management maturity may struggle with it. Earned value metrics are only as good as the underlying project schedule. If the QC testing schedule is unrealistic and unbalanced, EVA metrics will only continue to report poor budget and schedule performance. But with a reasonable schedule, EVA can help you keep your testing project on track and on time. If you’re a smart QC manager, you’re also a keen project manager. In either a software testing project or a large-scale software implementation program, it’s your job to measure performance against the schedule to ensure that your project is delivered on time and within budget. Incorporating EVA into your project’s status reporting reinforces schedule management and provides objective evidence of schedule and budget variances. And using project management processes like EVA in your QC testing processes will further ensure your timely delivery of high-quality software. ý
The Security Zone
Guarding Against Data Vulnerability
Confidential Data—Handle With Care
By Michael Cooper
The protection of assets is becoming a critical issue for many companies. Not only must they protect such confidential data as the company's intellectual property embodied in patents, trademarks and copyrights, they must guard their customers' personal information, as well. This data includes name, birthdate, address, Social Security number, account numbers, medical information and more. In the wrong hands, this information can be used for illicit purposes, including identity theft.
Recently, thieves posing as a legitimate business deceived ChoicePoint, a major provider of decision-making information, into selling the sensitive records of thousands of Americans. And last year, a hacker was charged with stealing customers' passwords and Social Security numbers, birthdates and candid photographs from the wireless phone company T-Mobile. But outsiders aren't the only threat to a company's confidential data. In 2005, an AOL employee was convicted for selling a list of screen names, zip codes and other sensitive data for $28,000 to a spammer who allegedly resold it to other spammers for $52,000 and used the list of 92 million screen names to promote his online gambling Web site.
As a quality assurance director for both public and private companies, I've led QA professionals who test software applications that manage sensitive information. Safeguarding the privacy of that information, both on- and offline, is my top priority. Over the years, I've compiled a list of best practices to make sure that confidential data stays confidential. So let's get started on the path to security.
Cull Bad Apples, Protect Assets
Protecting personal information begins with careful hiring practices. Background, reference and credit checks, together with thorough interviewing, can help prevent bad apples from joining your organization. These same practices should be followed when working with contractors and consultants.
You must also be vigilant about protecting data when employees leave the company. Web mail, portable hard drives, thumb drives and camera phones make data vulnerable. A disgruntled employee can pose an acute internal threat, especially when his employment is terminated. To protect company assets, you must immediately deactivate user accounts and restrict any access so that your exiting employees have no opportunity for mischief. Tools like Web mail, portable hard drives, thumb drives and camera phones are easily abused by an unscrupulous individual.
Be a Policy Wonk
To enforce your company's privacy and security policies, you must know them. You also must communicate that information to all new employees and contractors. One best practice is to educate every new employee and contractor as a part of their orientation. Joel Kosmich, a VP at Citigroup, explains, "A clear policy must be created and used in the entire group. This must be an active policy, not one that sits on a shelf and nobody pays attention to." Kosmich adds, "Everybody must be held accountable to the sensitivity and the security of the data. When these policies aren't adhered to, strict disciplinary action must occur." If you become aware of an incident in which confidential data is compromised or stolen, you should report it to your supervisor and your company's compliance officer immediately.
Michael Cooper is enterprise director of quality assurance at Inovis, a B2B software and managed services company, and a frequent speaker at QA and software testing conferences. In 2005, he was elected president of the Atlanta QA Association. Contact him at mike@qacooper.com.
Not all companies are so strict in enforcing their policies and procedures. A few years ago, while bidding for some consulting work at a large wealth-management firm, I was given a production super-user password that would have allowed me to anonymously transfer money from clients' accounts. Unfortunately, my experience is not unusual. Caleb Sima, founder and CTO of SPI Dynamics, an application security testing company, says, "While working for a credit card company, almost all queries for development were against production database systems. Developers and quality assurance personnel had complete access to the database and could write queries that would retrieve all accounts that met the criteria of the query writer." With access to the production database, QA engineers or contractors could essentially turn your database upside down and dump out all of the sensitive data. To prevent this type of debacle, if you catch an employee violating your company's privacy and security policies, act swiftly. The consequence for security and privacy violations is often immediate termination of employment.
Get Signed Non-Disclosure Agreements and Acknowledgment of Policy Agreements
Be sure to have signed Non-Disclosure Agreements (NDAs) and Acknowledgment of Security and Privacy Policies Agreements from every employee and non-employee with access to any information not intended to be open to the public. This practice is important for protecting your company's intellectual property (IP) as well as confidential data. Patents, trademarks and copyrights are valuable company assets that should be protected.
While policy agreements offer some help in protecting a company's intellectual property and other confidential data, they're far from a perfect solution because they're only as good as the people who sign them. A company is safer using an NDA with people of known integrity, because the agreement has little value with those of questionable honesty, Sima says. "Be careful whom you share the data with. For contractors, ensure that the legal documents reference having responsibility to properly handle the confidential data according to your policies," he adds.
Good Fences Make Good Neighbors
Avoid testing in your production environment. Ideally, your test environment will mirror your production environment. I also recommend that you avoid testing in a hybrid test/production environment. A few years ago, I was placed in charge of a performance-testing project the day after dozens of e-mails were mistakenly sent to real customers during the payroll system's load test. The engineer in charge had pointed the test application server to the production database. For functional testing, it's usually acceptable to have a logical parity of the QA environments and the production environments. Although maintaining several production-like QA environments can be pricey, tools like VMware offer an inexpensive option for creating multiple test environments per server.
Caleb Sima suggests making a database backup before automated testing begins. "Often, an assessment, especially automated assessments, will insert many records into the database. Having a backup will make cleanup a lot easier. If you don't have or can't make a backup, ensure that you know the values that the assessment will be inserting into the database; this will enable the DBA to perform the necessary queries to remove the garbage entries input by the assessment."
Forget Mr. Natural: Go Synthetic
The safest way to protect production data in testing? Don't use it at all. If it's practical, I recommend building synthetic test data sets. You'll probably find it worth the effort to develop a utility to create sets of data. If the data relationships are straightforward, this task can even be done in Microsoft Excel. I often use test data from a spreadsheet for data-driving automated test scripts and manual testing. With more sophisticated applications, I've used homegrown synthetic data generators and off-the-shelf data manipulation tools, including DataFactory and Data Junction (now called Pervasive).
For an effective technique to create safe test data, try making a copy of a subset of the production data and changing key fields. For some applications, you must provide test regions in the production environment for customer and user acceptance tests. In test regions, confidential data should be manipulated so that no test records inadvertently match real customer files.
As Joel Kosmich says, "The challenge is to strike a balance of scrambling the data to a point that can be recognized by the systems under test while not losing the actual intent of the data or mismatching account data to not make an effective test."
"Keeping test data in a common format, such as an ASCII or Excel file, is also very helpful in tracing problems," he adds. "This ensures that no matter the system, a minimum of conversions occur to the data, especially when three or more systems are involved in the process."
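A scrambling utility along the lines Kosmich describes doesn't have to be elaborate. The sketch below is hypothetical, not a production tool: it keeps the shape of each field so downstream systems still accept the record, but replaces the identifying values.

import java.util.Random;

// Replace identifying fields with generated values while preserving field formats.
public class TestDataScrambler {
    private final Random random = new Random();

    public String scrambleName(int rowNumber) {
        return "Test Customer " + rowNumber;   // obviously synthetic, never matches a real customer
    }

    public String scrambleAccountNumber(String original) {
        StringBuilder out = new StringBuilder();
        for (char c : original.toCharArray()) {
            // Keep dashes and overall length intact; randomize only the digits.
            out.append(Character.isDigit(c) ? Character.forDigit(random.nextInt(10), 10) : c);
        }
        return out.toString();
    }
}

Paired with a spreadsheet or CSV export, a generator like this gives you a repeatable way to refresh a test region without copying live records.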
Use a Proxy for Credit Card Testing
Special care must be taken when testing with account numbers. Recently I received this e-mail from a colleague:
Hi Michael,
Currently I'm testing Web-based applications that require credit card transactions. While testing in "review," we use generic credit card numbers,
The Sec urit y Zone but of course once the site goes live, we can’t use any of these numbers. I’m thinking the only solution is to use a company corporate card (instead of using my own). But if there's another way, I would certainly like to know. Please advise if you can. Thanks, Jennifer
I advised my colleague not to use the corporate account and definitely not to test with her own account. I further suggested that she request test account numbers from the credit card processing company. The advantage of using test account numbers is that they act as live accounts, but no transactions settle and no money moves. The test card numbers always give a result. The address verification and security code will match any input values. Table 1 shows some sample test account numbers. Notice that each credit card company has a unique card prefix.
TABLE 1: SAMPLE CREDIT CARD DATA
Visa: 13 or 16 numbers starting with 4; sample test account number 4111-1111-1111-1111
MasterCard: 16 numbers starting with 5; sample test account number 5431-1111-1111-1111
Amex: 15 numbers starting with 34 or 37; sample test account number 341-1111-1111-1111
Discover: 16 numbers starting with 6011; sample test account number 6011-6011-6011-6611
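If your data-driven scripts pick a card by type, it also helps to keep the processor-supplied test numbers in one place rather than scattered through scripts. This small lookup is only a sketch built from the sample values in Table 1; substitute the test numbers your own processor issues, and the class name is invented for illustration.

// Test account numbers from Table 1, keyed by card type.
public class TestCards {
    public static String testCardNumber(String cardType) {
        if ("Visa".equalsIgnoreCase(cardType))       return "4111-1111-1111-1111";
        if ("MasterCard".equalsIgnoreCase(cardType)) return "5431-1111-1111-1111";
        if ("Amex".equalsIgnoreCase(cardType))       return "341-1111-1111-1111";
        if ("Discover".equalsIgnoreCase(cardType))   return "6011-6011-6011-6611";
        throw new IllegalArgumentException("No test number on file for " + cardType);
    }
}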
Keep It Private: Encryption and Masking
Whenever possible, I recommend that masking be used to protect Social Security numbers and account numbers for online and printed output. One common technique is to programmatically X out the first several digits of the SSN or account number. A masked SSN might be displayed as XXX-XX-1003.
Encryption can be an effective way to provide data confidentiality. Encryption is achieved by scrambling data so that it is time-consuming and difficult for anyone other than the authorized recipients or owners to obtain the plain text. Authorized recipients and the owner of the information share the corresponding decryption keys that allow them to easily unscramble the text to a readable format. It's important to use a standard, commercially available encryption scheme.
Encryption should also be used in automated testing to keep sensitive data confidential. Most automated testing tools encrypt passwords automatically. I recommend encrypting all confidential text strings used in automated test scripts. You can perform encryption both automatically, from the user interface, and manually, through programming. When you encrypt a string, it appears in the script as a coded string. In the following example, I use the SetSecure method in Mercury QuickTest Professional to encrypt my password:
Dialog("AutopilotLogin").WinEdit("Password:").SetSecure "3ff048zt834abc0d883d0"
The SetSecure method is recorded when a password or other secure text is entered. The text is encrypted while recording and decrypted during the test run. This technique is very useful when automated test scripts are developed by one group and run by another. In Figure 1, you can see how simple it is to encrypt test data using the data encryption utility in an automated test tool.
FIG. 1: ENCRYPTING TEST DATA (an example of test data encryption using Mercury QuickTest Professional)
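As a simple illustration of the masking approach described above (not taken from any particular tool, and with invented names), a utility method can blot out everything but the last four characters before a value is logged, displayed or written to a report:

// A minimal sketch of the masking technique; keeps the last four characters visible.
public class DataMasker {
    public static String maskSensitive(String value) {
        if (value == null || value.length() <= 4) {
            return value;
        }
        StringBuilder masked = new StringBuilder();
        for (int i = 0; i < value.length(); i++) {
            char c = value.charAt(i);
            // Keep separators so the masked value still looks like the original format.
            if (i < value.length() - 4) {
                masked.append(Character.isDigit(c) ? 'X' : c);
            } else {
                masked.append(c);
            }
        }
        return masked.toString();
    }
}
// DataMasker.maskSensitive("123-45-1003") returns "XXX-XX-1003"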
Treat Test Data as if It's "Live"
It's important to treat test data like production data. It'd be unwise to leave test results with confidential data on your desk unattended. Password-protect and encrypt files on your PC. Lock up or shred paper documents that contain confidential data and bring your laptop home at night. I personally had a business laptop stolen from my locked car.
Get to the Application Level
I recommend including application-level security testing in your overall testing strategy. Insecure Web applications carry the potential for costly losses for an organization. Many businesses deploy Web-based applications under the assumption that firewalls and other gateway security measures are sufficient protection against attack or misuse. However, insecure code creates a number of easy opportunities for malicious behavior.
Security vulnerabilities are created because insecure code can expose confidential data to hacker exploits like session hijacking, cross-site scripting and SQL injection. In Caleb Sima's opinion, SQL injection is the most common and dangerous privacy breach available. He explains that SQL injection allows attackers to mirror a Web site's production database right on their laptop or PC, all via the Web site using a standard Internet browser.
In short, secure code significantly reduces threats to data confidentiality, integrity and availability. Insecure code opens organizations to potential legal, regulatory and shareholder liability.
Get Right with Regulations
It is important to familiarize yourself with the regulations that concern your industry. The Gramm-Leach-Bliley Act (GLBA) includes provisions to protect consumers' personal financial information held by financial institutions. The Health Insurance Portability and Accountability Act (HIPAA) includes requirements for ensuring the security and privacy of individuals' medical information.
Be Vigilant
In our information-driven society, precautions to protect confidential information are increasingly important. According to The New York Times, more than 27 million Americans have been victims of identity theft in the last five years. Implementing the best practices described here will help prevent personally identifiable information from getting into the wrong hands. As IT professionals, we all need to be vigilant about using safe and secure systems, physical and electronic, to safeguard confidential information. ý
Best Practices
Don't Reinvent the Wheel
Eat those veggies and make more frequent trips to the gym, and you'll be healthier in your golden golfing years. Shy away from even a few of those Starbucks mornings or Chinese take-out lunches, and you'll save ten thousand dollars or more in a decade. Get those tires rotated and that oil changed, and you might squeeze a few more commuting months out of the old jalopy.
You know all this, of course, just as surely as you know that testing code early, often and with some common sense pays dividends down the line. But software testing, even in the arguably more agile world of Eclipse, usually involves a yawning gap between theoretical potential and practical reality.
Consider Ray Clough, a Los Angeles–area programmer who works in the aerospace industry. With a decade of coding experience, the last six years of it spent toiling in Java, Clough learned JUnit from a book and has had good success with the basics, creating test suites for application components. But he's never used mock objects to simulate a server or database, and so has never gotten around to end-to-end testing. Not that he's happy about it.
"The tools that allow [end-to-end] testing seem overly abstruse and difficult to learn," says Clough, who also teaches a UCLA extension class on J2EE struts, design patterns and JSTL. "I keep intending to learn them, but there's always something popping up to prevent me. Without the end-to-end functionality the more sophisticated methods afford, I don't feel good about the whole test process."
Clough is certainly not alone. For all the hoo-ha surrounding Eclipse, it's still the new kid on the block when it comes to integrated development environments. The IDE, which began as an IBM Canada project, has been open-sourced only since 2001, and it wasn't until 2003 that the IBM-independent Eclipse Foundation was created. So it'll take time for more advanced testing practices to percolate through the community. In the meantime, Eclipse testing basics can be summarized in two words: Use JUnit.
Renowned, Test Driven And Bundled Right In
Built by gurus Erich Gamma and Kent Beck in the 1990s, JUnit is the renowned open-source regression-testing framework that defines the ingredients nearly any unit test might need. JUnit is also part of the Eclipse core download, and getting started is as simple as working through the "JUnit Cookbook," written by Beck and Gamma (http://junit.sourceforge.net/doc/cookbook/cookbook.htm).
David Gallardo, coauthor, along with Ed Burnette and Robert McGovern, of the book "Eclipse in Action" (Manning, 2003), says JUnit is an essential development tool. "First, I write tests that will exercise the functionality that I intend my code to have; then I write the code," says Gallardo, also a developer at Topia Technology in Tacoma, Wash. "One reason this works well is that it lets me start by defining the API from the client code's point of view; it makes me think about what interfaces I, as a client, would want to see, and what features I would want exposed."
Gallardo also uses EasyMock to simplify testing when he needs two or three components to work together. The open-source tool, which provides mock objects for interfaces in JUnit tests, is hosted at SourceForge and, at least according to Gallardo, is easy to learn.
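To make the test-first and mock-object ideas concrete, here is a small, hypothetical example in the JUnit 3.x style that ships with Eclipse, using EasyMock for the collaborator. The PriceFeed and OrderTotaler names are invented for illustration; the point is the shape of the test, not the API of any real product.

import junit.framework.TestCase;
import static org.easymock.EasyMock.*;

// A hypothetical interface the class under test depends on.
interface PriceFeed {
    double priceFor(String sku);
}

// The class under test: totals an order using the price feed.
class OrderTotaler {
    private final PriceFeed feed;
    OrderTotaler(PriceFeed feed) { this.feed = feed; }
    double total(String sku, int quantity) {
        return feed.priceFor(sku) * quantity;
    }
}

public class OrderTotalerTest extends TestCase {
    public void testTotalUsesThePriceFeed() {
        PriceFeed feed = createMock(PriceFeed.class);     // EasyMock 2.x style
        expect(feed.priceFor("SKU-42")).andReturn(9.99);
        replay(feed);

        OrderTotaler totaler = new OrderTotaler(feed);
        assertEquals(19.98, totaler.total("SKU-42", 2), 0.001);

        verify(feed);   // fails if the expected call never happened
    }
}

Run it with Eclipse's built-in JUnit launcher; the EasyMock jar is a separate download from the project's SourceForge site.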
But just as staying in shape involves more than slugging it out on the treadmill several times per week, writing and delivering clean code using Eclipse entails more than unit testing. Eventually, development teams will want to build and automate tests for transferring data from one computer to another, using variable input data, recording a QA-type interaction with a GUI program and so on—all requirements beyond what's bundled in the Eclipse core.
Beyond the Basics
Enter the Eclipse Test & Performance Tools Platform. TPTP aims to provide frameworks and services to help programmers build easy-to-integrate test and performance tools. Drew University mathematics and computer science professor Barry Burd does a yeoman's job describing several TPTP highlights in his article "Better Software the Eclipse Way" in the winter 2006 Eclipse Review (available for download at www.eclipsereview.com).
Yet for all TPTP's potential—the platform promises to "enable the emergence of completely new tool models," according to the project Web site—so far, it's tough to focus on any best practices for using the thing. "I don't know anyone using TPTP," Gallardo says. "I mentioned it to one of the QA guys here at my company, and they weren't aware of TPTP, but thought it sounded interesting."
And how does Gallardo's coauthor Ed Burnette weigh in? "I've read about [TPTP] and taken a tutorial, but haven't had occasion to use it yet," says Burnette, a North Carolina–based developer at SAS who's written several other articles and books on Eclipse, including O'Reilly's 2005 "Eclipse IDE Pocket Guide." It seems that even in the wet-behind-the-ears
39
Best Practices the-ears Eclipse world, TPTP is radically new technology, and the best way to explore it might be to Google obsessively and get active in the eclipse.tptp newsgroup. Eclipse is made up of several active communities; beyond testing tools and frameworks available for free, you can sample a burgeoning collection of Eclipse-compatible commercial offerings. One is WindowTester, a systemlevel user interface tool from Instantiations, Inc., headquartered in Portland, Ore. “It’s the job of various member companies like Instantiations to utilize this raw [Eclipse TPTP] framework to build polished testing tools,” says Instantiations chief technology officer Dan Rubel.
In the Reality-Based Community…
For those whose job it is to write code and build a business, appreciating the Eclipse IDE’s finer testing points is less important than building a working application and finding paying customers. Tripconnect.com CTO Isaac Sacolick concocts a complex mix of testing processes and tools: Excel and tikiwiki (tikiwiki.org) to store requirements, bugs and test cases; Eclipse/CVS to coordinate all code reviews; MySQL’s slow query log to look for slow query statements; and JMeter to load-test the application after large development cycles. Unfortunately, only some of these work with his chosen IDE.

This partial-Eclipse approach “is not my ideal process,” says New York City–based Sacolick, whose company enables people to get travel advice from people they know or who share similar interests. “I’d rather use a project/task management tool and bug management tool that plugs directly into the Eclipse environment.” However, in a startup with just two full-time developers, Sacolick says he simply can’t afford to adopt every best practice at once, especially when it comes to Eclipse testing, which he deems “part art and part science.”
Reinventing the Wheel
Whether constrained by a lack of resources or opaque complexity, the key to Eclipse testing is similar to the secret for achieving good physical, financial or even automotive health—namely, don’t give up. Sacolick, an 18-month Eclipse veteran, continues to evaluate Java profilers and expects to make a change within the next six months as his development team grows. And Gallardo, readying himself for a Web application testing project, has been eyeing Cactus, an Eclipse-friendly framework for unit-testing server-side Java code.

“You don’t want to reinvent the wheel,” says Gallardo. “Sometimes, a wheel-shaped object is all you need, and that’s okay, but generally you end up with a rickety wooden wheel that barely meets your needs when, with a little extra effort up-front, you could’ve gotten a fast alloy wheel for free.”

Best Practices columnist Geoff Koch covers software from Lansing, Mich., where he eats lots of in-season veggies and hits the gym several times each week. Unfortunately, he also spends more than $1,000 per year at Starbucks and the local knockoff, Beaner’s. Write to him at koch.geoff@gmail.com.
Index to Advertisers
Advertiser                                   URL                                        Page
Agitar Software Inc.                         www.agitar.com/learnmore                   43
Axosoft                                      www.axosoft.com                            2
Bredex                                       www.bredexsw.com                           16
Testers Choice Awards Nominations            www.stpmag.com/testerschoice               25
Eclipse World Conference                     www.eclipseworld.net                       17
Entrek Software Inc.                         www.entrek.com                             40
Fortify Software                             www.fortifysoftware.com                    6
IBM Corp.                                    www.ibm.com/takebackcontrol/flexible       20-21
Instantiations                               www.instantiations.com/codepro             38
Parasoft Corp.                               www.parasoft.com/stpmag                    3
Seapine Software Inc.                        www.seapine.com/qawstp                     4
Software Test & Performance Conference       www.stpcon.com                             30-31
SPI Dynamics                                 www.spidynamics.com/QA                     34
SQS                                          www.sqs.com                                44
Test & QA Report                             www.stpmag.com/tqa                         41
Don’t Miss Out On Another Issue of The Test & QA Report eNewsletter!
Each FREE weekly issue includes original articles that interview top thought-leaders in software testing and quality trends, best practices and Test/QA methodologies. Get must-read articles that appear only in this eNewsletter!
Sign up at: www.stpmag.com/tqa
Future Test
Make Security More Than An Afterthought
By Ed Adams

I used to shoot people for a living. And from that experience, I gained valuable insight into developing quality software. Years ago, I was a mechanical design engineer working on nonlethal weapons systems. One weapon I helped design was a perimeter-weighted net that could be fired as a ballistic projectile to restrain, but not harm, the target. As part of our field testing, I’d take fellow (unsuspecting) engineers out into a field and “shoot” gun-mounted canisters containing these nets at them, just to see if they could escape.

Before we began shooting nets at our coworkers, we’d already done a tremendous amount of work in the design phase of the project. And we performed extensive testing before we constructed a prototype to test on live people. In the mechanical design world, you follow an established process for assessing the quality of your design before you build it:
1. Model the application and create a design.
2. Test the design. Here, you make sure there are no safety or security flaws before you build.
3. Analyze test results and make any needed design changes.
4. Feed the improved design back into the test workflow and assess the new design.
5. Repeat the process on the model until it passes the requirements—functional and safety/security.

Only after we tested our model and verified that the design was architecturally sound and safe did we start building a “beta.” This is the direction that software development and testing are going—or where they should be going.

Seeking Security
Enrollment in computer science as a major has declined from 3.7 percent in the U.S. to only 1.1 percent, largely because of outsourcing to other countries. This trend is not a problem in itself as long as companies retain qualified testing and assessment capabilities when development is outsourced. Unlike functionality or performance, security is a critical component of software testing. If you ship a product that’s difficult to use or performs poorly, your users may get annoyed. However, if you ship without security in mind, you’re putting your entire business at risk.

Even though many testers now understand what it means to test for security, security testing is far from simple. It takes imagination, technical expertise and a security mindset, which means thinking in terms of abuse cases rather than use cases and negative rather than positive requirements. You can teach anybody how to run test cases in a test plan, but finding security bugs takes training and a mind shift.

I hear testers asking for a security tool that can find all security vulnerabilities on its own. Though a couple of decent security testing tools are on the market today, most try to do too much. These tools will be nearly useless in five years because most new applications will be written (or rewritten) in Java or .NET languages. Testing Web applications will evolve to a three-step exercise: Drive faults from both the server and client, modify results as necessary through a proxy, and record the outcome. Fault simulation will also be a critical future test category for security, enabling a tool to automatically test many components of the system at the same time rather than isolated components.

Security must be integrated into organizations’ software development life cycle (SDLC) via guidance and education. Top-down management pressures will force development and test teams to learn new security skills, and they’ll seek this in the form of training and knowledge retention.
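Adams’s point about abuse cases and negative requirements is easy to picture at the unit level. The sketch below is hypothetical: the UserLookup class and its validation rules are invented stand-ins for real application code, and the tests assert what an attacker must not be able to do rather than what a user should.

import junit.framework.TestCase;

// Minimal stand-in for code under test; the class and its rules are hypothetical.
class UserLookup {
    String findByName(String name) {
        if (name == null || name.length() > 64 || name.indexOf('\'') >= 0) {
            throw new IllegalArgumentException("suspicious input rejected");
        }
        return "user:" + name;
    }
}

public class UserLookupAbuseCaseTest extends TestCase {

    // Abuse case: a classic SQL-injection payload must be refused outright.
    public void testSqlInjectionStringIsRejected() {
        try {
            new UserLookup().findByName("x' OR '1'='1");
            fail("malicious input should have been rejected");
        } catch (IllegalArgumentException expected) {
            // Negative requirement satisfied.
        }
    }

    // Abuse case: absurdly long input probes length handling.
    public void testOverlongInputIsRejected() {
        StringBuilder huge = new StringBuilder();
        for (int i = 0; i < 100000; i++) {
            huge.append('A');
        }
        try {
            new UserLookup().findByName(huge.toString());
            fail("overlong input should have been rejected");
        } catch (IllegalArgumentException expected) {
            // Negative requirement satisfied.
        }
    }
}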
Threat Modeling
In the near future, threat modeling will be the most significant change in the SDLC. Within five years, threat modeling will become a standard practice early in the SDLC, with the aid of business analysts or product managers allowing companies to get a full picture of their software problems. Threat models provide a fast view to the biggest threats, but even more important, they can be reused as new vulnerabilities become known. Pump a new threat into the model, and you can instantly determine the potential risk for your specific context and application. Validations and checks against the threat model can be peppered throughout the SDLC, becoming a process much like requirements are today. To develop software that really works, threat models should be tightly coupled to the SDLC.

Testers are stretched pretty thin as is, and with new security requirements, test teams will have to develop specialists. To be productive, security testers need to be dedicated to security throughout the SDLC. We’ll still have plenty of “legacy code”—anything five or more years old—that needs to be validated and tested, but unlike security, much of that can be automated. I’ve already seen the role of Security Guru develop at several companies—usually one person who thinks like an attacker or hacker and perpetually tries to compromise the application. This is a good start, but we’ve got a long way to go to reach nirvana.

Ed Adams is the CEO of Security Innovation.
November 7-9, 2006 The Hyatt Regency Cambridge Boston, MA
Register Today for Early Bird Discounts! See Back Cover
Course Listing
SUPERB SPEAKERS! Scott Barber, Rex Black, Clyneice Chaney, Ross Collard, Elfriede Dustin, Jeff Feldstein, Robert L. Galen, Robin F. Goldsmith, Hung Q. Nguyen, Robert Sabourin, Mary Sweeney and dozens more!
TOTAL IMMERSION!
TERRIFIC TOPICS!
Choose From 67 Classes
Managing Test/QA Teams • Testing SOA and Web Services • C# and ASP.NET Test • Security Testing • Locating Performance Bottlenecks • Just-in-Time Testing • Effective Metrics • Requirements Gathering • Improving Java Performance • Agile Testing • Risk-Based Testing Strategies • Test Automation
Pick From 8 In-Depth Tutorials • NEW! Birds-of-a-Feather Sessions • Network With Colleagues • Ice Cream Social • Reception in Exhibit Hall • Pose Your Questions to 25+ Exhibitors • Mingle With More Than 40 Speakers
Welcome
Save the Dates! November 7-9 at the Hyatt Regency Cambridge in Boston

From service-oriented architectures to AJAX to application security, change is the one constant in software development. In the face of continuous innovation, experts like you who are trying to improve the quality of your company’s software encounter new challenges every day. And with code size and complexity increasing year by year, is it any wonder your information needs continue to grow as well?

Now in its third year, BZ Media’s Software Test & Performance Conference provides you with the practical, how-to information that will help you meet these challenges and make you successful in your profession. The technical program for this conference was designed to serve the needs of people just like you: test and QA managers, development managers, test-focused developers and senior testers.

The conference addresses such diverse topics as requirements management, security testing and test automation. You can learn about testing service-oriented architectures and explore the fundamentals of database testing and how to recognize performance bottlenecks. Or you can delve into the intricacies of profiling J2EE applications, learn about performance tuning .NET applications and understand how to use metrics effectively to improve software quality.

The three-day conference program packs in eight daylong tutorials plus more than 60 classes ranging from 60 to 90 minutes in length. The faculty was handpicked for its technical expertise and ability to communicate. You’ll meet and learn from industry luminaries like Rex Black, Rob Sabourin, Robin Goldsmith and Bob Galen. The program also features exciting keynote presentations to help give you a sense of where the industry is headed and what challenges you’ll likely be facing next year.

While participating in the technical program is important, equally valuable is the opportunity you will have to meet with other software professionals outside the classroom. Conference activities are arranged to maximize your learning experience while leaving you time to compare notes with your classmates and confer with members of the faculty. As an added bonus, the conference schedule and format will provide time for you to discover the latest products—which will be presented in the exhibit area—and pick the brains of the tool vendors.

Read through the class listings and build a custom course of study over three days that will give you and your team tools and techniques that you can take back to the office and put into effect immediately. We look forward to seeing you at the Software Test & Performance Conference.
Register today at www.stpcon.com
Lindsey Vereen Conference Chairman
Event Schedule
NEW! BIRDS OF A FEATHER SESSIONS
Join your colleagues and our instructors for informal discussions. Pose your toughest questions on the following break-out topics:
• Test Automation
• .NET
• Testing for Security
• Java
• Agile Testing
• Improving Performance
Tuesday, November 7, 8:00 pm – 10:00 pm
Wednesday, November 8, 8:30 pm – 10:30 pm
Note: Sign-up for these sessions will be on-site.
Monday, November 6
4:00 pm – 7:00 pm    Registration

Tuesday, November 7
7:30 am – 7:00 pm    Registration
8:00 am – 9:00 am    Continental Breakfast
9:00 am – 10:30 am   Full-Day Tutorials
10:30 am – 11:00 am  Coffee Break
11:00 am – 12:30 pm  Full-Day Tutorials
12:30 pm – 1:45 pm   Lunch Break
1:45 pm – 3:15 pm    Full-Day Tutorials
3:15 pm – 3:45 pm    Coffee Break
3:45 pm – 5:00 pm    Full-Day Tutorials
8:00 pm – 10:00 pm   Birds of a Feather Session

Wednesday, November 8
7:30 am – 7:00 pm    Registration
7:30 am – 8:30 am    Continental Breakfast
8:30 am – 10:00 am   Technical Classes
10:00 am – 10:30 am  Coffee Break
10:30 am – 12:00 pm  Technical Classes
12:00 pm – 1:15 pm   Lunch Break
1:15 pm – 2:30 pm    Technical Classes
2:30 pm – 3:00 pm    Coffee Break
2:30 pm – 7:00 pm    Exhibit Hall Open
3:00 pm – 4:15 pm    Technical Classes
4:15 pm – 4:30 pm    Break
4:30 pm – 4:45 pm    Industry Keynote
4:45 pm – 5:30 pm    Keynote (Rex Black)
5:30 pm – 7:00 pm    Reception in Exhibit Hall
8:30 pm – 10:30 pm   Birds of a Feather Session

Thursday, November 9
7:30 am – 4:00 pm    Registration
7:30 am – 8:30 am    Continental Breakfast
8:30 am – 10:00 am   Technical Classes
10:00 am – 10:30 am  Coffee Break
10:30 am – 12:00 pm  Technical Classes
12:00 pm – 1:15 pm   Lunch Break
12:00 pm – 4:00 pm   Exhibit Hall Open
1:15 pm – 2:00 pm    Ice Cream Social in Exhibit Hall
2:00 pm – 3:15 pm    Technical Classes
3:15 pm – 3:45 pm    Coffee Break
3:45 pm – 5:00 pm    Technical Classes

Contents
Tutorial/Class Descriptions . . . . . . . 4
Faculty Biographies . . . . . . . . . . . 16
Hotel and Travel Information . . . . . . 21
Conference Planner . . . . . . . . . . . 22
Pricing and Registration . . . . . . . . 24
Exhibit Hours: Wednesday, 2:30 pm – 7:00 pm; Thursday, 12:00 pm – 4:00 pm
Register at www.stpcon.com Updated July 26, 2006
Full-Day Tutorials
"This is the best conference I have attended. The instructors were extremely knowledgeable and helped me look at testing in a new way.” —Ann Schwerin, QA Analyst, Sunrise Senior Living
FULL-DAY TUTORIALS Tuesday, Nov. 7 9:00 am – 5:00 pm
NEW T-1 Assessing Your Test Team Effectiveness, Efficiency and More By Rex Black
As a test manager, you’re probably looking for ways to demonstrate your team’s value and to improve how it works. This tutorial delivers. You’ll learn techniques to assess your team that are driven by insightful questions and careful data analysis. By applying the ideas in this tutorial to each of the 12 critical testing processes, you’ll know where you and your team stand. This one-day tutorial is very hands-on, with lecture used primarily to stimulate discussion. After each process is discussed, attendees will work through exercises that estimate performance metrics for their own test teams. After each exercise, attendees have a chance to discuss their results.
T-4 Twenty-One Ways to Spot—and Fix— Requirements Errors Early By Robin Goldsmith While many organizations have begun paying closer attention to defining requirements, few fully realize the need to know that their requirements are accurate and complete, and few know how to test requirements effectively. Most rely on one or two weak methods and have little awareness of how many errors they’ve missed— errors that later turn into expensive feature creep. This interactive class explains why it’s so hard to test requirements, and it introduces 21 increasingly powerful methods to help you find frequently overlooked requirements errors when they are easiest and least expensive to fix. Following the instructor’s proven CAT-Scan Approach, participants apply the techniques successively to a real case and discover how each different method reveals additional, otherwise overlooked defects in the requirements. Participants learn ways to find previously overlooked requirements, increase meaningful customer/user involvement, enhance communications and understanding and test the adequacy of requirements definitions.
T-2 Testing Techniques: Theory and Application By BJ Rollison This tutorial presents the formal theory and practical application of functional (behavioral) and structural (coverage) testing techniques. This tutorial will teach functional testing techniques, including exploratory testing, boundary value analysis, equivalence class partitioning and combinatorial analysis. Structural testing techniques covered include statement coverage, decision/branch coverage, condition and basis path coverage. By attending this tutorial, you’ll learn how to use functional testing techniques to establish a solid foundation and minimum baseline of test cases. You'll understand how structural testing techniques can be used to design additional tests from a white box approach to complement the test effort, to ensure that critical paths in the code have been exercised and to achieve higher code coverage results. You will also learn how to apply both black box and white box test design approaches to test more effectively.
T-3 Testing in Highly Iterative, Quasi-Agile Projects—Practical Strategies for Mixed Culture Projects By Timothy D. Korson In the highly iterative, fast-paced environment of agile development projects, the traditional approaches to testing, quality assurance, requirements gathering and team interactions break down. QA managers trying to encourage best practices recommended by CMMI and SPICE find themselves at odds with developers trying to adopt best practices as recommended by the Agile Manifesto. In the end, no one wins. Because of the constraints of corporate policies and management edicts, developers can’t fully adopt agile practices. Because the developers do adopt as much of the agile process as they can get away with, the QA team finds that traditional approaches to quality management no longer work. Such projects must succeed in a “quasi-agile” development environment. This tutorial will introduce you to software development processes and practices that affect your world. You will learn practical strategies for effectively integrating testing processes with modern software engineering processes. You will learn how to create effective tests, both component-level and system-level, for modern software systems. Detailed case studies will convey specific techniques for testing both components and entire systems.
Special Track! Development or Test/QA Managers’ Classes
Need a higher-level view? You’ve come to the right place! Here are classes just for you—but feel free to deep-dive into any class or topic you want!
T-1  Assessing Your Test Team Effectiveness, Efficiency and More – Black (Full-Day Tutorial)
T-5  Foundations of GUI Test Automation – Makedonov (Full-Day Tutorial)
107  Designing for Testability – Feldstein
108  Testing the Software Architecture – Sangal
109  Effectively Training Your Offshore Test Team – Hackett
203  How to Turn Your Testing Team Into a High-Performance Organization – Hackett
207  Analyze the Return on Your Testing Investment – Black
209  Quality Throughout the Software Life Cycle – Feldstein
304  Managing Acceptance Testing Cycles More Efficiently – Makedonov
403  Deciding What Not to Test – Sabourin
502  Creating and Leading the High-Performance Test Organization, Part 1 – Galen
509  Recruiting, Hiring, Motivating and Retaining Top Testing Talent – Feldstein
602  Creating and Leading the High-Performance Test Organization, Part 2 – Galen
702  S-Curves and the Zero Bug Bounce: Plotting Your Way to More Effective Test Management – Bradshaw
705  Coding Standards and Unit Testing—Why Bother? – Hendrick
707  Performance Testing for Managers – Barber
801  Effective Metrics for Managing a Test Effort – Bradshaw
806  Best Practices for Managing Distributed Testing Teams – Stevens
"Great topics—well presented by reputable presenters. Having attended two years in a row, I have yet to be disappointed." —Ardan Sharp, QA Manager, SunGard
NEW T-5 Foundations of GUI Test Automation By Yury Makedonov
Testers and managers find themselves between a rock and a hard place when implementing test automation. From one side they are bombarded by a constant stream of sales pitches promoting the “click, click, click” record-and-replay approach. From the other side they are pressed by test automation gurus promoting their own frameworks. So, it’s a challenge to keep their sanity under these conditions and to make sensible test automation decisions on tool and framework selection and test automation management. In this real-world tutorial, major myths and misconceptions are dispelled, and explanations are provided on how to ensure the efficiency of GUI test automation. The tutorial covers major principles and current industry standards of GUI test automation, how to decide if a specific project should be automated or not, how to define a scope for test automation, how to select a test tool to automate a specific application and much more.
NEW T-6 Software Endgames: How to Finish What You’ve Started By Robert Galen
We’ve all survived more than one software project that ended badly, where either the requirements were misunderstood or were implemented poorly. Or overall quality targets couldn’t be met because there were simply too many defects. Or the team simply couldn’t decide on priorities and in which direction to steer the project. Many projects fail during testing. Not because of the testing per se, but because of the massive discovery of defects and functional gaps that indicate the true viability of the project. I call this time the Software Endgame, and I’ve spent a great deal of time negotiating its challenges through numerous software projects. This presentation focuses on a set of high-level practices and techniques that will help improve your management and project steering within the endgame, providing guidance that will increase the odds of successfully delivering a project. You’ll learn how to create an endgame delivery map that directs your release and testing milestones via entry/exit criteria; the importance of release criteria within the endgame and high-level rules of thumb for defining them; how to manage defect repairs—where to focus your efforts and scheduling rules of thumb—plus the many options you have for “fixing” defects.

T-7 Using Metrics to Improve Software Testing By Alfred Sorkowitz
Software metrics can aid in improving your organization’s testing process by providing insight and early visibility into the “real” status of the testing effort and helping to make assessments as to whether progress, productivity and quality goals are being met. This tutorial presents a practical guide on how to start taking advantage of these new tools/techniques to aid in improving the testing process. These metric-based tools and techniques have successfully been used by software test teams, software developers and SQA/IV&V staffs. In this class, you will receive an overview of software quality goals, criteria and metrics. You will also learn:
• The cost of inadequate software testing, a set of government/industry “best practices” metrics that can track the real status, quality and productivity of the testing effort, as well as provide an indication of future problems.
• Software complexity metrics: A new structured testing methodology that uses metrics to aid in developing software that is easier to test and maintain and selecting an appropriate set of paths for more thorough testing.
• How to integrate software metrics into the testing process.

NEW T-8 Creating Agility and Effectiveness in Software Testing By Linda Hayes
Most companies resist test automation until the software is stable, reasoning that any savings from automation will be offset by the maintenance required to keep up with changes. Also, traditional record/script/replay approaches can’t be implemented until the code is functional, which is too late in an agile development environment. This session will present an incremental approach to test automation that supports an agile development environment by allowing automated tests to be written before the code and then be rapidly updated as changes are introduced. You will learn how to improve code development practices through automation, define executable requirements and write self-documenting automated tests before code is developed. You will also learn how to implement a test tool and platform agnostic automation architecture and automate test case maintenance for rapid response to changes.
TECHNICAL CLASSES Wednesday, Nov. 8 8:30 am – 10:00 am
101 Seven Low-Overhead Software Process Improvement Methods By Robin Goldsmith For many, software process improvement is synonymous with highoverhead, long-term, organizationwide initiatives that often are resisted and fail to produce the desired results. In this interactive presentation, you’ll learn seven methods that can help you make software faster, cheaper and more reliable without all the hoopla. Key to meaningful results is recognizing, measuring and then specifically improving high-payback aspects of the instructor’s proven REAL software process, which often differs considerably from what we presume we are doing. In truly agile fashion, applying these methods proficiently focuses efforts most efficiently on effectively producing useful software from the start.
102 Just-in-Time Testing Techniques and Tactics, Part 1 By Robert Sabourin As the Boy Scout credo goes, “Be prepared.” In this class, you will learn how to be ready for just about anything in a software testing project within the volatile environment of a Web or e-commerce software project. Managers will learn an array of techniques to manage and track software testing in chaotic environments—specifically, projects with continuously changing requirements and shifting priorities. Members of the development and testing teams will learn how, even while working with minimal information, to develop tests and converge the product development effort.
103 Automated Database Testing: Testing and Using Stored Procedures By Mary Sweeney Today’s heterogeneous data environments place an increasingly heavy burden on test engineers. Applications, whether Web-based or client/server, must be tested for seamless interface with the back-end databases; this typically goes far beyond what the popular test automation tools can provide. The intricate mix of client/server and Web-enabled database applications are extremely difficult to test productively. As a result, you are increasingly expected to know how to create and use SQL queries, stored procedures and other relational database objects to effectively test data-driven environments.
"Good immersion in testing concerns." —James Fields, Development Manager, Data-Vision Inc.
In this class, you will learn about testing at the database layer as an important adjunct to current tests. Using demonstrations and code examples, the instructor will present tips and techniques for creating efficient automated tests of the critical database back end using SQL, scripting languages and relational database objects. You’ll learn why testing of database objects and stored procedures is necessary; how simple and effective automated tests for the back end can be created using various programming languages, including PERL and VBScript; and how to successfully test database objects, such as stored procedures and views, with many examples and code.
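As a rough illustration of the same idea in Java (the class itself uses Perl and VBScript), the sketch below drives a hypothetical GetCustomerBalance stored procedure directly through JDBC and checks the result. The driver, connection URL, procedure name and expected values are placeholders, not course material.

import java.sql.*;
import junit.framework.TestCase;

public class StoredProcedureTest extends TestCase {

    public void testGetCustomerBalance() throws Exception {
        // Placeholder driver and connection details; substitute your own.
        Class.forName("com.mysql.jdbc.Driver");
        Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost/storedb", "testuser", "testpass");
        try {
            // Exercise the stored procedure directly, below the application UI.
            CallableStatement call = con.prepareCall("{call GetCustomerBalance(?)}");
            call.setInt(1, 42);                          // a known test customer
            ResultSet rs = call.executeQuery();
            assertTrue("procedure returned no rows", rs.next());
            assertEquals(150.00, rs.getDouble("balance"), 0.001);
        } finally {
            con.close();
        }
    }
}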
Special Track! Need To Tune Performance?
Crank up your applications’ performance by choosing from these hot classes!
305  How to Optimize Your Web Testing Strategy – Nguyen
402  Accelerate Testing Cycles With Collaborative Performance Testing – Cavallaro
408  Techniques for Testing Packaged Application Performance – Feaster
503  Pinpointing and Exploiting Specific Performance Bottlenecks – Barber
504  Performance Tuning ASP.NET 2.0 Applications, Part 1 – O’Mara
603  SOA Performance Testing Challenges – Barber
604  Performance Tuning ASP.NET 2.0 Applications, Part 2 – O’Mara
608  Verifying Software Robustness – Collard
707  Performance Testing for Managers – Barber
803  Real-World Performance Testing Lab for (Almost) Free – Flint
805  Building a Bridge Between Functional Test Automation and Performance Testing – Sody

104 Lessons Learned in Test Automation, Part 1 By Elfriede Dustin
This class will present and discuss a series of automated testing lessons learned from actual experiences and feedback from real projects. You’ll learn how to avoid some typical false starts and roadblocks when you implement your test automation efforts. Part 1 of this class includes a discussion of better ways to define automation criteria, how to avoid duplicating the development effort when designing automated test cases, how to create reusable automated test cases, the need to verify all vendor claims in your own environment, the pitfalls of delegating the tool selection to a reseller or consultant and how to select the right tool. You’ll also learn how to avoid losing sight of the testing efforts because developers or testers are too busy coming up with elaborate scripts to automate their unit and system tests.
NEW 105 Code Coverage Metrics and How to Use Them By Rex Black
More and more testers and programmers are using tools that provide code coverage metrics. These metrics tell the tester or programmer how much of the code has been covered by a given set of tests and, more important, what conditions might not be covered. In addition, some tools can evaluate the coverage of data flows. Some tools can also provide insight into the complexity and, thus, the likely difficulty level of future refactoring of the code. In this practical class, we’ll examine the following code coverage metrics and how you can use them to write better code or tests: statement, branch, condition and loop coverage metrics; McCabe Cyclomatic complexity and basis path coverage; and data-flow coverage via set-use pairs. Each metric will be illustrated using a real program, with tests developed and run to achieve given levels of coverage. The code sample provided can be used, along with the course materials, to evaluate a given tool’s ability to provide useful code coverage metrics.
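A tiny, made-up example (not from the course materials) shows why the distinctions matter: in the sketch below, one test reaches every statement, yet branch and condition coverage require the additional tests.

import junit.framework.TestCase;

// Invented method whose compound condition illustrates statement vs. branch vs. condition coverage.
class Discount {
    static double rateFor(double total, boolean preferred) {
        double rate = 0.0;
        if (total > 100.0 && preferred) {   // compound condition
            rate = 0.10;
        }
        return rate;
    }
}

public class DiscountCoverageTest extends TestCase {

    // This one test executes every statement (100% statement coverage)...
    public void testLargePreferredOrder() {
        assertEquals(0.10, Discount.rateFor(150.0, true), 0.0001);
    }

    // ...but branch coverage also needs the false outcome of the if...
    public void testSmallOrder() {
        assertEquals(0.0, Discount.rateFor(50.0, true), 0.0001);
    }

    // ...and condition coverage needs each operand evaluated both true and false.
    public void testLargeNonPreferredOrder() {
        assertEquals(0.0, Discount.rateFor(150.0, false), 0.0001);
    }
}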
NEW 106 Foundations of GUI Test Automation Using C#, Part 1 By BJ Rollison
There is a trend in the software industry for companies to seek candidates whose qualifications include the ability to write automated tests. Additionally, some companies realize the limitations of commercially available tool sets that use proprietary scripted languages. So, modern programming languages are being employed to develop more effective and more robust automated tests. But for testers who lack a programming background, the initial hurdle of learning a programming language can be a bit intimidating. This is further complicated by the fact that most programming courses teach us how to develop applications and not how to use a programming language to write automated tests. This class is designed as a starting point for testers who lack a programming background, or for those with an understanding of programming concepts but are unfamiliar with automated testing using C# to test a Windows application. This class discusses common testing tasks illustrated with well-commented code examples and developed with free tools available on the Internet. In this session, you will learn how to launch, gather information, synchronize and close an application under test (AUT); manipulate and send test data to an AUT; and generate test data.
107 Designing for Testability NEW By Jeff Feldstein
Many developers believe testing begins when all the features are complete and they hand off their work to the test team. While this may be what actually occurs in many development projects, it is far from the ideal. Software quality assurance begins in the definition phase of the project. One important aspect of software quality to consider is testability of the software architecture. Testability of your application can have a profound effect on its overall quality. This class will explain what to look for in a testable architecture, avoiding common mistakes and pitfalls, how to present your ideas to the development team and how to build test automation systems that take advantage of the testable architecture.
108 Testing the Software Architecture NEW By Neeraj Sangal
Automated testing of the software architecture can keep quality from degrading and help preserve the design intent. In this class, a new lightweight approach to specifying and verifying the architecture is presented. Inter-module dependencies are utilized to represent the architecture of a software system using a dependency structure matrix (DSM). Once the architecture is specified, it can then be verified through automated tests during development. Furthermore, architectural violations can be easily prioritized for remediation. This approach will be presented through real-life examples by applying it to a number of commonly used applications. Dependency analysis will be used to extract the architecture for applications such as Ant, JUnit, jEdit and the Eclipse platform. We will look at real examples of software development spanning several years to see how architecture evolves, how it often begins to erode and how regular testing can prevent this erosion.

"Excellent conference—provided a wide range of topics for a variety of experience levels." —Carol Rusch, Systems Analyst, Associated Bank
109 Effectively Training Your Offshore Test Team By Michael Hackett Working with offshore teams is a fact of life now for domestic test leads and managers, but many are still struggling to make their global test team work effectively. Training your offshore test team is critical to the success of your projects. If done right, training can help minimize your stress and late-night phone calls and ensure that you are getting the right information from the offshore team to make sure your testing effort is successful. Training of the offshore team needs to focus on a broad range of topics and must be specifically designed to the unique needs of that team. In this class, we will discuss the key elements of successful offshore testing, including training in the areas of process, product/domain knowledge and testing techniques, and how training can be used as a retention tool for offshore staff. The class will include several real world examples based on the speaker’s experiences working with teams in the most common offshoring locations.
Wednesday, Nov. 8 10:30 am – 12:00 pm
NEW 201 Prevent Showstopper Overruns With Risk-Based Proactive Testing By Robin Goldsmith
Project budget and schedule overruns frequently are caused by late, unplanned significant redesign and rework to fix showstopper errors. Traditional, reactive testing misses too many showstoppers or catches them too late to fix easily. In contrast, Proactive Testing’s powerful risk analysis techniques identify many of the up to 75 percent of showstoppers that are ordinarily overlooked. Moreover, Proactive Testing can drive development to build systems in a truly agile different way that avoids much of the rework that showstoppers traditionally would have necessitated. The class will cover risk-based testing fundamentals and the limitations of traditional reactive testing approaches. You will learn how to continually refocus on testing higher-level risks more frequently and earlier in the testing cycle, as well as methods for identifying ordinarily overlooked showstoppers and reducing overruns.
202 Just-in-Time Testing Techniques and Tactics, Part 2 By Robert Sabourin
Please see description under Class 102.

203 How to Turn Your Testing Team Into a High-Performance Organization By Michael Hackett
All development managers, test managers and their organizations are looking for ways to improve quality. Quality improvement can come in many forms: reducing risks by delivering higher and predictable quality products; optimizing time-to-market; increasing productivity; and building a more manageable organization. Some managers look for quality improvement by attempting to implement a more standard or formal process. This sounds good, but where is the road map for how to get there? This class will help! You’ll learn how to evaluate your test process and strategy, create a culture for change, implement change and use effective methods for measuring improvement.

204 Lessons Learned in Test Automation, Part 2 By Elfriede Dustin
We continue to explore automated testing lessons learned from actual experiences and from feedback based on real projects to help you to avoid some typical false starts and roadblocks when you implement test automation efforts. In Part 2 of this class, attendees will learn when automated testing doesn’t speed up the testing effort. Attendees will also learn how to create mini development life cycles, how to maintain automated unit and system tests, how to implement smoke tests, and the pitfalls of using automated performance testing tools.

205 Database Security: How Vulnerable Is Your Data? By Mary Sweeney
There are many levels of software security. But how secure is the most important component of your application: your database? Quality control organizations must step up to the challenge of ensuring data security with appropriate tests that focus on this vital area. In this class, you will learn what the test team needs to know about protecting the server, the database connections, controlling access to database tables and restricting access to the database server itself. If data is in jeopardy, the entire system is at risk. We will also discuss the basics of security testing to ensure protection for the critical database component.

NEW 206 Foundations of GUI Test Automation Using C#, Part 2 By BJ Rollison
C# is a powerful programming language quickly becoming commonly used to develop test automation. But to effectively test Windows (Win32) applications written in C/C++, we must also learn to use common Win32 Application Programming Interfaces (APIs). Part 2 of this class discusses how to use process invocation services to use Windows APIs to perform common testing tasks illustrated with well-commented code examples, and developed with free tools available on the Internet. In this session, attendees will learn how to:
• Import Win32 API library functions and marshal data types
• Get and set AUT focus
• Manipulate controls
• Manipulate menus
• Create custom methods to perform repetitive tasks
• Create a reusable test library

"As a project manager, this conference fit my role well. Developers would also benefit." —Lloyd Goss, Project Manager, JAARS
207 Analyze the Return on Your Testing Investment By Rex Black Testing is not just a good idea—it’s a good investment. This class demonstrates the value of solid testing through quantifiable returns on the investment. Through effective developer testing, skilled development managers deliver solid, quantifiable benefits in four ways: • Find bugs that get fixed—or even prevent them • Find bugs that don’t get fixed but are known • Run tests that mitigate (potentially expensive) risks • Guide the project with timely, accurate, credible information Attendees will learn about all of those benefits—and how to measure them. A hands-on exercise teaches you how to estimate the return you could—or currently do—achieve on your test investment.
208 Using Scrum to Manage the Testing Effort By Robert Galen Many testing efforts succumb to management and project pressures and become chaotic in terms of their focus and work quality. It’s simply the nature of the endgame phase of software development projects, where anything goes in pushing for the delivery of a product, and it’s usually quality that goes first. Beyond the product quality impacts, the team usually suffers, too, with low morale and little empowerment. Scrum is one of the agile methodologies, and it focuses on project management in agile and iterative development efforts. It can be successfully applied to testing efforts to renew their focus and drastically improve overall results. In this presentation, we will explore the Scrum methodology and learn to apply it practically to your testing cycles. In this class, attendees will learn how the Scrum methodology applies to the testing effort.
NEW 209 Quality Throughout the Software Life Cycle By Jeff Feldstein
Software quality is everybody’s job. Quality cannot be tested into the product; it must be emphasized, monitored and measured from the beginning of the project. Each team involved in the project, including product marketing managers, program managers, development engineering, documentation and test engineering, plays a key role in assuring software quality. A carefully planned application development life cycle is a key requirement to successful delivery of on-time quality software. The application development life cycle consists of four broad phases: requirements, development, test and post-customer ship. Each phase has important activities that directly affect the quality of the delivered software. This presentation will explore each phase in detail from a software quality perspective. It will describe activities that need to happen at each step and the role of the test or software quality engineer, and will enumerate many common mistakes made. In addition you will learn how to catch bugs earlier in the life cycle when they are cheaper to fix.
Wednesday, Nov. 8 1:15 pm – 2:30 pm
NEW 301 Hacking 101: Donning the Black Hat to Best Protect Applications From Today’s Hacking Threats By Tom Stracener
Applications have become fertile ground for attackers to uncover seemingly innocuous features and utilities in today’s complex systems and gain unauthorized access. Hackers seek out weaknesses in the many modules and components of complex systems, looking for hidden fields, embedded passwords, exposed parameters to manipulate and ways to tinker with input strings and steal data. At a time when security is highly coveted, understanding the enemy’s mindset is more crucial than ever. Therefore, one of the best defense mechanisms against hackers is to understand how they think—and recognize your network’s Achilles’ heels before a hacker exploits them. In this class, you’ll learn the thinking, strategies and methodologies commonly used by hackers and how to effectively implement a sound defensive plan that will help mitigate multiple attacks. Top Web application security flaws, including invalidated input, broken authentication and management, buffer overflows, injection flaws, insecure storage, denial of service and insecure configuration management, will be addressed so as to arm you with the right knowledge to protect your company’s infrastructure.
302 Five Core Metrics to Guide Your Software Endgames By Robert Galen By its very nature, the endgame of software projects is a hostile environment. Typical dynamics include tremendous release pressure, continuous bug and requirement discovery, exhausted development teams, frenzied project managers and long hours. Testing teams are usually in the thick of this battle and accustomed to these dynamics. However, project managers may not be proactive enough in working with their testing teams to understand the change and repair workflows within their projects. Yes, we work hard at managing bug reports, but we can do so much more to influence and focus a project’s direction. In this presentation, attendees will learn how project managers can focus the entire team on a few key performance metrics to improve the overall endgame experience and increase the probability of delivering on time. And yes, to also survive yet another endgame.
“Best concentration of performance testing presentations/professionals I’ve seen.” —Nathan White, Manager, Testing Services, AG Edwards
303 Overcoming Requirements-Based Testing’s Hidden Pitfalls By Robin Goldsmith Testing based on requirements is a fundamental method that is relied on extensively. However, its thoroughness frequently can be compromised by traps that testers are not aware of. In this interactive presentation, you’ll learn key sources of requirements-based testing oversights, including distinguishing business requirements from system requirements; assessing the extent to which the requirements are complete; the premise of one test per requirement; the appropriate level of test case detail; and developers’ inclusion of requirements-based unit tests. The class will also focus on the strengths and often unrecognized weaknesses of requirementsbased tests; the importance of testing based on business as well as system requirements and determining how many tests a requirement needs.
NEW 304 Managing Acceptance Testing Cycles More Efficiently By Yuri Makedonov
Once in a while a supposedly “almost completed” project requires one more cycle of acceptance testing and then just one more, then another and so on. Surprised management looks on in disbelief as the project spirals out of control down into a bottomless pit of “acceptance testing cycles.” Yet, often management does not have a 100 percent clear and correct understanding of what is happening and why it’s happening. As a result, their actions might not necessarily be very effective in controlling the situation. In this presentation, you will learn the different reasons for why these testing cycles can happen and specific techniques for getting a project like this out of a tailspin.
305 How to Optimize Your Web Testing Strategy By Hung Q. Nguyen One of the key strategic challenges of Web testing is the dominance of change. Another key challenge is interdependence. Web applications are fundamentally dependent on cooperating tools and processes. Many of the processes, tools and standards in use by groups that do Web testing were originally developed with simpler and less dynamic situations in mind. Used by skilled and thoughtful people, in the context of a clear strategy, these processes and tools can add value. But if we allow them to drive our testing practices, they can easily do more harm than good. In this talk, you will learn how to analyze and optimize your Web testing strategy by selecting the right types of tests, how to execute them at the right time with a balanced number of cycles, and how to drive changes to improve your team’s testing throughput.
NEW 306 Model-Based Testing for Java and Web-Based GUI Applications By Jeff Feldstein
Classic test automation simply repeats the same tests (with optionally varying data) until it stops failing or the application ships. The problem with this approach is that customers rarely flow through the application in the same sequence as the automation, and thus they are likely to find bugs that the automation missed. Model-based testing is a form of automated testing that brings random and flexible behavior to your automated test cases. Model-based testing can be used for many types of software or application testing. This class will teach how to implement model-based testing, specifically as applied to Java and Web applications. Part of the course includes a demonstration of model-based testing; you will be able to download the XDE Tester source code used in the demonstration. Although the example application tested by this source code is fairly simple, it contains all of the data structures, concepts and program flow for implementing a largescale, industrial strength, model-based test system.
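Purely as a generic illustration of the idea (not the XDE Tester code referenced above), the sketch below walks a trivial, hypothetical Session object through random transitions and checks it against a simple model after every step; real model-based tests apply the same loop to a GUI driver and a much richer state model.

import java.util.Random;
import junit.framework.TestCase;

public class SessionModelBasedTest extends TestCase {

    // A stand-in for the application under test (hypothetical, for illustration only).
    static class Session {
        private boolean loggedIn;
        void login()  { loggedIn = true; }
        void logout() { loggedIn = false; }
        boolean isLoggedIn() { return loggedIn; }
    }

    public void testRandomWalkThroughTheModel() {
        Random random = new Random(2006L);   // fixed seed keeps failures reproducible
        Session app = new Session();         // system under test
        boolean modelLoggedIn = false;       // the model's expectation of the current state

        for (int step = 0; step < 1000; step++) {
            if (random.nextBoolean()) {
                app.login();
                modelLoggedIn = true;        // model: login always succeeds
            } else {
                app.logout();
                modelLoggedIn = false;       // model: logout always succeeds
            }
            // After every random transition, the application must agree with the model.
            assertTrue("mismatch after step " + step,
                       modelLoggedIn == app.isLoggedIn());
        }
    }
}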
NEW 307 Taking AIM—Using Visual Models for Test Case Design By Robert Sabourin Designing test cases is a basic fundamental skill all testers master over time. This workshop teaches a fun graphical technique to help design powerful test cases and choose test data that will surface important bugs fast. These skills can be used in exploratory, agile or engineered contexts—anytime you need to design a test. Mindmaps are powerful graphical tools used to help visualize complex paths and relationships between concepts. The workshop shows how Mindmaps can be used to visualize test designs and help understand variables being tested, alone and in complex combinations with other variables and conditions. The AIM (Application Input Memory) heuristic is taught through a series of interactive exercises. Real recent project examples are used to demonstrate these techniques. We will look at using some widely available free open-source tools to help implement great test cases and to help focus our testing on what matters and quickly hone in on critical bugs! If you are new to testing, these techniques will remove some of the mystery of good test case design. If you are a veteran tester, these techniques will sharpen your skills and give you some new test design approaches.
NEW 308 Elements of Software Design for Unit Testing By Thierry Ciot
Too often, software projects are built around classes and components that incorporate a tangled web of dependencies that can hinder—or even prevent—unit testing. What steps can you take to keep this headache out of your project? This course, for beginner to intermediate programmers/testers and managers, will introduce you to the design guidelines that facilitate unit testing. By building applications around classes and components with clearly defined dependencies, code will be more unit-testable by design. Following these guidelines can also greatly reduce the number of stubs or mock objects needed to perform unit testing. The guidelines presented in this course are based on solutions that have been used in actual projects, and each guideline is illustrated with code examples that emphasize practical solutions. The class will end with a review of a complete real-world example.
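One of the most common guidelines of this kind, handing a class its dependencies rather than letting it construct them, looks roughly like the hypothetical sketch below; the names are invented and are not taken from the course.

import junit.framework.TestCase;

// The dependency is an interface, so a test can substitute a stub.
interface MailSender {
    void send(String to, String body);
}

// The class under test receives its collaborator instead of constructing one internally.
class WelcomeService {
    private final MailSender sender;
    WelcomeService(MailSender sender) { this.sender = sender; }
    void welcome(String user) { sender.send(user, "Welcome, " + user + "!"); }
}

public class WelcomeServiceTest extends TestCase {

    public void testWelcomeSendsOneMessage() {
        // A tiny hand-rolled stub; no mail server and no mock library required.
        final StringBuilder log = new StringBuilder();
        MailSender stub = new MailSender() {
            public void send(String to, String body) {
                log.append(to).append(':').append(body);
            }
        };

        new WelcomeService(stub).welcome("pat");
        assertEquals("pat:Welcome, pat!", log.toString());
    }
}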
Special Track! All Sabourin, All the Time!
Rob Sabourin is one of our highest-rated speakers, and heck, he just loves teaching. If you like his workshop style and are ready to think out of the box, then don’t miss these classes!
102  Just-in-Time Testing Techniques and Tactics, Part 1
202  Just-in-Time Testing Techniques and Tactics, Part 2
307  Taking AIM—Using Visual Models for Test Case Design
403  Deciding What Not to Test
507* Unit Testing for Agile Development, Part 1
607* Unit Testing for Agile Development, Part 2
708  What Hollywood Can Teach You About Software Testing
*Note that the Agile classes are limited to 30 participants, so register early!

"I learned things I didn't know existed! I met people from all ranges of QA, all of whom were brimming with information they were willing to share." —Rene Howard, Quality Assurance Analyst, IA Systems
NEW 309 Models for Security Testing in the Software Development Life Cycle By Ryan Berg
How should security testing be implemented during software development to ensure a more secure product? There is general agreement in the industry that improving software security is a valuable endeavor, but implementing programs that generate positive, measurable results has eluded most companies. Questions arise about the lack of security expertise among development teams and lack of development expertise among security teams, and there is a misconception that the addition of security reviews will ultimately extend development schedules. At the same time, centralized decisions must be made to define security policies, determine what constitutes a vulnerability, and prioritize remediation efforts according to available resources. Organizations need a concrete model for security evaluation and a comprehensive task list detailing the roles and responsibilities for each group involved. The class will include practical models that give testing responsibility to developers, QA staff or security teams, explaining the specific requirements for each approach as well as expected outcomes.
Special Track! Secure Your Software!
Securing the network is fine, but it’s not enough! These classes will help you test your software for security.
205  Database Security: How Vulnerable Is Your Data? – Sweeney
301  Hacking 101: Donning the Black Hat to Best Protect Applications From Today’s Hacking Threats – Stracener
309  Models for Security Testing in the Software Development Life Cycle – Berg
401  The Secure Software Development Life Cycle – Dustin
606  Exploiting Web Application Code: The Methodologies and Automation of SQL Injection – Fisher
701  The Five Most Dangerous Application Security Vulnerabilities—And How to Test for Them – Basirico
Wednesday, Nov. 8 3:00 pm – 4:15 pm
NEW 401 The Secure Software Development Life Cycle By Elfriede Dustin
According to Gartner and Symantec, most business security vulnerabilities are now at the application layer. Attackers are focusing their efforts on regional targets, desktops and Web applications that potentially give attackers access to personal, financial or confidential information. Consequently, companies responsible for developing software must build security into their products as they are being developed. This class focuses on application security throughout the software development life cycle. Attendees will learn secure coding guidelines that will help prevent defects from getting into code and that they must adhere to during the development process. They will learn about the Secure Software Development Life Cycle (SSDL) and its relationship to system development starting with the guidelines for security implementations. The class covers the security program review and assessment activities that need to be conducted throughout the testing life cycle and the secure deployment considerations that have to be implemented. It also addresses the metrics and final review and assessment activities that need to be conducted to allow for adequate and informed decision making.
402 Accelerate Testing Cycles With Collaborative Performance Testing By Rick Cavallaro Testing and tuning the performance of enterprise Web applications is a complex task, undertaken by a team of individuals that may include performance engineers, QA testers, architects, developers, database administrators and related project team members. Fostering communication among these individuals can be challenging and can often lead to testing delays. The process is especially difficult when testers and developers are distributed around the building, around the country or even around the globe. This session will provide a new methodology for collaborative load testing—an antidote to the iterative, multiweek process based on e-mail and conference calls that most organizations are forced to use today. You will learn: • The drawbacks of traditional approaches to performance testing • How to incorporate a team-based methodology for performance testing • A new solution for collaborative load testing in a Web-based environment • How outsourcing can impact QA efforts, and what you can do to mitigate that impact
NEW 403 Deciding What Not to Test By Robert Sabourin
Software project schedules are always tight. There is not enough time to complete planned testing. Do not stop just because the clock ran out. This presentation explores some practical and systematic approaches to organizing and triaging testing ideas. Testing ideas are influenced by risk and importance to your business. Information is coming at you from all angles—how can it be used to prioritize testing and focus on the test with the most value? Triage of testing ideas, assessing credibility and impact estimation can be used to help decide what to do when the going gets tough! Decide what not to test on purpose—not just because the clock ran out!
404 Putting the User Back in User Acceptance Testing By Robin Goldsmith User acceptance testing (UAT) is often a source of consternation. Even though the process takes up considerable user time, too many defects continue to slip through, and users increasingly beg off from participating with claims that they don’t have the time. Both effects may be symptoms of professional testers’ mistaken conventional wisdom about the nature and structure of UAT. In this eye-opening presentation, you’ll learn ways to gain user confidence, competence and cooperation. Plus, you’ll learn how to create userdriven UAT that increases user testing competence and confidence.
405 Getting a Handle on Risk: Risk-Based Testing Strategies By Clyneice Chaney With the rapid pace of application development, testing has become a challenging proposition. Trying to meet tight deadlines and deliver products that meet customer requirements is the greatest challenge testers face today. This presentation discusses a risk assessment tool that is used to assess risks associated with product testing. The assessment tool provides an alternative to “guesses” about what should be tested and helps test managers determine where they should concentrate their efforts. The proposed risk strategy for testing moves us from the informal approach experienced testers often use to a more formal and systematic way of assessing risk that allows you to base your test strategy on the assessment as well as address the quality concerns of the stakeholder.
Keynote Address: Rex Black • President, RBCS Wednesday, November 8 • 4:45 pm – 5:30 pm
Five Trends in Software Engineering
Five strong winds of change are blowing in the software and systems engineering world. As winds affect a sailboat, these winds of change will affect software engineering, including development and testing, as a field, and software engineers as a community. Your career is at stake, and both risks and opportunities abound. In this talk, Rex Black will speak about these five trends and how they affect software engineering. He will offer cautions about the risks and identify the potential opportunities you face. For each trend, he will provide references to books and other resources you can use to prepare yourself to sail the ship of your software engineering career to the destination you desire: professional success.
NEW 406 Java EE Performance Tuning Methodology: Wait-Based Tuning By Steven Haines
Java EE performance experts know that the key to success is to focus application tuning effort where it’s needed most—at the wait-points, where the delays happen. But what are the wait-points, how can you find them in your application, and what can you do about them? That’s what you’ll learn in this intermediate-level class. Wait-points can encompass Web and business tier thread pools, external dependency connection pools, persistence caches, object pools and even garbage collection. We’ll show you how to disassemble your application call stack, identify wait-points and tune from the inside-out to optimize throughput and suspend requests where they are best suited to wait. You’ll leave this class knowing how to focus your application tuning efforts to immediately improve the performance of your Java EE systems.
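To make the wait-point idea concrete, here is a minimal Java sketch (our illustration, not material from the class; the pool size of four and the 200 ms sleep standing in for a database call are arbitrary assumptions). A bounded worker pool is a wait-point: once every worker is busy, additional requests queue up, and the time they spend waiting for a free worker, rather than the work itself, dominates response time. Wait-based tuning is about sizing each such pool deliberately instead of letting every tier queue unbounded.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative wait-point: a fixed pool of 4 workers in front of a slow dependency.
public class WaitPointDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService workers = Executors.newFixedThreadPool(4); // hypothetical pool size

        for (int i = 0; i < 16; i++) {
            final long submitted = System.nanoTime();
            workers.submit(() -> {
                // Time spent queued before a worker picked this request up.
                long waitedMs = (System.nanoTime() - submitted) / 1_000_000;
                System.out.println("waited " + waitedMs + " ms for a free worker");
                try {
                    Thread.sleep(200); // stand-in for a database or remote-service call
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        workers.shutdown();
        workers.awaitTermination(1, TimeUnit.MINUTES);
    }
}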
NEW 407 Agile Test Development By Hans Buwalda
Agile methods have become standard in the software development world. The emphasis on short, iterative cycles, constant feedback and a team-based approach to quality has proven effective for delivering software on time and on budget. The same approach can be applied to developing your tests and test automation, even if your development project is using a traditional “waterfall” life cycle. Good test design, especially good automated test design, requires constant feedback from project stakeholders outside the QA team, including the development team, management and customers. The tests should go through several “iterations” of review before being put into “production” against the system under test. This class will discuss an agile approach to building tests and test automation, so that the QA team can ensure that the system is tested early and often, thereby taking testing off the critical path to releasing the product. This class will present a methodology and case study to illustrate how agile test development can be implemented in real world projects.
NEW 408 Techniques for Testing Packaged Application Performance By Michel Feaster Enterprises are investing more and more in ERP/CRM applications like SAP and Oracle. While these applications are more critical to the business than ever, new paradigms such as SOA, agile development and offshoring are creating an added layer of complexity for QA organizations. This presentation will drill down on performance testing techniques, including workload modeling, endurance and stress testing, diagnosis and problem isolation, and automated script creation. In addition, the speaker will address how to create an optimal team structure and enhanced reporting and communications techniques.
Thursday, Nov. 9 8:30 am – 10:00 am
501 Strategies and Tactics for Global Test Automation, Part 1 By Hung Q. Nguyen We automate software testing to gain speed. We organize our distributed teams globally to maximize round-the-clock coverage and cost efficiency. Both solutions fulfill legitimate objectives. However, implementing them successfully while keeping the risks contained with a high degree of certainty proves to be an enormous challenge. In this class, through a series of technical and management case studies and real life examples, you will learn about seven steps that will deliver return on investment through a global test automation program. Attendees will learn how to: assess testing strategy and needs; minimize the costs and risks of global resources; select the right test automation technology for the job; align testing with business processes and development practices; and measure, analyze and optimize for continuing improvement.
NEW 502 Creating and Leading the High-Performance Test Organization, Part 1 By Robert Galen Issues such as fewer people, less time, constantly changing technologies and increasing business expectations are clearly the norms for what software teams must face today. Nowhere is this more evident than within testing teams, since the pressure increases as we move down through the life cycle. This pressure poses a tremendous leadership challenge for testing team managers, group leaders or anyone chartered with directing testing. However, this challenge creates the opportunity for effective test leaders to differentiate themselves and their teams as they meet and exceed organizational expectations. This two-part class focuses on acquiring the fundamental skills to become that outstanding test leader. It will explore such issues as how to: build, motivate and lead great testing teams; create impact driven communications on testing state; properly plan and execute your team’s evolution, growth and ability to meet project challenges; handle the toughest “people” challenges facing good managers; and be agile and adaptable—learning to change with the organizational landscape.
503 Pinpointing and Exploiting Specific Performance Bottlenecks By Scott Barber One part of the system is always slowest—the bottleneck. Until you remedy that bottleneck, no other tuning will improve performance along that usage path, but before you can tune it, you must first conclusively identify it. Once the bottleneck has been identified, the resolution can be reached more quickly if you modify your existing tests to eliminate distraction from ancillary issues. Pinpointing the bottleneck precisely is an art all its own. Designed for technical performance testers and developers/architects, this class will show how the performance testing team and the development team can work collaboratively to analyze results and identify bottlenecks by tier, component and object. Then you’ll learn how to design tests to exploit those bottlenecks for tuning purposes with examples using IBM Rational and free tools.
“I’ve received volumes of new information and ideas to share with my team.” —Theresa Harmon, Business Applications Developer, Pharmacare Specialty Pharmacy
504 Performance Tuning ASP.NET 2.0 Applications, Part 1 By Thomas O’Mara This class will provide an intuitive understanding of how to set up a solid testing infrastructure, will help you gain an in-depth understanding of critical .NET and ASP.NET components, and will show you how to monitor the operating system and the ASP.NET application in real time. You will also get a good overview of statistical measurements and their meanings as applied to the data. The first part of this class will take an in-depth look at the .NET Framework 2.0 (including the Common Language Runtime), Windows 2003 Server with Internet Information Server 6.0, and the ASP.NET 2.0 architecture as they relate to performance tuning. Part 2 of this class will offer an in-depth, real-time look at the critical performance counters on Windows 2003 Server, IIS 6.0 and ASP.NET 2.0 as they provide feedback on the health of the application running under load.
505 Identify and Mitigate Risks Through Testing, Part 1 By Rex Black Over the past 10 years, professionals working in software and system development have learned how to apply the powerful techniques of risk analysis and risk management to their projects. In this class, you will learn: • How to apply risk analysis techniques ranging from informal discussions to ISO 9126 to failure mode and effect analysis • How risk prioritization can tell system development professionals where to focus development and test resources • How the project team can improve the accuracy of the risk analysis—and thus the effectiveness and efficiency of testing—throughout the system development life cycle Part 1 of this class will discuss the various techniques and illustrate them through real case studies. Part 2 includes a hands-on exercise to prepare you to apply these powerful techniques to your next project.
506 Rapid Business-Driven Testing By Clyneice Chaney Structured testing is a vital part of any development project. The problem is that almost no one is given the time and resources to properly execute a thorough test process. In an ideal world, rapid testing would not be necessary, but with most development projects there are schedule crunches and times when a quick assessment of the product quality is necessary. Rapid testing is a way to scale thorough testing methods to fit arbitrarily compressed schedules. “Rapid” doesn’t mean “not thorough,” but it does mean as thorough as is reasonable given constraints on time. In this class, you will learn how to use new rapid business-driven testing techniques, methods and templates that will increase product quality in rapid development projects.
507 Unit Testing for Agile Development, Part 1 By Robert Sabourin With the increasing popularity of agile development methods, the role of testing is starting earlier in the software development cycle. Testers and developers are challenged to develop software at lightning speed, often using new and untested technologies. The class will show you how development and testing teams can work together to promote and implement improved unit testing. Attendees will learn how to save your company money by finding and fixing bugs long before system testing even starts. Get the
ammunition you need to convince management of the economic and business benefits of comprehensive unit testing. This two-part class addresses unit testing issues within the context of different development life-cycle models, especially new agile approaches, and demonstrates the tools and techniques needed to organize for and implement unit testing. The class is taught in workshop style and includes many hands-on group and team exercises, examples and unit testing tool demonstrations. Due to the interactive nature of these workshops, class size is limited to 30 people.
Special Track!
Black is Back!
Rex Black was one of our most popular speakers at the first STPCon, and he’s back. Sign up for one or more of these classes, and you won’t be disappointed!
T-1 Assessing Your Test Team Effectiveness, Efficiency and More (Full-Day Tutorial)
105 Code Coverage Metrics and How to Use Them
207 Analyze the Return on Your Testing Investment
505 Identify and Mitigate Risks Through Testing, Part 1
605 Identify and Mitigate Risks Through Testing, Part 2
508 Using Code Metrics for Targeted Code Refactoring By Andrew Glover Oftentimes, candidate code for refactoring is chosen based on subjective determinations. The proper uses of code metrics, such as cyclomatic complexity, fan-in, fan-out and depth of inheritance, can also facilitate the discovery of candidate code that is in need of refactoring. For example, cyclomatic complexity is adept at spotting methods containing a high degree of conditional logic, which, consequently, can be replaced with polymorphism as elaborated by Martin Fowler. Additionally, excessively deep hierarchy trees create problematic testing targets, which can be broken out into separate objects with Fowler’s Replace Inheritance with Delegation and Collapse Hierarchy patterns. Fan-in and fan-out are quite effective at pinpointing brittle code, which can be refactored into a more stable state with numerous patterns, including Extract Hierarchy and Extract Class. You’ll leave this class with an understanding of seven industry-standard code metrics; moreover, you will have the ability to utilize these metrics to spot “complex” code and will have a grab bag of techniques with which to improve the code.
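As a small, hedged illustration of the kind of refactoring described above (our own sketch, not taken from the session; the bird example is hypothetical), a method whose cyclomatic complexity is inflated by a type-code switch can be rewritten with Fowler’s Replace Conditional with Polymorphism so that each variant supplies its own behavior:

// Before: a switch over type codes drives cyclomatic complexity up and invites new branches.
class BirdWithTypeCode {
    String type; // "EUROPEAN", "AFRICAN", ...

    double getSpeed() {
        switch (type) {
            case "EUROPEAN": return 35.0;
            case "AFRICAN":  return 40.0;
            default:         return 0.0;
        }
    }
}

// After: Replace Conditional with Polymorphism; the flagged conditional disappears
// and adding a new kind of bird no longer touches existing code.
abstract class Bird {
    abstract double getSpeed();
}

class EuropeanBird extends Bird {
    double getSpeed() { return 35.0; }
}

class AfricanBird extends Bird {
    double getSpeed() { return 40.0; }
}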
509 Recruiting, Hiring, Motivating and Retaining Top Testing Talent By Jeff Feldstein The expectations today are for increasingly high-quality software, requiring more sophisticated automation in testing. Test and QA teams must work more closely with development to ensure that this sophisticated automation is possible. This has led to software engineers applying creativity, talent and expertise to not just application development, but testing as well. This transition from manual to scripting to highly engineered test automation changes the way we recruit, hire, motivate and retain great test engineering talent. The speaker uses examples of how his team at Cisco changed the way it tests over the past six years. In this class, he’ll review eight reasons why test is a better place for software developers than software development, and he’ll show how and when to express
these points to hire, motivate and retain top talent. You’ll see how to inspire greater innovation and creativity in your testing processes and how to manage and inspire test and development teams that are spread across different locations. You’ll also learn the place of manual testing in the new environment.
“This conference helps testers and developers as well as managers and leaders. There is enough variety and content for everybody.” —Michael Farrugia, Software Engineer, Air Malta
Thursday, Nov. 9 10:30 am – 12:00 pm
601 Strategies and Tactics for Global Test Automation, Part 2 By Hung Q. Nguyen Please see description under Class 501.
NEW 602 Creating and Leading the HighPerformance Test Organization, Part 2 By Robert Galen Please see description under Class 502.
NEW 603 SOA Performance Testing Challenges By Scott Barber Officially, SOA stands for service-oriented architecture, though Martin Fowler quips that maybe service-oriented ambiguity would be more apropos due to the diversity of technical methods being used to implement the SOA concept. The great thing about this ambiguity is that while the developers, architects, vendors and standards groups struggle to narrow the range of technologies, we testers have a chance to get ahead in our preparations for testing these applications. This presentation will introduce the core concepts of SOA, discuss the challenges these concepts present to performance testing and finally map out a performance testing strategy that allows us to use SOA as a springboard to move the state of performance testing significantly forward in your organization and the software industry as a whole.
604 Performance Tuning ASP.NET 2.0 Applications, Part 2 By Thomas O’Mara Please see description under Class 504.
605 Identify and Mitigate Risks Through Testing, Part 2 By Rex Black Please see description under Class 505.
606 Exploiting Web Application Code: The Methodologies and Automation of SQL Injection By Matthew Fisher SQL injection is a technique for exploiting Web applications that use client-supplied data in SQL queries without stripping potentially harmful characters first. Despite being remarkably simple to protect against, there are an astonishing number of production systems connected to the Internet that are vulnerable to this type of attack, due to the simple fact of improper input validation. Developers and quality assurance professionals who design, build and test business-enabling applications generally lack the security knowledge necessary to avoid creating common defects that are so easily exploited by hackers. In this class, you’ll learn about the techniques that can be used to take advantage of a Web application that is vulnerable to SQL injection. The session addresses proper mechanisms that should be put in place to protect against SQL injection, as well as overall improper-input validation issues.
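For readers who want to see the defect class in code, here is a hedged JDBC sketch (ours, not the presenter’s; the users table and its columns are invented for illustration). The first method concatenates client-supplied input into the SQL text, so a value such as ' OR '1'='1 changes the meaning of the query; the second binds the value as a parameter, which is the standard defense.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class LoginDao {
    // VULNERABLE: user input becomes part of the SQL text itself.
    ResultSet findUserUnsafe(Connection con, String user) throws SQLException {
        String sql = "SELECT id FROM users WHERE name = '" + user + "'";
        return con.createStatement().executeQuery(sql);
    }

    // SAFER: the value travels as a bound parameter, never as SQL.
    ResultSet findUserSafe(Connection con, String user) throws SQLException {
        PreparedStatement ps = con.prepareStatement("SELECT id FROM users WHERE name = ?");
        ps.setString(1, user);
        return ps.executeQuery();
    }
}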
607 Unit Testing for Agile Development, Part 2 By Robert Sabourin Please see description under Class 507.
608 Verifying Software Robustness By Ross Collard Do you like breaking things? If so, this session’s for you! It’s not enough to design systems for dependability; we have to verify that reliability as well. Software is robust if it can tolerate such problems as unanticipated events, invalid inputs, corrupted internally stored data, improper uses by system operators, unavailable databases, stress overloads and so on. Systems that include both hardware and software are robust if they can tolerate physical problems such as equipment damage, loss of power and software crashes. Since these problems can and do occur in live operation, this session examines how to evaluate a system’s robustness within the relative sanctity of the test lab.
609 Metrics: How to Track Things That Matter By Clyneice Chaney Metrics programs have often gotten a bad name, having been misused and poorly implemented. This class discusses ways to provide metrics that really matter and give organizations visibility into their own or their customers’ operations. The class will begin with a discussion of why metrics programs fail, then move on to the keys to successful metrics programs, developing quality metrics that matter, and ways to implement and maintain these metrics over time.
Thursday, Nov. 9 2:00 pm – 3:15 pm
NEW 701 The Five Most Dangerous Application Security Vulnerabilities—and How to Test for Them By Joe Basirico The most difficult problems of IT security are found at the application layer. Exploitability of applications due to poor design has reached epidemic levels. Perimeter/network defenses are not enough to protect organizations from attacks, and most software teams possess neither the tools nor the expertise to properly secure their applications. This class highlights the top five security vulnerabilities that face testers today. You will learn practical how-to tips for testing your applications with a security mindset so you can attack them before a hacker does.
NEW 702 S-Curves and the Zero Bug Bounce: Plotting Your Way to More Effective Test Management By Shaun Bradshaw
The use of objective test metrics is an important step toward improving the ability to effectively manage any test effort. Two significant test metrics concepts, the S-Curve and Zero Bug Bounce, allow test leads and test managers to easily track the progress of a test effort, improve the ability to communicate test results and test needs to the project team, and make better decisions regarding when an application is ready to be delivered. You will learn: • How to establish an S-Curve: What an S-Curve is, why it is important in testing, how to develop a theoretical S-Curve, what metrics can be tracked using the S-Curve and how to track them. • How to create a Zero Bug Bounce: What the Zero Bug Bounce (ZBB) is, why it is important in tracking defects, and how to generate a ZBB. • Interpreting the graphs: Finally, we discuss a method for examining and interpreting the graphs created by these
metrics to make improvements to the current test efforts as well as future development efforts.
“Reputable speakers and presenters. Great class topics.” —Jung Manson, QA Manager, Webloyalty.com
NEW 703 Testing Java Programs—Memory Management Issues By Averil Meehan
It is a myth that Java’s garbage collection has solved memory management problems. Memory problems can still cause space leaks severe enough to crash a program or halt a thread of execution. Unfortunately, such errors often occur only when a program runs for a long time, so they may not show up until the testing stage of development. They are extremely difficult to debug because the error can appear in one section of code yet be caused by another. The nature of memory errors means that a problem can manifest itself in different ways and at different times during execution, which also makes detection and resolution problematic. In this class, we will discuss what memory problems can occur and consider different approaches to detecting the code that causes them.
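A brief, hedged sketch of the kind of leak described above (our example; the cache and the 1 MB buffers are invented): the garbage collector can only reclaim objects that are unreachable, so a long-lived static collection that is only ever added to retains everything put into it, and the program eventually fails, often only after running for a long time.

import java.util.ArrayList;
import java.util.List;

public class SessionCache {
    // A long-lived strong reference: nothing added here is ever collected.
    private static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest() {
        byte[] buffer = new byte[1024 * 1024]; // per-request working data (~1 MB)
        CACHE.add(buffer); // never removed, so the heap grows on every request
    }

    public static void main(String[] args) {
        // Left running, this eventually ends in OutOfMemoryError, typically long
        // after a short functional test would have passed.
        for (int i = 1; ; i++) {
            handleRequest();
            if (i % 100 == 0) {
                System.out.println(i + " requests handled, roughly " + i + " MB retained");
            }
        }
    }
}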
NEW 704 Defining Test Data and Data-Centric Application Testing By Chris Hetzler
As applications grow larger and more complex and as automated testing of these applications is increasingly adopted, the data that is used during the execution of the automated tests needs to be clearly defined and identified early in the development life cycle. Currently, the Software Engineering Body of Knowledge does not contain a definition for “test data,” nor do any of the top books on the subject of software testing. This presentation will propose a definition for the term “test data” and outline what testers can do to ensure that teams are considering it in their planning phases and provide useful ideas on how to isolate what those data needs might be. The presentation will use examples of software development projects that have defined their test data early and those that have not to provide you with context and examples.
705 Coding Standards and Unit Testing—Why Bother? By Joshua Hendrick Many developers think that the industry “best practices” of coding standards and unit testing are a waste of time: They require additional effort, but they don’t seem to make your life any easier, or your code any better. This is not surprising. This class explains how developers can apply coding standards and unit testing to improve their code and reduce the number of problems they need to identify, diagnose and fix over the course of the project. The first half teaches you how to apply coding standards to prevent errors related to code functionality, security and performance. The second part focuses on how you can extend traditional unit testing to expose reliability problems that could lead to instability, unexpected results or even crashes or security vulnerabilities. Discussion will include how these test cases can be leveraged to build a projectwide automated regression system that runs in the background each night and immediately alerts the team when code modifications or additions break previously verified functionality.
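As one small example of the second half of that idea (a hedged sketch in JUnit 4 syntax; the PriceParser class is invented for illustration), the first test covers the happy path while the second pins down how the code must fail on malformed input, so an unexpected exception cannot slip through unnoticed:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical production class under test.
class PriceParser {
    static int parsePrice(String text) {
        if (text == null || !text.matches("\\d+\\.\\d{2}")) {
            throw new IllegalArgumentException("not a price: " + text);
        }
        String[] parts = text.split("\\.");
        return Integer.parseInt(parts[0]) * 100 + Integer.parseInt(parts[1]);
    }
}

public class PriceParserTest {
    @Test
    public void parsesWellFormedInput() {
        assertEquals(1999, PriceParser.parsePrice("19.99")); // dollars and cents to cents
    }

    // Reliability-oriented case: malformed input must fail in a defined way,
    // not with an unexpected runtime exception deep in the call stack.
    @Test(expected = IllegalArgumentException.class)
    public void rejectsMalformedInputExplicitly() {
        PriceParser.parsePrice("19.99; DROP TABLE prices");
    }
}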
NEW 706 The 5% Challenges of Test Automation By Hans Buwalda One of the most common problems with software test automation projects is that too much effort is spent on developing test scripts, yet the percentage of tests actually automated remains a meager 20% to 30% because so much time goes to script maintenance. These issues cause numerous pains, including
skyrocketing costs, a lack of management visibility and questionable quality of tests. These universal problems in test automation have led this speaker to introduce the 5% challenges of test automation: No more than 5% of tests should be executed manually, and no more than 5% of the test effort should be spent creating automation. The speaker will present the keys to meeting the 5% challenges and illustrate them through a case study of a successful project that maximized test automation while minimizing the time spent automating tests.
707 Performance Testing for Managers By Scott Barber Performance testing as an activity is widely misunderstood, particularly by managers and others not directly involved in doing it. This presentation details the most critical things for managers to know about the performance testing process and ways to improve it. Learning, understanding and applying these nuggets of knowledge to your current or future performance testing projects will dramatically increase your team’s chances of success. In this class you will learn how to work with experienced performance testers to get the results you need, why performance testing should begin well before the application is fully functional and how to do it, and ways to better integrate performance-testing personnel into the development team effort. In addition, you will learn how to recognize the difference between “delivery” and “done” as they relate to performance testing and how to assess and balance the risks inherent in each. Also covered is how to create and maintain a program that will ensure not only that your performance testers have the tools they need, but that they will know how to use them and when to put them away.
NEW 708 What Hollywood Can Teach You About Software Testing By Robert Sabourin
Powerful lessons can be learned from some of the great epic movies of our day: “Star Wars” and bug triage, “Indiana Jones” and requirements, “Monty Python” and configuration management, “Jurassic Park” and unit testing, “The Usual Suspects” and teamwork, and “Star Trek” and SLAs. There are important metaphors within these movie stories that you can apply to real test management problems. Robert Sabourin’s entertaining talk ties practical real-world experiences to lessons learned from the movies, offers tips to manage your team and provides his unique insight into how to get things done. You will learn: • How to identify your testing project genre—epic saga, action thriller, mystery or comedy • How to storyboard a testing project • How to draw lessons from an unlikely source
Thursday, Nov. 9 3:45 pm – 5:00 pm
NEW 801 Effective Metrics for Managing a Test Effort By Shaun Bradshaw
When managing a test effort, test leads and test managers sometimes find it difficult to empirically convey the impacts of scope changes, delays and defects to upper management. This class introduces a set of well-defined test metrics related to tracking and managing a testing effort. It demonstrates how to consistently apply these metrics to software projects and how to improve communication of your findings to the rest of the organization. The following concepts will be taught as a part of this class: • Test Metrics Philosophy – A four-step strategy for creating, tracking and interpreting test metrics • Base Test Metrics – Fourteen simple metrics easily tracked
by test analysts that can be used to derive more sophisticated test management information • Management Test Metrics – Ten metrics and two graphs that make managing and communicating the status of a test effort easier for test managers and test leads • Interpreting Test Metrics – A method for examining and interpreting the data to make improvements to the current test effort as well as future development efforts
“This conference is great for developers and their managers, as well as business-side people.” —Steve Margenau, Systems Analyst, Great Lakes Educational Loan Services
NEW 802 Software Testing a Service-Oriented Architecture By Ted Rivera and Scott Will
The presence of service-oriented architectures (SOA) has grown significantly in the past few years, and test professionals must stay abreast of the latest technologies in order to continue providing significant value-add to the software development process. This class describes briefly what SOA is, methodologies that can be employed when testing in an SOA environment, common and potential pitfalls that can be avoided, a comparison and contrast of testing in an SOA environment versus a “traditional” test environment and a look ahead at future opportunities.
NEW 803 Real-World Performance Testing Lab for (Almost) Free By Aaron Flint This course will discuss how to set up and maintain an effective performance test lab without spending a great deal of money on specialized performance testing hardware or software. Topics that will be covered include: what is needed to set up an effective performance lab; the hardware needed to get the maximum performance from the lab; and the software that must be installed and run in the lab, including tuning options and software configurations. We will also discuss how to manage and run performance tests and generate results. Finally, we will talk about maintenance issues such as how to preserve the environment.
804 Testing Java Applications Using the Eclipse Test and Performance Tools Platform (TPTP) By Paul Slauenwhite The Eclipse Test and Performance Tools Platform (TPTP) provides a flexible and extensible framework for creating and managing tests, deployments, datapools, execution histories and reports with extensions for performance, JUnit, GUI and manual testing of Java applications. In this class, you will learn how to use the performance, JUnit, GUI and manual test tooling in the TPTP for testing Java applications. Intended for developers and testers who want to test their Java applications, the class begins by providing an overview of the motivation, history and architecture for the TPTP project. Then the TPTP test framework is explained with a focus on extending the framework for vendor-specific purposes. Finally, the performance, JUnit, GUI and manual test tooling in the TPTP project is demonstrated to illustrate the life cycle of an application’s test assets. Attendees are provided with the sample configuration and code used in the technical class.
NEW 805 Building a Bridge Between Functional Test Automation and Performance Testing By Peter Sody
In a lot of software development organizations there is a clear distinction between functional test automation and performance testing. This often leads to different teams testing either the functionality or performance of the same software systems. This class highlights the commonalities between functional test automation and performance testing and shows how these two areas can benefit from each other. This class gives practical examples on how to benefit from a collaboration, covering aspects of the whole test cycle. Applied in the right way, this can go far beyond reusing common artifacts. With the right approach the teams can complement their skill sets and significantly increase the efficiency and coverage of their testing. You will learn how
to determine the common areas that can boost the testing performance for both sides and how to recognize the limits of such efforts.
NEW 806 Best Practices for Managing Distributed Testing Teams By Dean Stevens
Software testing projects are now often completed using a distributed model to leverage offshore labs to lower costs or decrease project delivery time. But without building certain cultural, methodological and technological foundations into your organization’s core values, it can be difficult or impossible to effectively manage projects being completed by dispersed virtual teams. Managing a distributed testing project requires a highly disciplined approach. Developing an effective framework for the execution of distributed projects goes hand in hand with requiring consistently documented processes and standards (“best practices”). However, by architecting a work model to accommodate a distributed model, you ensure a rigorous approach that will assist the successful execution of projects according to a well-defined set of best practices, regardless of the execution model employed. Once methodological and technical frameworks are in place, the bulk of the remaining challenges are related to “soft issues,” including cultural incompatibility, leadership problems, trust issues and negative competitiveness. These are actually the major obstacles to successful completion of distributed projects. However, there are concrete ways to alleviate these problems, including redefining your corporate culture, improving project planning and adjusting project staffing and technical infrastructure. In this session, attendees will learn methodologies and best practices designed to build “teamness” between distributed groups and set up both the physical and project structure so your teams can succeed.
NEW 807 Diagnosing and Resolving J2EE Application Issues Before Deployment By Ferhan Kilical and Stanley Au Yeung J2EE offers many advantages to developers but introduces many challenges to the development, performance diagnosis, tuning, deployment and management of applications in the enterprise network. In this class, attendees will learn how to proactively detect problems and isolate them at different layers and tiers. They will acquire the tools to be able to pinpoint root causes to specific application components. The class begins with the end-user business process, then drills down to application layers, communication between different layers, transaction trend-request data, method calls and class data. During the class, the presenters will share some of the hands-on samples that they mastered during their own J2EE tuning efforts.
NEW 808 GUI Test Automation for Eclipse RCP Applications By Phil Quitslund
The Eclipse IDE and Eclipse RCP applications are taking the Java world by storm, and along with that wave is the need for automated user interface testing. This class introduces open-source tools and methodologies such as Abbot for SWT and Eclipse TPTP GUI Recorder for automated testing of Eclipse RCP applications. Attendees will be taken step-by-step through the process of creating and running GUI tests using these tools, and learn the strengths and weaknesses of each approach. This intermediate-level course assumes you have knowledge of Eclipse technology, building user interfaces and testing user interfaces.
Conference and Tutorial Faculty
Scott Barber is software test manager at AuthenTec and a member of the technical advisory board for Stanley Reid Consulting. Previously, he was a consultant who focused on teaching and performing practical performance testing and analysis. His project-level experience was evenly split between testing and analyzing performance for complex systems and mentoring organizations in the development of customized corporate methodologies based on his performance testing approach. Mr. Barber has a master’s degree in IT from American Intercontinental University. He writes Peak Performance, the performance testing column in Software Test & Performance magazine, and he also speaks at many technical conferences.
Joe Basirico has spent most of his educational and professional career studying security and developing tools that assist in the discovery of security vulnerabilities and general application problems. His primary responsibility at Security Innovation is to deliver the company’s Security Training Curriculum to software teams in need of application security expertise. Mr. Basirico has trained developers and testers from numerous world-class organizations, such as Microsoft, HP, EMC, Symantec and ING. He holds a B.S. in computer science from Montana State University. Ryan Berg is a co-founder and lead security architect of Ounce Labs, an innovator of software vulnerability risk management solutions, based in Waltham, Mass. Prior to Ounce Labs, Mr. Berg co-founded Qiave Technologies, a pioneer in kernel-level security, which was sold to WatchGuard Technologies in 2000. He also served as a senior software engineer at GTE Internetworking, leading the architecture and implementation of new managed firewall services. Mr. Berg holds patents, and has patents pending, in multilanguage security assessment, intermediary security assessment language, communication protocols and security management systems. A 20-year-plus software and systems engineering veteran, Rex Black is president and principal consultant of RBCS, which offers training, assessment, consulting, staff augmentation, insourcing, offsite and offshore outsourcing, test automation and quality
assurance services. Mr. Black has published several books, including “Managing the Testing Process” and “Critical Testing Processes.” He has also written more than 20 articles, presented hundreds of papers, workshops and seminars, and given more than a dozen keynote speeches at conferences and events around the world. Mr. Black is the president of both the International Software Testing Qualifications Board and the American Software Testing Qualifications Board.
Shaun Bradshaw serves as director of quality solutions for Questcon Technologies. He is responsible for managing Questcon’s team of senior practice managers in the areas of quality solutions development and service delivery, and also works with clients to improve their quality assurance and software test processes.
Hans Buwalda leads LogiGear’s action-based testing (ABT) research and development, and oversees the practice of ABT methodology. Mr. Buwalda is an internationally recognized expert specializing in action-based test automation, test development and testing technology management. He’s also a speaker at international conferences, delivering tutorials and workshops, as well as presenting testing concepts such as ABT, the three Holy Grails of test development, soap-opera testing and testing in the cold. Recently, Mr. Buwalda coauthored “Integrated Test Design and Automation.” He holds an M.S. in computer science from Free University, Amsterdam. In his five years at Empirix, Rick Cavallaro, senior applications engineer, has worked with hundreds of companies helping to ensure the performance of their most critical Web applications. A 10-year veteran of the software industry, he specializes in testing and application development. Prior to joining Empirix, Mr. Cavallaro served in engineering roles at Aviv, Workstation Solutions and Revelation Software. He holds a BSEE degree from the University of Massachusetts, Lowell.
Clyneice Chaney, quality manager at Project Performance, brings more than 16 years of testing, quality assurance and process improvement experience. She is an American Society for Quality Certified Quality Manager and a Quality Assurance Institute Certified Quality Analyst. Focusing on process improvement and procedure development in the software testing and quality assurance areas, Ms. Chaney has successfully led process improvement, methodology development and re-engineering projects for organizations wishing to improve their software development, testing processes and tools implementation.
Thierry Ciot is a senior software engineer with more than 15 years of experience in software design and implementation. His experience spans many operating systems, programming languages (ADA, C, C++, Java, .NET) and a multitude of application domains, from mail and messaging to debugging tools. Mr. Ciot currently works at Compuware as a technical lead on DevPartner Studio. He is also the lead developer for System Compare, a tool that enables users to quickly find why an application works on one system but not on another.
Jeff Feldstein is currently a manager of software development at Cisco Systems Inc. During his 24-year career, he has been a software developer, tester, development manager and computer consultant; for the past five years, he has been involved with software testing or has managed a team of developers who write software test tools. His specialties have included internetworking, real-time embedded systems, communications systems, hardware diagnostics and firmware, databases and test technologies. Mr. Feldstein has been one of the highest-rated speakers at previous Software Test & Performance conferences.
Ross Collard is president of Collard & Co., a New York consulting firm. While he specializes in software testing and quality assurance, his consulting assignments have included strategic planning on the use of information technology for competitive advantage, the facilitation of quality improvement teams, management of large software development projects and the development of software engineering practices. Mr. Collard has made keynote presentations at major software conferences, published articles and conducted seminars on information technology topics for businesses, governments and universities, including George Washington University, Harvard, New York University and U.C. Berkeley. He holds a B.E. in electrical engineering from the University of Auckland, New Zealand, an M.S. in computer science from the California Institute of Technology and an M.B.A. from Stanford. Elfriede Dustin is an SQA manager at Symantec, author of the book “Effective Software Testing” and lead author of “Automated Software Testing” and “Quality Web Systems.” She is currently writing the “Security Testing Handbook,” along with two security experts, to be published by Symantec Press (spring 2006). She has also authored various white papers on the topic of software testing and is a frequent speaker at various software testing conferences. Ms. Dustin has a B.S. in computer science and more than 15 years of IT experience in various positions, such as QA director for BNA Software and assistant director for integration test and deployment at CSC on the IRS modernization effort. Michel Feaster is the director of product management for Mercury Interactive. She has seven years of experience as a systems engineer, and has worked closely with hundreds of Mercury Interactive software developers and QA manager customers helping them to streamline QA operations. She has spent a total of 12 years in the enterprise software industry. Ms. Feaster’s in-depth technical knowledge and engaging style have made her a sought-after speaker at numerous industry events.
Matthew Fisher is a senior security engineer for SPI Dynamics and has been specializing in Web application security assessments for many years. A native Washingtonian, he has performed countless assessments of Web applications within the DoD and the federal government, as well as some of the largest commercial institutions around the globe, and is registered as a subject matter expert to the Defense Information Services Agency. Prior to joining SPI Dynamics, Mr. Fisher worked at Computer Sciences and Digex, where he acted as lead technical adviser on large-scale enterprise Web applications for Fortune 500 companies. He is a contributing author to the book “Google Hacking for Penetration Testers” and is currently working on his own book, titled “Web Application Security: A Guide for Developers and Penetration Testers.” In addition, Mr. Fisher leads the Washington, D.C., OWASP chapter. Aaron Flint has more than 10 years of increasingly senior quality assurance experience, primarily focused on testing enterprise-level server software, and responsible for functional testing, performance testing and building teams for quality assurance. For the past three years, Mr. Flint has been the QA manager at Layer7 Technologies. Prior to this, he worked at GTE Enterprise Solutions as QA team lead for multiple-listing-service real estate system software, and InfoWave Software as QA team lead and manager for enterprise wireless connectivity software.
Bob Galen is a principal consultant of RGalen Consulting Group LLC, and has nearly 25 years of experience working in a wide variety of domains at companies including Bayer, Bell & Howell Mail Processing, EMC, Lucent, Unisys and Thomson. Mr. Galen regularly speaks at international conferences and professional groups on software development, project management, software testing and
team leadership. He is a certified Scrum Master and a member of the Agile Alliance. Mr. Galen is author of “Software Endgames: Eliminating Defects, Controlling Change, and the Countdown to On-Time Delivery.”
Andrew Glover is president of Stelligent, where his primary responsibilities include the strategic development of Stelligent’s products and services. Mr. Glover’s career includes founding Vanward Technologies and leadership in software development for IBM, Philips Electronics and Procter & Gamble. He is a graduate of George Mason University in Fairfax, Va., and is a frequent speaker at industry events. Mr. Glover is a co-author of “Java Testing Patterns” (Wiley, 2004).
Robin F. Goldsmith has been president of the Go Pro Management consultancy since 1982. He works directly with and trains professionals in business engineering, requirements analysis, software acquisition, project management, quality assurance and testing. Previously he was a developer, a systems programmer/DBA/QA and a project leader with the City of Cleveland, leading financial institutions and a “Big 4” consulting firm. Author of numerous articles and the recent book “Discovering REAL Business Requirements for Software Project Success,” Mr. Goldsmith was formerly international vice president of the Association for Systems Management and executive editor of the Journal of Systems Management. Mr. Goldsmith has an A.B. from Kenyon College, an M.S. from Pennsylvania State and a J.D. from Boston University. Michael Hackett is a founding partner of LogiGear and is responsible for the direction and development of the company's training program. He has indepth experience in software engineering and the testing of applications developed for deployment across multiple platforms. Mr. Hackett writes and teaches a software testing curriculum for LogiGear University, and for the U.C. Berkeley Extension. He is also co-author of “Testing Applications on the Web: Test Planning for Mobile and Internet-Based Systems,” Second Ed., and holds a B.S. in engineering from Carnegie Mellon University. Steven Haines started Quest Software’s Java EE Performance Tuning professional services organization in 2002. Mr. Haines is the author of both “Java 2 Primer Plus” and “Java 2 From Scratch” and shares author credits on “Java Web Services Unleashed.” He is also the author of two newly released books, “InformIT Java Reference Guide” and “Pro Java EE Performance Management.” Mr. Haines has
taught Java at the University of California in Irvine and at Learning Tree University. As the Java host on InformIT.com, he publishes a weekly column on everything from Java Web technologies to design patterns and performance tuning.
Linda Hayes is the CTO of Worksoft, and also the founder of three software companies, including AutoTester, which delivered the first PC-based test automation tool. Ms. Hayes holds degrees in accounting, tax and law, is an awardwinning author on software quality, and has been a frequent industry speaker at numerous industry conferences and shows. She has been named one of Fortune Magazine’s “People to Watch” and one of the “Top 40 Under 40” by Dallas Business Journal. She has been a columnist and contributor to StickyMinds and Better Software magazines, as well as a columnist for Computerworld and Datamation, author of the “Automated Testing Handbook” and co-editor of “Dare to Be Excellent” with Alka Jarvis on best practices in the software industry. Joshua Hendrick recently joined Parasoft Professional Services team and has previously worked as a software engineer in Parasoft SOA Solutions group. He has contributed to the development of Parasoft Javabased SOA and Web services testing solutions, including development from an Eclipse environment. Mr. Hendrick earned his B.S. in computer science from the University of California, Davis, where he worked actively as a programmer in the Biological and Agricultural Engineering department research lab. His experience with SOA and Web services includes development of automated testing methodologies for SOA and working with numerous Parasoft customers worldwide to ensure secure, reliable and compliant Web services.
Chris Hetzler is employed in the Fargo, N.D., Microsoft Business Solutions development office as a software development engineer in test. On his latest project, Microsoft Dynamics GP Web Services, he was the lead technical tester and was responsible for the development of the product’s test strategy. Mr. Hetzler has a B.S. in computer science and an M.S. in software engineering from North Dakota State University. He has won numerous public speaking awards throughout his academic career. Ferhan Kilical is an experienced IT professional with more than 20 years of software development and engineering experience. She has successfully managed tuning efforts of many Webbased systems in government and nonprofit institutions. Early in her career, Ms. Kilical successfully managed high-profile projects for the Department of Defense. She has taught computer science courses in the Washington, D.C., metro area universities. Recently she started a Washington, D.C.-based consulting firm specializing in performance, load and stress testing, performance monitoring, application tuning and optimization.
Timothy Korson has had more than two decades of substantial experience working on a large variety of systems developed using modern software engineering techniques. This experience includes distributed, real-time, embedded systems as well as business information systems in an n-tier, client/server environment. Dr. Korson’s typical involvement on a project is as a senior management consultant with additional technical responsibilities to ensure high-quality, robust test and quality assurance processes and practices. Dr. Korson has authored numerous articles, and co-authored a book on Object Technology Centers.
Yury Makedonov was trained as a researcher and worked in an R&D organization dealing with composite materials. He has a Ph.D. degree in physics and mathematics. He is now using his skills and knowledge to improve software quality. Dr. Makedonov has 10 years of testing experience. Currently, he is working as a QA manager, a test automation manager and a senior consultant for the Centre of Testing and Quality at CGI Group, a leading Canadian provider of end-to-end information technology and business process services.
Phil Quitslund, window tester architect at Instantiations, is an expert in GUI testing and automation. He has been active in the Eclipse research community since 2002 and has developed numerous tools for supporting advanced programming language features and extensions such as aspect-oriented programming. Mr. Quitslund is also an expert in programming language design, static analysis and code generation.
Ted Rivera is currently the global manager for quality assurance for the Automated Software Quality product suite for the Rational Products Division within IBM. He has been with IBM for more than 20 years as a developer, customer support manager and software testing manager. He currently holds two patents and has several others on file, and has contributed to the upcoming book “Agility and Discipline Made Easy: Best Practices From the Rational Unified Process,” by Per Kroll and Bruce MacIsaac. Mr. Rivera is nearing completion of his Ph.D. from Southeastern Baptist Theological Seminary.
Averil Meehan has a Ph.D. in the area of memory management of object-oriented programming languages. Currently she lectures at Letterkenny Institute of Technology, a third level college in Ireland with students at degree and masters levels. Dr. Meehan has presented conference papers and written articles for publications internationally, including “Performance Implications of Java Memory Management,” “Computer Programming in a Blended Learning Environment,” “Java Garbage Collection—A Generic Solution” and “The Semantics of Garbage Collection Rules, A Denotational Approach.”
Hung Q. Nguyen is CEO and founder of LogiGear, a software quality engineering firm offering training, testing services and test automation products. He is author and co-author of several software testing books, including “Testing Applications on the Web,” Second Ed., and “Testing Computer Software,” Second Ed. Mr. Nguyen writes and teaches a software testing curriculum for LogiGear University, as well as for U.C. Berkeley and the U.C. Santa Cruz Extension. He holds a B.S. in quality assurance from Cogswell Polytechnical College, is a graduate of a Stanford Graduate School of Business Executive Program, and is a certified quality engineer.
Thomas O’Mara has more than 25 years of experience with PC-based computing, ranging from Fiber Optic Gyroscope data acquisition using the stack-based FORTH language to Web-based applications utilizing the .NET Framework and ASP.NET. In between, there were C, Visual Basic and various database and middleware initiatives. Mr. O’Mara has been working with and writing articles about .NET technology since early 2001. He has considerable direct performance-tuning experience on a Web-based ASP.NET banking software application for large credit unions.
BJ Rollison is a technical trainer in the Engineering Excellence Group at Microsoft, where he designs and develops an intensive, hands-on technical training curriculum for new and experienced test engineers. He started his professional career in the industry working on developing custom solutions for hospitals and local government agencies in Japan. In 1994 he joined the Windows 95 project at Microsoft, focusing on the internationalization of the Windows operating system. In 1996, Mr. Rollison became a test manager in the Internet Client and Consumer Division, responsible for several client products and a Web server. He moved to Microsoft’s Internal Technical Training group in 1999 as the director of test training, responsible for planning and organizing training for more than 6,000 test engineers. He also teaches software testing courses at the University of Washington and sits on the advisory boards for software testing certificate programs at the University of Washington and Lake Washington Technical College. Robert Sabourin, P.Eng., has more than 20 years of management experience leading teams of software development professionals to consistently deliver projects on time, on quality and on budget. Mr. Sabourin is an adjunct professor of software engineering at McGill University who often speaks at confer-
ences around the world on software engineering, SQA, testing and management issues.
Neeraj Sangal is president of Lattix, a company specializing in software architecture management solutions and services. He has analyzed many large proprietary and open-source systems. Prior to Lattix, Mr. Sangal was president of Tendril Software, which pioneered model-driven EJB development and synchronized UML models for the Java programming language. Tendril was acquired by BEA/WebGain. Prior to Tendril, he managed a distributed development organization at Hewlett-Packard. Paul Slauenwhite is a software developer on the IBM Autonomic Computing (AC) Tools and Technologies project and a committer to the Eclipse Test and Performance Tools Platform (TPTP) opensource project. After receiving a B.Sc. in computer science from Dalhousie University, Mr. Slauenwhite joined IBM in 2000 and worked on the WebSphere Object Level Trace (OLT) project. In 2001, he joined the IBM WebSphere Studio Team and developed logging and tracing technologies. He is currently working on an M.Math in software engineering at the University of Waterloo. Over the past 15 years, Peter Sody has been a testing contractor, software developer, test engineer and development manager for several companies and organizations in Germany and the United States, most recently for Eclipsys and Vertex. Being an advocate of testdriven development, he is now using his skills for developing architectures and frameworks for both functional test automation and performance testing. Mr. Sody has authored publications on various aspects of software development, including presentations at conferences such as IEEE RE and Net.ObjectDays. He holds an M.S. in computer science from Kaiserslautern University, Germany. Now an independent software consultant, Alfred Sorkowitz was a computer scientist with the Department of the Navy, responsible for developing realtime, software-intensive embedded systems. Prior to joining the Department of the Navy, he was director of the Standards and Quality Control Staff, U. S. Department of Housing and Urban Development. Mr. Sorkowitz has published papers and has presented seminars on software metrics, SQA and testing at conferences sponsored by the IEEE Computer Society, ACM and the British Computer Society.
Dean Stevens has been involved with developing and delivering world-class hardware, software and service products for more than 20 years. Prior to joining Symbio, Mr. Stevens founded and served as the CEO of China TechSource—an outsourcing broker for Chinese services firms. In addition, he has operated a consulting firm that worked with corporations to resolve executive management and execution issues. He has demonstrated success managing global remote development, multicompany projects and distributed virtual organizations. Mr. Stevens began his career writing FORTRAN code for a CDC mainframe. He is a graduate of the University of Idaho.
Tom Stracener was one of the founding members of nCircle Network Security. While at nCircle he served as the head of vulnerability research from 1999 to 2001, developing one of the industry’s first quantitative vulnerability scoring systems, and co-inventing several patented technologies. Mr. Stracener is an experienced security consultant, penetration tester and vulnerability researcher. One of his patents, “Interoperability of vulnerability and intrusion detection systems,” was granted by the USPTO in October 2005. He is the senior security analyst for Cenzic’s CIA Labs and the architect of Cenzic’s Application Penetration Testing Methodology.
Mary Sweeney has been developing, using and testing relational database systems for more than 20 years for such companies as Boeing and Software Test Labs. She’s the author of “Visual Basic for Testers” (Apress, 2001) and “A Tester's Guide to .NET Programming” (Apress, 2006). Ms. Sweeney is a college professor with a degree in mathematics and computer science from Seattle University. She is an MCP in SQL Server, and is on the board of IIST (International Institute of Software Test).
Scott Will is the manager of the Quality Engineering Team for the Tivoli Software Products Division within IBM. He has been with IBM for more than 15 years and was previously an Air Force combat pilot. He has held positions as chief programmer, customer support team lead and software test manager. He holds three patents and has several others on file, and has also contributed to the upcoming book “Agility and Discipline Made Easy: Best Practices From the Rational Unified Process,” by Per Kroll and Bruce MacIsaac. Mr. Will graduated from Purdue University with degrees in computer science, mathematics and numerical analysis. He also holds an M.B.A. from Wayland Baptist University.
Stanley Au Yeung has years of IT experience in database applications, performance testing and quality assurance. After receiving his M.S. in Information Technology from George Washington University, he was heavily involved in research and application development projects at the university. He then joined a nonprofit organization, where he developed deep expertise in quality assurance and performance testing. He has been involved with various high-profile Web system tuning and optimization projects, and is currently working on a performance optimization project for a mission-critical Navy application as a senior consultant.
Hotel and Travel
Hotel Information
The Software Test & Performance Conference has secured a special rate of $159/single per night with the Hyatt Regency Cambridge. To make your reservations, use the link on the show’s Web site, www.stpcon.com. You can also call the hotel directly. Be sure to identify yourself as being with the Software Test & Performance Conference to receive the group rate.
YOU MUST MAKE YOUR RESERVATIONS BY OCTOBER 16 TO SECURE THIS RATE.
Directions
From Logan International Airport: Follow signs to Boston via the Sumner Tunnel. After you come out of the tunnel, take exit 26A (Storrow Drive/Back Bay/Cambridge). Keep left and go onto Storrow Drive. Go 3/4 mile and take the exit on the left-hand side (Government Center/Kendall Square). At the top of the ramp, take a right over the Longfellow Bridge; at the end of the bridge, turn right onto Memorial Drive West (Rt. 3 North). Stay on Memorial Drive for approximately 1 1/2 miles and turn right at the third traffic light. The entrance to the Hyatt will be on your left.
Hyatt Regency Cambridge 575 Memorial Drive Cambridge, MA 02139-4896 Tel: +1-617-492-1234 Fax: +1-617-491-6906
Transportation
From the Boston Airport:
• Taxi – approximately $35
• Subway – $1.25 (T stops), 1 1/2 miles from hotel (taxi available). Above ground: B.U. Central, Green Line (T stops). Underground: Harvard Square, Kendall Square, Red Line.
Amtrak:
• Back Bay Station is located 3 miles from the hotel, approximately 10-15 minutes away. Amtrak also stops at South Station in the Financial District, which is approximately 5 miles (20-30 minutes) from the hotel. Taxi is available to and from the hotel; from Back Bay Station, the cost is $7-$10, and from South Station, $12-$15.
Downtown Cambridge:
• The hotel offers complimentary scheduled shuttle service to locations in Cambridge. The shuttle service will take guests anywhere from Harvard Square to the Cambridgeside Galleria Mall area and all points between.
• For guests who would like to get into the city of Boston, the shuttle will stop at the Kendall Square T stop (Red Line) or the Boston University T stop (Green Line). For more information or to sign up for the shuttle, contact the Guest Services Desk at in-house extension 51 or call +1-617-492-1234.
General Shuttle Information: Traffic conditions may change pick-up times, forcing the shuttle off schedule, especially during rush hour and inclement weather. The Hyatt shuttle is for registered guests only. If requested, please show your room key to the driver as identification. T stop schedules are available at the Guest Services Desk. For the safety of our passengers, luggage is not allowed on the shuttle. The shuttle schedule is subject to change without notice.
MBTA (subway): 1 1/2 miles from hotel.
Parking: Self-parking is $28 per night with in/out privileges (clearance 6' 8"). Valet parking is $30 per night; RV and van parking is available at your own risk in the employee parking lot, based on availability. See the doorman or front desk upon arrival for details.
Conference Planner
MONDAY, November 6
REGISTRATION OPEN 4:00 pm - 7:00 pm

TUESDAY, November 7
REGISTRATION OPEN 7:30 am - 7:00 pm
TUTORIALS 9:00 am - 5:00 pm

WEDNESDAY, November 8
REGISTRATION OPEN 7:30 am - 7:00 pm
TECHNICAL CLASSES 8:30 am - 10:00 am, 10:30 am - 12:00 pm, 1:15 pm - 2:30 pm
EXHIBITS OPEN 2:30 pm - 7:00 pm
INDUSTRY KEYNOTE 4:30 pm - 4:45 pm
KEYNOTE: REX BLACK 4:45 pm - 5:30 pm
ATTENDEE RECEPTION 5:30 pm - 7:00 pm
TUTORIALS (Tuesday, November 7)
T-1 Assessing Your Test Team Effectiveness, Efficiency and More - Black
T-2 Testing Techniques: Theory and Application - Rollison
T-3 Testing in Highly Iterative, Quasi-Agile Projects—Practical Strategies for Mixed Culture Projects - Korson
T-4 Twenty-One Ways to Spot—And Fix—Requirements Errors Early - Goldsmith
T-5 Foundations of GUI Test Automation - Makedonov
T-6 Software Endgames: How to Finish What You've Started - Galen
T-7 Using Metrics to Improve Software Testing - Sorkowitz
T-8 Creating Agility and Effectiveness in Software Testing - Hayes

TECHNICAL CLASSES (Wednesday, November 8)
101 Seven Low-Overhead Software Process Improvement Methods - Goldsmith
102 Just-in-Time Testing Techniques and Tactics, Part 1 - Sabourin
103 Automated Database Testing: Testing and Using Stored Procedures - Sweeney
104 Lessons Learned in Test Automation, Part 1 - Dustin
105 Code Coverage Metrics and How to Use Them - Black
106 Foundations of GUI Test Automation Using C#, Part 1 - Rollison
107 Designing for Testability - Feldstein
108 Testing the Software Architecture - Sangal
109 Effectively Training Your Offshore Test Team - Hackett
201 Prevent Showstopper Overruns With Risk-Based Proactive Testing - Goldsmith
202 Just-in-Time Testing Techniques and Tactics, Part 2 - Sabourin
203 How to Turn Your Testing Team Into a High-Performance Organization - Hackett
204 Lessons Learned in Test Automation, Part 2 - Dustin
205 Database Security: How Vulnerable Is Your Data? - Sweeney
206 Foundations of GUI Test Automation Using C#, Part 2 - Rollison
207 Analyze the Return on Your Testing Investment - Black
208 Using Scrum to Manage the Testing Effort - Galen
209 Quality Throughout the Software Life Cycle - Feldstein
301 Hacking 101: Donning the Black Hat to Best Protect Applications From Today's Hacking Threats - Stracener
302 Five Core Metrics to Guide Your Software Endgames - Galen
303 Overcoming Requirements-Based Testing's Hidden Pitfalls - Goldsmith
304 Managing Acceptance Testing Cycles More Efficiently - Makedonov
305 How to Optimize Your Web Testing Strategy - Nguyen
306 Model-Based Testing for Java and Web-based GUI Applications - Feldstein
307 Taking AIM—Using Visual Models for Test Case Design - Sabourin
308 Elements of Software Design for Unit Testing - Ciot
309 Models for Security Testing in the Software Development Life Cycle - Berg
BIRDS OF A FEATHER SESSION 8:00 pm - 10:00 pm
BIRDS OF A FEATHER SESSION 8:30 pm - 10:30 pm
Last Year's Conference SOLD OUT! Register Early!
THURSDAY, November 9
REGISTRATION OPEN 7:30 am - 4:00 pm
EXHIBITS OPEN 12:00 pm - 4:00 pm
TECHNICAL CLASSES 8:30 am - 10:00 am, 10:30 am - 12:00 pm, 2:00 pm - 3:15 pm, 3:00 pm - 4:15 pm, 3:45 pm - 5:00 pm
TECHNICAL CLASSES (Thursday, November 9)
401 The Secure Software Development Life Cycle - Dustin
402 Accelerate Testing Cycles With Collaborative Performance Testing - Cavallaro
403 Deciding What Not to Test - Sabourin
404 Putting the User Back in User Acceptance Testing - Goldsmith
405 Getting a Handle on Risk: Risk-Based Testing Strategies - Chaney
406 Java EE Performance Tuning Methodology: Wait-Based Tuning - Haines
407 Agile Test Development - Buwalda
408 Techniques for Testing Packaged Application Performance - Feaster
501 Strategies and Tactics for Global Test Automation, Part 1 - Nguyen
502 Creating and Leading the High-Performance Test Organization, Part 1 - Galen
503 Pinpointing and Exploiting Specific Performance Bottlenecks - Barber
504 Performance Tuning ASP.NET 2.0 Applications, Part 1 - O'Mara
505 Identify and Mitigate Risks Through Testing, Part 1 - Black
506 Rapid Business-Driven Testing - Chaney
507 Unit Testing for Agile Development, Part 1 - Sabourin
508 Using Code Metrics for Targeted Code Refactoring - Glover
509 Recruiting, Hiring, Motivating and Retaining Top Testing Talent - Feldstein
601 Strategies and Tactics for Global Test Automation, Part 2 - Nguyen
602 Creating and Leading the High-Performance Test Organization, Part 2 - Galen
603 SOA Performance Testing Challenges - Barber
604 Performance Tuning ASP.NET 2.0 Applications, Part 2 - O'Mara
605 Identify and Mitigate Risks Through Testing, Part 2 - Black
606 Exploiting Web Application Code: The Methodologies and Automation of SQL Injection - Fisher
607 Unit Testing for Agile Development, Part 2 - Sabourin
608 Verifying Software Robustness - Collard
609 Metrics: How to Track Things That Matter - Chaney
701 The Five Most Dangerous Application Security Vulnerabilities—and How to Test for Them - Basirico
702 S-Curves and the Zero Bug Bounce: Plotting Your Way to More Effective Test Management - Bradshaw
703 Testing Java Programs—Memory Management Issues - Meehan
704 Defining Test Data and Data-Centric Application Testing - Hetzler
705 Coding Standards and Unit Testing—Why Bother? - Hendrick
706 The 5% Challenges of Test Automation - Buwalda
707 Performance Testing for Managers - Barber
708 What Hollywood Can Teach You About Software Testing - Sabourin
801 Effective Metrics for Managing a Test Effort - Bradshaw
802 Software Testing a Service-Oriented Architecture - Rivera, Will
803 Real-World Performance Testing Lab for (Almost) Free - Flint
804 Testing Java Applications Using the Eclipse Test and Performance Tools Platform (TPTP) - Slauenwhite
805 Building a Bridge Between Functional Test Automation and Performance Testing - Sody
806 Best Practices for Managing Distributed Testing Teams - Stevens
807 Diagnosing and Resolving J2EE Application Issues Before Deployment - Kilical, Au Yeung
808 GUI Test Automation for Eclipse RCP Applications - Quitslund
Pricing and Registration
Register Early and SAVE! All prices are in US dollars.

Full Event Passport, November 7-9 (Best Value):
Extreme Early Bird (by July 28) $1,095; Super Early Bird (by Sept. 15) $1,195; Early Bird (by Oct. 20) $1,365; Full Price (after Oct. 20) $1,565

Two-Day Technical Conference Only, November 8-9:
Extreme Early Bird (by July 28) $935; Super Early Bird (by Sept. 15) $1,035; Early Bird (by Oct. 20) $1,195; Full Price (after Oct. 20) $1,345

Tutorials Only, November 7:
Extreme Early Bird (by July 28) $695; Super Early Bird (by Sept. 15) $795; Early Bird (by Oct. 20) $845; Full Price (after Oct. 20) $945

Exhibits Only, November 8-9:
FREE through Oct. 20; $50 after Oct. 20
How to Register
REGISTER ONLINE. Register online at www.stpcon.com and use one of the following payment methods:
CREDIT CARD. Use the secure online form to pay via credit card and get immediate confirmation of your classes. MasterCard, Visa and American Express are accepted. You'll receive a REGISTRATION RECORD and RECEIPT. Please print out these pages and bring them with you to the conference. Present them at the Registration Desk to pick up your badge and any course materials.
CHECK. Fill out the online registration form. Print out the REGISTRATION RECORD and RECEIPT and mail them to BZ Media LLC, 7 High Street, Suite 407, Huntington, NY 11743. Be sure to include your payment. Online registrations that are mailed without payment will not be confirmed until payment is received.
P.O. If you register using a P.O., you will be invoiced immediately for the registration amount. Payment must be received before your registration can be confirmed.
GROUP DISCOUNTS. Registering four or more attendees from your company? You can receive a $100 discount off the Full Event Passport on each registration. Enter the word GROUP when asked for a code on our online registration form.
ALUMNI DISCOUNT. Have you attended the Software Test & Performance Conference in previous years? Welcome back! Enter the code ALUMNI to receive $100 off the Full Event Passport registration price.
REFUND POLICY. You can receive a full refund, less a $50 registration fee, for cancellations made by October 10, 2006. Cancellations after this date are non-refundable. Send your cancellation in writing to registration@bzmedia.com.
Paid Conference/Tutorial Registration Includes:
• Admission to tutorials and/or technical classes (please make your class selections when registering)
• Admission to keynotes
• Admission to exhibits
• Conference materials
• Attendee reception
• Continental breakfast, coffee breaks
Exhibits-Only Registration Includes:
• Admission to keynotes
• Admission to exhibits
• Attendee reception
Register Online Today at www.stpcon.com!
Registration Questions: Contact Donna Esposito at +1-415-785-3419 or desposito@bzmedia.com.