A BZ Media Publication
Best Practices: SCM Tools
VOLUME 5 • ISSUE 12 • DECEMBER 2008 • $8.95 • www.stpmag.com
Reeled In By The Allure Of Test Tools? School Your Team On Putting Test Automation Into Practice
page 12
Teach Old Apps New Testing Tricks
Using Quality Gates To Prevent Automation Project Failure
VOLUME 5 • ISSUE 12 • DECEMBER 2008
Contents
A BZ Media Publication
12
COVER STORY
Reeled In By The Allure Of Fancy Test Automation Tools?
Here are some real-world examples that can school your team on the ways of putting test automation into practice. By Aaron Cook and Mark Lustig
18
Teach Old Apps Some New Tricks
When redesign’s not an option, and adding testability interfaces is difficult, you need ways to improve testability of existing legacy apps. By Ernst Ambichl
Departments
5 • Editorial How many software testers are there in the world, anyway?
6 • Contributors Get to know this month’s experts and the best practices they preach.
7 • ST&Pedia Industry lingo that gets you up to speed.
9 • Out of the Box News and products for testers.
11 • Feedback It’s your chance to tell us where to go.
32 • Best Practices SCM tools are great, but they won’t run your business or make the coffee. By Joel Shore
34 • Future Test When automating Web service testing, there’s a right way and a wrong way. By Elena Petrovskaya and Sergey Verkholazov
27
Quality Gates For the Five Phases Of Automated Software Testing “You shall not pass,” someone might bellow, as if protecting the team from a dangerous peril. And so might your team if it embarks on the journey through the five phases of automated testing. By Elfriede Dustin
VOLUME 5 • ISSUE 12 • DECEMBER 2008

EDITORIAL
Editor: Edward J. Correia, +1-631-421-4158 x100, ecorreia@bzmedia.com
Editorial Director: Alan Zeichick, +1-650-359-4763, alan@bzmedia.com
Copy Desk: Adam LoBelia, Diana Scheben
Contributing Editors: Matt Heusser, Chris McMahon, Joel Shore

ART & PRODUCTION
Art Director: Mara Leonardi

SALES & MARKETING
Publisher: Ted Bahr, +1-631-421-4158 x101, ted@bzmedia.com
Associate Publisher: David Karp, +1-631-421-4158 x102, dkarp@bzmedia.com
Advertising Traffic: Nidia Argueta, +1-631-421-4158 x125, nargueta@bzmedia.com
Reprints: Lisa Abelson, +1-516-379-7097, labelson@bzmedia.com
List Services: Lisa Fiske, +1-631-479-2977, lfiske@bzmedia.com
Accounting: Viena Ludewig, +1-631-421-4158 x110, vludewig@bzmedia.com

READER SERVICE
Director of Circulation: Agnes Vanek, +1-631-443-4158, avanek@bzmedia.com
Customer Service/Subscriptions: +1-847-763-9692, stpmag@halldata.com

President: Ted Bahr
Executive Vice President: Alan Zeichick

BZ Media LLC, 7 High Street, Suite 407, Huntington, NY 11743
+1-631-421-4158, fax +1-631-421-4130
www.bzmedia.com, info@bzmedia.com
Software Test & Performance (ISSN #1548-3460) is published monthly by BZ Media LLC, 7 High Street, Suite 407, Huntington, NY 11743. Periodicals postage paid at Huntington, NY and additional offices. Software Test & Performance is a registered trademark of BZ Media LLC. All contents copyrighted 2008 BZ Media LLC. All rights reserved. The price of a one-year subscription is US $49.95, $69.95 in Canada, $99.95 elsewhere. POSTMASTER: Send changes of address to Software Test & Performance, PO Box 2169, Skokie, IL 60076. Software Test & Performance Subscriber Services may be reached at stpmag@halldata.com or by calling 1-847-763-9692.
Ed Notes
How Many Software Testers Are Out There?
By Edward J. Correia

What is the size of the software testing market? How many software testers exist in the world? I was asked that question recently, and I had to admit that I had no idea of the answer. My estimate was about 250,000, but that was just a guess. It was not based on research, statistical measurement or anything at all, really. It was just a number plucked from thin air.

Numbers I do know for sure are these: The circulation of this magazine is about 25,000. My e-mail newsletter goes to another 40,000. Our conferences (there are three of them each year) attract another few thousand software testers, QA professionals and senior test managers. Of course, there's some overlap between magazine and newsletter readers and conference attendees, so let's say BZ Media is reaching about 60,000 unique testers. Is that a lot, or are we just scratching the surface?

To help find the answer, I contacted Thomas Murphy, a research analyst with Gartner. While he couldn't provide even an approximate body count, he said that sales of software testing tools this year are expected to yield US$2.2 billion in revenue for the likes of Borland, Hewlett-Packard and other AD tool makers. That figure includes about $1.5 billion in test-tool revenue for so-called "distributed" platforms such as Linux, Mac OS X and Windows, and another $700 million for mainframes.

As for people, Murphy estimates that for the enterprise—that is, companies building applications for internal use—the most common ratio for developers to testers is about 5-to-1. That ratio can be quite different for ISVs. "At Microsoft, for instance, the ratio is almost 1-to-1. But most companies doing a good job [of QA] are in the 1-to-3 or 1-to-5 range." However, the ratio in about a third of companies is around 1-to-10, he said, which skews the average. "We focus on the enterprise. ISVs tend to have a higher saturation of testers. I've never seen an enterprise with a 1-to-1 ratio; maybe with certain teams [working on] critical apps. So I would say that 1-to-5 as an industry average would be close."

Murphy points to what he called the flip-side of that argument. "How many lines of code does it take to test a line of code?" If, let's say, that ratio was 10-to-1, "how could I have three developers cranking out code and expect one tester to keep up with that?" Of course, test tools and automation have the potential to help the tester here, but his point is well taken; there are not enough testers out there.

Which perhaps helps to explain the meteoric rise of test-only organizations cropping up in India. "We've seen a huge growth of Indian off-shore groups where all they do is testing. Tata has spun out testing as its own entity," he said, referring to the gigantic Indian conglomerate. "They wanted [the testing organization] to be independent, with IV&V characteristics.

"Testing has been a growth business in India. There's a huge number of individuals doing manual testing, [and] building regression suites and frameworks for package testing." They've also had to shift their processes to adapt to changes in technology. "As they get deeper into testing Web 2.0 and SOA, just to get to the same level of quality they used to have requires tremendous increases in their own quality practices."

Still, the number of software testers in the world remains elusive. I've only just begun to search.
Contributors If you’ve ever been handed a software-test automation tool and told that it will make you more productive, you’ll want to turn to page 12 and read our lead feature. It was written by automation experts AARON COOK and MARK LUSTIG, who themselves have taken the bait of quick-fix offers and share their experiences for making it work. AARON COOK is the quality assurance practice leader at Collaborative Consulting and has been with the company for nearly five years. He has led teams of QA engineers and analysts at organizations ranging from startups to large multinationals. Aaron has extensive experience in the design, development, implementation and maintenance of QA projects supporting manual, automated and performance testing processes, and is a member of the American Society for Quality. Prior to joining Collaborative, he managed the test automation and performance engineering team for a startup company. MARK LUSTIG is the director of performance engineering and quality assurance at Collaborative Consulting. In addition to being a hands-on performance engineer, he specializes in application and technical architecture for multi-tiered Internet and distributed systems. Prior to joining Collaborative, Mark was a principal consultant for CSC Consulting. Both men are regular speakers at the Software Test & Performance conference.
In her upcoming book titled “Implementing Automated Software Testing,” (Addison-Wesley, Feb. 2009), ELFRIEDE DUSTIN details best practices of the automated software testing process for test and QA professionals. In this third and final installment on automated software testing, which begins on page 27, she provides relevant excerpts from that book on processes describing use of quality gates at each phase of a project as a means of preventing automation failure. Elfriede has authored or collaborated on numerous other works, including “Effective Software Testing” (Addison Wesley, 2002), “Automated Software Testing,” (Addison Wesley, 1999) and “Quality Web Systems,” (Addison Wesley, 2001). Her latest book “The Art of Software Security Testing,” (Symantec Press, 2006), was co-authored with security experts Chris Wysopal, Lucas Nelson, and Dino Dai Zovi. Once again we welcome ERNST AMBICHL, Borland’s chief scientist, to our pages. In the March ‘08 issue, Ernst schooled us on methods of load testing early in development to prevent downstream performance problems. This time he tells us how to make legacy and other existing applications more testable when redesign is not an option, and adding testability interfaces would be difficult. Ernst served as chief technology officer at Segue Software until 2006, when the maker of SilkTest and other QA tools was acquired by Borland. He was responsible for the development and architecture of Segue’s SilkPerformer and SilkCentral product lines. For Borland, Ernst is responsible for the architecture of Borland’s Lifecycle Quality Management products. TO CONTACT AN AUTHOR, please send e-mail to feedback@bzmedia.com.
ST&Pedia
Translating the jargon of testing into plain English

The Automation Automaton
By Matt Heusser and Chris McMahon

Even the experts don't agree on the "right" way to approach test automation. If there were one, it would address one specific situation—which might or might not match yours. What we might all agree on is which problems are the most challenging, and on some of the ways to attack them. We'll start with two of the most common problems of test automation:

The Minefield
Exploratory testing pioneer James Bach once famously compared the practice of software testing to searching for mines in a field. "If you travel the same path through the field again and again, you won't find a lot of mines," he said, asserting that it's actually a great way to avoid mines. Automated tests that repeat the same steps over and over find bugs only when things stop working. In some cases, as in a prepared demo in which the user never veers from a script, that may be exactly what you want. But if you know that users won't stick to the prepared test scripts, the strategy of giving them a list of tests to repeat may not be optimal.

The Oracle
While injecting randomness solves the minefield problem, it introduces another. If there are calculations in the code, the automated tests now need to calculate answers to determine success or failure. Consider testing software that calculates mortgage amounts based on a percentage, loan amount, and payment period. Evaluating the correct output for a given input requires another engine to calculate what the answers should be. And to make sure that answer is correct, you need another oracle. And to evaluate that...and on and on it goes.

RECORD/PLAYBACK
In the 1990s, a common automation strategy was to record a tester's behavior and the AUT's responses during a manual test by use of screen capture. Later, these recorded tests can be played back repeatedly without the presence of the manual tester. Unfortunately, these recordings are fragile. They're subject to (false) failure due to changes in date, screen resolution or size, icons or other minute screen changes.

KEYWORD DRIVEN
An alternative to record/playback is to isolate user interface elements by IDs, and to examine only the specific elements mentioned. Keyword-driven frameworks generally take input in the form of a table, usually with three columns, as shown in Table 1.

TABLE 1
Command                    Element         Argument
type                       your_name_txt   Matthew
click                      Button          Submit
wait_for_text_present_ok   Body            hello, Matthew

Some versions omit the Command, or verb, where the framework assumes that every verb is a "click" or that certain actions will always be performed in a certain order. This approach is often referred to as data-driven. Where record/playback reports too many false failures, keyword-driven frameworks do not evaluate anything more than the exact elements they are told to inspect, and can report too many false successes. For example, in Table 1, if the software accidentally also included a cached last name and wrote "Hello, Matthew Heusser," this would technically constitute a failure but would evaluate as true.
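To make the keyword-driven idea concrete, here is a minimal sketch of an interpreter for rows like those in Table 1. It is an illustration only: the element names are simply the strings from the table, the AUT's response is stubbed with a hard-coded value, and a real framework would delegate these actions to a GUI driver. Note how the final check still passes even though the stubbed page contains a cached last name, which is exactly the false-success risk described above.

import java.util.List;

// A minimal keyword-driven interpreter: each row supplies a command,
// the element to act on, and an argument. Real frameworks delegate the
// element lookup and the actions to a GUI automation tool; here they are stubbed.
public class KeywordRunner {

    public void run(List<String[]> rows) {
        for (String[] row : rows) {
            String command = row[0], element = row[1], argument = row[2];
            switch (command) {
                case "type":
                    System.out.println("typing '" + argument + "' into " + element);
                    break;
                case "click":
                    System.out.println("clicking the " + element + " labeled " + argument);
                    break;
                case "wait_for_text_present_ok":
                    // Stubbed AUT response; a contains() check reports success even
                    // though the page accidentally includes a cached last name.
                    String actual = "hello, Matthew Heusser";
                    boolean ok = actual.contains(argument);
                    System.out.println(element + " contains '" + argument + "': " + ok);
                    break;
                default:
                    throw new IllegalArgumentException("Unknown command: " + command);
            }
        }
    }

    public static void main(String[] args) {
        new KeywordRunner().run(List.of(
                new String[] {"type", "your_name_txt", "Matthew"},
                new String[] {"click", "Button", "Submit"},
                new String[] {"wait_for_text_present_ok", "Body", "hello, Matthew"}));
    }
}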
MODEL DRIVEN TESTING (MDT)
Most software applications can be seen as a list of valid states and transitions between states, also known as a finite state machine. MDT is an approach popularized by Harry Robinson that automates tests by understanding the possible valid inputs, randomly selecting a choice and value to insert. Some MDT software systems even record the order the tests run in, so they can be played back to recreate the error. Under the right circumstances, this can be a powerful approach.
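As a rough illustration of the idea (and not Harry Robinson's actual tooling), the sketch below models a two-state login flow as a finite state machine and takes a seeded random walk through it, recording each step so a failing sequence could be replayed. The states and actions are invented for the example.

import java.util.*;

// A toy model-based test: states and their valid actions are declared up front,
// then a seeded random walk picks actions and records them for replay.
public class ModelWalk {

    // The model: each state maps to the actions valid in it and the state each action leads to.
    static final Map<String, Map<String, String>> MODEL = Map.of(
            "LoggedOut", Map.of("loginWithValidUser", "LoggedIn",
                                "loginWithBadPassword", "LoggedOut"),
            "LoggedIn", Map.of("openReport", "LoggedIn",
                               "logout", "LoggedOut"));

    public static void main(String[] args) {
        long seed = 42L;                          // keep the seed so the walk can be replayed
        Random random = new Random(seed);
        String state = "LoggedOut";
        List<String> trace = new ArrayList<>();

        for (int step = 0; step < 10; step++) {
            List<String> actions = new ArrayList<>(MODEL.get(state).keySet());
            Collections.sort(actions);            // deterministic order, so the seed fixes the walk
            String action = actions.get(random.nextInt(actions.size()));
            trace.add(action);
            state = MODEL.get(state).get(action);
            // A real harness would execute the action against the AUT here and compare
            // the AUT's observable state with the state the model expects.
        }
        System.out.println("seed=" + seed + " walk=" + trace);
    }
}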
AUTOMATED UNIT TESTS
Software developers commonly use Test Driven Development (TDD) and automated unit tests to generate some confidence that changes to the software did not introduce a serious "break," or regression. Strictly, TDD is unit testing typically developed in the same language as the code, and does not have the GUI integration challenges associated with automat- (continued on page 33)

Q: What would your answers be?
• Can we shrink the project schedule if we use test automation?
• Now that the tests are repeatable...are they any good?
• Which heuristics are you using?
• Are we doing an SVT after our BVT?
• Does the performance testing pass?
A: ST&Pedia will help you answer questions like these and earn the respect you deserve.

Upcoming topics:
January: Change Management, ALM
February: Tuning SOA Performance
March: Web Perf. Management
April: Security & Vuln. Testing
May: Unit Testing
June: Build Management

Matt Heusser and Chris McMahon are career software developers, testers and bloggers. They're colleagues at Socialtext, where they perform testing and quality assurance for the company's Web-based collaboration software.
Hardware Makers Get Windows 7 Pre-Beta
Attendees of the Windows Hardware Engineering Conference (WinHEC) in Los Angeles in early November received a pre-beta of Windows 7, the version of Windows that Microsoft said will replace Vista. The software, which Microsoft characterized in a statement as "API-complete," introduces a series of capabilities the company says will "make it easier for hardware partners to create new experiences for Windows PC customers." The move is intended to "rally hardware engineers to begin development and testing" for its nascent operating system.
Windows 7 could be available as soon as the middle of next year, according to a Nov. 7 report on downloadsquad.com. Microsoft's original promise for the operating system when it was introduced in 2007 was that of a three-year timetable that "will ultimately be determined by meeting the quality bar." Word from Microsoft is that development of Windows 7 is on schedule.
In a Nov. 5 statement, Microsoft said touch-sensitivity and simplified broadband configuration were ready. Helping to afford those opportunities (Microsoft hopes) is a new component called Devices and Printers, which reportedly presents a combined browser for files, devices and settings. "Devices can be connected to the PC using USB, Bluetooth or Wi-Fi," said the statement, making no mention of devices connected via serial, parallel, SCSI, FireWire, IDE, PS/2, PCI or SATA. The module also provides wizards.
Claiming to simplify connections to the Internet while mobile, Microsoft has broadened the "View Available Networks" feature to include mobile broadband. For makers of specialized or highly custom devices, Microsoft provides Device Stage, which "provides information on the device status and runs common tasks in a single window customized by the device manufacturer." Microsoft also said that the "Start menu, Windows Taskbar and Windows Explorer are touch-ready" in Windows 7. It's unclear if application developers can access the Windows Touch APIs. WinHEC attendees also received a pre-beta of Windows Server 2008 R2.

Zephyr 2.0: Now in the Cloud
Zephyr in late October launched version 2.0 of its namesake software test management tool, which is now available as a SaaS in Amazon's Elastic Compute Cloud. Zephyr gives development teams a Flex-based system for collaboration; resource, document and project management; test-case creation, automation and archiving; defect tracking; and reporting. The system was previously available only as a self-hosted system.
"The advantage of a SaaS is that you need no hardware," said Zephyr CEO Samir Shah. "It's a predeveloped back-end up and running immediately with 24/7 availability, backup, restore, and high bandwidth access from anywhere." The cost for either system is US$65 per user per month. If you deploy in-house, you also get three free users for the first year.
Also new in Zephyr 2.0 is Zbots, which Shah said are automation agents for integrating Zephyr with other test tools. "These are little software agents that run on the remote machines on which you run Selenium, [HP QuickTestPro] or [Borland] SilkTest." They let you run all your automated tests from within Zephyr, and bring the results back into Zephyr for analysis, reporting and communication to the team. "You build your automation scripts in those tools," he explained, "then within Zephyr, you get comprehensive views of all test cases so you can look at the coverage you have across manual and automated tests."
The system also lets you schedule tests to run. "When it comes time to run automated scripts, we can kick off automation scripts on the target machines, bring results back to Zephyr and update metrics and dashboards in real time."
Shah claims that moving test cases into Zephyr is largely automatic, particularly if they're stored in Excel spreadsheets or Word documents, which he said is common. "At the end of the day, you have to give a report of the quality of your software. That process is manual and laborious. We automate that by bringing all that [data] into one place, from release to release, sprint to sprint, via a live dashboard. We give the test team and management a view into everything you are testing."
Also new in Zephyr 2.0 is two-way integration with the JIRA defect tracking system. "Objects stay in JIRA, and if they're modified in one place they're modified in the other. We built a clean Web services integration, and the live dashboard also reflects changes in JIRA."
Version 2.0 improves the UI, Shah said, making test cases faster and easier to write thanks to a full-screen mode. "When a tester is just writing test cases, we expand that for them and let them speed through that process." Importing and reporting also were improved, he said.

The test manager's 'desktop' in Zephyr provides access to information about available human and technology resources, test metrics and execution schedules, defects and administrative settings.
Feedback
SOFTWARE IS BUGGED BY BUGS
Regarding "Software is Deployed, Bugs and All," Test & QA Report, July 29, 2008 (http://sdtimes.com/link/3299432634): I was very interested in this article. I will have to track down the white paper and take a look, but there are some comments that I'd like to make based on the article. First, a disclosure of my own—I used to work for a company that made tools in this space and presently work for a distributor of IBM Rational tools.
Your "paradoxical tidbit" is, I think, absolutely a correct observation. It highlights something I've observed for some time...that many businesses seem to think managers in the IT department don't necessarily need to have an IT background, or they have an IT background but [no] real management training. The net result is they either don't fully understand the implications of problems with the process, or they understand the problems but don't have the management skills to address them. It's very rare to find a truly well managed IT department. I think what you've described is really a symptom of that.
Some of the other "conclusions" seem less clearly supportable. Statistics is a dangerous game. Once you start looking at the lower percentage results it is very easy to make conclusions that may be supportable, but often there are other equally supportable conclusions. For example, the data re time required to field defects...just because it could take 20-30 days to fix a problem, that isn't necessarily the worst outcome for a business. I can think of several cases where this would be acceptable. To list a few:
1. The cost to the business of not releasing the product with defects could be even greater.
2. There is a work-around for the defect.
3. The defect is non-critical.
4. There is a release cycle in place that only allows fixes to be deployed once a month.
I'm sure there are others.
The takeaways are where the real test of a survey is. From what you've published of the report, it seems that the second one is supportable after a fashion. Clearly they need to fix their process, and it would seem obvious that an automated solution should be a part of that. However, I do wonder if they actually got data to support that from the survey. The first conclusion says there are "debilitating consequences." Again I wonder if the survey actually established that. Clearly there are consequences, but were they debilitating? Was there any data about the commercial impact of the defects? Without that it is hard to say. Yes, we all know about the legendary bugs and their consequences, but that does not automatically imply that all defects are debilitating.
In any event, it is a topic that should be discussed more often, and I enjoyed the article.
Mark McLaughlin
Software Traction
South Australia

FEEDBACK: Letters should include the writer's name, city, state, company affiliation, e-mail address and daytime phone number. Send your thoughts to feedback@bzmedia.com. Letters become the property of BZ Media and may be edited for space and style.
Beyond Tools: Test Automation in Practice
Once you've taken the bait, how to make the most of your catch
By Aaron Cook and Mark Lustig
One of the biggest challenges facing quality teams today is the development and maintenance of a viable test automation
solution. Tool vendors do a great job of selling their products, but too often lack a comprehensive plan for putting the automation solution into practice. This article will help you undertake a real-world approach to planning, developing, and implementing your test automation as part of the overall quality efforts for your organization. You will learn how to establish a test automation environment regardless of the tools in your testing architecture and infrastructure. We also give you an approach to calculating the ROI of implementing test automation in your organization, an example test automation framework including the test case and test scenario assembly, and rules for maintenance of your automation suite and how to account
for changes to your Application Under Test (AUT). We also cover an approach to the overall test management of your newly implemented test automation suite. The set of automation tools an organization requires should fit the needs, environments, and phases of their specific situation. Automation requires an integrated set of tools and the corresponding infrastructure to support test development, management, and execution. Depending on your quality organization, this set of automation tools and technologies can include solutions for test management, functional test automation, security and vulnerability testing, and performance testing. Each of these areas is covered in detail. In most organizations we have seen, phases and environments typically include development and unit testing, system integration testing, user acceptance testing, performance testing, security and vulnerability testing, and regression testing. Depending on your specific quality processes and software development methodologies, automation may not be
applicable or practical for all testing phases. Additional environments to also consider may include a break-fix/firecall environment. This environment is used to create and test production “hot fixes” prior to deployment to production. A training environment may also exist. An example of this would be for a call center, where training sessions occur regularly. While this is usually a version of the production environment, its dedicated use is to support training. Automation is most effective when applied to a well-defined, disciplined set of test processes. If the test processes are not well defined, though maturing, it may be practical and beneficial to tactically apply test automation over time. This would happen in parallel to the continued refinement and definition of the test process. A key area where automation can begin is test management.
Test Management
Mature test management solutions provide a repeatable process for gathering and maintaining requirements, planning, developing and scheduling tests, analyzing results, and managing defects and issues. The core components of test management typically include test requirements management (and/or integration with a requirements management solution), and the ability to plan and coordinate testing along with integration with test automation solutions. Many of the test management processes and solutions available today also include reporting features such as trending, time tracking, test results, project comparisons, and auditing. Test managers need to be cognizant of integrating with software change and configuration management solutions as well as defect management and tracking. These core components can be employed within a single tool or can be integrated using multiple tools. Using one integrated tool can improve the quality process and provide a seamless enterprise-wide solution for standards, communication and collaboration among distributed test teams by unifying multiple processes and resources throughout the organization.
For example, business analysts can use the test management tool(s) (TMT) to define and store application business requirements and testing objectives. This allows the test managers and test analysts to use the TMT to define and store test plans and test cases. The test automation engineers can use the TMT to create and store their automated scripts and associated frameworks. The QA testers can use the TMT to run both manual and automated tests and design and run the reports necessary to examine the test execution results as well as tracking and trending the application defects. The team's program and project managers can use the TMT to create status reports, manage resource allocation and decide whether an application is ready to be released to production. Also, the TMT can be integrated with other automation test suite tools for functional test automation, security and vulnerability test
automation, and performance test automation. The test management tool allows for a central repository to store and manage testing work products and processes, and to link these work products to other artifacts for traceability. For example, by linking a business requirement to a test case and linking a test case to a defect, the TMT can be used to generate a traceability report allowing the project team to further analyze the root cause of identified defects. In addition, TMT allows for standards to be maintained which increases the likelihood that quality remains high. However, facilitating all resources to leverage this tool requires management sponsorship and support.
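As a sketch of the traceability idea described above, and not the data model of any particular test management tool, linking requirements to test cases and test cases to defects is enough to roll defects up to the requirements behind them. All IDs below are hypothetical.

import java.util.*;

// A bare-bones traceability model: requirement -> test cases -> defects.
// Real test management tools persist these links; the report logic is the same idea.
public class Traceability {

    static class TestCase {
        final String id, requirementId;
        final List<String> defectIds;
        TestCase(String id, String requirementId, List<String> defectIds) {
            this.id = id; this.requirementId = requirementId; this.defectIds = defectIds;
        }
    }

    public static void main(String[] args) {
        List<TestCase> testCases = Arrays.asList(
                new TestCase("TC-101", "REQ-7", Arrays.asList("DEF-3")),
                new TestCase("TC-102", "REQ-7", Collections.emptyList()),
                new TestCase("TC-200", "REQ-9", Arrays.asList("DEF-5", "DEF-8")));

        // Roll defects up to the requirement their test cases trace back to.
        Map<String, List<String>> defectsByRequirement = new TreeMap<>();
        for (TestCase tc : testCases) {
            defectsByRequirement
                    .computeIfAbsent(tc.requirementId, k -> new ArrayList<>())
                    .addAll(tc.defectIds);
        }
        for (Map.Entry<String, List<String>> e : defectsByRequirement.entrySet()) {
            System.out.println(e.getKey() + " -> defects found by its test cases: " + e.getValue());
        }
    }
}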
Functional Test Automation
Functional test automation should be coupled with manual testing. It is not practical for all requirements and functions to be automated. For example, verifying the contents of a printed document will likely best be done manually.
However, verifying the print formatting for the document can easily be automated, allowing the test engineer to focus efforts on other critical testing tasks. Functional test scripts are ideal for building a regression test suite. As new functionality is developed, the functional script is added to the regression test suite. By leveraging functional test automation, the test team can easily validate functionality across all environments, data sets, and select business processes. The test team can more easily and readily document and replicate identified defects for developers. This ensures that the development team can replicate and resolve the defects faster. They can run the regression tests on upgraded and enhanced applications and environments during off hours so that the team is more focused on new functionality introduced during the last build or release to QA. Functional test automation can also provide support for specific technologies (e.g., GUI, text-based, protocol specific), including custom controls.

Performance Testing
Performance testing tools can be used to measure load/stress capability and predict system behavior using limited resources. Performance testing tools can emulate hundreds to thousands of concurrent users, putting the application through rigorous real-life user loads. IT departments can stress an application from end to end and measure response times of key business processes. Performance tools also collect system- and component-level performance information through system monitors and diagnostic modules. These metrics can be combined to analyze and allow teams to drill down to isolate bottlenecks within the architecture. Most of the commercial and open source performance testing tools available include metrics for key technologies, operating systems, programming languages and protocols. They include the ability to perform visual script recording for productivity, as well as script-based viewing and editing. The performance test automation tools also provide flexible load distribution to create synthetic users and distribute load generation across multiple machines and data centers/geographic locations.

Security and Vulnerability Testing
Security testing tools enable security vulnerability detection early in the software development life cycle, during the development phase as well as testing phases. Proactive security scanning accelerates remediation, and saves both time and money when compared with later detection. Static source code testing scans an application's source code line by line to detect vulnerabilities. Dynamic testing tests applications at runtime in many environments (i.e., development, acceptance test, and production). A mature security and vulnerability testing process will combine both static and dynamic testing. Today's tools will identify most vulnerabilities, enabling developers to prioritize and address these issues.

Test Environments
When planning the test automation environment, a number of considerations must be addressed. These include the breadth of applications, the demand for automation within the enterprise, the types of automation being executed and the resource needs to support all system requirements. A key first step is tool(s) selection. Practitioners should consider questions such as:
• Should all tools be from a single vendor?
• How well do the tools interact with the AUT?
• How will tools from multiple vendors work together?
That last issue—that of integration—is key. Connecting tools effectively increases quality from requirements definition through requirements validation and reporting.
The next step is to determine the number of test tool environments required. At a minimum, a production test tools environment will be necessary to support test execution. It is worth considering the need for a test automation development environment as well. By having both a test automation development environment and a production automation environment, automation engineers can develop and execute tests independently and concurrently. The size and scale of each environment also can be managed to enable the appropriate testing while minimizing the overall infrastructure requirement. The sample environment topology below defines a potential automation infrastructure. This includes:
• Primary test management and coordination server
• Automation test centralized control server
• Automation test results repository and analysis server
• System and transaction monitoring and metrics server
• Security and vulnerability test execution server
• Functional automation execution server(s)
• Load and stress automation execution server(s)
• Sample application under test (AUT), including:
  • Web server
  • Application server(s)
  • Database server
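Dedicated performance tools handle the distributed load generation described under Performance Testing above, but the core loop is easy to picture. The following is a bare-bones sketch under the assumption of a single machine and a single hypothetical AUT URL; real tools add ramp-up schedules, think time, protocol-level scripting, monitors and distributed agents.

import java.net.URI;
import java.net.http.*;
import java.time.Duration;
import java.util.*;
import java.util.concurrent.*;

// A minimal load driver: N virtual users each issue M requests and record latency.
public class TinyLoadDriver {
    public static void main(String[] args) throws Exception {
        String url = args.length > 0 ? args[0] : "http://localhost:8080/";  // assumed AUT endpoint
        int users = 25, requestsPerUser = 10;

        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5)).build();
        ExecutorService pool = Executors.newFixedThreadPool(users);
        List<Long> latencies = Collections.synchronizedList(new ArrayList<>());

        Runnable virtualUser = () -> {
            for (int i = 0; i < requestsPerUser; i++) {
                long start = System.nanoTime();
                try {
                    client.send(HttpRequest.newBuilder(URI.create(url)).GET().build(),
                                HttpResponse.BodyHandlers.discarding());
                } catch (Exception e) {
                    // a real harness would count errors separately
                }
                latencies.add((System.nanoTime() - start) / 1_000_000);
            }
        };
        for (int u = 0; u < users; u++) pool.submit(virtualUser);
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);

        latencies.sort(null);
        System.out.printf("requests=%d median=%dms p95=%dms%n", latencies.size(),
                latencies.get(latencies.size() / 2),
                latencies.get((int) (latencies.size() * 0.95)));
    }
}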
Return on Investment
Defining the ROI of an automation installation can be straightforward. For example, comparisons across time to create a defect before and after test automation, time to create and execute a test plan, and time to create reports before and after test management can all be effective measures. Test automation is an investment that yields the most significant benefits over time. Automation is most effectively applied to well-defined, disciplined test processes. The testing life cycle is multiphased, and includes unit, integration, system, performance, environment, and user acceptance testing. Each of these phases has different opportunities for cost and benefit improvements. Additionally, good test management is a key competency in realizing improvements and benefits when using test automation. There are different ways of defining the return on investment (ROI) of test automation. It is important to realize that while ROI is easily defined as benefits over costs, benefits are more accurately defined as the benefits of automated testing versus the benefits of manual testing, over the costs of automated testing versus the costs of manual testing. Simply put, there are costs associated with test automation, including software acquisition and maintenance, training costs, automation development costs (e.g., test script development), and
automation maintenance costs (e.g., test script maintenance). One simple approach to determining when it makes sense to automate a test case is to compare the cumulative cost of executing it manually against the cost of automating it, then executing and maintaining the automated version, over the expected number of test runs; a sketch of that comparison appears at the end of this section.
The benefits of automation are twofold, starting with a clear cost savings associated with test execution. First, automated tests are significantly faster to execute and report on than manual tests. Second, resources now have more time to dedicate toward other testing responsibilities such as test case development, test strategy, and executing tests under fault conditions.
Additional benefits of automation include:
• Flexibility to automate standalone components of the testing process. This may include automating regression tests, but also actions directly related to test management, including defect tracking and requirements management.
• Increased depth of testing. Automated scripts can enable more thorough testing by systematically testing all potential combinations of criteria. For example, a single screen may be used for searching, with results formatted differently for different results sets returned. By automating the execution of all possible permutations of
results, a system can be more thoroughly executed.
• Increased range of testing. Introduce automation into areas not currently being executed. For example, if an organization is not currently executing standalone component testing, this area could be automated.
Though ROI is a prevailing tool for conveying the value of test automation, the less quantifiable business advantages of testing may be even more important than cost reduction or optimization of the testing process itself. These advantages include the value of increased system quality to the organization. Higher quality yields higher user satisfaction, better business execution, better information, and higher operational performance. Conversely, the negative effects of a low-quality experience may include decreased user satisfaction, missed opportunities, unexpected downtime, poor information, missed transactions, and countless other detriments to the company. In this sense, we consider the missed revenue or wasted expense of ill-performing systems and software, not just the isolated testing process.
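The break-even comparison mentioned earlier in this section can be reduced to a few lines. The figures below are placeholders rather than numbers from the authors; the point is the shape of the calculation, not the specific values.

// A rough break-even sketch: automation pays off once the cumulative cost of
// running a test manually exceeds the cost to automate it plus the cost of
// running and maintaining the automated version. All figures are hypothetical.
public class AutomationBreakEven {
    public static void main(String[] args) {
        double hourlyRate = 75.0;
        double manualRunHours = 1.5;          // one manual execution of the test case
        double automatedRunHours = 0.1;       // unattended execution plus result review
        double buildAutomationHours = 16.0;   // initial scripting effort
        double maintenancePerRunHours = 0.2;  // script upkeep amortized per run

        for (int runs = 1; runs <= 50; runs++) {
            double manualCost = runs * manualRunHours * hourlyRate;
            double automatedCost = (buildAutomationHours
                    + runs * (automatedRunHours + maintenancePerRunHours)) * hourlyRate;
            if (automatedCost <= manualCost) {
                System.out.println("Break-even at run #" + runs
                        + " (manual $" + manualCost + " vs automated $" + automatedCost + ")");
                return;
            }
        }
        System.out.println("No break-even within 50 runs; keep this test manual for now.");
    }
}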
Test Automation Considerations
When moving toward test automation, a number of factors must be considered.
1. What to automate: Automation can be used for scenario (end-to-end) based testing, component-based testing, batch testing, and workflow processes, to name a few. The goals of automation differ based on the area of focus. A given project may choose to implement differing levels of automation as time and priorities permit.
2. Amount of test cases to be automated, based on the overall functionality of an application: What is the duration of the testing life cycle for a given application? The more test cases that are required to be executed, the more valuable and beneficial the execution of test cases becomes. Automation is more cost effective when required for a higher number of tests and with multiple permutations of test case values. In addition, automated scripts execute at a consistent pace; testers' pace varies widely, based on the individual tester's experience and productivity.
3. Application release cycles: Are releases daily, weekly, monthly, or annually? The maintenance of automation scripts can be more challenging as the frequency of releases increases if a framework-based approach is not used. The frequency of execution of automated regression tests
will yield financial savings as more regression test cycles can be executed quickly, reducing overall regression test efforts from an employee and time standpoint. The more automated scripts are executed, the more valuable the scripts become. It is worth noting the dependency on a consistent environment for test automation. To mitigate risks associated with environment consistency, environments must be consistently maintained.
4. System release frequency: Depending on how often systems are released into the environments where test automation will be executed (e.g., acceptance testing environment, performance testing environment, regression testing environments), automated testing will be more effective in minimizing the risk of releasing defects into a production environment, lowering the cost of resolving defects.
5. Data integrity requirements within and across test cases: If testers create their own data sets, data integrity may be compromised if any inadvertent entry errors are made. Automation scripts use data sets that are proven to maintain data integrity and provide repeatable expected results. Synergistic benefits are attained when consistent test data can be used to validate individual test case functionality as well as multi-test case functionality. For example, one set of automated test cases provides data entry while another set of test cases can test reporting functionality.
6. Automation engineer staffing and skills: Individual experience with test automation varies widely. Training courses alone will result in a minimum level of competency with a specific tool. Practical experience, based on multiple projects and lessons learned, is the ideal means to achieve a strong set of skills, time permitting. A more pragmatic means is to leverage the lessons learned from industry best practices, and the experiences of highly skilled individuals in the industry.
An additional staffing consideration is resource permanence. Specifically, are the automation engineers permanent or temporary (e.g., contractors, consultants, outsourced resources)? Automation engineers can easily develop test scripts following individual scripting styles, which can lead to costly or difficult knowledge transfers.
7. Organizational support of test automation: Are automation tools currently owned within the organization, and/or has budget been allocated to support test automation? Budget considerations include:
• Test automation software
• Hardware to host and execute automated tests
• Resources for automation scripting and execution, including test management.
Test Automation Framework
To develop a comprehensive and extensible test automation framework that is easy to maintain, it's extremely helpful if the automation engineers understand the AUT and its underlying technologies. They should understand how the test automation tool interacts with and handles UI controls, forms, and the underlying API and database calls. The test team also needs to understand and participate in the development process so that the appropriate code drops can be incorporated into the overall automation effort and the targeted framework development. This comes into play if the development organization follows a traditional waterfall SDLC. If the dev team follows a waterfall approach, then major functionality in the new code has to be accounted for in the framework. This can cause significant automation challenges. If the development team is following an agile or other iterative approach, then the automation engineers
should be embedded in the team. This also has implications for how the automation framework will be developed.
Waterfall Methodology
To estimate the development effort for the test automation framework, the automation engineers require access to the business requirements and technical specifications. These begin to define the elements that will be incorporated into the test automation framework. From the design specifications and business requirements, the automation team begins to identify those functions that will be used across more than one test case. This forms the outline of the framework. Once these functions are identified, the automation engineers then begin to develop the code required to access and validate between the AUT and the test automation tool. These functions often take the form of the tool interacting with each type of control on a particular form. Determining the proper approach for custom-built controls and the test tool can be challenging. These functions often take the form of setting a value (typically provided by the business test case) and validating a result (from the AUT). By breaking each function down into the simplest construct, the test automation engineer can begin to assemble them into more complex functions easily, without a great deal of code development.
For example, let's say the AUT has a requirement for Login. This is driven from the business requirement. This can also be a good example of a reusable test component. The function Login can be broken down into four separate functions: SetUserID, SetPassword, SubmitForm, and VerifyLogin. Each of these functions can be generalized so that you end up with a series of functions to set a value (in this case, both the UserID and the Password), a generic submit function and a generic validate result function. These three functions form the basis for the beginning of the development of your test automation framework.

TABLE 1: GUTS OF THE FRAMEWORK
A business scenario defined to validate that the application will not allow the end user to define a duplicate customer record. Components required:
• Launch Application: LaunchSampleApp(browser, url). Returns message (success or failure).
• Login: Login(userID, password). Returns message (success or failure).
• Create new unique customer record: CreateUniqueCustomer({optional} file name, org, name, address). Outputs data to a file including application-generated customer ID, org, name, and address. Returns message (success or failure).
• Query and verify new customer data: QueryCustomerRecord(file path, file name). Returns message (success or failure).
• Create identical customer record: CreateUniqueCustomer({optional} file name, org, name, address). Outputs data to a file including application-generated customer ID, org, name, and address. Returns message (success or failure). Handle successful error message(s) from application.
• Gracefully log out and return application to the home page: Logout. Returns message (success or failure).
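A sketch of that decomposition might look like the following. The UiDriver interface stands in for whatever automation tool API is actually in use, and the control names (userIdField, passwordField, loginForm, statusLabel) are assumed for illustration; the point is that Login is nothing more than a composition of the three generic building blocks.

import java.util.HashMap;
import java.util.Map;

public class LoginComponent {

    // Thin wrapper around whatever GUI test tool is actually in use.
    interface UiDriver {
        void setValue(String control, String value);
        void submit(String form);
        String readText(String control);
    }

    private final UiDriver driver;

    public LoginComponent(UiDriver driver) { this.driver = driver; }

    // The three generic building blocks every business-level component reuses.
    void setValue(String control, String value) { driver.setValue(control, value); }
    void submitForm(String form)                { driver.submit(form); }
    boolean verifyResult(String control, String expected) {
        return expected.equals(driver.readText(control));
    }

    // Login is simply a composition of the generic functions.
    public boolean login(String userId, String password) {
        setValue("userIdField", userId);
        setValue("passwordField", password);
        submitForm("loginForm");
        return verifyResult("statusLabel", "Welcome, " + userId);
    }

    public static void main(String[] args) {
        Map<String, String> screen = new HashMap<>();   // in-memory stand-in for the AUT
        UiDriver fake = new UiDriver() {
            public void setValue(String control, String value) { screen.put(control, value); }
            public void submit(String form) {
                screen.put("statusLabel", "Welcome, " + screen.get("userIdField"));
            }
            public String readText(String control) { return screen.get(control); }
        };
        System.out.println("login ok: " + new LoginComponent(fake).login("mlustig", "secret"));
    }
}

The same generic functions can then be reused by every other business-level component the framework needs.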
FIGURE 1: TRANSPARENT SCALES
WebTableSearchByRow_v3
'This function searches through a webtable to locate any item in the table.
'WebTableSearchByRow_v3 returns the row number of the correct item.
'This function takes 6 parameters:
'  @param wtsbr_browser        ' Browser as defined in the Library
'  @param wtsbr_page           ' Page as defined in the Library
'  @param wtsbr_webtable       ' WebTable as defined in the Library
'  @param wtsbr_rowcount       ' Total Row Count for the WebTable
'  @param wtsbr_searchcriteria ' String value to search for
'  @param wtsbr_columncount    ' Total column count for the webtable
'
'  return: Row Number for Search Criteria
'
'  Example: rownum = WebTableSearchByRow_v3 (br, pg, wt, rc, sc, cc)

Agile Methodology
In the case of an Agile methodology, oftentimes the development begins by coding a test case that will fail. Then the developers write just enough code to pass the test case. Once this is accomplished, the code is added to the hourly/daily build and compiled into the release candidate. For test automation to be successful in this type of environment, the test automation engineers need to be involved up front in the decisions of what tests to automate and what priority to automate them in. In this case, the team determines the list of requirements for the given sprint and assigns priority to the development items. The testers, in conjunction with the rest of the team, determine which test cases will be candidates for this sprint and what order to begin development. From there, the actual development begins to follow a similar pattern to the earlier waterfall methodology.
For example, suppose one of the scrum team decides to develop the AUT login function. The testers will follow the same process to decompose the high-level business process into three discrete technical components: SetValue, SubmitForm, and VerifyResult. Table 1 shows another example. These functions will form the basis for the underlying test automation framework. After each sprint is accomplished, the scrum team performs the same evaluation of functions to build and automate, priority and resources required. From that point forward, the automation framework begins to grow and mature alongside of the application under development.
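Before looking at how the pieces come together, here is one way the components listed in Table 1 might be chained into the duplicate-customer scenario. The component signatures follow the table; the success/failure strings and the in-memory stand-in used to exercise the scenario are assumptions made for this sketch.

import java.util.HashSet;
import java.util.Set;

// Assembling Table 1's components into the duplicate-customer scenario.
// Each component returns a success/failure message, as the table specifies.
public class DuplicateCustomerScenario {

    interface Components {
        String launchSampleApp(String browser, String url);
        String login(String userId, String password);
        String createUniqueCustomer(String fileName, String org, String name, String address);
        String queryCustomerRecord(String filePath, String fileName);
        String logout();
    }

    static boolean ok(String result) { return result.startsWith("success"); }

    public static boolean run(Components c) {
        if (!ok(c.launchSampleApp("firefox", "http://aut.example.local"))) return false;
        if (!ok(c.login("qa_user", "qa_password"))) return false;
        if (!ok(c.createUniqueCustomer("cust.csv", "Acme", "Jane Doe", "1 Main St"))) return false;
        if (!ok(c.queryCustomerRecord("data", "cust.csv"))) return false;
        // The second, identical create should be rejected by the AUT,
        // so a failure message from the component is the expected outcome here.
        boolean duplicateRejected = !ok(c.createUniqueCustomer("cust.csv", "Acme", "Jane Doe", "1 Main St"));
        ok(c.logout());
        return duplicateRejected;
    }

    public static void main(String[] args) {
        // Toy in-memory stand-in: remembers customers so the duplicate create fails.
        Set<String> customers = new HashSet<>();
        Components fake = new Components() {
            public String launchSampleApp(String b, String u) { return "success"; }
            public String login(String id, String pw) { return "success"; }
            public String createUniqueCustomer(String f, String org, String name, String addr) {
                return customers.add(org + "/" + name) ? "success" : "failure: duplicate customer";
            }
            public String queryCustomerRecord(String p, String f) { return "success"; }
            public String logout() { return "success"; }
        };
        System.out.println("scenario passed: " + run(fake));
    }
}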
Putting the Pieces Together
When it comes time to execute your automated tests, it is often the case that they must be assembled into multiple technical and business scenarios. By taking a framework-based approach to building the automation components, the job of assembly is straightforward. The business community needs to dictate the actual business scenario that requires validation. They can do this in a number of ways. One way is to provide the scenario to the automation engineers (in a spreadsheet, say) and have them perform the assembly. A more pragmatic approach is for the business community to navigate to the test management system that is used for the storage and execution of the test automation components and to assemble the scenario directly. This makes the job of tracking and trending the results much easier and less prone to interpretation.
Because maintenance is the key to all test automation, the QA team needs to standardize the approach and methods for building all test automation functions. Each function needs to have a standard header/comment area describing the function as well as all necessary parameters. All outputs and return values need to be standardized. This allows for greater flexibility in the development of your automation code by allowing the automation engineer to focus on the tool interaction with your AUT without having to reinvent the standard output methods. Any QA engineer can pick up that function or snippet of code and know what the output message will be. For example, all WebTableSearchByRow functions return a row number based on the search criteria (Figure 1).
To be successful, the automation effort must have organizational support and be given the same degree of commitment as the initial development effort for the project, and goals for test automation must be identified and laid out up front. Automation should have the support and involvement of the entire team, and the tool(s) should be selected and evaluated from within your environment. Your test automation team needs a stable and consistent environment in which to develop components of the test automation framework and the discrete test cases themselves. The number and type of tests to be automated need to be defined (functional UI, system-level API, performance, scalability, security), and automators need to have dedicated time to develop the automation components, store and version them in a source code control system, and schedule the test runs to coincide with stable, testable software releases. Failure in any one of these areas can derail the entire automation project.
Remember, test automation cannot completely replace manual testing, and not all areas of an application are candidates for automation. But test automation of the parts that allow it, when properly implemented, can enhance the ability of your quality organization to release consistent, stable code in less time and at lower cost.
Teach Your Old Applications To Be More Testable
By Ernst Ambichl
(Ernst Ambichl is chief scientist at Borland.)

In my 15 years of experience in building functional and performance testing tools, there is one statement I've heard from customers more than any other: "Changing the application to make it more testable with your tool is not an option for my organization." If companies don't change this mindset, then test automation tools will never deliver more than marginal improvements to testing productivity. Testability improvements can be applied to existing, complex applications to bring enhanced benefits and simplicity to testers.
This article will teach you how to enhance testability in legacy applications where a complete re-design of the application is not an option (and special-purpose testability interfaces cannot easily be introduced). You'll also learn how testability improvements can be applied to an existing, complex (not "Hello World") application and the benefits gathered through improved testability. This article covers testability improvements from the perspective of functional testing as well as from that of performance and load testing, showing which changes are needed in the application to address either perspective.
You'll also understand the effort needed to enhance testability, the benefits gathered in terms of increased test coverage and test automation and decreased test maintenance, and how an agile development process will help to successfully implement testability improvements in the application.

What Do I Mean by 'Testability?'
There are many ways to interpret the term testability, some in a general and vague sense and others highly specific. For the context of this article, I define testability from a test automation perspective. I found the following definition, which describes testability in terms of visibility and control, most useful: Visibility is the ability to observe the states, outputs, resource usage and other side effects of the software under test. Control is the ability to apply inputs to the software under test or place it in specified states. Ultimately, testability means having reliable and convenient interfaces to drive the execution and verification of tests.

Stepchild of the Architecture
The likelihood and willingness of development teams to provide test interfaces varies considerably. Designing your application with testability in mind automatically leads to better design with better modularization and less coupling between modules. For modern development practices like extreme programming or test-driven development, testability is one of the cornerstones of good application design and a built-in practice. Unfortunately for many existing applications, testability was not a key objective from the start of the project. For these applications, it often is impossible to introduce new special-purpose test interfaces. Test interfaces need to be part of the initial design and architecture of the application. Introducing them in an existing application can cause major architectural changes, which most often include re-writing extensive parts of the application that no one is willing to pay for. Also, test interfaces need a layered architecture. Introducing an additional layer for the testability interfaces is often impossible for monolithic application architectures.
Testability interfaces for performance testing need to be able to sufficiently test the multiuser aspects of an application. This can become complex, as it usually requires a remote-able test interface. Thus, special-purpose testability interfaces for performance testing are even less likely to exist than testability interfaces for the functional aspects of the application.
To provide test automation for applications that were not designed with testability in mind, existing application interfaces need to be used for testing purposes. In most cases this means using the graphical user interface (GUI) for functional testing and a protocol interface or a remote API interface for performance testing. This is the approach traditional functional and load testing tools are using. Of course, there are problems using the available application interfaces for testing purposes, and you will find many examples of their deficiencies, especially when reading articles on agile testing. Some of those examples are:
• Existing application interfaces are usually not built for testing. Using them for testing purposes may use them in a way the existing client of the interface never would have intended. Driving tests through these interfaces can cause unexpected application behavior as well as limited visibility and control of the application for the test tool.
• A GUI with "custom controls" (GUI controls that are not natively recognized by the test tool) is problematic because it provides only limited visibility and control for the test tool.
• GUI controls with weak UI object recognition mechanisms (such as screen or window coordinates, UI indexes, or captions) provide limited control, especially when it comes to maintenance. This results in fragile tests that break each time the application UI changes, even minimally.
• Protocol interfaces used for performance testing are also problematic. Session state and control information is often hidden in opaque data structures that are not suitable for testing purposes. Also, the semantic meaning of the protocol often is undocumented.
But using existing interfaces need not be less efficient or effective than using special-purpose testing interfaces. Often slight modifications to how the application is using these interfaces can increase the testability of an application significantly. And again, using the existing application interfaces for testing is often the only option you have for an existing application.

Interfaces with Testability Hooks
A key issue with testability using an existing interface is being able to name and distinguish interface controls using stable identifiers. Often it's the absence of stable identifiers for interface controls that makes our life as testers so hard. A stable identifier for a control means that the identifier for a certain control is always the same – between invocations of the control as well as between different versions of the application itself. A stable identifier also needs to be unique in the context of its usage, meaning that there is not a second control with the same identifier accessible at the same time.
This does not necessarily mean that you need to use GUID-style identifiers that are unique in a global context. Identifiers for controls should be readable and provide meaningful names. Naming conventions for these identifiers will make it easier to associate the identifier to the actual control. Using stable identifiers also avoids the drawbacks of using the control hierarchy for recognizing controls. Control hierarchies are often used if you're using weak control identifiers, which are not unique in the current context of the application. By using the hierarchy of controls, you're providing a "path" for how to find the control.
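For an HTML/Ajax front end like the one described later in this article, giving each control a stable, readable id and locating it by that id, rather than by its position in the control hierarchy, is usually the decisive step. The snippet below uses Selenium's WebDriver API purely as an illustration and assumes the Selenium libraries are on the classpath and that the ids shown have been rendered into the application's pages; the same idea applies to any GUI test tool that can address controls by identifier.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

// Locating controls by stable identifiers instead of by layout position.
// The ids (customer.search.nameField etc.) follow a naming convention and are
// assumed to be rendered into the application's HTML by the development team.
public class StableIdentifierExample {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        driver.get("http://aut.example.local/customers");   // hypothetical AUT URL

        // Brittle alternative: depends on the table being the third one on the page
        // and the field keeping its position; breaks on harmless layout changes.
        // driver.findElement(By.xpath("//table[3]//tr[2]/td[5]/input"));

        // Robust: the control carries a stable, unique, readable identifier.
        driver.findElement(By.id("customer.search.nameField")).sendKeys("Jane Doe");
        driver.findElement(By.id("customer.search.submitButton")).click();

        System.out.println(driver.findElement(By.id("customer.search.resultCount")).getText());
        driver.quit();
    }
}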
FutureTest 2009
Who Should Attend? The program is designed for high-level test managers, IT managers or development managers who manage the test department or consider future test and QA strategies to be critical to their job and their company. By attending, you will gain ideas and inspiration, be able to think freely, away from the day-to-day demands of the office, and exchange ideas with peers after being stimulated by our inspirational speakers. You will leave this conference with new ideas for managing and solving your biggest challenges, both now and in the future.
Typical job titles at FutureTest include: CIO/CTO
Senior VP, IT Operations
Director of Systems Programming
CSO/CISO
Senior Software Test Manager
VP, Software Engineering
Vice President, Development
Security Director
Test/QA Manager
Test Team Leader
Test Architect
Senior Software Architect
Vice President, Test/QA
Manager of Quality Assurance
Test Director
Project Manager
FutureTest is a two-day conference created for senior software test and QA managers. FutureTest will provide practical, results-oriented, future-thinking guidance that you can use today.
• Stay abreast of trends and best practices for managing software quality
• Hear from the greatest minds in the software test/QA community and gain their knowledge and wisdom
• Hear practitioners inside leading companies share their test management and software quality assurance secrets
• All content is practical, thought-provoking sessions taught by top industry professionals
• Lead your organization in building higher-quality, more secure software
• Network with high-level peers
A BZ Media Event
5 Great Reasons to Attend FutureTest:
1. You'll hear great ideas to help your company save money with more effective Web applications testing, quality assurance and security.
2. You'll learn how to implement new test & QA programs and initiatives faster, so you save money and realize the benefits sooner.
3. You'll listen to how other organizations have improved their Web testing processes, so you can adapt their ideas to your own projects.
4. You'll engage with the newest testing methodologies and QA processes — including some you may never have tried before.
5. You'll be ready to share practical, real-world knowledge with your test and development colleagues as soon as you get back to the office.
Add it all up, and it's where you want to be in February.
REGISTER by December 19: Just $1,095 (SAVE $350!)
February 24–25, 2009, The Roosevelt Hotel, New York, NY
www.futuretest.net
OLD DOG, NEW TRICKS
control. Relying on this path for control recognition just increases the dependency of the control recognition on other controls, and introduces a maintenance burden when this hierarchy changes.
To improve the testability of your application, one of the simplest and most efficient practices is to introduce stable control identifiers and expose them via the existing application interfaces. This practice not only works for functional testing using the GUI, but can also be adopted for performance testing using a protocol approach. How to accomplish this for a concrete application is shown next.

Case Study: Testability Improvements in an Enterprise Application
A major theme for the latest release of one of our key applications, which we are developing in Borland's R&D lab in Linz, Austria, was to make the application ready for enterprise-wide, global usage. This resulted in two key requirements:
• Provide localized versions of the application.
• Provide significantly increased performance and scalability of the application in distributed environments.
Our application is a Web-based multiuser application with an HTML/Ajax front end and multiple tiers built on a Java EE infrastructure with a database back end. The application was developed over several years and now contains several hundred thousand lines of code.
To increase the amount of test automation, we wanted to be able to test localized versions of the application with minimal or no changes to the functional test scripts we used for the English version of the product. We also wanted to be able to test the scalability and performance of existing functionality on a regular nightly-build basis over the whole release cycle. These regular performance tests should ensure that we detect performance regressions as early as possible. New functionality should be tested for performance as soon as it was available, and in combination with existing functionality, to ensure it was built to meet the scalability and performance requirements. We thought that by executing performance tests on a regular basis, we should be able to continuously measure the progress we were making toward the defined scalability and performance objectives.
Performance Testing
Among the problems we had were fragile performance test scripts.
FIG. 1: PARSING WORDS
Parse the dynamic control handle from an HTTP response:
WebParseDataBound(<dynamic control handle>, "control_handle=", "!", …)
The function WebParseDataBound parses the response data of an HTTP request using "control_handle=" as the left boundary and "!" as the right boundary, and returns the result in <dynamic control handle>. Your tool's API will vary, of course, but might look something like this.
Use the dynamic control handle in an HTTP request:
http://host/borland/mod_1?control_handle=<dynamic control handle>!...
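WebParseDataBound is specific to the authors' test tool; conceptually it is just substring extraction between two boundaries, optionally picking the Nth occurrence. A rough, tool-agnostic sketch in Java of what such a helper does (class, method and parameter names are ours, not the tool's):

public final class BoundaryParser {

    // Returns the text between leftBoundary and rightBoundary, taking the
    // n-th occurrence (1-based) of leftBoundary in the response body,
    // or null if the boundaries cannot be found.
    public static String parseDataBound(String responseBody,
                                        String leftBoundary,
                                        String rightBoundary,
                                        int occurrence) {
        int from = 0;
        for (int i = 0; i < occurrence; i++) {
            from = responseBody.indexOf(leftBoundary, from);
            if (from < 0) {
                return null;                 // fewer occurrences than requested
            }
            from += leftBoundary.length();   // move past this left boundary
        }
        int end = responseBody.indexOf(rightBoundary, from);
        return (end < 0) ? null : responseBody.substring(from, end);
    }
}

For example, parsing the sixth control handle from a captured response would look like: String handle = BoundaryParser.parseDataBound(html, "control_handle=", "!", 6);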
Increasing the amount of test automation for performance tests increases the importance of maintainable and stable performance test scripts. In previous releases, our performance test scripts lacked stability, maintainability and customizability. It often happened that only small modifications in the application broke performance test scripts, and it was hard to detect the reason for script failures. Instead of searching for the root cause of broken scripts, performance test scripts were re-recorded and re-customized. Complex test cases (especially scenarios that change data) were not covered, as they were hard to build and highly likely to break when changes were introduced in the application. Customization of tests was also complicated, and it increased the fragility of the tests.
To get better insight into the reasons for the poor testability, we needed to look at the architecture of the application and the interfaces we used for testing. For Web-based applications, the most common approach for performance testing is to test at the HTTP protocol level, so let's take a look at the HTTP protocol of the application.
Our application is highly dynamic, so HTTP requests and responses are dynamic. This means that they include dynamic session information as well as dynamic information about the active UI controls. While session information (as in many other Web applications) is transported using cookies and therefore automatically handled by most HTTP-based testing tools, the information about actions and controls is represented as
URL query string parameters in the following way: Each request that triggers an action on a control (such as pressing a button, clicking a link, expanding a tree node) uses an opaque control handle to identify the control on the server:
http://host/borland/mod_1?control_handle=68273841207300477604!...
On the server, this control handle is used to identify the actual control instance and the requested action. As the control handle references the actual instance of a control, the value of the handle changes after each server request (as in this case, where a new instance of the same control will be created). The control handle is the only information the server exposes through its interface. This is perfectly fine for the functionality of the application, but it is a nightmare for every testing tool (and, of course, for testers).
What does this mean for identifying the actions on controls in a test tool? It means that because of the dynamic nature of the requests (control handles constantly change), a recorded, static HTTP-based test script will never work, as it calls control handles that are no longer valid. So the first thing testers need to do is to replace the static control handles in the test script with the dynamic handles generated by the server at runtime. Sophisticated performance test tools will allow you to define parsing and replacement rules that do this for you and create the needed modifications in the test script automatically (Figure 1).
An application of course has many
actionable controls on one page, which in our case all use the same request format shown above. The only way to identify a control using its control handle is by the order number of the control handle compared with other control handles in the response data (which is either HTML/JavaScript or JSON in our case). For example, the control handle for the "Login" button on the "Login" page might be the sixth control handle in the page. A parsing statement to parse the value of the handle for the "Login" button might then look like the one in Figure 2.

FIG. 2: PARSING LOGINS
WebPage("http://host/borland/login/");
WebParseDataBound(hBtnLogin, "control_handle=", "!", 6);
To call the actual "Login" button then might look like this:
WebPage("http://host/borland/mod_1?control_handle=" + hBtnLogin + "!");

It should be obvious from the small example in Figure 2 that using the control handle to identify controls is far from a stable recognition technique. The problems that come with this approach are:
• Poor readability: Scripts are hard to read, as they use order-number information to identify controls.
• Fragile to changes: Adding controls or just rearranging the order of controls will cause scripts to break or, even worse, introduce unexpected behavior, causing subsequent errors that are very hard to find.
• Poor maintainability: Frequent changes to the scripts are needed. Detecting changes to the order number of a control handle is cumbersome, as you have to search through response data to find the new order number of the control handle.
Stable Control Identifiers
Trying to optimize the recognition technique within the test scripts by adopting more intelligent parsing rules (instead of only searching for occurrences of "control_handle") proved not to be practical. We even found it counterproductive, because the more unambiguous the parsing rules were, the less stable they became. So we decided to address the root cause of the problem by creating stable identifiers for controls. Of course, this required changes in the application code, but more on this later.
We introduced the notion of a control identifier (CID), which uniquely identifies a certain control. Then we extended the protocol so that CIDs could easily be consumed by the test tool. By using meaningful names for CIDs, we made it easy for testers to identify controls in request/response data and test scripts. It was then also easier for developers to associate CIDs with the code that implements the control.
CIDs are only used to provide more context information for the test tool, the testers and the developers. There is no change in the application behavior: the application still uses the control handle and ignores the CID. The HTTP request format changed from:
http://host/borland/mod_1?control_handle=68273841207300477604!...
to:
http://host/borland/mod_1?control_handle=*tm_btn_login.68273841207300477604!...
As CIDs are included in all responses as part of their control handles, it is easy to create a parsing rule that uniquely parses the control handle: the search will only return one control handle, as CIDs are always unique in the context of the response. The same script as above now looks like:
WebPage("http://host/borland/login/");
WebParseDataBound(hBtnLogin, "control_handle=*tm_btn_login.", "!", 1);
WebPage("http://host/borland/mod_1?control_handle=*tm_btn_login." + hBtnLogin + "!");
By introducing CIDs and exposing them at the HTTP protocol level, we are now able to build extremely reliable and stable test scripts. Because a CID will not change for an existing control, there is no maintenance work related to changes such as introducing new controls on a page, filling the page with dynamic content that exposes links with control handles (like a list of links), or reordering controls on a page. The scripts are also more readable, and communication between testers and developers is easier because they use the same names when talking about the controls of the application.
Functional Testing
The problem here was the existence of language-dependent test scripts. Our existing functional test scripts relied heavily on recognizing GUI controls and windows by their captions. For push buttons, for example, the caption is the displayed name of the button; for links, the caption is the displayed text of the link; and for text fields, the caption is the text label preceding the field. To automate functional testing for all localized versions of the application, we needed to minimize the dependencies
between the test scripts and the different localized versions of the application. Captions are of course language dependent, and therefore are not a good candidate for stable control identifiers. The first option would have been to localize the test scripts themselves by externalizing the captions and providing localized versions of the externalized captions. But this approach still introduces a maintenance burden when captions change or when new languages need to be supported.
Using the caption to identify an HTML link (an HTML fragment of an HTML page):
<A … HREF="http://...control_handle=*tm_btn_login.6827..." >Login</A>
Calling the actual “Login” link then might look like this: MyApp.HtmlLink(“Login”).Click();
As we had already introduced the concept of stable control identifiers (CIDs) for performance testing, we wanted to reuse these identifiers for GUI-level testing as well. Using CIDs makes the test scripts language independent without the need to localize the scripts (at least for control recognition; verification of control properties may still need language-dependent code). To make the CID accessible to our functional testing tool, the HTML controls of our application exposed a custom HTML attribute named "CID." This attribute is ignored by the browser but is accessible from our functional testing tool through the browser's DOM.
Using the CID to identify a link:
<A … HREF="http://...control_handle=*tm_btn_login.6827..." CID="tm_btn_login" >Login</A>
Calling the actual “Login” link using the CID then might look like this: MyApp.HtmlLink(“CID=tm_btn_login”).Click();
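The call above uses the authors' own functional testing tool. Purely as an illustration of the same idea in a different, WebDriver-style API (not the tool the authors used), locating the link through the browser DOM by its custom CID attribute might look like this; the URL is the article's placeholder:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class CidLookupExample {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        driver.get("http://host/borland/login/");   // placeholder URL from the article
        // CSS attribute selector keyed on the custom CID attribute,
        // independent of the link's (localized) caption.
        driver.findElement(By.cssSelector("a[cid='tm_btn_login']")).click();
        driver.quit();
    }
}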
Existing Test Scripts
We had existing functional test scripts into which we needed to introduce the new mechanism for identifying controls. So it was essential that we had separated the declaration of UI controls, and how controls are recognized, from the actual test scripts using them.
I am Not a Lead
I'm a person, just like you. I'll buy when and how I want. Do not call me. Do not make me register to learn about you. Don't try to generate me. I am not "actionable."
I WILL GET TO KNOW YOU on my own time and in my own way. I will learn about you myself and then I may choose to read your white papers, attend your webinars, or visit your web site. I am in control of the process. And if I don't get to know you first, you will never get my business.
Print Advertising: How Buyers Get to Know You. LEARN MORE AT I-AM-NOT-A-LEAD.COM
Based on a study of SD Times readers by Readex, February 2008. How do you prefer to receive marketing information from software tool companies? 61% ads in print magazines, 40% presentations at trade shows, 38% vendor white papers, 29% direct mail, 19% banners.
Therefore we only had to change the declaration of the controls, but not their usage in multiple test scripts. A script that separates control declarations from actions might look something like this:

// declaration of the control
BrowserChild MyApp {
  …
  HtmlLink Login {
    tag "CID=tm_btn_login"   // before: tag "Login"
    …
  }
}

// action on the control
MyApp.Login.Click();
This approach works similarly for GUI toolkits such as Java SWT, Java Swing/AWT, toolkits supporting Microsoft User Interface Automation (MSUIA), and Adobe Flex automation. All of these GUI toolkits allow developers to add custom properties to controls, or offer a special property that you can use to expose CIDs to testing tools. Of course, you need to check whether your GUI testing tool is able to work with custom control properties for each toolkit.
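As one concrete illustration (ours, not the article's; the article's application is Web-based), a Swing developer could attach a CID as a client property, which GUI test tools that read custom properties can then use instead of the localized caption:

import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.SwingUtilities;

public class CidSwingExample {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Login");
            JButton loginButton = new JButton("Login");
            // Stable identifiers a test tool can read instead of the caption:
            loginButton.setName("tm_btn_login");                  // standard component name
            loginButton.putClientProperty("CID", "tm_btn_login"); // custom client property
            frame.add(loginButton);
            frame.pack();
            frame.setVisible(true);
        });
    }
}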
Code Changes Needed to Add Testability Hooks
One of the most enlightening experiences in this project was how easy it was to add the testability hooks to the application code. When we spoke with the developers of the UI framework used in the application and explained to them the need to use CIDs for recognizing controls, they immediately found that there was already functionality in the framework APIs that allowed them to add custom HTML attributes to the controls the UI framework provided. So no changes in the UI framework code were even needed to support CIDs for functional testing!
Certainly there was work related to creating CIDs for each UI control of the application. But this work was done step by step, at first introducing CIDs for the parts of the application on which we focused our testing efforts.
Changes in the application code to introduce a CID:

Link login = new Link("Login");
// additional code line to create the CID for the control
login.addAttribute("CID", "tm_btn_login");
More Changes for Performance Testing
For performance testing, we needed to extend the protocol of the application to also include the CID as part of the control handle, which is used to relate the actual control instance on the server to the UI control in the browser. Once they understood what we needed, our UI framework developers immediately figured out how to accomplish it. Again the changes were minimal, and were needed in just one base class of the framework.
Changes in the UI framework code to introduce CIDs in control handles:

private String generateControlHandle() {
  String cid = this.getAttribute("CID");
  if (cid != null)
    return "*" + cid + "." + this.generateOldControlHandle();
  else
    return this.generateOldControlHandle();
}
The new version of the generateControlHandle method, which is implemented in the base class of all UI control classes, now generates the following control handle, which contains the CID as part of the handle:
*tm_btn_login.68273841207300477604!
Lessons Learned
Costs for enhancing testability using existing application interfaces were minimal. One of the most enlightening experiences in this project was how easy it was to add the testability hooks once we were able to express the problem to the developers. The code changes needed in the application to add the testability hooks to the existing application interfaces (GUI and protocol) were minimal. There was no need to create special purpose testing interfaces, which would have caused a major rework of the application. And the runtime overhead added by the testability hooks was negligible. Most of the effort was related to introducing stable identifiers (CIDs) for all relevant controls of the application. All changes needed in the application summed up to an effort of about four person-weeks (less than 1 percent of the resources working on this application per year).
Relative to the changes and the effort to make them, the benefits were dramatic. Maintenance work for test scripts dropped significantly. This was especially true for performance testing, where for the first time we were able to have a maintainable set of tests. Before that, we needed to re-create test scripts whenever the application changed. The performance test scripts are now extremely stable, and if the application changes, the test scripts are easy to adjust.
What's more, the changes to the existing functional test scripts that were needed for testing different localized versions of the application were small. Here, it helped that we had already separated the declaration of UI controls, and how controls are recognized, from the actual test scripts using them. Being able to also cover the localized versions of the application with automated tests not only increased the test coverage of the application, but reduced the time we needed to provide the localized versions. We now release all localized versions and the English version at the same time.
Performance testing is now done by regularly running a core set of benchmark tests with different workloads against different configurations on each nightly build. So there is continuous information about how the performance and scalability of the application are improving (or degrading). We're also now able to detect defects that affect performance or scalability as soon as they are introduced, which is a huge benefit, as performance problems are usually extremely hard to diagnose. Knowing how performance has changed between two nightly builds greatly reduces the time and effort needed to find the root cause. Moving to an agile development process was the catalyst.
When it's so easy to improve the testability of existing applications, and by that to increase test automation and improve quality so significantly, I wonder why it's not done more often.

Ernst Ambichl is chief scientist at Borland.

REFERENCES
• Software Test Automation, Mark Fewster and Dorothy Graham, Addison-Wesley, 1999
• "Design for Testability," Bret Pettichord, paper, 2002
• Lessons Learned in Software Testing, Cem Kaner et al., John Wiley & Sons, 2002
• "Why GUI tests fail a lot (from a tools perspective)," Agile Tester blog, http://developer-in-test.blogspot.com/2008/09/problems-with-gui-automation-testing.html
By Elfriede Dustin
Implementing a successful Automated Software Testing effort requires a well-defined and structured, but lightweight, technical
process, with minimal overhead; that process is described here. It is based on proven system and software engineering processes and consists of five phases, each requiring the "pass" of a quality gate before moving on to the next phase. By implementing quality gates, we help enforce that quality is built into the entire implementation, thus preventing late and expensive rework (see Figure 2). The overall process implementation can be verified via inspections, quality checklists and other audit activities, each of which is covered later in this section.
In my experience in the defense industry, I have modified the Automated Testing Lifecycle Methodology (ATLM) to adapt to the needs of my current employer, Innovative Defense Technologies. See Figure 1 for an illustration of the process, which is further defined next.
Our proposed Automated Software Testing technical process needs to be flexible enough to allow for ongoing iterative and incremental improvement
feedback loops, including adjustments to specific project needs. For example, if test requirements and test cases already exist for a project, Automated Software Testing will evaluate the existing test artifacts for reuse, modify them as required and mark them as “to-be-automated,” instead of re-documenting the test requirement and test case documentation from scratch. The goal is to reuse existing components and artifacts and use/modify as appropriate, whenever possible. The Automated Software Testing phases and selected best practices need to be adapted to each task at hand and need to be revisited and reviewed for effectiveness on an ongoing basis. An approach for this is described here. The very best standards and processes are not useful if stakeholders don’t know about them or don’t adhere to them. Therefore Automated Software Testing processes and procedures are documented, communicated, enforced and
tracked. Training for the process will also need to take place.
Our process best practices span all phases of the Automated Software Testing life cycle. For example, in the requirements phase, an initial schedule is developed and maintained throughout each phase of the Automated Software Testing implementation (e.g., updating percentage complete to allow for program status tracking). See the section on quality gate activities related to schedules. Weekly status updates are also an important ingredient of successful program management, which spans all phases of the development life cycle; see our section on quality gates related to status reporting. Post-mortems, or lessons learned, play an essential part in these efforts, and are conducted to help avoid repeating past mistakes in ongoing or new development efforts; see our section on quality gates related to inspections and reviews.
By implementing quality gates and
related checks and balances throughout the Automated Software Testing effort, the team is not only responsible for the final test automation work, but also for helping enforce that quality is built into the entire Automated Software Testing life cycle. The automation team is held responsible for defining, implementing and verifying quality. It is the goal of this section to provide program management and the technical lead with a solid set of technical process best practices and recommendations that will improve the quality of the testing program, increase productivity with respect to schedule and work performed, and aid successful automation efforts.
Testing Phases and Milestones
Independent of the specific needs of the application under test (AUT), Automated Software Testing will implement a structured technical process and approach to automation and a specific set of phases and milestones for each program. Those phases consist of:
• Phase 1: Requirements gathering — analyze automated testing needs and develop high-level test strategies
• Phase 2: Test case design and development
• Phase 3: Automation framework and test script development
• Phase 4: Automated test execution and results reporting
• Phase 5: Program review
Our overall project approach to accomplishing automated testing for a specific effort is listed in the project milestones below.
FIGURE 1: THE ATLM
Phase 1: Requirements Gathering
Phase 1 will generally begin with a kick-off meeting. The purpose of the kick-off meeting is to become familiar with the AUT's background, related testing processes, automated testing needs, and schedules. Any additional information regarding the AUT will also be collected for further analysis. This phase serves as the baseline for an effective automation program; i.e., the test requirements will serve as a blueprint for the entire Automated Software Testing effort. Some of the information you gather for each AUT might include:
• Requirements
• Test cases
• Test procedures
• Expected results
• Interface specifications
In the event that some information is not available, the automator will work with the customer to derive and/or develop it as needed. Additionally, this phase of automation will generally follow this process:
1) Evaluate the AUT's current manual testing process and determine:
   a) areas for testing technique improvement
   b) areas for automated testing
   c) the current quality index, as applicable (depending on AUT state)
   d) initial manual test timelines and duration metrics (to be used as a comparison baseline for ROI)
   e) the "automation index," i.e., what lends itself to automation (see the next item)
2) Analyze existing AUT test requirements
for their ability to be automated:
   a) If program or test requirements are not documented, the automation effort will include documenting the specific requirements that need to be automated, to allow for a requirements traceability matrix (RTM)
   b) Requirements are automated based on various criteria, such as:
      i) most critical feature paths
      ii) most often reused (automating a test requirement that only has to be run once might not be cost effective)
      iii) most complex areas, which are often the most error-prone
      iv) most data combinations, since testing all permutations and combinations manually is time-consuming and often not feasible
      v) highest-risk areas
      vi) test areas that are most time consuming, for example test performance data output and analysis
3) Evaluate the test automation ROI of each test requirement:
   a) Prioritize test automation implementation based on largest ROI
   b) Analyze the AUT's current life-cycle tool use and evaluate reuse of existing tools
   c) Assess and recommend any additional tool use or required development
   d) Finalize the manual test effort baseline to be used for the ROI calculation
A key technical objective is to demonstrate a significant reduction in test time. Therefore this phase involves a detailed assessment of the time required to manually execute and validate results. The assessment will include measuring the actual test time required for manually executing and validating the tests. Important in the assessment is not only the time required to execute the tests but also the time to validate the results. Depending on the nature of the application and tests, validation of results can often take significantly longer than the time to execute the tests.
Based on this analysis, the automator would then develop a recommendation for the testing tools and products most compatible with the AUT. This important step is often overlooked. When it is overlooked and tools are simply bought up front without consideration for the application, the result is less than optimal in the best case, and in the worst case the tools simply cannot be used. At this time, the automator would also
identify and develop additional software as required to support automating the testing. This software would provide interfaces and other utilities as required to support any unique requirements while maximizing the use of COTS testing tools and products. A brief description:
• Assess the existing automation framework for component reusability
• GUI record/playback utilities compatible with AUT display/GUI applications (as applicable, in rare cases)
• Library of test scripts able to interface to AUT standard/certified simulation/stimulation equipment and scenarios used for scenario simulation
• Library of tools to support retrieval of test data generation
• Data repository for expected results
• Library of performance testing tools able to support/measure real-time and non-real-time AUTs
• Test scheduler able to support distributed testing across multiple computers and test precedence
The final step for this phase will be to complete the configuration for the application(s), including the procurement and installation of the recommended testing tools and products along with the additional software utilities developed.
The products of Phase 1 will typically be:
1. Report on test improvement opportunities, as applicable
2. Automation index
3. Automated Software Testing requirements walkthrough with stakeholders, resulting in agreement
4. Presentation report on recommendations for tests to automate, i.e. test requirements to be automated
5. Initial summary of the high-level test automation approach
6. Presentation report on test tool or in-house development needs and associated recommendations
7. Automation utilities
8. Application configuration details
9. Summary of test environment requirements
10. Timelines
11. Summary of the current manual testing level of effort (LOE), to be used as a baseline for automated testing ROI measurements
Once the list of test requirements for automation has been agreed to by the program, they can be entered in the requirements management tool and/or test management tool for documentation and tracking purposes.

Phase 2: Manual Test Case Development and Review
Armed with the products of Phase 1, manual test cases can now be developed. If test cases already exist, they can simply be analyzed, mapped as applicable to the automated test requirements and reused, ultimately to be marked as automated test cases. It is important to note that for a test to be automated, the manual test case needs to be adapted, as computer inputs and expectations differ from human inputs. As a general best practice, before any test can be automated, it needs to be documented and vetted with the customer to verify its accuracy and that the automator's understanding of the test cases is correct. This can be accomplished via a test case walkthrough.
Deriving effective test cases is important for successfully implementing this type of automation. Automating inefficient test cases will result in poor test program performance.
In addition to the test procedures, other documentation, such as the interface specifications for the software, is also needed to develop the test scripts. As required, the automator will develop any missing test procedures and will inspect the software, if available, to determine the interfaces if specifications are not available.
Phase 2 also includes collection and entry of the requirements and test cases into the test manager and/or requirements management (RM) tool, as applicable. The end result is a populated requirements traceability matrix inside the test manager and RM tool that links requirements to test cases. This central repository provides a mechanism to organize test results by test cases and requirements.
The test case, related test procedure, test data input and expected results from each test case are also collected, documented, organized and verified at this
time. The expected results provide the baseline for determining the pass or fail status of each test. Verification of the expected results will include manual execution of the test cases and validation that the expected results were produced. In cases where exceptions are noted, the automator will work with the customer to resolve the discrepancies and, as needed, update the expected results. The verification step for the expected results ensures that the team will be using the correct baseline of expected results for the software baseline under test. Also during the manual test assessment, pass/fail status as determined through manual execution will be documented, and software trouble reports will be filed accordingly.
The products of Phase 2 will typically be:
1. Documented manual test cases to be automated (or existing test cases modified and marked as "to be automated")
2. Test case walkthrough and priority agreement
3. Test case implementation by phase/priority and timeline
4. Populated requirements traceability matrix
5. Any software trouble reports associated with manual test execution
6. First draft of the "Automated Software Testing Project Strategy and Charter" (as described in the project management portion of this document)
Phase 3: Automated Framework and Test Script Development
This phase allows for analysis and evaluation of existing frameworks and automation artifacts. It is expected that for each subsequent implementation, there will be software utilities and test scripts we'll be able to reuse from previous tasks. During this phase we determine which scripts can be reused. As needed, the automation framework is modified, and test scripts to execute each of the test cases are developed. Scripts are developed for each test case based on the test procedures for that case. The recommended process for developing an automated test framework or test scripts is the same as would be used for developing a software application. Key to the technical approach of developing test scripts is that implementations are based on generally accepted development standards; no proprietary implementation should be allowed.
This task also includes verifying that each test script works as expected.
The products of Automated Software Testing Phase 3 will typically be:
1. Modified automated test framework; reused test scripts (as applicable)
2. Test case automation—newly developed test scripts
3. High-level walkthrough of automated test cases with the internal or external customer
4. Updated requirements traceability matrix
Phase 4: Automated Test Execution and Results Reporting
The next step is to execute the automated tests using the framework and the related test scripts. Pass/fail status is captured and recorded in the test manager. An analysis and comparison of manual and automated test times and pass/fail results is conducted and summarized in a test presentation report. Depending on the nature of the application and tests, you might also complete an analysis that characterizes the range of performance for the application.
The products of Automated Software Testing Phase 4 will typically be:
1. Test report, including pass/fail status by test case and requirement (including the updated RTM)
2. Test execution times (manual and automated)—initial ROI reports
3. Test summary presentation
4. Automated Software Testing training, as required
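The initial ROI reports mentioned above compare manual and automated execution times, but the article does not give a formula. Purely as an illustration (both the numbers and the arithmetic are ours, not the author's), one simple way such a comparison is often expressed:

public class AutomationRoiSketch {
    public static void main(String[] args) {
        // Illustrative figures only; real values come from the Phase 1 manual
        // baseline and the measured automated runs.
        double manualHoursPerCycle = 120.0;     // manual execution plus validation per test cycle
        double automatedHoursPerCycle = 8.0;    // automated execution plus review per test cycle
        double cyclesPerYear = 12.0;
        double automationBuildHours = 400.0;    // one-time scripting and framework effort

        double yearlySavings = (manualHoursPerCycle - automatedHoursPerCycle) * cyclesPerYear;
        double roi = (yearlySavings - automationBuildHours) / automationBuildHours;

        System.out.printf("Hours saved per year: %.0f%n", yearlySavings);
        System.out.printf("First-year ROI: %.0f%%%n", roi * 100);
    }
}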
Phase 5: Program Review and Assessment
The goal of Automated Software Testing implementations is to allow for continued improvement. During the fifth phase, we review the performance of the automation program to determine where improvements can be made. Throughout the automation effort, we collect test metrics, many during the test execution phase. It is not beneficial to wait until the end of the automation project to document insights gained into how to improve specific procedures. When needed, we alter detailed procedures during the test program, when it becomes apparent that such changes are necessary to improve the efficiency of an ongoing activity.
The test program review also includes an assessment of whether the automation efforts satisfy completion criteria for the AUT, and whether the automation effort itself has been completed. The review could also include an evaluation of progress measurements and other metrics collected, as required by the program. The evaluation of the test metrics should examine how well the original test program time and sizing measurements compared with the actual number of hours expended and test procedures developed to accomplish the automation. The review of test metrics should conclude with improvement recommendations, as needed. Just as important is to document the activities that the automation effort performed well and did correctly, in order to be able to repeat these successful processes.
Once the project is complete, proposals for corrective action will surely be beneficial to the next project, but corrective actions applied during the test program can be significant enough to improve the final results of that test program. Automated Software Testing efforts will adopt, as part of their culture, an ongoing iterative process of lessons-learned activities. This approach encourages automation implementers to take responsibility for raising corrective action proposals immediately, when such actions potentially have significant impact on test program performance. This promotes leadership behavior from each test engineer.
The products of Phase 5 will typically be:
1. The final report

FIGURE 2: AUTOMATED SOFTWARE TESTING PHASES, MILESTONES AND QUALITY GATES

Quality Gates
Internal controls and quality assurance processes verify that each phase has been completed successfully, while keeping the customer involved. Controls include quality gates for each phase, such as technical interchanges and walkthroughs that include the customer, use of standards, and process measurement. Successful completion of the activities prescribed by the process should be the only approved gateway to the next phase. Those approval activities, or quality gates, include technical interchanges, walkthroughs, internal inspections, examination of constraints and associated risks, configuration management, tracked and monitored schedules and cost, corrective actions, and more, as this section describes. Figure 1 reflects typical quality gates, which apply to Automated Software Testing milestones.
Our process controls verify that the output of one stage represented in Figure 2 is fit to be used as the input to the next stage. Verifying that output is satisfactory may be an iterative process; verification is accomplished by customer review meetings, internal meetings, and comparing the output against defined standards and other project-specific criteria, as applicable. Additional quality gate activities will take place as applicable. For example:
Technical interchanges and walkthroughs with the customer and the automation team represent an evaluation technique that takes place during, and as a final step of, each Automated Software Testing phase. These evaluation techniques can be applied to all deliverables, i.e. test requirements, test cases, automation design and code, and other software work products, such as test procedures and automated test scripts. They consist of a detailed examination by a person or a group other than the author. These interchanges and walkthroughs are intended to help find defects, detect nonadherence to Automated Software Testing
standards, test procedure issues, and other problems. Examples of technical interchange meetings include an overview of test requirement documentation. When test requirements are defined in terms that are testable and correct, errors are prevented from entering the development pipeline, where they would eventually be reflected as possible defects in the deliverable. Automation design-component walkthroughs can be performed to ensure that the design is consistent with defined requirements, conforms to standards and the applicable design methodology, and minimizes errors. Technical reviews and inspections have proven to be the most effective form of preventing miscommunication and allowing for defect detection and removal.
Internal automator inspections of deliverable work products will take place to support the detection and removal of defects early in the development and test cycles; prevent the migration of defects to later phases; improve quality and productivity; and reduce cost, cycle time, and maintenance efforts.
A careful examination of goals and constraints and associated risks will take place, which will lead to a systematic automation strategy, produce a predictable, higher-quality outcome and enable a high degree of success. Combining a careful examination of constraints, as a defect prevention technology, with defect detection technologies will yield the best results. Any constraint and associated risk will be communicated to the customer, and risk mitigation strategies will be developed as necessary. Defined QA processes allow for constant risk assessment and review; if a risk is identified, appropriate mitigation strategies can be deployed. We require ongoing review of cost, schedules, processes and implementation, so that potential problems do not go unnoticed until it is too late; instead, our process assures that problems are addressed and corrected immediately.
Experience shows that it's important to protect the integrity of the Automated Software Testing processes and environment. Means of achieving this include testing any new technologies in isolation. This ensures, for example, that tools are validated to perform up to specifications and marketing claims before being used on any AUT or customer environment. The automator also will verify that any upgrades to a technology still run in the
current environment. The previous version of the tool may have performed correctly, and a new version may perform fine in other environments, but might adversely affect the team's particular environment. Additionally, using a configuration management tool to baseline the test repository will help safeguard the integrity of the automated testing process and help with roll-back in the event of failure.
The automator incorporates the use of configuration management tools to help control the integrity of the automation artifacts. For example, we will include all automation framework components, script files, test case and test procedure documentation, schedules and cost tracking data under a configuration management system. This assures us that accurate, up-to-date version control and records of the Automated Software Testing artifacts and products are maintained.

Schedules Are Defined, Tracked and Communicated
It's also important to define, track and communicate project schedules. Schedule task durations are determined based on past historical performance and associated best estimates. Also, any schedule dependencies and critical-path elements should be considered up front and incorporated into the schedule. If the program is under a tight deadline, for example, only the automation tasks that can be delivered on time should be included in the schedule. During Phase 1, test requirements are prioritized. This allows the most critical tasks to be included and prioritized for completion first, and less critical and lower-priority tasks to be scheduled later.
After Phase 1, an initial schedule is presented to the customer for approval. During the technical interchanges and walkthroughs, schedules are presented on an ongoing basis to allow for continuous schedule communication and monitoring. Potential schedule risks will be communicated well in advance, and risk mitigation strategies will be explored and implemented as needed. Any potential schedule slip can be communicated to the customer immediately and the necessary adjustments made accordingly.
Tracking schedules on an ongoing basis also contributes to tracking and controlling costs. Costs that are tracked can be controlled. By closely tracking schedules and other required resources, the automator assures that a cost tracking and controlling process is followed. Inspections, walkthroughs and other status reporting will allow for a closely monitored cost control tracking activity. Performance is continuously tracked, with the necessary visibility into project performance and the related schedule and cost. The automation manager maintains the record of delivery dates (planned vs. actual) and continuously evaluates the project schedule. This is maintained in conjunction with all project tracking activities, is presented in weekly status reports and is submitted with the monthly status report.
Even with the best-laid plans and implementation, corrective actions and adjustments are unavoidable. Good QA processes will allow for continuous evaluation and adjustment of task implementation. If a process is too rigid, implementation can be doomed to failure. When making adjustments, it's critical to discuss any and all changes with customers, and to explain why the adjustment is recommended and the impact of not making the change. This communication is essential for customer buy-in. QA processes should allow for and support the implementation of necessary corrective actions. This allows for strategic course correction, schedule adjustments, and deviation from the Automated Software Testing phases to adjust to specific project needs. This also allows for continuous process improvement and, ultimately, a successful delivery.

Elfriede Dustin is currently employed by Innovative Defense Technologies (IDT), a Virginia-based software testing consulting company specializing in automated testing.

REFERENCES
• This process is based on the Automated Testing Lifecycle Methodology (ATLM) described in the book Automated Software Testing.
• As used at IDT.
• Implementing Automated Software Testing, Addison-Wesley, Feb. 2009.
Best Practices
SCM Tools Are Great, But Won't Mind the Store
Joel Shore
When it comes to managing collaborative code development, it is not so much the choice of tools used, but rather, ensuring that everyone plays by the same set of rules. It takes only one cowboy coder to bring disaster to otherwise carefully constructed processes, or one rogue developer to bring a project to a standstill. Whether you've implemented an open-source tool such as CVS, Subversion, or JEDI, or a commercial tool, experts agree there had better be a man in a striped shirt ready to throw a penalty flag if short cuts are attempted.
"You can't have a free-for-all when it comes to version control; it's essential to have clearly defined policies for code check in and check out," says David Kapfhammer, practice director for the Quality Assurance and Testing Solutions Organization at IT services provider Keane. "We see companies installing sophisticated source code and configuration management systems, but what they neglect to do is audit their processes."
And it's often large, well-established corporations that should know better. Kapfhammer says he is working with "very mature, very big clients" in the healthcare and insurance industries that have all the right tools and processes in place, "but no one is auditing, no one is making sure everyone follows the rules."
It boils down to simple human nature. With enormous pressure to ensure that schedules are met (though they often are not), it's common for developers, in their well-meant spirit of getting the product out the door, to circumvent check-in policies.
A simple, low-tech way to avoid potential disasters, such as sending the wrong version of code into production, is a simple checklist attached to developers' cubicle walls with a push pin. "This method of managing repetitive tasks and making sure that the details of the development process are followed sounds completely silly, but it works," says Kapfhammer.
That seems simple enough. But the follow-up question, of course, is "OK, but exactly which procedures are we talking about?" That's where the idea of a process model comes into play.
The current darling of process models is the Information Technology Infrastructure Library (ITIL), a comprehensive set of concepts and policies for managing development, operations, and even infrastructure. There's serious weight behind ITIL; it's actually a trademark of the United Kingdom's Office of Government Commerce.
The official ITIL Web site, www.itil-officialsite.com (apparently they're serious about being THE official site), describes ITIL as the "most widely accepted approach to IT service management in the world." It also notes that ITIL aims to provide a "cohesive set of best practices, drawn from the public and private sectors internationally." That's especially important as projects pass among teams worldwide in an effort to keep development active 24 hours a day.
But again, it works only if everyone plays, and only if development tasks are granular enough to be manageable.
"If I had to boil everything I know about source code management and version control down to three words, it would be this," says Norman Guadagno, director of product management for Microsoft's Visual Studio Team System: "Every build counts."
Guadagno's view of the development world is simple: Build control and process automation must be ingrained in the culture of the team and not something that's left to a manager. "If it's not instilled in the culture of the team, if they don't
understand branching structures and changes, it all becomes disconnected from what makes great software."
Developers, he notes, often think about build quality as something that happens downstream. "The thinking is 'if I build and it breaks, it's no big deal.' But that's not true. Do it up front and you'll invariably deliver a better product and do it more timely."
Branching, while necessary in an environment where huge projects are divvied up among many developers, is the proverbial double-edged sword. Handled correctly, it simplifies development and maintenance. But implement a model based on bad architecture, and projects can collapse under their own weight.
Guadagno recalled a small ISV with a specialized product in use by five customers. Their model of building a custom version of the app for each customer and doing that with separate branches was a good idea, he says. "But what happens if they grow to 50 or a hundred customers?" Introducing new functionality eventually will cause every build to break—and it did. Spinning off different branches and hoping for the best when it comes to customizations simply didn't work.
"It turns out that this was not a problem that any source code management tool or process could solve. The problem was the original system architecture, which could handle the addition of a few new customers, but which never foresaw the impact of adding dozens."
Guadagno's point is that while tools, processes, and procedures are essential in assuring a successful outcome, poor design will still yield poor results. "Good builds," he says, "are not a substitute for good architecture."
Joel Shore is a 20-year industry veteran and has authored numerous books on personal computing. He owns and operates Reference Guide, a technical product reviewing and documentation consultancy in Southboro, Mass.
ST&Pedia
< continued from page 7 ed acceptance testing.
AUTOMATED BUSINESS-FACING TESTS
A final approach is to have some method of accessing the business logic outside of the GUI, and to test the business logic by itself in a standalone form. In this way, the tests become a sort of example of "good" operations, or an executable specification. Two open-source tools designed to enable this are FIT and Fitnesse. Business-facing tests also sidestep the GUI, but miss any rendering bugs that exist.
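FIT drives tests from tables in ordinary documents, with each table backed by a small fixture class. A minimal sketch of the kind of fixture FIT expects (the table columns and business logic here are invented for illustration):

import fit.ColumnFixture;

// Backs a FIT table with columns: price | quantity | total()
// FIT fills the public fields from each table row and compares the
// return value of total() against the expected column.
public class OrderTotalFixture extends ColumnFixture {
    public double price;
    public int quantity;

    public double total() {
        return price * quantity;
    }
}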
AUTOMATED TEST SETUP
Software can be complex and yet not have simple save/open functionality—for example, databases. Building import/export hooks and a "sample test database" can save time, energy, and effort that would otherwise have to be spent over and over again manually. Automated environment setup is a form of test automation.
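As one invented illustration of that idea, a test harness could rebuild a sample test database from an exported SQL script before a run; the file name, JDBC URL and driver (H2 in-memory here) are placeholders:

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class TestDatabaseSetup {
    // Rebuilds the sample test database from an exported SQL script.
    // Call this from your test framework's setup hook.
    public static void loadSampleDatabase() throws Exception {
        String script = new String(
                Files.readAllBytes(Paths.get("sample-test-db.sql")), StandardCharsets.UTF_8);
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:testdb", "sa", "");
             Statement stmt = conn.createStatement()) {
            for (String sql : script.split(";")) {          // naive split; fine for a simple dump
                if (!sql.trim().isEmpty()) {
                    stmt.execute(sql);
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        loadSampleDatabase();
    }
}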
LOAD AND PERFORMANCE TESTING
Measuring a page load—or simulating 100 page loads at the exact same time—will take some kind of tooling to do economically.
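A bare-bones sketch of the kind of tooling involved, timing one page load and then firing a batch of concurrent loads with the JDK alone (the URL and thread count are placeholders, and a real tool would do far more):

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PageLoadTimer {
    // Fetches the URL once and returns the elapsed time in milliseconds.
    static long timeOnePageLoad(String pageUrl) throws Exception {
        long start = System.nanoTime();
        HttpURLConnection conn = (HttpURLConnection) new URL(pageUrl).openConnection();
        conn.getResponseCode();          // forces the request; the body is ignored here
        conn.disconnect();
        return TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
    }

    public static void main(String[] args) throws Exception {
        String pageUrl = "http://localhost:8080/";   // placeholder
        System.out.println("Single load: " + timeOnePageLoad(pageUrl) + " ms");

        // Roughly simulate 100 loads at the same time.
        ExecutorService pool = Executors.newFixedThreadPool(100);
        for (int i = 0; i < 100; i++) {
            pool.submit(() -> {
                try {
                    System.out.println("Concurrent load: " + timeOnePageLoad(pageUrl) + " ms");
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}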
SMOKE TESTING
Frequently a team's first test automation project, automating a build/deploy/sanity-check project almost always pays off. (Chris once spent a few days building a smoke test framework that increased efficiency 600% for months afterward.)
COMPUTER ASSISTED TESTING
Jonathan Kohl is well known for using GUI scripting tools to populate a test environment with a standard set-up. The test automation brings the application to the interesting area, where a human being takes over and executes sapient tests in the complex environment. Any form of cybernetics can make a person better, faster, and less error prone. The one thing they cannot do is think. There's no shame in outsourcing required, repetitive work to a computer, but you will always need a human being somewhere to drive the strategy. Or, to paraphrase James Bach: The critical part of testing is essentially a conversation between a human and the software. What would it even mean to "automate" a conversation?
Index to Advertisers

Advertiser                                   URL                            Page
Automated QA                                 www.testcomplete.com/stp       8
Empirix                                      www.empirixfreedom.com         4
FutureTest 2009                              www.futuretest.net             20, 21
Hewlett-Packard                              hp.com/go/quality              36
I Am Not a Lead                              www.i-am-not-a-lead.com        24
Qualitest                                    www.QualiTest-int.com          4
Ranorex                                      www.ranorex.com/stp            15
Reflective Solutions                         www.stresstester.net/stp       11
SD Times Job Board                           www.sdtimes.com/jobboard       26
Seapine                                      www.seapine.com/stptcm08       2
Software Test & Performance                  www.stpmag.com                 6, 33
Software Test & Performance Conference       www.stpcon.com                 35
Test & QA Report                             www.stpmag.com/tqa             10

Learn Some New Tricks! Discover all the best software practices, gain an in-depth understanding of the products and services of leading software vendors, and educate yourself on a variety of topics directly related to your industry. Free White Papers at www.stpmag.com
Future Test
Automate Web Service Testing the Right Way Web services Web services are perfect candidates for automated testing. Compared with validating a UI, Web services Web services testing is quite simple: send a request, get a response, verify it. But it's not as easy as it may look. The main challenge is to identify the formal procedures that could be used for automated validation and verification. Other problems include dynamic data and differing request formats. Elena Petrovskaya In the project our team was working and Sergey Verkholazov on when this was written, there were many different Web services in need of manual validation, all valid responses are testing. We needed to check not only the considered correct and saved for future format of the request and response, but reference. The response of all next veralso the business logic, data returned by sions of that Web service will then be comthe services and the service behavior pared with these template files. There is after significant changes absolutely no need to write l were made in the archiyour own script for XML tecture. From the very file comparison; just find beginning we intended to the most suitable free tool automate as many of these and use it within your tests as possible. Some of scripts. However, this the techniques we used for method is not applicable automated verification of when dealing with dynamWeb services functionality ic data or if a new version were as follows: of Web services involves Response schema valchanges in the request idation. This is the structure or Web services most simple and formal architecture. procedure of response Check XML nodes structure verification. There and their values. In are free schema validators the case of dynamic data, it out there that can easily can be useful to validate l be built into automated only the existence of nodes scripts. However, schema validation is not using XPath query and/or check the sufficient of you also need to validate the node values using patterns. Web services logic based on data returned Validate XML response data against original data source. As a “data from a service. Comparison of the actual response source” it’s possible to use database, another service, XML files, non-XML files with the expected response. This is and in general, everything that can be a perfect method for Web service regresconsidered a “data provider” for Web servsion testing because it allows for checkices. In combination with response ing of both response structure and data schema validation, this method could be within the response. After the successful
Comparison of the actual response with the expected response. This is a perfect method for Web service regression testing because it allows checking of both the response structure and the data within the response. After successful manual validation, all valid responses are considered correct and saved for future reference. The responses of all subsequent versions of that Web service are then compared with these template files. There is absolutely no need to write your own script for XML file comparison; just find the most suitable free tool and use it within your scripts. However, this method is not applicable when dealing with dynamic data, or if a new version of the Web service involves changes in the request structure or the Web services architecture.
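As an illustration of the golden-file idea (the authors recommend an off-the-shelf comparison tool; this sketch uses only Python's standard library), assuming the approved response was saved as expected_response.xml and the new one as actual_response.xml, both hypothetical names:

```python
# Compare a fresh response against a saved "golden" template, canonicalizing
# both so attribute order and insignificant whitespace don't cause false failures.
import xml.etree.ElementTree as ET
import difflib

def canonical_lines(path):
    return ET.canonicalize(from_file=path, strip_text=True).splitlines()

diff = list(difflib.unified_diff(
    canonical_lines("expected_response.xml"),
    canonical_lines("actual_response.xml"),
    fromfile="expected", tofile="actual", lineterm=""))

if diff:
    print("\n".join(diff))
else:
    print("Response matches the saved template")
```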
Check XML nodes and their values. In the case of dynamic data, it can be useful to validate only the existence of nodes using an XPath query, and/or to check the node values against patterns.
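A minimal sketch of this kind of check, assuming lxml is installed; the element names and patterns are hypothetical:

```python
# Assert that required nodes exist and that their (dynamic) values match a
# pattern rather than a fixed literal.
import re
from lxml import etree

response = etree.parse("actual_response.xml")

checks = {
    "//Order/OrderId":     r"^\d+$",               # numeric id that changes per run
    "//Order/CreatedDate": r"^\d{4}-\d{2}-\d{2}",   # ISO date prefix
    "//Order/Status":      r"^(OPEN|CLOSED)$",
}

for xpath, pattern in checks.items():
    nodes = response.xpath(xpath)
    assert nodes, f"Missing node: {xpath}"
    value = nodes[0].text or ""
    assert re.match(pattern, value), f"{xpath}={value!r} does not match {pattern}"

print("All node and value checks passed")
```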
Validate XML response data against the original data source. As a "data source" it's possible to use a database, another service, XML files, non-XML files and, in general, anything that can be considered a "data provider" for the Web services. In combination with response schema validation, this method can be the perfect solution for testing any service with dynamic data. On the other hand, it requires additional skills to write your own scripts for fetching data from the data source and comparing it with the service response data.
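A minimal sketch of the idea, with sqlite3 standing in for whatever database actually feeds the service; the table, query and element names are hypothetical:

```python
# Cross-check response data against the service's backing data source.
import sqlite3
from lxml import etree

response = etree.parse("actual_response.xml")
user_id = response.xpath("//User/Id")[0].text

conn = sqlite3.connect("service_backend.db")
row = conn.execute(
    "SELECT name, email FROM users WHERE id = ?", (user_id,)).fetchone()
conn.close()

assert row is not None, f"User {user_id} not found in the data source"
assert response.xpath("//User/Name")[0].text == row[0], "Name mismatch"
assert response.xpath("//User/Email")[0].text == row[1], "Email mismatch"
print("Response data matches the data source")
```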
Scenario testing. This is used when the test case is not a single request but a set of requests sent in sequence to check Web services behavior. Usually Web services scenario testing involves checking one Web service operation using another. For example, you can check that the CreateUser request works properly by sending a GetUser request and validating the GetUser response. In general, any of the above techniques can be used for Web services scenarios. Just be careful to send the requests one after another in the correct order. Also be mindful of the uniqueness of test data during the test; the service might not allow you to create, for example, several users with the same name. It's also common for one request in a sequence to require data from a previous response. In that case you need to think about storing that data and using it on the fly.
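A minimal sketch of such a scenario, assuming the requests and lxml packages; the endpoints, payloads and element names are hypothetical:

```python
# CreateUser followed by GetUser: unique test data per run, with data from the
# first response carried into the second request.
import uuid
import requests
from lxml import etree

base = "https://example.test/userservice"
name = f"test_user_{uuid.uuid4().hex[:8]}"  # keep the user name unique per run

create_resp = requests.post(
    f"{base}/CreateUser",
    data=f"<CreateUser><Name>{name}</Name></CreateUser>",
    headers={"Content-Type": "application/xml"})
create_resp.raise_for_status()
user_id = etree.fromstring(create_resp.content).findtext("UserId")

get_resp = requests.post(
    f"{base}/GetUser",
    data=f"<GetUser><UserId>{user_id}</UserId></GetUser>",
    headers={"Content-Type": "application/xml"})
get_resp.raise_for_status()
assert etree.fromstring(get_resp.content).findtext("Name") == name
print(f"Scenario passed for {name}")
```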
All of these techniques were implemented in a home-grown tool that we use in our day-to-day work. Using that tool, we are able to automate about 70 percent of our test cases. This small utility was originally created by developers, for developers, to cover typical predefined test conditions, and it gradually evolved into a rather powerful tool. Having such a tool in your own organization could help your testing efforts well into the future. ý

Elena Petrovskaya is lead software testing engineer and Sergey Verkholazov is chief software engineer at EPAM Systems, working in one of its development centers in Eastern Europe and currently engaged in a project for a leading provider of financial information.
The Best Value of Any Testing Conference!

Attend STPCon Spring 2009
March 31 – April 2, 2009 • San Mateo Marriott • San Mateo, CA

3 Days: We know you're busy, so we packed the conference into 3 days to minimize your time out of the office. Mark your calendar!

60+ Classes: More than 60 classes will be offered, covering software test/QA and performance issues across the entire application lifecycle. There are 8 classes in a given time slot, which means you'll always find something that fits your needs.

Speakers: Our speakers are hand-picked for their technical expertise and ability to communicate their knowledge effectively, in ways that are most useful for integrating that knowledge into your daily life.

500+ Colleagues: Network with peers! Share experiences with instructors at our reception, during coffee breaks and on the exhibit floor.

29 Minutes: San Mateo, CA is midway between San Francisco and Silicon Valley.

6+ Tutorials: Full-day tutorials offer a complete immersion into a subject, allowing much greater detail, comprehension and (in some cases) practice of the subject matter than a one-hour session could convey.

5+ Tracks: We'll offer learning tracks specially designed for test managers and other specialists just like you, who must stay on top of the latest technologies and development techniques to keep your company competitive.

25+ Exhibitors: Meet the makers of the latest software and testing tools in our Exhibit Hall. Learn about their new products and features, test them out, and talk to the experts who built them.

$895 Value! The travel and tuition expense of attending STPCon is less than most other conferences. Check out JetBlue and Southwest for discount airfares. The earlier you sign up, the more you save. Your Full Event Pass registration includes: admission to workshops and technical classes; admission to keynotes and the Attendee Reception; admission to the Exhibit Hall; admission to Lightning Talks and the Tool Showcase; all conference materials; and continental breakfast, coffee breaks and lunch.

For more information, go to www.stpcon.com

Produced by BZ Media
ALTERNATIVE THINKING ABOUT QUALITY MANAGEMENT SOFTWARE:
Make Foresight 20/20. Alternative thinking is “Pre.” Precaution. Preparation. Prevention. Predestined to send the competition home quivering. It’s proactively designing a way to ensure higher quality in your applications to help you reach your business goals. It’s understanding and locking down requirements ahead of time—because “Well, I guess we should’ve” just doesn’t cut it. It’s quality management software designed to remove the uncertainties and perils of deployments and upgrades, leaving you free to come up with the next big thing.
Technology for better business outcomes. hp.com/go/quality ©2008 Hewlett-Packard Development Company, L.P.