TEST Magazine - April-May 2012


INNOVATION FOR SOFTWARE QUALITY Volume 4: Issue 2: April 2012

Cloud testing for an Agile world Fred Beringer reports from Silicon Valley

Inside: Automation tools | Model-based testing | Testing certification FEATURE FOCUS: Fuzzing web applications – The new web auditing: P20-23


FUZZ IT LIKE YOU MEAN IT! Codenomicon's Fuzz-o-Matic automates your application fuzz testing! Fuzz anywhere, early in the development process, get only relevant results and remediate on time. Access your test results through your browser, anywhere.

Get actual, repeatable test results. Save time and money by avoiding false positives! Find unknown vulnerabilities before hackers do. Not everyone has the staff, time, or budget to effectively use penetration testers and white hat security auditors. For executable but untested software, Fuzz-o-Matic gives you a longer lead time to remedy bugs before release.

Identify bugs easily

Verify your applications

Fuzz-o-Matic gives you the working samples that caused issues during testing. Each crash case also includes a programmer-friendly trace of the crash. Identifying duplicate crashes is effortless. In addition to plain old crashes, you also get samples that caused excessive CPU or memory consumption. After remediation, a new version can be tested to verify the fixes.

Test your builds with Fuzz-o-Matic. The world's best software and hardware companies have Software Development Lifecycle (SDLC) processes that identify fuzzing as a necessity to pre-emptively thwart vulnerabilities. These days simple functional testing done as Quality Assurance is not enough. Improve your application resilience by finding and remediating bugs that cause product crashes.

www.codenomicon.com/fuzzomatic


Leader | 1

A testing heroine

One of the first things I noticed when we launched TEST magazine was the relatively large number of women working in senior positions. While this certainly shouldn't be a big deal in this day and age, it is perhaps counter-intuitive in what is after all a very technical – some may go so far as to say nerdy – industry. Over the years I think the contributors to TEST have reflected this wealth of female talent and many women have written features on a range of diverse technical subjects – one feature even tackled the challenges of being a woman working in this technical field.

With all this in mind, it came as no great surprise to find out that a woman was the first 'tester' to find a computer bug; indeed, her team even coined the term. A friend pinged me a link on Facebook to a web page on the (US) Naval History & Heritage site (http://www.history.navy.mil/photos/pers-us/uspers-h/g-hoppr.htm) which outlined how US Navy Rear Admiral Grace Murray Hopper was responsible for finding the first ever computer bug in 1947 when she was working on the Mark II Aiken Relay Calculator. In 1944, the then Lieutenant Grace Murray Hopper was assigned to the Bureau of

Ordnance where she became involved in the development of an embryonic electronic computer. While this primitive electronic contraption was being tested at Harvard University, she found a moth trapped between points at Relay #70, Panel F that was causing the calculator to malfunction. The offending insect was removed and her team put out the word that they had "debugged" the machine; the rest, as they say, is history. The log of this event, with the moth taped to the entry, is still in the Naval Surface Warfare Center Computer Museum at Dahlgren, Virginia – or so I am informed. I am happy to report that Grace Murray Hopper, over the more than four decades that followed that first debugging, was at the forefront of computer and programming language progress, and she remained active in the industry until her death in January 1992. I had always wondered where the word 'bug' actually came from and now I know. Until next time...

Matt Bailey, Editor

© 2012 31 Media Limited. All rights reserved. TEST Magazine is edited, designed, and published by 31 Media Limited. No part of TEST Magazine may be reproduced, transmitted, stored electronically, distributed, or copied, in whole or part without the prior written consent of the publisher. A reprint service is available. Opinions expressed in this journal do not necessarily reflect those of the editor or TEST Magazine or its publisher, 31 Media Limited. ISSN 2040-0160


Editor Matthew Bailey matthew.bailey@31media.co.uk Tel: +44 (0)203 056 4599 To advertise contact: Grant Farrell grant.farrell@31media.co.uk Tel: +44(0)203 056 4598 Production & Design Toni Barrington toni.barrington@31media.co.uk Dean Cook dean.cook@31media.co.uk

Editorial & Advertising Enquiries 31 Media Ltd, Three Tuns House 109 Borough High Street London SE1 1NL Tel: +44 (0) 870 863 6930 Fax: +44 (0) 870 085 8837 Email: info@31media.co.uk Web: www.testmagazine.co.uk Printed by Pensord, Tram Road, Pontllanfraith, Blackwood. NP12 2YA



Join the Revolution. Don't let your legacy application quality systems hamper your business agility. At Original Software, we have listened to market frustrations and want you to share in our visionary approach for managing the quality of your applications. We understand that the need to respond faster to changing business requirements means you have to adapt the way you work when you're delivering business-critical applications. Our solution suite aids business agility and provides an integrated approach to solving your software delivery process and management challenges.

Find out why leading companies are switching to Original Software by visiting: www.origsoft.com/business_agility


Contents | 3

Contents... APRIL 2012

1 Leader column – Celebrating the perhaps surprising finder of the first ever bug.

4 News

6 Cover story – Cloud testing for an Agile world – Always at the cutting edge of IT, Silicon Valley in California is now the epicentre for using the cloud for load and performance testing for today's Web and mobile world. Fred Beringer reports.

10 Customer profile – Safe & secure trading TEST asks online trading company CMC Markets about its software development and testing processes and how it benefits from its relationship with its load testing tool supplier Facilita.

12 Testing: the keystone of the bridge between Dev and Ops DevOps brings together two traditionally divided sides of the IT house to overcome the disconnect that has grown out of competing motivations and processes. David Hurwitz explains how testing fits into the picture.

16 Are you ready to be Agile? How do you know when a project is ready to use Agile software development methodology? Dan Fuller, associate director Agile Centre of Excellence at Cognizant explains.


20 Fuzzing web applications – The new web auditing Fuzz testing is a new way to approach web application testing. Codenomicon web security experts Rikke Kuipers and Miia Vuontisjärvi report.

24 Modelling the drive for quality With a growing list of cars that have had all too public issues with quality, Peter Farrell-Vinay suggests that a model-based testing (MBT) approach could help validate the complex aspects of the vehicle design that are often at fault, in advance of manufacturing.


28 Supplier profile – Flexibility & functionality TEST talks to Christoph Preschern, head of sales at test automation specialist Ranorex.

30 Training corner – Qualifications – the choice is all yours Is there such a thing as too much choice? Angelina Samaroo shows the way through the maze of qualification and certification.

33 Design for TEST – App testing

With platforms proliferating, app developers need to take testing far more seriously, according to Mike Holcombe.

34 Product profile – Cloud-based collaboration


Test management is a crucial part of the testing process; without management you could have chaos. Here Automation Consultants explains its cloud-based test management tool, TestWave, which enables testers to work collaboratively by storing their work in a single place, letting them keep up with progress in real time.

38 Testing in a project environment IT analyst Martin Chalkley weighs up the options for testing software in a project environment when outsourcing software development and testing to contractors.

41 TEST Directory

48 Last Word – The end is nigh – Dave Whalen gives his verdict as he approaches the end of his first Agile project.



Will the Raspberry Pi spark a programming revolution?

The Raspberry Pi is a credit-card sized computer that plugs into any TV and a keyboard, which the makers – a Cambridge-based charity – want to see "being used by kids all over the world to learn programming".

The idea for a tiny and cheap computer for kids came in 2006, when Eben Upton and his colleagues at the University of Cambridge's Computer Laboratory became concerned about the year-on-year decline in the numbers and skills levels of the A Level students applying to read Computer Science. "From a situation in the 1990s where most of the kids applying were coming to interview as experienced hobbyist programmers, the landscape in the 2000s was very different; a typical applicant might only have done a little web design," he commented. The Cambridge colleagues observed that something had changed the way that kids were interacting with computers. A number of problems were identified: the colonisation of the ICT curriculum with lessons on using Word and Excel, or writing webpages; the end of the dot-com boom; and the rise of the home PC and games console to replace the Amigas, BBC Micros, ZX Spectrums and

Commodore 64 machines that people of an earlier generation learned to program on. The makers describe the Raspberry Pi as "a capable little PC which can be used for many of the things that your desktop PC does, like spreadsheets, word-processing and games. It also plays high-definition video."

QA Systems buys Cantata++ business from IPL

QA Systems GmbH, a Stuttgart-based provider of software development solutions, says it has purchased the Testing Products Business Unit from UK IT services company IPL, thereby launching QA Systems Ltd. IPL's Testing Products Business Unit, based in its head offices in Bath, was responsible for the development and worldwide partner network that distributed the C/C++ and Ada embedded systems unit and integration testing tools Cantata++ and AdaTEST.

Having seen an increasing demand for testing products for business and safety-critical components and embedded systems, QA Systems says it saw an opportunity to combine IPL's dynamic testing tools with its current portfolio of requirements engineering and static testing tools. In order to do so freely and determine the direction of research and development activities, QA Systems has purchased the entire Testing Products Business Unit from IPL and set up QA Systems as a result – an independent UK-based company with a global remit predominantly targeting the aerospace, defence, automotive, engineering and healthcare sectors. Andreas Sczepansky, CEO and president of QA Systems, comments, "A historical focus on static testing has limited our market reach. However,

having seen the market potential of IPL's dynamic testing software products, we were keen to be able to combine these product lines with our own so that we could go to market with an all-encompassing proposition. In order to maintain control over the ongoing development so that it would suit our needs and ambitions, we decided that the most practical option would be to offer to purchase the business unit from IPL and establish a new UK-based company from the existing resource." The employees, intellectual property and assets previously held within IPL's Testing Products Business Unit are being transferred to the newly-established QA Systems Ltd, which will also be based in Bath. All existing customers of AdaTEST and Cantata++ products will continue to be supported by the new entity and will benefit from the greater resources of the overall QA Systems group. Existing partners already reselling the two product lines will also continue to be supported by QA Systems Ltd. Indeed, as part of its global strategy, QA Systems Ltd is actively seeking further partners to assist in territorial and vertical market reach.

As for the OS and programming language: “We’ll be using Fedora as our recommended distribution. It’s straightforward to replace the root partition on the SD card with another ARM Linux distro if you want to use something else. The OS is stored on the SD card. By default, we’ll be supporting Python as the educational language. Any language which will compile for ARMv6 can be used with the Raspberry Pi, though; so you’re not limited to using Python.” Computing at School is writing the user guide and programming manual, but others are already seeing the potential of the Raspberry Pi. “We’re aware of a few books being planned and written around the Raspberry Pi,” says the maker, “and others have already started to produce some excellent tutorials including video. We’re also working with partners to use it as a teaching platform for other subjects, including languages, maths and so on. Once we launch, we hope that the community will help bodies like Computing at School put together teaching material such as lesson plans and resources and push this into schools. In due course, the foundation hopes to provide a system of prizes to give young people something to work towards.”
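To give a flavour of that choice of Python as the educational language, the short listing below is the sort of first program a beginner on the Pi might write. It is purely illustrative and not taken from the foundation's materials.

# A first program of the kind a young Raspberry Pi user might write:
# print the seven times table.
def times_table(n, upto=12):
    # Return the multiplication table for n as a list of strings.
    return ["{0} x {1} = {2}".format(n, i, n * i) for i in range(1, upto + 1)]

for line in times_table(7):
    print(line)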

Finland has the cleanest computers in the world

Statistics recently published by forensics malware tools vendor Norman ASA show that Finland is the safest country in the world in terms of malware infections. The statistics are also backed up by a Microsoft regional threat assessment report from last year, as reported by MSNBC.

According to the report, Finland is, in general, thought to be 'mostly harmless' in terms of cyber security: thanks to collaboration between the security community, CERT-FI and Finnish ISPs, there are relatively few attacks originating in Finland. CERT-FI and other similar actors handle vast volumes of incident report data with AbuseHelper. Codenomicon has expanded the toolset to Abuse Situation Awareness, gaining fully automated sharing of actionable incident data and situational awareness over malicious activity. As a long-standing homeland for internationally acclaimed security companies, Finland has a strong cyber security culture, which shows in the recent studies. Cyber threats are taken seriously, which has resulted in R&D efforts to create better security. This has ultimately led to the creation of many international companies working with cyber security software.


TestPlant, the developer of robotic test tool product eggPlant, has been named runner-up in the UK's National Challenge: Exporting for Growth London Prize. The company's chief executive George Mackintosh accepted the award from Lord Green, Trade and Investment Minister, at a ceremony in London in February. The award also secures grants for UK Trade & Investment (UKTI) support and advice to help the continued growth of TestPlant's export business. Mackintosh commented: "Being recognised for this award is great news for TestPlant, especially given the calibre of the other finalists. As well as promoting and supporting our export drive, this is also a huge stamp of approval for eggPlant in the UK – a core market for software innovation and design. The help from UKTI and sponsors HSBC and PwC will be used to accelerate our expansion into India and China. The whole process has been of real business benefit to us." Parveen Thornhill, regional director for UKTI

London, added: “The standard of entries was incredibly high and the creativity and innovation of the ideas was fantastic.” Since it was founded in 2008, TestPlant has seen year-on-year success with an average compound annual revenue growth rate of over 80 percent. In the last year alone it has achieved a 104 percent growth rate. Today 85 percent of its sales are to export markets, including America, Canada, China and India. This latest recognition complements existing awards and plaudits from the British Venture Capital Association and the London Export Awards. The company’s other market-firsts include becoming the only company in its market niche with a patented technology, secured in 2011 from the US Patent and Trademark Office for eggPlant, its intelligent robotic software testing tool. “The recognition of this UKTI Export for Growth prize reinforces our position as a market leader and innovator, and also reflects the global demand for automated software testing and validation,” concludes

Germany gets a Cloud Testlab

Cloudgermany.de has announced that it will house its Cloud Testlab within Interxion's Frankfurt data centre. According to the company, the proof-of-concept test environment allows service providers and system integrators to test and develop cloud services at high speed and with best-in-class performance guarantees. By locating its platform at Interxion, Cloudgermany combines the performance of its native cloud software and state-of-the-art Infrastructure as a Service platform with the connectivity, high availability, and reliability of Interxion's Frankfurt data centres. Cloudgermany.de offers its services in full compliance with German data protection rules. Between 1 March and 30 April 2012, companies can try a Software as a Service solution – including Microsoft Windows Server® – via a trial account in a secure environment in real time for free. Additionally, users can migrate their own applications, such as CRM or ERP systems, onto cloudgermany.de's infrastructure and test the performance. The customer doesn't even have to worry about basic parameters like memory, server capacity, data centre capacity, or data backup, as all elements are provisioned as a service by cloudgermany.de. Instead, the enterprise customer can fully concentrate on its core business and allocate IT resources accordingly. The cloudgermany.de platform can also be replicated at all of


Interxion's data centres across eleven European countries, which provide equally high-performance environments for enterprise applications. Interxion data centres offer not only the highest safety and availability, but also direct access to other companies based within the data centre, including service providers, systems integrators, connectivity providers, and the Internet Exchange DE-CIX. "With the establishment of the Cloud Testlab, we would like to make it easier for medium-sized companies in particular to take the first step toward cloud computing by offering them all the programs and applications they already use on a daily basis on the cloudgermany.de infrastructure platform. Thus we enable them to dynamically tailor IT infrastructure to fit fluctuations in demand and to avoid engineering to meet peak demands. We have been using Interxion's infrastructure since May and have had no downtime so far," commented Udo Würtz, advisor of the management board at cloudgermany.de. Peter Knapp, German managing director at Interxion, added, "The rapid adoption of cloud computing has seen cloud-based services become a central part of business strategy. The launch of this Cloud Testlab reinforces our commitment to the cloud and offers both customers and prospects the opportunity to test and develop cloud services at high speed and with best-in-class performance guarantees."

NEWS

Testing export success

TestPlant was named runner-up in the UK's National Challenge: Exporting for Growth London Prize. Left to right: Sir Eric Peacock (event chair), Parveen Thornhill and George Mackintosh from TestPlant, with Lord Green.

Mackintosh. "TestPlant is the fastest-growing business in this sector, which is driven by the increasing software complexity and the diversity of devices such as smartphones and tablets, which are making the testing of business-critical applications fast-paced, consumer-facing and ever more important."

Reflective partners with TEAM Global

Load and performance testing specialist Reflective Solutions has announced that consulting company TEAM Global is the latest organisation to sign up as a Reflective Solutions partner. TEAM Global, with offices in Manila, Tokyo, Dubai and the USA, is a leading provider of performance testing services for Maximo®, Infor, Lawson, SAP and Oracle Financials. TEAM Global says it chose Reflective as a partner because of its long and deep expertise in performance testing. The ease and speed of use of Reflective's StressTester and Sentinel will enable it to improve the speed of delivery of its performance testing and monitoring services. Richard Taggs, president of TEAM Global, commented: "We rigorously tested and evaluated StressTester and were amazed at how readily it handled all aspects of our IBM Maximo test applications. We are actively bringing its performance testing capabilities to both existing and new clients, to provide them with assured application performance, previously only attainable at much greater cost and over longer testing cycles." Peter Richardson, director of Global Alliances at Reflective Solutions, adds: "We are delighted that TEAM Global has joined the ever-growing list of specialist solution providers who have adopted StressTester. TEAM Global have deep expertise and are well established in a number of markets. Their presence spans the Middle East, Asia Pacific as well as the US. We view this as being an ideal partnership, and one that will help fuel the growth of both of our businesses."



6 | Cover story

Cloud testing for an Agile world Always at the cutting edge of IT, Silicon Valley in California is now the epicentre for using the cloud for load and performance testing for today's Web and mobile world. SOASTA’s Fred Beringer reports.

The Internet has revolutionized the way we conduct business, consume information and interact with each other. For some businesses, the Internet has become the primary source of revenue and the main customer-facing outlet for communicating, advertising and brand management. Applications have become an aggregation of content, information, data, and media, while the architecture has become more complex, and more dependent on external third parties. The Web and the mobile Web have forever changed the application landscape, requiring a new emphasis on load and performance testing. These changes can be seen everywhere today. Massive amounts of data are being stored and accessed,


and online applications have to adapt to this growth. In parallel, the behaviour of Internet users is becoming increasingly eventdriven. Applications will often need to serve a much larger than normal customer base for perhaps a few weeks or sometimes just a few critical hours. We’re seeing an increasing unpredictability in traffic patterns as social media outlets, such as Facebook and Twitter, can drive more traffic to a web site than it’s prepared to handle. A company will spend millions of dollars creating engaging content and running promotional campaigns to draw users to its site. But if the site crashes, or response time crawls, all that energy and money will be wasted. Even public perception of the company could be seriously downgraded. This unpredictability must be taken into consideration when shaping up a performance testing strategy.

Four key requirements for effective performance testing

1. Scale: The test must simulate a realistic volume of user traffic – optimally, at minimum the average number of users expected to use the application on a daily basis. But not only the average! There is also a need to test for unexpected traffic spikes. If the web site or application is accessed from all parts of the world, the test must replicate global traffic. Businesses need to understand the local performance of their website, as some parts of the world might generate more revenue than others, and in general they want to optimise those high-revenue locations. Generating a high volume of concurrent users with traditional performance testing tools requires a substantial investment in hardware. To reduce costs, the testing is often diverted to a small subset of the



production environment. Tests are done on a small scale and the results are extrapolated to peak production numbers. This is problematic for a number of reasons. The lab is significantly different from production in terms of hardware, scale, configuration, and activity at any given time (batch jobs, shared services, etc). Perhaps more critical is the fact that real users come from outside the firewall, where latency is an important factor in the customer experience. While useful for extracting some types of results, testing in the lab cannot answer questions about production performance or capacity with a high degree of accuracy and confidence.

2. Speed: Websites can change every day, and even hourly for the largest e-commerce outlets, and all these changes have an effect on performance. Software development builds happen multiple times per day, releases to customers happen frequently and the rate of change is high. Without agile test practices and the right tools, applications may leave development without being tested, or worse, the testing itself will slow down the release process. All processes and tools need to be able to adapt to this fast-paced software development lifecycle. Lack of speed and reactivity is the main inhibitor to agility.

3. Real-time results for actionable intelligence: The first objective of performance testing is to gather relevant data from every component involved in the overall performance of the application under test. This is raw and dumb data. A performance test will end up with LOTS of data, easily generating terabytes. But it is USELESS data if there is no mechanism to transform it into actionable information. In order to deal with this "big data" problem, results need to be aggregated and displayed in real-time. Performance engineers want to be able to answer questions such as:
• What is the relationship between the number of virtual users and the memory consumed by the application server?
• How much server capacity is left when reaching 10,000 concurrent users during the test?
• What is the correlation between the process count on the database server and the overall throughput?
• At what stage during the test is there a drop in overall response time? Is it possible to correlate this drop with other data to understand this behaviour?
• What is the correlation between an increase in response time and the number of errors coming from the SSL server?
• Why is 90 percent of response time for the overall homepage taken by this particular file? Where does it come from? Why does it take longer than the other page assets?
Traditional performance testing approaches have always struggled with analytics. Combining, aggregating, and correlating these performance results requires high computing power. Performance engineers are typically left doing their analysis offline and manually, with a high potential for human error and ultimately low confidence in the result.

4. Cost: This is a key inhibitor to performance testing excellence. Initial hardware investment is very high, as is the subsequent maintenance cost. Testing organisations have to deal with memory and CPU upgrades, and operating system and disk maintenance. Investment in dedicated and skilled IT people to handle the environment is not cheap. Plus, three years after the initial investment, the hardware is obsolete and the requirement for load has become so much higher. What's the answer?

Fig 1
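To make the "actionable intelligence" point concrete, here is a minimal Python sketch answering the first of the questions listed under requirement 3 from raw samples. The sample data and field names are hypothetical, not taken from any particular tool.

from math import sqrt

# Hypothetical samples collected during a test run:
# (virtual_users, app_server_memory_mb)
samples = [(1000, 2100), (5000, 4800), (10000, 9500), (15000, 15200)]

def pearson(xs, ys):
    # Pearson correlation coefficient, computed from first principles.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

users = [s[0] for s in samples]
memory = [s[1] for s in samples]
print("users vs memory correlation: {0:.3f}".format(pearson(users, memory)))

A real cloud testing platform streams thousands of such samples per second and computes these aggregates continuously, but the underlying arithmetic is no more exotic than this.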

Cloud testing – performance testing for today's world

Cloud testing was born out of cloud computing's promises, and today it means a fast, scalable and affordable approach to testing web or mobile applications using cloud computing. Specifically, leveraging the cloud to do this type of testing yields a

number of benefits:
• Tests can be run at a level only observed on production systems. Typically companies will want to test at 100 percent of typical traffic levels, but cloud testing allows them to test at 150, 300, 500 percent! Using cloud resources, they're able to generate from hundreds to millions of users.
• It is possible to generate a realistic and geographically dispersed load. Today's online world requires real-world traffic.
• It is possible to test both inside and outside the firewall, which brings the most efficient and effective results.
• The cost of tests is definitely lower because businesses can rent the hardware and only pay for actual usage.
• It allows testers to respond to fast development cycle times by making agile performance testing a reality.
• For the first time, performance tests can run in production. This is the only way to gain full confidence in the application.
Figure 1 describes a fairly common setup for cloud testing. Load injectors are deployed in public clouds and are able to target a website whether it is located in a data centre or in the cloud. Performance results are typically gathered from each cloud location and aggregated in a central location.
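As a rough illustration of what one of those load injectors does, here is a minimal sketch using only the Python standard library. The target URL is hypothetical, and real tools add ramp control, geographic coordination and central result aggregation on top of this core loop.

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET = "http://example.com/"  # hypothetical application under test

def one_request(_):
    # Time a single request; record the error instead of crashing.
    start = time.monotonic()
    try:
        with urlopen(TARGET, timeout=10) as resp:
            resp.read()
        return time.monotonic() - start, None
    except Exception as exc:
        return time.monotonic() - start, exc

with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(one_request, range(200)))

latencies = [t for t, err in results if err is None]
errors = [err for _, err in results if err is not None]
print("ok={0} errors={1} avg={2:.3f}s".format(
    len(latencies), len(errors),
    sum(latencies) / max(len(latencies), 1)))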

Continuity between testing in the performance lab, in a staging environment and in production

Historically, because of scale and lack of real-time results, companies were always testing at a fraction of the expected load in a



8 | Cover story



dedicated performance lab, and then extrapolating results to gain confidence that their live site would be able to handle the load. Not a very good practice, but they had no other choice. They have an alternative today as cloud testing brings them everything they need to test in production. Scale introduces the potential for chaos. The number of elements, from hardware to software to network, that can impact performance grows exponentially as an application scales and the inter-dependencies between the various components grow increasingly complex. Testing at scale and in production requires an infrastructure to support the traffic, a means to manage the deployment and execution of the test, and analytics that can massage the massive amounts of data and deliver actionable information. Of course, performance engineers with the experience to navigate through complex environments are required as well.

Testing in the performance lab

Ongoing performance testing in a lab allows engineering teams to assess performance degradation or improvement over time, and helps catch any show-stopping performance bugs before they reach production. The issues found are inherent to the application itself, such as:
• Memory leaks;
• Inefficient database schema or queries;
• Garbage collection issues;
• Excessive CPU consumption;
• Slow pages.
Tests in the lab are absolutely necessary and don't require a large load to identify issues and bottlenecks within the application itself.
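The first item on that list shows why a large load is unnecessary in the lab. The sketch below uses Python's tracemalloc to watch memory growth across repeated calls; handle_request is a hypothetical stand-in for the code under test, with a deliberate leak planted for demonstration.

import tracemalloc

_cache = []

def handle_request(payload):
    _cache.append(payload * 100)  # deliberate leak for illustration

tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()
for _ in range(10000):
    handle_request("x")
after, _ = tracemalloc.get_traced_memory()
print("net growth after 10k requests: {0:.1f} MB".format((after - before) / 1e6))
# Steady growth across repeated batches is the signature of a leak.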

Testing in a staging environment

A staging environment brings you closer to production-like circumstances and allows you to perform important tests such as verifying capacity, establishing correct configuration, stressing available resources and verifying performance expectations. The issues you'll find and fix are a level higher and cross both the application and the infrastructure. It will allow you to find issues such as:
• Bad configuration settings;
• Slow third-party plug-ins;
• Auto-scaling failures;
• Security bottlenecks;
• Inadequate server resources;
• Low database thread counts.

There are still limitations such as availability of the environment and exclusion of key components, but it tests closer to full scale.

Testing in production

Testing in production is the best way to get a true picture of capacity and performance in the real world. It is the only way to ensure that online applications will perform as expected and to certify them in normal and extreme conditions by testing them beyond expected limits. There are many issues you will catch that cannot be found when testing in a lab or in a staging environment:
• Latency between systems;
• Bad network configuration;
• Low network bandwidth;
• Bad load balancer configuration settings;
• Firewall max capacity;
• Bad CDN file placement;
• Wrong DNS routing;
• Unbalanced web servers;
• The effect of batch jobs running in the background.
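The first item, latency between systems, is a good example of a production-only measurement. A minimal probe might look like the following Python sketch; the host and port are hypothetical.

import socket
import time

def connect_latency(host, port, attempts=5):
    # Measure TCP connect time, a crude proxy for inter-system latency.
    timings = []
    for _ in range(attempts):
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=5):
            timings.append(time.monotonic() - start)
    return min(timings), sum(timings) / len(timings)

best, avg = connect_latency("example.com", 80)
print("best={0:.1f}ms avg={1:.1f}ms".format(best * 1000, avg * 1000))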

Cloud testing is meant for the agile world

Performance engineering organisations have struggled in the past due to the inertia created by their dependence on a fixed hardware environment. It is difficult to rapidly deploy a reasonable number of servers, and without real-time results it's impossible to provide quick turn-around of information. Cloud testing changes the game because it is so flexible, scalable and adaptable that you can run your tests for every build, every change in your store inventory, every change in your operational infrastructure, at any time, at any scale.

Test performance through the whole application life cycle

All the various types of performance tests are feasible, and they can be run from the development stage, where low-volume testing is needed, all the way to production at full scale. Here are just a few types that you can run:
Baseline: the most common type of performance test. Its purpose is to achieve a certain level of peak load on a pre-defined ramp-up and sustain it while meeting a set of success criteria, usually acceptable response times with no errors.
Spike: simulates steeper ramps of load, and is critical to ensuring that an application can withstand unplanned surges in traffic, such as users flooding into a site after a commercial or email campaign.



Cover story | 9

Continuous performance testing

One of the challenges with continuous testing, whether it is for functional or performance testing, is to have a suitable environment of the right size, ready to run tests at all times. The investment is difficult to justify. Tests won't consume 100 percent of the hardware capacity, and spare cycles are difficult to use for other purposes as the environment needs to remain clean. Cloud computing offers a fresh environment every time a test is run. As soon as the setup is performed there is no need to worry about it for a very long time!

Fig 2

Test from all locations

Relying on a central data centre to generate load doesn't account for true global traffic. Leveraging a grid provided by a cloud computing vendor with a global presence reflects global deployment. If the application is accessed from the US, Europe and Asia, generating user load from all these locations will give a true and realistic picture of performance. This is an increasingly important factor for companies doing business on the Internet, as they want to understand the impact of distributed traffic on their application.

Fig 3

A spike test might ramp to the baseline peak load in half of the time, or you may initiate a spike in the middle of a steady state of load.
Endurance: helps ensure that there are no memory leaks or stability problems over time. These tests typically ramp up to baseline load levels, and then run for anywhere from two hours to 72 hours to assess stability over time.
Failure: ramps up to peak load while the team simulates the failure of critical components such as the web, application, and database tiers. A typical failure scenario would be to ramp up to a certain load level, say 25,000 concurrent users, and while at steady state the team would pull a network cable out of a database server to simulate one node failing over to the other. This would ensure that failover took place, and would measure the customer experience during the event.
Stress: finds the breaking point for each individual tier of the application, or for isolated pieces of functionality. A stress test may focus on hitting only the home page until the breaking point is


observed, or it may focus on having concurrent users logging in as often as possible to discover the tipping point of the login code.
Diagnostic: designed to troubleshoot a specific issue or code change. These tests typically use a specially designed scenario outside of the normal library of test scripts to hit an area of the application under load and reproduce an issue, or verify issue resolution.
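The difference between these profiles is easiest to see as a ramp schedule. The Python sketch below expresses the baseline and spike shapes described above as (minute, concurrent users) pairs; the numbers are illustrative only.

def baseline(peak, ramp_minutes=30, hold_minutes=60):
    # Ramp linearly to peak, then sustain it for the hold period.
    steps = [(m, peak * m // ramp_minutes) for m in range(ramp_minutes + 1)]
    steps.append((ramp_minutes + hold_minutes, peak))
    return steps

def spike(peak, ramp_minutes=15, hold_minutes=15):
    # "Ramp to the baseline peak load in half of the time."
    return baseline(peak, ramp_minutes, hold_minutes)

print(baseline(25000)[:3], "...")
print(spike(25000)[:3], "...")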

Real-time performance analysis

Not only does cloud computing help generate load, it also brings enough computing power to compute performance results in real-time. By receiving, absorbing and understanding the data, it is possible to make decisions and take actions. For example, if the environment is experiencing a surge of traffic going into one web server, the performance engineer is able to see the full performance picture coming not only from the application but also from the whole infrastructure, and to identify any bottlenecks quickly. In that particular case, it could be a misconfiguration in a load balancer sending too much traffic to one particular web server.

Encourage collaboration between teams

The fact that organisations are able to get results in real-time and take action allows tests to be run in a very collaborative way. Imagine having developers, testers, DBAs and IT people all in the same room watching a test run, able to discuss results and make changes in real time. That's the real way of doing performance testing. With traditional performance testing, companies had to wait days to get a fairly accurate picture of the application's performance. There was a gap in the discussion, and the usual question from developers or operations people was: "Go run the test again, it can't be right." By having a way to test and get results in real-time, the discussion keeps going. DBAs can make changes to their database queues on the fly; IT can make changes to the load balancer in real-time as the test is running. Real-time results are conversation enablers, which is the cornerstone of any Agile organisation.

Fred Beringer Vice President Business Development SOASTA www.soasta.com



10 | Customer profile

Safe & secure trading TEST asks online trading company CMC Markets about its software development and testing processes and how it benefits from its relationship with its load testing tool supplier Facilita.

Facilita says it is determined to take the lead in developing tools and techniques to meet the load testing needs of developers and testers. It provides a combination of openness and extensibility, functionality, and wide technical coverage and support. The company's approach is flexible, friendly and responsive, according to CEO Gordon McKeown.

TEST asked 'currency management corporation' CMC Markets, a company that relies heavily on its in-house software development, how it benefits from its relationship with Facilita, which provides the company with its load testing tools.

TEST: What is the history of CMC?
CMC: CMC Markets started out as a foreign exchange broker in 1989 when entrepreneur Peter Cruddas started the company with a £10,000 investment. Cruddas called his firm the 'Currency Management Corporation' and set up a small office with just a desk and a phone. In 1996 he launched the world's first online forex trading platform. Since then CMC has evolved into a leading online CFD and financial spread betting provider, with over 26 million trades executed annually.


TEST: What software products deliver your online services and how are they developed?
CMC: Our NextGen trading platform is written across a combination of Java and .NET services, running on Linux and Windows hosts. Front ends have been developed for web (Flex) and mobile (iPhone/iPad/Android) users. Inter-service communication uses the Fuse ActiveMQ message broker, while a combination of MSSQL/Oracle databases and Coherence is used for data persistence and caching. They are all developed in-house using the Agile methodology across development teams in London, Vienna and Bangalore.
TEST: CMC is one of the world's leading online financial services businesses and renowned for technical innovation. How do you ensure software quality and system reliability are maintained when the pace of innovation is so high?
CMC: Delivery teams across different areas of functionality (Trading, Client Data Services, Front ends, Core frameworks, Pricing & Risk, Market Data, Infrastructure) deliver functionality to shared test environments which are built and supported using the same tools and procedures as our production environment. The sign-off process follows the progress of the deployments



TEST Customer Profile | 11

across test environments – local testing, integration testing, NFR, UAT, preprod, production. The automation test suite ensures a low incidence of bug regression, while business users test application functionality and NFR tests ensure performance, capacity and reliability are as expected. The NFR tests follow anticipated usage in the production environment and known failure scenarios, using statistics gathered from the production environment.
TEST: Where does load and performance testing fit in?
CMC: Load and performance testing is run before, alongside or after the UAT test phase depending on the functionality being delivered. Testing targets the messaging backbone and each FE server to replicate the load from a message or user perspective. Specific application servers can be targeted for testing to capacity without saturating the entire environment or requiring hardware to be scaled in each aspect. Tests run to date have included capacity tests of FE data and pricing channels, the environment as a whole, the JMS, and the common infrastructure. Releases are considered based on the risk implied in changes; generic soak tests are run daily, and tests for more specific components are created as and when needed. Data quality is a key issue – most simple issues with applications are easy to detect in NFR. Complex issues regarding data distribution, per-account trades and frequency require targeted tests and preparation.
TEST: You chose Facilita as your load testing tool supplier; three years on, how has that relationship worked out?
CMC: We would like to think that the relationship with Facilita has been beneficial for both parties. We have very definite ideas of how we would like to be able to use the load test tools, develop the tests, log datapoints and view the results. Feedback to Facilita has been received well and acted upon, and new features have been included allowing us to graph datapoints inside tests, merge graphs and coordinate transactions across threads of execution, while feature releases are frequent and well supported, by email, screen sharing and telephone.

www.testmagazine.co.uk

TEST: You deliver systems using both .NET and Java platforms and a variety of web and mobile client-side technologies. Does Facilita's Forecast tool meet these challenges?
CMC: Forecast's ability to give low-level control of threads inside the Java VM allows us to scale our tests up to a high target load using a small injector footprint. Our environment's flexibility has a lot to do with a language-agnostic message format: using Google protocol buffers we can test a .NET service using a Java client with no impact. Our tests scale well across environments, injectors and VMs, all running inside the Forecast injector process.
TEST: Integrating load testing with an Agile development cycle has been crucial to your success. Indeed, you run load tests as part of your daily build process. Some traditional load testing tools struggle in such an environment. How has Facilita Forecast performed?
CMC: Key abilities that helped us test inside an Agile environment are being able to define a 'daily' test load which broadly soaks all services, and specific tests for new or problematic functionality. Forecast allows us to define clear test scenarios and run them interchangeably. Integration with source control and ease of backups allow us to run the same test from multiple workstations, with collaborative test development by multiple staff. And repeatable tests allow us to clearly demonstrate issues to developers monitoring application logs and exceptions.
TEST: Load testing is technically demanding so support and assistance from tool vendors can be crucial. How have you found Facilita in this regard?
CMC: Our requests are usually expedited by the Facilita support team, which results in less downtime on our side. Much of our testing is in a format not traditionally supported by load test tools – our requests to enhance the test tool to provide extra functionality or be used in different ways are always received well and often implemented. The beam to support sessions used in troubleshooting is also very helpful as it supplies a real-time view of the issue.
TEST: CMC, thank you very much.


www.facilita.co.uk

April 2012 | TEST


12 | Agile business

Testing: the keystone of the bridge between Dev and Ops DevOps brings together two traditionally divided sides of the IT house to overcome the disconnect that has grown out of competing motivations and processes. David Hurwitz explains how testing fits into the picture.

Delivering software that meets the needs of the business covers two distinct teams within IT: application development and operations. Development focuses on creating solutions and improving what exists, while Operations tends to concentrate on maintaining stability and reliability. DevOps aims to unify how these teams work in a more connected and ultimately more productive way, bringing together two traditionally divided sides of the IT house to overcome the disconnect that has grown out of competing motivations and processes.

The term DevOps was first coined in 2009 at a conference about the divisions between developers and operations staff. Since then, the term has become more widely adopted as part of a movement that has grown up to overcome the hurdles


that exist around producing software packages and updates. DevOps aims to overcome the silos that exist on both sides of the divide and increase awareness of best practices for getting software into production. DevOps takes a cross-disciplinary approach to software delivery and ensures that everyone is on the same team. With everyone pulling in the same direction, greater focus can be placed on delivering results. Whether internal or external, the customer is not interested in who did what, just whether the result does what they want it to. Testing plays a central role in DevOps being successful and in bringing together the two functions in this new mode of delivering applications.

Why is testing important to DevOps?

Testing will always play an important role in the development of new software. Firstly, it ensures that the requirements of the original request



Agile business | 13


from the business are met. Secondly, testing ensures that any faults and bugs are identified and eliminated so that the product can be handed over, safe in the knowledge that it is operational. A full testing programme saves time and money by removing the need to rework development and by reducing performance issues, and it benefits reputation and customer relations. In an organisation that is looking at its DevOps processes, testing can play an even more critical function, since it helps smooth the flow of work from Development to Operations. Testing can act as the keystone in the bridge between these two camps, effectively enabling the IT team to show that its application development work not only works, but also meets production requirements like interoperability and scalability. The testing function can help process management when it comes to three key challenges: firstly, managing the handoff between Development and Operations; secondly, implementing greater automation across application development; and thirdly, helping IT improve its profile in the business by showing a unified front.

Managing the handoff between Development and Operations

The way in which work is moved between Development and Operations can be a challenge. Critical stages of the application delivery process are rarely linked up and often rely on either manual effort to pass on packages or information, or use spreadsheets as their management tools. Gartner predicts that the complexity of software demands will force IT to streamline and speed up these handoffs, so these existing systems are becoming unfit for purpose. Handoffs can be improved by putting in place more appropriate processes in the first place. At the heart of DevOps is the requirement for systems that fit both sides of the application delivery chain. While automation can speed up how updates are passed through from one part of the chain to the next, attention to process ensures that both Development and Operations can manage these updates better. For testers, their function has a role to play in helping improve the handoff in a DevOps environment, as performance testing of applications intersects the concerns and responsibilities of both the Development and the Operations teams. Testing therefore provides a useful bridge through which these normally divided functions can connect. By working together on the overall strategy for testing, the needs of both development and production can be considered rather than both teams keeping their own test requirements separate.

Implementing more automation

Manual processes, especially those used in release management, pose a challenge to DevOps: the more manual the process, the greater the likelihood of errors and ultimately delays in delivery. The problem of heavy reliance on manual process is exacerbated by the introduction of 'agile' methods of development. While


Agile enables developers to be more responsive to business requirements and produce smaller updates more quickly, this rapid cadence of releases can cause bottlenecks as updates come through to production much faster than previously. One of the ways to overcome this challenge is to implement more automation. This covers testing and release management, since the continuous delivery of regular releases favoured by DevOps causes bottlenecks in manual testing too. When moving to a DevOps model, automated testing is important, as it can help smooth the process and reduce the overall amount of work required around an update coming through. Using automated testing for standard, 'low value' situations around each update makes it easier to concentrate on more specific requirements where manual testing can be used. This combination of automated and manual testing enables the testing team to cover more ground with each update, while still keeping pace with the number of software packages that come through. As testing is completed, each update can then be put through an automated release management process as well. This makes the act of getting the software into production and out to the business easier. By working with the operations team on what testing they need to see completed ahead of release, the testing function can ensure that each release is as good as it can be. One of the issues with release management is that releases can be



14 | Agile business

held up as a result of the Ops team (or DevOps team) not having visibility into the testing that has occurred within development against the build, and so feeling compelled to test it themselves as well. This can cause problems, as they may not have the specialised insights of test professionals and also because it takes up additional time and unplanned labour. The solution is to look at the release process, particularly around the control capability. Both QA and DevOps would participate in this, giving every function up and down the line the control and visibility they require around each release.
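A hedged sketch of such a control point follows: a release gate in which the automated suite must pass and the outcome is published where both QA and Ops can see it, so Ops need not re-test the build themselves. The tool invocation and file paths are hypothetical.

import json
import subprocess
import sys

def run_gate(build_id):
    # Run the automated regression suite for this build.
    result = subprocess.run(["pytest", "tests/smoke"])
    passed = result.returncode == 0
    # Publish the outcome for every function up and down the line.
    with open("gate-{0}.json".format(build_id), "w") as fh:
        json.dump({"build": build_id, "smoke_passed": passed}, fh)
    return passed

if __name__ == "__main__":
    sys.exit(0 if run_gate(sys.argv[1]) else 1)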

Testing leadership and DevOps

David Hurwitz Senior vice president Serena Software www.serenasoftware.com


The introduction of more automation raises the question of where human QA and testing should fit into DevOps. Even though automation can bring certain benefits, skilled testers will always be needed to evaluate testing scenarios that are more complex. There is also the analysis element: testers should know how the business IT function operates and the overall goals that the organisation has in place. By analysing what comes through from development and into production, there is the opportunity to feed back on any issues or opportunities that can be spotted from the technical and user interface sides. By knowing more about the end-user population, testing can therefore provide a higher level of value and advice back to the organisation alongside its quality assurance role. Testing can create a niche for itself as the crucial link in software delivery back to the organisation. Testing is not usually a large part of the software delivery chain but like the keystone in an arch, it can form a small but critical part of the DevOps structure. It has a useful vantage point from which to see and report on problems as they arise in the software delivery cycle.

Bringing IT software delivery closer together is important as it strengthens the united face of IT within the enterprise. IT in general can be seen as a separate support function within a company, rather than being linked to the revenue-generating aspects of the organisation. To overcome this prejudice, a unified front from IT can help improve its visibility within the organisation as supporting revenue generation, rather than being a cost centre. This relies on quality information being delivered out to the organisation, rather than IT being a 'black hole' into which requests are poured and not answered. IT has to think about the impression that it gives back to the organisation. The 'IT Front Office' covers this ability to report on work done, along with easy and comprehensible access to status updates for stakeholders, as opposed to the back-office IT infrastructure that actually does this work. For the testing function, this awareness of providing information back to the organisation should be considered as one of their roles for the future. The opportunity is there for IT to demonstrate its contribution to the business bottom line, rather than continuing to be perceived as a cost centre. As DevOps grows in popularity, it will affect the processes and procedures that work will have to go through. This has the potential to affect each stage of the software delivery chain, including testing. In order to make the most of this opportunity, testing can move from its current role around validation to being the direct link between operations and development, as well as potentially looking at feeding back material that takes the business requirements in general into account. For the testing profession, the opportunity is there to be a central and crucial part of IT project successes, as well as wider business aims.





Powerful multi-protocol testing software

- A powerful and easy to use load test tool
- Developed and supported by testing experts
- Flexible licensing and managed service options

Don’t leave anything to chance. Forecast tests the performance, reliability and scalability of your business-critical IT systems. As well as excellent support, Facilita offers a wide portfolio of professional services including consultancy, training and mentoring.

Facilita Software Development Limited. Tel: +44 (0)1260 298 109 | email: enquiries@facilita.com | www.facilita.com



Are you ready to be Agile?

How do you know when a project is ready to use Agile software development methodology? Dan Fuller, associate director of the Agile Centre of Excellence at Cognizant, explains.

One of the main themes of the ‘Manifesto for Agile Software Development’ is that “we have come to value… working software over comprehensive documentation.” In reality, this means valuing working systems rather than limiting ourselves to delivering only working software.

Most of the very large and complex IT projects these days are really as much about integrating systems together as they are about building new bespoke software applications. But do all of these different types of project lend themselves to using Agile? The fact remains that many organisations have already adopted Agile as the primary software development methodology for custom building new applications, and Agile methodologies such as Scrum are becoming the fastest growing methods for building software. However, many of the projects that are undertaken by IT organisations don’t fit neatly into the paradigm of a brand-new application.

Are you suited to Agile?

In order to evaluate whether a project is suited to Agile, we first have to ask four questions. And if these questions cannot be answered easily, then it may be that this particular project team isn’t quite ready to adopt Agile for the given project.

The first question is ‘What do our user stories look like?’ For example, a company that makes banking software might have a user story which includes the user’s wish to be able to see the last thirty days of transactions on their account. This, of course, requires a clear picture of what other types of artefacts we will need to produce to augment the user story. It is vital to keep the Agile manifesto in mind and remember that sticking to the manifesto means valuing working systems over comprehensive documentation. It’s about finding the balance of how much documentation is the ‘sweet spot’ for building working systems. As part of this, we must also consider whether we are describing interactions with a system, describing a feature or describing a component. For an organisation that is migrating from a set of home-grown order management and fulfilment applications to an off-the-shelf package, it’s likely that interactions would be between end users and the system.
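To make this concrete, a user story like the one above can be pinned to a short executable acceptance check rather than pages of documentation. The following is a minimal Python sketch under invented names (the Txn and FakeAccount types and the last_30_days helper are ours, not part of any banking product):

    from datetime import date, timedelta

    class Txn:
        def __init__(self, d):
            self.date = d

    class FakeAccount:
        def __init__(self, txns):
            self._txns = txns
        def transactions(self):
            return self._txns

    def last_30_days(account, today):
        # Acceptance criterion: only transactions dated within the
        # last thirty days are shown to the user.
        cutoff = today - timedelta(days=30)
        return [t for t in account.transactions() if t.date >= cutoff]

    def test_last_30_days():
        today = date(2012, 4, 1)
        old, recent = Txn(date(2012, 1, 1)), Txn(date(2012, 3, 20))
        assert last_30_days(FakeAccount([old, recent]), today) == [recent]

    test_last_30_days()

A test of this size, plus the index-card story, is often documentation enough for the ‘sweet spot’ the manifesto is aiming at.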

Sequencing the backlog

The next key question is ‘How will we sequence our backlog?’ Should we be prioritising the stories based on architectural uncertainty or complexity, or on whether they are foundational stories that later stories on the backlog are inherently dependent upon? Usually, it is the responsibility of the product manager to arrange the set of user stories, often in numerical order with an absolute ranking, and to come up with a rationale for how the ranking is performed. In some instances, a monetary value is placed on the user story (for example, some user stories might be related to a new product feature that would allow us to sell our product to new customers and create incremental revenue) and in other cases, stories are ranked based on how many end customers have requested a given feature or capability.
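As a minimal sketch of such an absolute ranking in Python – the story fields (a monetary value and a count of customer requests) are illustrative assumptions, not a prescribed scheme:

    stories = [
        {"id": "S1", "value": 50000, "requests": 3},
        {"id": "S2", "value": 0, "requests": 40},
        {"id": "S3", "value": 120000, "requests": 12},
    ]

    # Rank primarily by monetary value, breaking ties by how many
    # customers have requested the feature; highest first.
    backlog = sorted(stories, key=lambda s: (s["value"], s["requests"]), reverse=True)
    for rank, story in enumerate(backlog, start=1):
        print(rank, story["id"])

The point is only that whatever rationale the product manager chooses must produce a single absolute ordering.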


ETL (extract, transform and load) user stories, however, differ dramatically from those in a bespoke development project; rather than describing the way an actor interacts with a software application, they describe an extraction routine that needs to pull data from a source system. In this instance, the sequencing of the backlog could be based on the chain of inheritance in the data model, as early stories may relate to the extraction and loading of primary keys and identifying standalone entities such as customer, location and product. Taking again the example of the organisation that is migrating to an off-the-shelf package, rather than ranking based on the desirability of a feature, it is likely that there are certain core modules within the target package that have to be configured first before other parts of the enterprise resource planning (ERP) package can work. For example, you have to set up and load the product module before you can set up and configure order entry applications, so the stories might get ordered based on these dependencies and would look something like a critical path schedule.
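Sequencing by dependency rather than by desirability amounts to a topological sort of the backlog. A sketch in Python, with an invented dependency map for the ERP example above:

    from graphlib import TopologicalSorter  # standard library, Python 3.9+

    # Each story maps to the stories it depends on (its predecessors);
    # the story names are invented for illustration.
    deps = {
        "configure order entry": {"set up product module"},
        "load product data": {"set up product module"},
        "set up product module": set(),
    }

    # static_order() yields foundational stories before dependent ones,
    # giving something like a critical path schedule.
    for story in TopologicalSorter(deps).static_order():
        print(story)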

Partitioning

The third question is ‘How will tasks be partitioned?’ Can we partition the user stories into sets of tasks that the Scrum team can perform within a Sprint to deliver some sort of ‘potentially shippable’ product that can be demonstrated to the product owner? When building a new application, these tasks include design, coding, unit testing, build and integration. In other cases, tasks can include data re-engineering activities such as extract, transform and load, or perhaps they are tasks that would be performed while using a 3GL or CASE tool to generate code. In the case of a bespoke application, partitioning the story into tasks is a straightforward exercise in which tasks such as coding and unit testing, followed by acceptance and validation, may be oriented around creating technical stories. But in the case of the company migrating to the off-the-shelf package, a lot of the work performed is more oriented around configuration than software development. This means that the story is more in tune with the type of technical work that would be performed, with it being partitioned into tasks such as ‘configure hierarchies’ or ‘customise business rules’.

Is it done yet?



And finally, we have to ask ‘What is our definition of ‘done’?’ Or, in other words, how do we know when something is ‘potentially shippable’? Of the four questions, this one arguably requires the greatest consideration, as it involves important conversations around how we are going to test what we are delivering, whether test-driven development will be used, and whether a separate testing team is required downstream of the Scrum team to certify the product coming out of the Sprints before it is actually shipped to customers.

Each project is different and will have different indicators, decided upon by the product manager, that demonstrate when projects are complete. In most cases, completion of a project would include the fulfilment of stories that describe working functionality where an end user is able to perform a business-oriented task such as entering an order. In the banking software example cited earlier, customers being able to see the last thirty days of transactions on their bank account shows that the project was completed as intended.

In terms of deciding whether or not the project is done, it may help to consider two different levels of completion. In a data warehousing project, for example, many people would agree that the project is complete when 80 percent or more of the sales data is flowing cleanly into the data marts, as that incomplete set can still be used to make fairly reliable product marketing decisions. Then, when the data quality improves later on, we can strive towards 99 to 100 percent quality, the second level of completion.

Answering these questions easily is a good indication that Agile will facilitate the success of a project. However, there are other elements such as learning and maturity, organisational cultural fit and availability of resources, among others, which should also be taken into account and which may determine the outcome of the project.
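The two-level notion of completion described above can be made mechanical. A hedged sketch, assuming the load job can report how many source rows arrived cleanly in the data marts; the thresholds mirror the 80 percent and 99 percent levels in the text:

    def completion_level(clean_rows, total_rows):
        # Level 1: 80%+ clean supports directional marketing decisions;
        # level 2: 99%+ clean is the final definition of done.
        ratio = clean_rows / total_rows
        if ratio >= 0.99:
            return "done (level 2)"
        if ratio >= 0.80:
            return "usable (level 1)"
        return "not done"

    print(completion_level(850, 1000))   # usable (level 1)
    print(completion_level(995, 1000))   # done (level 2)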

Answering the questions

In one particular large programme my organisation managed, we were developing a brand new customer portal for one of our largest customers. The way we answered the four key questions in that particular example was as follows.

At first, our user stories were very long narratives describing the way a user would interact with a particular page or portlet in our overall customer portal. Over time, during our retrospectives, we learned that our stories had far more content than the developers really needed to implement the functionality desired by the product manager, so we improved our process and reverted to the more traditional user story that could fit on a three by five index card.

In our particular programme, we were essentially doing a complete re-write of the existing portal, so our product manager wasn’t necessarily looking to prioritise new features for us to work on based on some derived business value. In our case, we worked with the product owner to prioritise our stories in a logical sequence based on which stories were foundational to build the core elements of pages in the portal, and which ones were then additive upon the foundational.

We partitioned our user stories into the major technical tasks that were required by our cross-functional Scrum teams in order to transform the user story from the written words that elaborated the vision of our product manager. In our case, we had development-related technical tasks implemented by Java developers, portal-related tasks implemented by WebSphere resources, user interface-related tasks implemented by human factors engineers, and content management tasks implemented by Interwoven resources.

Lastly, the user story was considered done if the product owner accepted it, if testing validated all the acceptance criteria and documented the test results, and if the content team catalogued the content required for the story in the content management tool.

If you are struggling to answer these four questions, it may be that your project is not quite ready for Agile adoption. Yet most organisations should consider Agile as a viable development solution, and while your project may not be ready for Agile now, deploying it in the future may be integral to the success of your project.

Dan Fuller
Associate director, Agile Centre of Excellence
Cognizant
www.cognizant.com



Ride the wave, stay on top with TestWave

Testing is complex, why not make it simple?

Running a test project takes patience, endurance, tact and hard work. When you need to keep track of what’s been done, still needs to be done and who’s doing what without any tracking tools, your team’s work just gets harder. TestWave keeps track of all your test cases, requirements, releases and defects in a central location allowing you to concentrate on running your project and not chasing spreadsheets. Since TestWave is cloud based you can be up and running in minutes, not weeks, allowing you to focus on what’s important.

• A full featured test management tool incorporating requirements, test planning, execution and defects
• Delivered and hosted online: no complex installation and no costly servers
• Test teams can be using TestWave within minutes from anywhere in the world
• Extensible – interfaces with applications such as Jira® and QTP

For a free 30-day trial or for more information visit our web site.

www.testwave.co.uk Automation Consultants Ltd. Email: info@automation-consultants.com © 2012 Automation Consultants Ltd. All rights reserved.




Fuzzing web applications – The new web auditing

Fuzz testing is a new way to approach web application testing. It is more focused on DoS-level problems and is suited particularly well to finding previously unknown vulnerabilities in software. Codenomicon web security experts Rikke Kuipers and Miia Vuontisjärvi explain.

It goes without saying that society these days is heavily dependent upon the current IT infrastructure and IT in general. Large networks such as the internet, and all the interconnected devices that form them, have become part of the critical infrastructure, on which even lives depend. Not surprisingly, security has been a major concern for decades. Traditionally, the focus of IT security has been on protecting the network infrastructure and the operating system itself, at the kernel level that is. Attackers would try to inject their own messages into lower-level protocol communication, or initiate such communication themselves, and thus effectively ‘control’ the network traffic. A game changer was the introduction of cryptography in the 90s, which could now provide data confidentiality, data integrity and authentication. Basic security was hereby provided by the network and transport layers of the OSI model, and later by the application layer through the use of SSL/TLS.


Introducing the web applications

However, the global internet grew rapidly in both size and technological sophistication, and was no longer used just for exchanging data but for offering services through web-based applications. A browser engine will use client-side technologies such as (D)HTML, Java, JavaScript, Flash and Silverlight to render static content, while on the webserver, server-side technologies like ASP, ASP.NET, CGI, ColdFusion, JSP/Java, PHP, Perl, Python and so on will be used to render dynamic content and perform tasks a normal desktop application would do as well. Modern web applications store, edit and retrieve dynamic content, and use databases and/or fileservers for these purposes. A browser sends the request to the server-side technology in use, the application processes it, queries a database and, if needed, returns information to the end user. Large web applications usually consist of multiple webservers, database clusters and web application firewalls, and interact with huge data storage facilities.
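The request/response loop just described can be reduced to a few lines. A deliberately minimal sketch using only the Python standard library; the database file, table and query are invented for illustration:

    import sqlite3
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import parse_qs, urlparse

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # The browser's request arrives with a query string...
            params = parse_qs(urlparse(self.path).query)
            user = params.get("user", [""])[0]
            # ...the application queries the database with it...
            db = sqlite3.connect("app.db")  # assumed to exist
            rows = db.execute(
                "SELECT item FROM orders WHERE user = ?", (user,)
            ).fetchall()
            db.close()
            # ...and dynamic content goes back to the end user.
            body = repr(rows).encode()
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("localhost", 8000), Handler).serve_forever()

Every element of this loop – the query string, the headers, the database round-trip – is an input an attacker can reach.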





Fig.1 Example list of attack vectors

New attack vector

After years of extensive testing, the network infrastructure and the devices connected to it have become more resilient to lower-level protocol attacks, and it requires more resources to find new vulnerabilities. The focus has shifted towards the application layer, which has proven to be a very effective attack vector in recent times. Desktop applications have been the target for a long time, but it seems as if web applications have taken over. Web applications inherit the same weaknesses as desktop applications; the effects bad coding practices can have need no further explanation. However, a whole new range of possible attack vectors has been introduced. Normally the handling of sessions between users is serviced by the operating system, but in the case of web applications these sessions have to be initiated and destroyed by the server-side technology in use. The same goes for authentication to services, which in web applications is usually done against databases or services similar to LDAP. One of the reasons for the immense popularity of web applications as a target among attackers is the fact that they are usually permanently online and reachable from any computer connected to the internet. The servers which these applications run on and talk to are not uncommonly connected to internal networks, and are thus an easy way to evade firewalls, IDSs and IPSs. Traffic originating from webservers towards a database server is usually trusted, since it’s assumed to be controlled by the web application.

Web application vulnerabilities

Numerous articles have been written about typical web application vulnerabilities, such as code injection techniques. In SQL injection, for example, an attacker would try to input SQL statements in a web form to have the web application perform operations on the database other than the ones intended by the designer. Another example is cross-site scripting, which allows attackers to bypass the client-side security mechanisms normally imposed on web content by modern web browsers, and thus directly attack another user of the same application by stealing session cookies, or by performing operations in the name of the victim, more commonly known as cross-site request forgery. These techniques are built upon knowledge of the client and server side technologies in use, and exploit known flaws in application logic.
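The SQL injection risk described above is easiest to see side by side with its remedy. A self-contained Python/sqlite3 sketch; the table and the attacker's input are illustrative:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    name = "x' OR '1'='1"  # attacker-supplied form input

    # Vulnerable: the input is spliced into the SQL text, so the OR
    # clause rewrites the query and every row comes back.
    leaked = db.execute(
        "SELECT secret FROM users WHERE name = '%s'" % name
    ).fetchall()
    print("injected:", leaked)      # [('s3cret',)]

    # Safe: a bound parameter is treated as data, never as SQL.
    safe = db.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()
    print("parameterised:", safe)   # []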

Fuzz testing web applications

Research conducted in our labs has shown that it is time to take a step back and look at web applications for what they actually are: just applications with an interface to the internet. While it is very important to perform static code analysis and traditional web application pen testing to root out any known flaws, new vulnerabilities will never be found by these tools and techniques. To root out unknown bugs, we have found that fuzzing does the job. Contrary to traditional web application penetration testing, fuzz testing is not aware of (and does not care much about) application logic, and does not try to exploit any known vulnerabilities. It just feeds the system with malformed input to find abnormal responses that indicate a possibly exploitable vulnerability in the software. Fuzzing web applications has proven to be much harder than the traditional robustness testing which we have been conducting for more than ten years. Instead of just one attack interface which we target using one protocol, we now face multiple interfaces with an almost infinite combination of client and server side technologies in use, talking to a very wide variety of databases and file servers, each with its own specific (query) language.
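At its core, the approach is a loop: take a valid request, malform it, send it, and watch for abnormal behaviour. A bare-bones sketch over a raw socket; the target host, the crude byte-flipping mutation and the failure heuristics are our own illustrative choices, not Codenomicon's implementation:

    import random
    import socket

    # A known-good request recorded from a browsing session.
    BASE = b"GET /search?q=test HTTP/1.1\r\nHost: target.example\r\n\r\n"

    def mutate(data):
        # Corrupt a handful of random bytes in the valid request.
        buf = bytearray(data)
        for _ in range(random.randint(1, 8)):
            buf[random.randrange(len(buf))] = random.randrange(256)
        return bytes(buf)

    for case in range(1000):
        payload = mutate(BASE)
        try:
            s = socket.create_connection(("target.example", 80), timeout=5)
            s.sendall(payload)
            reply = s.recv(4096)
            s.close()
            # Server errors or empty replies are worth a second look.
            if not reply or reply.startswith(b"HTTP/1.1 5"):
                print("suspicious response, case", case)
        except socket.timeout:
            # No reply at all often signals a hung or crashed process.
            print("timeout (possible DoS), case", case)
        except OSError as e:
            print("connection error, case", case, e)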






Fig.2 A GET request in a HAR file chosen to be anomalised

Fig.3 Example of a POST anomaly

What to fuzz?

The first challenge when fuzzing a web application is to decide what to fuzz. Since there is no specification or existing model of the messages used by the application, each session will be different depending upon how the user interacts with the application. What we can do is record a session using a browser, and base our models on the message sequences found therein. These messages can be fuzzed and replayed back to the application just as we would do in a normal fuzzing session. The effectiveness of this approach will unfortunately always be limited to the data recorded while browsing through the web application; any vulnerability outside the recorded session will not be found during the fuzz tests.


At the beginning of 2011, Google released their HTTP Archive (HAR) tool as part of the Google Chrome developer tools, used to record browsing sessions. Included in these packages are all the requests to and from the web application, including cookie information. The idea behind this is to allow effective processing and analysis of data coming from various sources by external tools, such as a web application fuzzer. The information saved in these HAR files proved to be perfect to base fuzz test models upon. After walking through a web application in Chrome, the HAR file can be exported and imported into the WebApp sequence creator, which will show all the sequences found. The more interesting messages are automatically selected as candidates for fuzzing and replaying. Interesting messages would be requests and replies containing query parameters, cookies or a POST payload. These fields will be anomalised, together with all the HTTP header fields present in the request. In Fig.2 we can see the HTTP request, indicated with the red arrow, which will be used for anomalisation, and the blue arrow indicating the matching HTTP response. The less interesting requests, greyed out, are requests for CSS files, pictures and the like. The WebApp suite then starts sending the malformed requests and broken sequences to the web application. Fig.3 shows an example of a malformed POST message; the malformed part of the message is marked red.
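A HAR file is plain JSON, so picking out fuzzing candidates is straightforward. A sketch; the selection heuristic (a query string, cookies or a POST body present, and static assets skipped) mirrors the description above:

    import json

    # Load a session recorded with the Chrome developer tools.
    with open("session.har") as f:
        har = json.load(f)

    candidates = []
    for entry in har["log"]["entries"]:
        req = entry["request"]
        # Interesting: anything carrying fields worth anomalising.
        interesting = (
            req.get("queryString")
            or req.get("cookies")
            or "postData" in req
        )
        # Skip static assets (CSS, images and the like).
        if interesting and not req["url"].endswith((".css", ".png", ".gif", ".js")):
            candidates.append((req["method"], req["url"]))

    for method, url in candidates:
        print(method, url)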





Fig.4 External instrumentation possibilities

Once the anomalies are sent to the system under test, we look for unexpected replies, which would indicate that our injected packet triggered a malfunction and thus found a vulnerability. This brings us to the second challenge of fuzzing web applications.

Detecting faults

Detecting problems is not always self-evident. A test case which triggers a crash-level bug is easy to identify due to the discontinuation of a service or process, but a test case which causes the application to overwrite database records or configuration files on the webserver is less obvious. This obstacle can be overcome by using external instrumentation. Before, during and after each test case there is the option to use external scripts to determine what exactly happened on the application, database and fileservers. All of these components can be loaded into a memory and I/O profiler, effectively monitoring syscalls for suspicious writes. Apart from that, log files can be read after each test case, checking for interesting lines. Based on the findings, vulnerable code can then be patched, or sent to the appropriate vendors for patching.
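External instrumentation can be as simple as a script run around each test case. A sketch of the log-scanning variant; the log path and the patterns treated as interesting are placeholders:

    import os

    LOG = "/var/log/webapp/error.log"  # placeholder path
    PATTERNS = ("segfault", "stack trace", "database is locked")

    def log_size():
        return os.path.getsize(LOG) if os.path.exists(LOG) else 0

    def check_after_case(case_id, offset):
        # Inspect only what the application logged during this case.
        with open(LOG, errors="replace") as f:
            f.seek(offset)
            appended = f.read().lower()
        for pattern in PATTERNS:
            if pattern in appended:
                print("case", case_id, "triggered:", pattern)
        return log_size()

    # Around each test case: record the log offset, fire the test
    # case, then scan whatever was appended in the meantime.
    offset = log_size()
    # ... send test case 42 to the system under test here ...
    offset = check_after_case(42, offset)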

Conclusion

Fuzz testing is a new, unique way to approach web application testing. It is more focused on DoS-level problems and is particularly well suited to finding previously unknown vulnerabilities in software. However, compared to traditional protocol fuzzing, web application fuzzing is more challenging. Since there is no existing model or specification, the test cases have to be created based on recordings. Walking through a larger web application can create HAR files with thousands of unique requests, and with 30,000 test cases per request as it is now, this can be very time and resource consuming to test. The crawling through web applications can be automated, similar to what bots currently do to index pages for search engines. Nevertheless, vulnerabilities outside the recorded session will not be discovered. Despite the challenges and limitations, we have already found numerous flaws in large applications. We have every reason to believe that fuzz testing, complemented with traditional penetration testing and static code analysis, will be the future of web application testing.
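To put the scale in perspective (an illustrative calculation, not a measured figure): a recording with 2,000 fuzzable requests at 30,000 test cases each yields 60 million test cases; even at 100 test cases per second, that is roughly a week of continuous execution against a single target.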

About Rikke Kuipers

Rikke Kuipers is a Dutch native with a background as a senior network engineer at several large ISPs. Since joining Codenomicon, Rikke has conducted security audits and done security research in the field of network and web application security.

About Codenomicon

Codenomicon develops proactive software security testing and situation awareness tools which find software bugs. Defensics is a fully automatic security testing bundle for over 200 communication interfaces. Situation awareness tools collect, filter, and visualise network and abuse information concurrently. Governments, leading software companies, operators, and manufacturers use Codenomicon's solutions.

Rikke Kuipers Security Specialist Codenomicon www.Codenomicon.com

Miia Vuontisjärvi Security analyst Codenomicon www.Codenomicon.com





Modelling the drive for quality

With a growing list of cars that have had all too public issues with quality, Peter Farrell-Vinay suggests that a model-based testing (MBT) approach could help validate, in advance of manufacturing, the complex aspects of vehicle design that are often at fault.

‘Mad car disease’ has become a staple of motoring stories: “Crazy cars: the newest hazard on the roads”. New cars are jamming their own brakes, locking doors, shifting gears and mysteriously shutting down. The major cause is electromagnetic interference, the same phenomenon that affects hospital equipment and aircraft instruments.

At risk are a small number of new cars with inadequate ‘immunity’ built into the electronic systems which control engines, brakes, transmissions, and features such as central door-locking, cruise control and air-conditioning. The Evening Standard reported: “Drivers are defeated by clever cars. Breakdowns caused by lost keys and confusion over sophisticated car security systems overtook flat tyres for the first time in an analysis of calls to the AA. Of the 4.5 million call-outs last year, the most, 825,424, were due to flat or faulty batteries, followed by 269,070 because alarm systems or locks failed.”

This year’s model

It would of course be useful if automakers could spot safety-critical problems in a vehicle’s acceleration control or braking system long before the underlying software or hardware was developed or tested. The growth and innovation in automotive control and monitoring systems is leading to more complex and inter-connected systems than ever before. In the automotive industry, model-based testing reduces development costs and the time taken for new vehicles to come to market by validating complex aspects of the vehicle design in advance of manufacturing. While EMI (electro-magnetic interference) is a known problem and (mostly) easily solvable, car control can even be hi-jacked by taking over a vehicle’s on-board operating system using a mobile phone connecting through the vehicle’s Bluetooth system (the Toyota Lexus was believed to be vulnerable).

Reliability and recalls

Thus reliability is a major problem: Mercedes had to recall 600,000 cars for a brake-system software upgrade (a counter that they thought would not run over between services did, twice). This recall probably cost about EUR 30m. Given that margins amongst the few profitable manufacturers are in the low USD 10⁸ region, a single recall cuts seriously into profit. According to the Economist, in 2004 there were only eight car companies above the curve of cost-of-capital versus revenue per unit, namely Porsche, Nissan, Honda and Toyota well above, and Mercedes, BMW, PSA and Hyundai barely so. There is also the cost of recompensing any victims of accidents in which system malfunction was a causal factor.

Model-based development and testing

Models in model-based testing are treated as a critical part of the requirements specification, since they identify how the design should work so that testers can write better tests, ensuring that all the integrated systems behave as required and expected. A model allows testers to ‘run’ scenarios validating complex safety-related systems’ ability to prevent dangerous states, without having to build a physical system and put it into such states. Model-based testing is also modular, and offers an inexpensive and very early way of carrying out rigorous requirements and design testing at each level of integration.

A good example of early-use model-based testing in vehicles is the identification of a potential locking of a vehicle’s acceleration long before it may have been found by physical or software tests later in the manufacturing process. Without model-based testing, this acceleration-locking problem may not be discovered until very late in the manufacturing process, or indeed before the vehicle goes on sale, leading to costly remedial work and damage to the manufacturer’s brand. If you want to stop a software hi-jacking, a place to start is the software.
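‘Running’ a model to show that a dangerous state is unreachable can, for small models, be done by exhaustive search of the state space. A toy Python sketch, not any particular MBT tool: a hypothetical two-variable model of throttle and brake, with the dangerous state defined as the throttle locked on while braking:

    from itertools import product

    # Toy model: state is (throttle_locked, braking); events move the
    # state and we search every reachable combination exhaustively.
    def step(state, event):
        throttle_locked, braking = state
        if event == "accelerate":
            return (True, braking)
        if event == "release":
            return (False, braking)
        if event == "brake":
            # The safety interlock under test: braking unlocks throttle.
            return (False, True)
        return state

    def dangerous(state):
        return state == (True, True)  # throttle locked while braking

    # Breadth-first exploration from the initial state.
    seen, frontier = set(), {(False, False)}
    while frontier:
        nxt = set()
        for state, event in product(frontier, ("accelerate", "release", "brake")):
            after = step(state, event)
            assert not dangerous(after), "model can reach a dangerous state"
            if after not in seen:
                seen.add(after)
                nxt.add(after)
        frontier = nxt
    print("explored", len(seen), "states; dangerous state unreachable")

Real vehicle models have vastly larger state spaces, which is exactly why dedicated tools and model decompositions are needed, but the principle is the same.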

A measure of risk

A model such as the Ford Focus sells about a million cars a year. Each can be expected to be driven for 300-500 hours a year, and the cars themselves are expected to have a three to five year service lifetime. So one model-year alone can be expected to accumulate between 9 × 10⁸ and 2.5 × 10⁹ hours of service. Systems for such cars are often built by component manufacturers such as Bosch, who also install similar systems in other cars, and you are looking at needing to attain an actual dependability of the order of one critical failure in 10¹⁰ hours of service. Nobody actually knows how to manufacture software that is guaranteed to be that dependable. Current limits (through exhaustive testing of the final product) seem to be about 10⁵ operational hours. That is the theoretical limit of certainty through practical model-based testing.
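To make the arithmetic explicit, using only the figures above: at the low end, 10⁶ cars × 300 hours/year × 3 years = 9 × 10⁸ hours; at the high end, 10⁶ × 500 × 5 = 2.5 × 10⁹ hours of accumulated service per model-year.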

Testing as part of development

Product development is increasingly prone to complexity; the number of products built around embedded software is growing rapidly, together with the embedded software's functional capabilities. Embedded software frequently displaces functions previously realised in hardware; thus digital fly-by-wire flight control systems have superseded mechanical control systems in aircraft. Software also increasingly enables new functions, such as intelligent cruise control, driver assistance, and collision detection systems in high-end automobiles. Indeed, the average car today contains roughly seventy computer chips and 500,000 lines of code. While high code volume itself is not a problem, the increasingly complex interactions between components and subsystems are. Bugs that emerge only after models enter production have resulted in rising warranty and testing costs and damage to brand images. As Figure 1 illustrates, in the automotive industry, malfunctions in electronic components easily account for as many failures as all other automotive systems put together.

More functions, fewer components

Additionally, engineers face four sets of pressures:
• To tighten the integration among components and thereby decrease their number;
• To be sufficiently agile to cope with changing requirements due to the rapid pace at which underlying system technologies change;
• To collaborate with related teams both within and across teams and engineering disciplines;
• To produce loosely-coupled components that can more readily be replaced and/or reused.

Systems design and function allocation

Functions were once decomposed down to the point where hardware and software components were distinct; however, as the number of functions increases, so does complexity and the need to allocate functions across more than one component. Thus collision preparation in a car will affect the brake system, the safety system and the suspension. This complexity leads to a combinatorial explosion with possibly millions of allocations and thus test cases (even 10 functions, each allocatable to any of 4 components, already gives 4¹⁰ = 1,048,576 possible allocations). These are beyond the scope of existing requirements-driven systems development methods, which fail to simplify this allocation, or to optimise trade-off analysis and functional assignment to hardware, firmware, and software entities and their physical component allocations. Using and testing models reduces this allocation space to a manageable size by enabling engineers to focus on outcomes of interest, secure in the knowledge that the tool on which the model is built will allow them to generate them all.

Why doesn’t everyone use model-based testing already? There are several reasons. If you want a model-based test you need a model:
• That says everything you need to know and can validate about a vehicle’s components and their operating environments (choice: it may express everything you want to say but at the price of being unable to prove internal consistency);
• That is internally consistent;
• That covers properties such as time (there’s no universally-accepted method of denoting time);
• That gives you answers quickly (choice: a quick answer or a comprehensive answer);
• That is tool-supported and can be used by subject-matter experts rather than only by test experts.

Many companies use MBT already where the complexity of their systems and the ability of their development and test teams dictate: thus the then Statens Kärnkraftinspektion (Swedish Nuclear Inspectorate) built an enormous model of a new nuclear power station type using a tool now known as Prover Technology. And immediately shut down the four stations concerned: they had discovered that under certain very rare circumstances two pumps on the secondary cooling system could work in opposite directions.

Fig.1

The MBT process

There are many MBT process models. Here is a simple view: models are very tool-dependent and some tools allow for a series of model decompositions in which a simple model becomes increasingly complex and representative. These decompositions (shown in Fig 1 as ‘items’) allow for different test types ranging from logical models through to the code. In a well-structured environment, once the models have been validated they can be substituted by real hardware or a hardware simulator.

So we don’t write specifications any more?

We certainly no longer write specifications as we used to. We begin with as simple a description as possible of the functions to be modelled and, as the model grows, add the detail about all the unmodelled interfaces to this description. Each change to the model is, however, annotated as a decision with the same rigour as we applied to paper specifications. At the end we have a model whose functions can be exhibited and traced to some decision. Each test of that function can also be traced back to the part of the model or interface that it covers.

Are we there yet?

Being ‘there’ depends on how powerful your models need to be:
• Modelling tools based on executable UML (build the model in XML, generate code, generate tests, execute the tests, correct the model);
• Modelling tools creating representations of closely-tied hardware and software functions, such as Prover, which can generate proven models of limited systems and generate outputs (like railway signalling routes) or which show problems in existing systems through detailed modelling;
• Models of requirements which can automatically generate tests, such as AutoFocus.

Peter Farrell-Vinay Managing consultant SQS UK www.sqs-uk.com



Industry-leading Cloud, CEP, SOA and BPM test automation

Putting you and the testing team in control

Since 1996, Green Hat, an IBM company, has been helping organisations around the world test smarter. Our industry-leading solutions help you overcome the unique challenges of testing complex integrated systems such as multiple dependencies, the absence of a GUI, or systems unavailable for testing. Discover issues earlier, deploy in confidence quicker, turn recordings into regression tests in under five minutes and avoid relying on development teams coding to keep your testing cycles on track. GH Tester ensures integrated systems go into production faster:
• Easy end-to-end continuous integration testing
• Single suite for functional and performance needs
• Development cycles and costs down by 50%

GH VIE (Virtual Integration Environment) delivers advanced virtualized applications without coding:
• Personal testing environments easily created
• Subset databases for quick testing and compliance
• Quickly and easily extensible for non-standard systems

Every testing team has its own unique challenges. Visit www.greenhat.com to find out how we can help you and arrange a demonstration tailored to your particular requirements. Discover why our customers say, “It feels like GH Tester was written specifically for us”.

Support for 70+ systems including: Web Services • TIBCO • webMethods • SAP • Oracle • IBM • EDI • HL7 • JMS • SOAP • SWIFT • FIX • XML



Flexibility & functionality

TEST talks to Christoph Preschern, head of sales at test automation specialist Ranorex.

Ranorex is a software development company focused on the development of tools for automated testing via the user interface. It provides innovative software testing solutions to hundreds of companies and educational institutions around the world, with a comprehensive range of tools for software testing automation, including specialist Windows GUI test automation frameworks.

TEST: What are the origins of the company; how did it start and develop; how has it grown and how is it structured?

Christoph Preschern: The basic Ranorex idea was born in 2004. The co-founder and today’s CEO, Jenö Herget, was working with a software development team responsible for daily-build and release management. The company he worked for already used a test automation tool; the time came when the tool was no longer supported and so he was looking for alternatives, specifically something easy to use and easy to integrate into modern software development processes. He had worked for many years as a developer, and it just didn’t make sense to him that the common test automation tools offered only proprietary and therefore limited scripting languages. He thought: “Why isn’t there any tool or framework available that supports C#, C++ and other state-of-the-art programming techniques?” He decided to develop a test automation library based on C++, making it easy to use for anyone who knows how to write code. The flexibility and functionality of this first Ranorex version was really great, and good enough to convince Philips Medical Systems in California to give up on legacy tools and use Ranorex instead. From that day on, the Ranorex company started to grow. Seven years later, functionality, flexibility and transparency are still the most important differentiators between Ranorex and other tool vendors. Today there is much more than a simple test automation library available, but the library will always be the heart and therefore the major centrepiece of the Ranorex suite of tools.

TEST: What range of products and services does the company offer?

CP: Ranorex focuses on the development of tools for automated testing via the user interface. The advantage of Ranorex in this niche is supporting testers in the creation of robust and reliable test automation projects for any kind of user interface. The products offered by Ranorex are used to test not only modern and up-coming technologies like .NET, Flash/Flex, Silverlight and HTML5-based web applications in several browsers; they can also be used to automate functional tests for enterprise-level software like SAP. In addition to continuing to develop the product, Ranorex attaches great importance to supporting the test automation community with their everyday challenges. The Ranorex forum is a great place to get in touch with our experts or to learn from other users.




TEST: Does the company have any specialisations within the software testing industry?

CP: Tools for automated software testing are offered by many companies. We specialise in helping testers create automated tests in a rapidly changing environment. We don’t offer any high-level test management solution like other companies do. We’re concentrating more on the problems testers have in getting their automated tests up and running, while at the same time offering flexible interfaces to integrate with common test management solutions.

TEST: Who are the company’s main customers today and in the future?

CP: Our customers are called on to provide high-quality software for many different industries, for example healthcare, the financial and insurance industries and telecommunications. Software development is no longer limited to specialised teams of programmers working for enterprise-level companies. There are more and more small and mid-sized companies all over the world that are developing and selling great software products through the internet. We see that many companies – especially small companies – are starting to automate tests because of limited resources, so they can deliver high-quality products in an environment of consistently growing competition.

TEST: What is your view of the current state of the testing industry and how will the recent global economic turbulence affect it? What are the challenges and the opportunities?

CP: Economic hard times cause companies to rethink common software development processes. Being flexible was always an important factor in being successful, but in these times it might be the only way to survive. Currently the whole testing industry is changing because the way software is developed is also changing. The classic ‘wall’ between testers and developers is starting to crumble. Optimising communication within a team is one of the major goals in Agile software development teams. Modern teams have already realised the importance of ‘short paths’, especially between developers and testers. With faster software release cycles – required to fulfil customer demands – the need for automated regression testing is growing significantly. In the past test automation was something nice to have – today Agile principles won’t work without it.

TEST: What are the future plans for the business?

CP: The plan is to support upcoming and modern user interface technologies for desktop and web as well as for mobile platforms. Ranorex will continue focusing on developing tools to better support testers with their day-to-day activities, but at the same time we know that a professional test tool is only one part of a successful test project. For this reason we’ve already started building strong relationships with international partner organisations across the world to better assist people on-site in test automation using Ranorex. Additionally, we’ll offer affordable online-based training for small and mid-sized companies.

TEST: How do you see the future of the software testing sector developing in the medium and longer terms?

CP: Agile principles already have and will continue to have a huge impact on test automation in the future. While software developers have already started integrating unit tests into their continuous integration systems, classic testers might be afraid to work with daily-build software for automated integration testing. Many testers don’t really know how to test early-stage software automatically without the need to adapt the test after each build. For that reason future test teams will consist of individuals with different specialities; for example, some testers could work more closely with the development team in order to create a stable test automation framework which is useful for testers who are concentrating on test case specification. The use of virtual cloud-based environments in the testing sector opens the door for more flexibility and scalability. Investment in expensive hardware just to simulate thousands of users is no longer needed. Test automation tools will need to assist testers in configuring virtual instances, deploying and executing their tests, and finally presenting the results in a dashboard.

TEST: Christoph Preschern, thank you very much.

Christoph Preschern Head of sales Ranorex www.ranorex.com





Qualifications – the choice is all yours

Is there such a thing as too much choice? Angelina Samaroo shows the way through the maze of qualification and certification.

A key tenet of this free world we all enjoy is that of choice; the ability to choose at will and to willingly pay the price for that choice. In the world of qualifications, we have much choice. So, from the top, from the Engineering Council history lesson on their website:
• In the UK we had the Corps of Engineers in 1717, recognising the need for professional engineers in fighting for these freedoms we now wake up to every day (notwithstanding the Google fishbowl we all now find ourselves in).
• In 1818 the Institution of Civil Engineers was formed; engineering prowess was needed back home too.
• In 1847 we then had the Institution of Mechanical Engineers, recognising the need for standards as the railway economy took hold.
• Then in 1871 we had the Institution of Electrical Engineers, now the Institution of Engineering and Technology.

All had the same goal: to advance their discipline; to go for gold; to set the standards. As ever with professionalism, there really needs to be just one set of rules: know your stuff; tell it like it is; and be nice to your fellow man, the planet and the bottom line. If you can demonstrate all of these qualities, not just in the classroom but in your everyday life, then you have already struck gold.

In 1982 the UK government appointed a single entity to set the standards across the engineering disciplines – the Engineering Council. It governs the now many professional engineering institutions in registering engineers as professional engineers. Once registered, for instance as a Chartered Engineer (CEng), it is for life, and protected in law. You cannot be summarily removed from the register, nor are you likely to be ‘grandfathered’ into some other qualification. In return, there are rules of professionalism that you must follow – it is possible to be struck off the register. These are recognised world-wide; no one asks what a CEng is.

The certification world

The registration world described above is complemented by the certification world. Here the focus is not on you as a well-rounded professional, but on imparting specific knowledge and skills. We have certifications in every aspect of software development. These include certificates in project management, business analysis, systems analysis, requirements engineering, software testing, solutions development, service management, data protection, freedom of information and, of course, Agile and Scrum. For each of these, and more, we have more than one certificate. They can be hierarchical, for instance from Foundation to Advanced/Practitioner/Expert levels and from Introductory/Associate to Senior Management/Master/Specialist levels, or they can just stand alone. For each discipline we can have many certificates. For example, under the banner of Service Management alone, there are over 20 certificates that you can hold. You can study for certificates in IT Service Management; Change Management; Problem and Incident Management; Service Management according to ISO/IEC 20000; Service Desk and Incident Management; Service Level Management; Problem Management; Business Relationship Management; Supplier Management; Continual Service Improvement; Service Design; Service Operation; Service Strategy; Service Transition; Operational Support and Analysis; Planning, Protection and Optimisation; Release, Control and Validation; and Service Offerings and Agreements. The list does not end there, but you get the point. We also have significant choice in who we obtain our certificates from.





Most certificate providers claim to be global. Thus, training could be done here, the exam flown in from there, marked somewhere else and back to us as a branded certificate when we pass. The global net has probably never been more webbed. Specifically, tracking down the hands that pen (tap) exams is proving a challenge.

In software testing we also have a choice of certificates and certification bodies. Certifications worldwide include the typical hierarchical ones from Foundation/Associate to Expert/Manager, or specialised ones such as TMap®, Agile testing, Pen testing, Security testing and Ethical Hacking. International certification bodies include the International Institute of Software Testing® (IIST); the International Software Quality Institute® (iSQI); the International Software Testing Qualifications Board® (ISTQB); the Quality Assurance Institute® (QAI); the Chartered Institute for IT® (BCS) and Exin.

A route through the maze

Angelina Samaroo Managing director Pinta Education www.pintaed.com

With all this choice it is difficult to know which way to go. However, this is a challenge we share with all of the certification bodies, not just those in testing. Whilst we try to find a route through the maze, the certifications are being renamed, tweaked or evolved. I am not convinced by the evolution thing; it is a concern of species and biology, you don’t plan it in conferences and words, it happens, over millennia – from ape to man, in the competition for survival against those who see us as just plain old tasty. However, if I come off the pedant’s podium and save my academics for the classroom, then here are my steps to a ‘planned evolution’ of a certification body:

1. Recognise and respect the investment required by delegates and exam candidates in bringing certifications to life. The exams are generally not time-stamped, thus the expectation of quick and sweeping change is not set. Someone sitting an exam without this proviso expects that once attained, it will stand the test of (reasonable) time.

2. Recognise and respect the investment required by training providers in bringing certifications to life. Training providers will invest in course production. In the commercial world every change costs. Every cost borne has an expected return on the investment.

3. Agree on a naming convention for certification schemes worldwide. Currently we have Foundation, Introductory, Associate and possibly others to signify the first step. This level of choice is unhelpful if my intention is to develop my all-round skills and take exams in subjects across the disciplines and countries. If I want a truly international outlook, then I must travel extensively and I must learn from other cultures. What I would like on my return to work, wherever I happen to lay my laptop, is to understand and be able to confidently sell back to the market my levels of competence for which I have been independently assessed.

4. Recognise the significant drawbacks of multiple-choice examinations. In my view, they should only be used in situations where there is one clear right answer. Thus, who is the current Prime Minister, not who, on balance, would make the best Prime Minister? This one will always be easy for me – me of course. Sadly this will never be an option in any exam; I probably have to be elected first. Selecting the best/most likely/most significant/least likely from a list of pre-determined options leads to either a tortuous exam, causing you to turn the scenario this way and that to get to the bottom of where the examiner may be headed, or a very trivial one, making it pointless. Both outcomes lead not to raising the standard of the profession, but to confusion over the purpose of the exam – is it to test knowledge of the subject matter or to test knowledge of the subject matter expert? The ISTQB has now recognised this at the Expert level, but it really does need to consider its use at Advanced level as it goes through its ‘evolution 2012’ – turning back to Practitioner could lead us to the future.




5. Include in each syllabus a ‘currency’ statement. This should allow for changes in training material in line with current developments in technology and legislation – they both move at a pace. As a certification body, take responsibility for keeping up with the times and make training and materials available to the training providers. By all means levy a charge for it, but take charge. Thus, the standard of the profession is raised, and your value-add is clear. Tweaking words and repackaging is about marketing, not quality.

6. Show your trust in your accreditation scheme for training providers. Where the value of the certificate rests in demonstration of practical skills, roll with it. Accredit the training providers to conduct in-class assessments. And yes, there will be the less-than-satisfactory provider. However, if you focus your efforts less on word play and more on role play, the training provider truly worthy of accreditation will come to the fore, and both they and the market will thank you for it. Work with them, train them and provide learning opportunities. Choose multi-skill, not multi-choice.

7. Start with smart. Playing the numbers game is attractive, but they’re all at it. One provider is claiming over a million certificates. This raises the question of standing out from the crowd from the buyer perspective. What will set you apart is to seek feedback on your examination at the point of sale and publish it online. Then say what you’re doing about the areas for development. Then do it. Continual and open improvement. The distinction between course provider, syllabus provider and exam provider should be blurred. We can all be in this together if you’re interested.

For those buying into the certification schemes, as ever, caveat emptor: let the buyer beware. If you’re a tester, go test, before you buy. If you’re not a tester, become one.


App testing

With platforms proliferating, app developers need to take testing far more seriously, according to Mike Holcombe.

Mike Holcombe Founder and director epiGenesys Ltd www.epigenesys.co.uk

The growth of the market in mobile phone and tablet apps gives rise to some interesting issues in terms of how these are tested and made available. You may have found that downloading an app onto your phone or tablet from an official App Store does not necessarily mean that you end up with something useful. A surprisingly large number of these apps do not work. This is embarrassing for both the app developers and the platform/device vendors. There seems to be little authoritative Quality Assurance that can give consumers confidence about what they are purchasing. Users' comments only go so far.

App developers need to take testing much more seriously but, as ever, things are not straightforward. One issue is that the number of platforms is proliferating. The recent introduction of some low-cost Windows Mobile smartphones has run into problems: a number of apps designed for more powerful phones no longer work on these devices. A common cause is insufficient memory on the phone. This raises the problem for app developers of how to test apps on multiple platforms quickly and cheaply. Suitable emulators are a start and can be used reasonably effectively, since they allow the specification of parameters such as screen size and model name. But even then it is necessary to test on the actual physical device. The human in the loop is hard to simulate, especially with some of the touch interfaces used.

Non-functional testing – not what it does but how well it does it – is also a key factor in user acceptance of an app. There is no point in having a ‘cool’ app if it takes too long to do its stuff! Support from vendors can be good, but it is problematic to build up a collection of all possible devices, platforms and a spread of capabilities. The availability of banks of test devices offered for hire by some companies (eg DeviceAnywhere) might be useful.

The usual message applies, however. Plan out your testing in advance, decide on how you will address the multiple platform and device challenge, and build your design and implementation strategy around this.
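One pragmatic way to organise such a plan is to run the same checks across a matrix of device profiles. The sketch below is a minimal illustration in Java using JUnit's Parameterized runner; DeviceProfile, EmulatorSession and the apk name are hypothetical stand-ins for whatever emulator API a team actually uses, not any real vendor library.

    import java.util.Arrays;
    import java.util.Collection;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;
    import org.junit.runners.Parameterized.Parameters;
    import static org.junit.Assert.assertTrue;

    // Runs one smoke test against several emulator configurations, so a
    // low-memory device failure surfaces before the app ships.
    @RunWith(Parameterized.class)
    public class MultiDeviceSmokeTest {

        @Parameters(name = "{0}")
        public static Collection<Object[]> profiles() {
            // Width, height and RAM (MB) are illustrative values only.
            return Arrays.asList(new Object[][] {
                { new DeviceProfile("budget-phone", 320, 480, 256) },
                { new DeviceProfile("mid-range", 480, 800, 512) },
                { new DeviceProfile("tablet", 800, 1280, 1024) },
            });
        }

        private final DeviceProfile profile;

        public MultiDeviceSmokeTest(DeviceProfile profile) {
            this.profile = profile;
        }

        @Test
        public void appLaunchesAndResponds() throws Exception {
            // EmulatorSession is a hypothetical AutoCloseable wrapper
            // around an emulator instance.
            try (EmulatorSession session = EmulatorSession.start(profile)) {
                session.installAndLaunch("myapp.apk");
                assertTrue(session.isAppResponding());
            }
        }
    }

Even a small matrix like this catches the 'works on my phone' class of failure early; the physical-device pass can then concentrate on touch behaviour and real-world performance.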


Cloud-based collaboration

Test management is a crucial part of the testing process; without management you could have chaos. Here Automation Consultants explains its cloud-based test management tool, TestWave, which enables testers to work collaboratively by storing their work in a single place, letting them keep up with progress in real time.

Bringing automation to the software development lifecycle is what Automation Consultants (AC) says it aims to do. It specialises in automation services across the whole software development lifecycle, accommodating both traditional structured software development methods and new Agile models based on iterative, incremental and cyclical approaches to software development. Here the company tells TEST about its TestWave cloud-based test management tool.

TEST: What is the history of your company?
Automation Consultants: Automation Consultants (AC) is a software and services company founded in 2000 to bring automation to the software development lifecycle. One of the parts of the lifecycle best suited to automation is testing. AC has therefore done much of its work in providing services and software for testing, especially performance testing and functional test automation.

TEST: What is the specialist product area of your company?
AC: We produce software in a number of different areas, mostly related in some way to testing. The areas include latency analysis, network discovery, telecoms billing and test management. This product review focuses on TestWave, our cloud-based test management tool. TestWave enables a team of testers to work collaboratively by storing their work in a single place, which lets them keep up with progress in real time and is much more efficient than using spreadsheets.

TEST: What specific tools and/or systems set your product offering apart from the competition and why?
AC: TestWave has the full set of features normally seen in a test management tool:


test script storage; recording of test execution and results; defect tracking; and mapping tests onto requirements and releases. Unlike other tools with this functionality, TestWave is:
• Affordable for small and medium-sized companies;
• Delivered via the cloud, which removes the need for servers and makes it quick and easy to get started;
• Cross-platform and cross-browser;
• Well suited to global use, which is important for today's onshore/offshore test teams. We have successfully tested it from locations as far away as New Zealand.

TestWave integrates tightly with test automation tools such as HP QuickTest Professional (QTP); support for Selenium will be added in a few months. It stores automated scripts and can execute them on remote machines. Results can be viewed without opening the test automation tool (eg QTP). Over and above this, TestWave has an optional automation framework, TestWave AF, which is designed to analyse the commercial benefits of automation and to manage and maintain a pack of automated regression tests written in, for example, QTP. TestWave AF suggests scripts which are suitable for automation and forecasts the return on investment (ROI) of automating them. It also matches the steps of a manual script with the code in its automated counterpart, and can generate script templates in QTP (and soon Selenium) automatically from the manual script.

It is well suited to Agile projects. Requirements and releases map onto sprints and phases, and the integration with automated tools helps manage and run the automated tests which are indispensable to many Agile projects. Finally, the next version of TestWave will integrate with JIRA. Many organisations use JIRA for bug and issue tracking, but do not have a test case management system. With its JIRA integration, TestWave will allow users to use it for test case storage, execution and requirements mapping, and JIRA for defect management.


TEST: Explain how the product works.
AC: Users include testers, test managers and developers. A tester would log in and might write test cases or execute tests. In the execution of a manual test, TestWave records the progress of the test step by step, as well as the final result. If a test fails, the tester can raise a defect. Testers can see in real time exactly which tests have been run, both by themselves and by team mates, so there is no risk of tests being missed out or duplicated. That kind of risk is much higher when keeping track of tests using spreadsheets or word processor documents.

Test managers can see the progress of all the testing done by the team, and can track it against requirements and releases. They have a set of dashboards and reports which present this information in a variety of ways. Test managers can also assign tests and defects to members of the team, so it is clear who is responsible for what.

Developers typically work on fixing defects. A developer would normally work through a list of defects assigned to him or her. As developers work on a defect, they can add comments to it, attach files and link the defect to other entities, eg the test which uncovered the defect. The developer can see all the information stored on the system about that defect, such as the comments from the testers and from other developers, as well as any files, such as screenshots, which may have been attached to it. This is far more efficient than looking through emails from multiple sources, and through many different locations for relevant files. The view of defects can be extensively customised so as to show only the most relevant information, and customisable reports can be produced based on the view. TestWave also generates customisable reports of requirements, releases and tests.
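AC does not publish TestWave's scripting interface in this interview, but the kind of script such a tool stores and triggers looks broadly like the following Selenium WebDriver test in Java; the URL and element ids are placeholders invented for illustration.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    // A self-contained regression script of the sort a test management
    // tool can store centrally, run on a remote machine and record a
    // pass/fail result for, so the whole team sees the outcome at once.
    public class LoginRegressionTest {
        public static void main(String[] args) {
            WebDriver driver = new FirefoxDriver();
            try {
                driver.get("https://example.com/login");  // placeholder URL
                driver.findElement(By.id("username")).sendKeys("demo");
                driver.findElement(By.id("password")).sendKeys("secret");
                driver.findElement(By.id("submit")).click();
                if (!"Dashboard".equals(driver.getTitle())) {
                    throw new AssertionError("Login did not reach the dashboard");
                }
            } finally {
                driver.quit();  // always release the browser
            }
        }
    }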



TEST: How do your products help a typical user in their software testing tasks?
AC: TestWave helps testers by allowing them to share the test information they write instantly. As soon as a tester runs a test, the whole team can see the result. Without a test management tool, the tester would have to enter the result into a spreadsheet and email it to colleagues or save it to a share. TestWave speeds up manual testing by presenting each step of the test to the tester in turn as the tester carries it out on the application being tested. This makes it easier for the tester to enter the result of a step in the right place, and if the step passes, the tester can record that with a single mouse click.

Test managers also benefit from seeing the results of testing instantly and in one place. They no longer have to go through multiple documents to check on progress, and they no longer have to create reports manually. Technical staff involved in fixing defects can see all the information on a defect in a single place, and can make their updates quickly, without having to do so by email.

With automated testing, TestWave simplifies the management and maintenance of the automated tests – one of the main challenges of test automation. By storing tests, it ensures that all parties can see the latest version. By triggering test runs on a local or remote machine, it saves the hassle of manually copying tests and their data to the target machine and starting the tests manually. By storing the results of automated tests, TestWave allows users to view the results of a large number of tests and identify any failures quickly. It also enables the viewing of results without having QTP installed.

TEST: Are there any specific support services that you offer for these products, and if so what makes them better than the competition?
AC: We offer a free service that allows users to import their existing test data into TestWave. It has an import facility, but the data format may need changing so as to fit the import tool. We will do whatever work is needed to format and import new users' data so they can start work quickly on their own test cases. In addition, we offer training and general test consultancy services to enable users to get the most from their investment in the product.

TEST: What overall are the benefits of your product compared to the competition?
AC: The main benefits are:
• Full test management functionality at an affordable price;
• No server, database and related admin costs;
• Ease of use with offshore teams – no need for a VPN;
• Cross-platform and cross-browser usability – Windows, Mac OS X and Linux are supported, as are Internet Explorer, Chrome, Firefox and Safari;
• Integration with QTP, and soon Selenium.

TEST: Do you have any plans to develop the product in future, and if so, how?
AC: Our future development plans for TestWave include:
• Integration with Selenium;
• Integration with JIRA;
• Changes to the reporting of releases and requirements to improve their usability in Agile projects; and
• More and better management reports.

We also have plans to improve the automation framework so that it will automatically generate QTP and Selenium scripts from manual scripts, based on previously stored GUI objects of the application under test. It will be necessary first to capture these on-screen objects, but only once; when that has been done, the tool will be able to generate complete QTP and Selenium scripts automatically. An entire regression suite could be generated in a few minutes. This will bring a step change to the efficiency of test automation and we are very excited about it.

TEST: Automation Consultants, thank you very much.
www.automation-consultants.com


Testing in a project environment

IT analyst Martin Chalkley weighs up the options for testing software in a project environment when outsourcing software development and testing to contractors.


When it comes to software development projects, validating the upfront requirements to ensure that project scope is managed and the right level of functionality is delivered is a fundamental necessity. Similarly, defining effective test regimes to confirm that the scope has been met and that the software is ready for release is also critical. Approximately 75 percent of the work in these types of projects is development work, the remaining 25 percent being the subsequent test and release of the end product.

Many organisations will use one of two approaches for the development of software on projects: they will either work with one general contractor to do both the development and the test, or they will work with two systems integrators, one delivering the development functionality and the other undertaking the test aspect. So what are the advantages and disadvantages of each approach?

The general contractor model

From the moment the tender is released, only one procurement process needs to be followed. Only one bidder will be successful and only one contract will have been let. From a pure economics and purchasing perspective this is the lowest-cost model for awarding the work. During the project lifecycle, as there is only one external party involved, the relationship can be managed more straightforwardly: there is no other party upon whose shoulders the blame for failure can be laid. Either the contractor is at fault or the awarding company is, which, in theory, means that a dispute should have a relatively uncomplex resolution.

That said, there are downsides that need to be considered. First and foremost amongst these is that the general contractor (systems integrator) is responsible for testing their own development, which is essentially akin to ‘marking your own homework’. Since you weren't allowed to do this at school, why would you consider it appropriate in this environment? No matter how well the contract has been constructed, there are likely to be loopholes that the systems integrator can exploit in the event of an issue, especially considering this is their bread-and-butter business model and they will have significantly more experience in such contracts than your company. And that is where the second issue comes into play: you will very likely have only an occasional and limited familiarity with managing such contracts, so may find it difficult to keep on top of the specifics within them.

Using two systems integrators

It should be clear that the major benefit of using two systems integrators is that one company is marking the work produced by the other. As such, there should be a much better chance of ensuring the final output has been clearly approved as fit for purpose, and possibly a quicker time to completion of the project. The development company is less likely to build in the opportunity to revisit code, especially if the contract has been let on a time-and-materials basis.

However, there are downsides: the procurement process will be longer and more involved, as there are two contracts to be let and subsequently managed. There is also the consideration that any issues will need to find a tripartite resolution, and any blame for issues that arise is likely to be levied by one contractor against the other.

The greatest disadvantage to be aware of, though, is that the test organisation will often use this as an opportunity to muscle its way onto the development side, since this represents 75 percent of the workload. They are likely to attempt to identify numerous issues to discredit the capability of the development systems integrator and subsequently look to take on the development themselves. The development systems integrator will be wary of this and therefore look to discredit the test regime of the testing organisation, in order both to protect their reputation and to create an opportunity to subsume the test workload themselves.

Martin Chalkley
Analyst, Consultandomi
www.consultandomi.com

Is there an alternative option?

There is a third way. In the world of software testing there are a number of pure-play software test companies who don't do any development but specialise in the validation of the requirements-gathering process and the subsequent testing of the systems integrator's development work. The benefit is that these companies have no interest in development opportunities; they will not be looking to undermine the systems integrators, and indeed it is likely that they will have a trusted relationship with some of those from previous projects where they have worked together. Essentially they have a shared objective without conflict of interest, rather than competing objectives. This does mean that you will still need to undertake two tenders, but there is much greater certainty that the final outcome will be a fit-for-purpose release. There will have been independent verification that the systems integrator's development work was properly tested and meets the requirements of the initial project scope.

Independent testing

Forrester have identified that much of the recent growth in outsourced application services has been fuelled by customers engaging independent testing services companies, where the development provider is separate from the provider performing the testing. For the originating organisation, the benefit of independent testing is assurance that the final software release is a fit-for-purpose solution that meets the original requirements. The opportunity to manage scope creep and reduce the frequency of revisiting poor code is increased, and so the chance of a successful project, brought in on time and to budget, is much improved.


Facilita

Facilita load testing solutions deliver results

Facilita has created the Forecast™ product suite, which is used across multiple business sectors to performance-test applications, websites and IT infrastructures of all sizes and complexity. With class-leading software and unbeatable support and services, Facilita will help you ensure that your IT systems are reliable, scalable and tuned for optimal performance.

Forecast™ is proven, effective and innovative

A sound investment: Choosing the optimal load testing tool is crucial, as the risks and costs associated with inadequate testing are enormous. Load testing is challenging, and without the right tool and vendor support it will consume expensive resources and still leave a high risk of disastrous system failure. Forecast has been created to meet the challenges of load testing now and in the future. The core of the product is tried and trusted, incorporates more than a decade of experience, and is designed to evolve in step with advances in technology.

Realistic load testing: Forecast tests the reliability, performance and scalability of IT systems by realistically simulating from one to many thousands of users executing a mix of business processes using individually configurable test data.

Comprehensive technology support: Forecast provides one of the widest ranges of protocol support of any load testing tool.

1. Forecast Web thoroughly tests web-based applications and web services, identifies system bottlenecks, improves application quality and optimises network and server infrastructures. Forecast Web supports a comprehensive and growing list of protocols, standards and data formats including HTTP/HTTPS, SOAP, XML, JSON and Ajax.

2. Forecast Java is a powerful and technically advanced solution for load testing Java applications. It targets any non-GUI client-side Java API, with support for all Java remoting technologies including RMI, IIOP, CORBA and Web Services.

3. Forecast Citrix simulates multiple Citrix clients and validates the Citrix environment for scalability and reliability, in addition to the performance of the published applications. This non-intrusive approach provides very accurate client performance measurements, unlike server-based solutions.

4. Forecast .NET simulates multiple concurrent users of applications with client-side .NET technology.

5. Forecast WinDriver is a unique solution for performance testing Windows applications that are impossible or uneconomical to test using other methods, or where user-experience timings are required. WinDriver automates the client user interface and can control from one to many hundreds of concurrent client instances or desktops.

6. Forecast can generate intelligent load at the IP socket level (TCP or UDP) to test systems with proprietary messaging protocols, and also supports the OSI protocol stack.

Powerful yet easy to use: Testers like using Forecast because of its power and flexibility. Creating working tests is made easy with Forecast's application recording and script generation features and the ability to rapidly compose complex test scenarios with a few mouse clicks.


Supports Waterfall and Agile (and everything in between): Forecast has the features demanded by QA teams, such as automatic test script creation, test data management, real-time monitoring and comprehensive charting and reporting. Forecast is successfully deployed in Agile ‘Test Driven Development’ (TDD) environments and integrates with automated test (continuous build) infrastructures. The functionality of Forecast is fully programmable and test scripts are written in standard languages (Java, C# and C++). Forecast provides the flexibility of open source alternatives along with comprehensive technical support and the features of a high-end commercial tool.

Monitoring: Forecast integrates with leading solutions such as dynaTrace to provide enhanced server monitoring and diagnostics during testing. Forecast Virtual User technology can also be deployed to generate synthetic transactions within a production monitoring solution. Facilita now offers a lightweight monitoring dashboard in addition to integration with comprehensive enterprise APM solutions.

Flexible licensing: Our philosophy is to provide maximum value and to avoid hidden costs. Licences can be bought on a perpetual or subscription basis, and short-term project licensing is also available with a “stop-the-clock” option.
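Forecast's own scripting API is not reproduced here, but the virtual-user idea it industrialises can be sketched with nothing but the JDK: a pool of threads, each acting as a crude "user", repeatedly hits a target URL while response times are aggregated. The target address and user counts below are invented for illustration; a commercial tool adds scripting, pacing, data pools and reporting on top.

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    // A toy load generator: each thread is a minimal "virtual user".
    public class TinyLoadTest {
        public static void main(String[] args) throws Exception {
            final int virtualUsers = 50;
            final int requestsPerUser = 20;
            final String target = "http://localhost:8080/";  // placeholder
            final AtomicLong totalMillis = new AtomicLong();
            final AtomicLong completed = new AtomicLong();

            ExecutorService pool = Executors.newFixedThreadPool(virtualUsers);
            for (int u = 0; u < virtualUsers; u++) {
                pool.submit(() -> {
                    for (int r = 0; r < requestsPerUser; r++) {
                        try {
                            long start = System.nanoTime();
                            HttpURLConnection conn =
                                (HttpURLConnection) new URL(target).openConnection();
                            conn.getResponseCode();   // issue the request
                            conn.disconnect();
                            totalMillis.addAndGet(
                                (System.nanoTime() - start) / 1_000_000);
                            completed.incrementAndGet();
                        } catch (Exception e) {
                            // a real tool would record a failed transaction here
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.MINUTES);
            System.out.printf("%d requests, mean response %d ms%n",
                completed.get(),
                totalMillis.get() / Math.max(1, completed.get()));
        }
    }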

Services

Supporting our users: In addition to comprehensive support and training, Facilita offers mentoring by experienced consultants, either to ‘jump start’ a project or to cultivate advanced testing techniques.

Testing services: Facilita can supplement test teams or supply fully managed testing services, including cloud-based solutions.


Facilita Tel: +44 (0) 1260 298109 Email: enquiries@facilita.co.uk Web: www.facilita.com


Parasoft

Improving productivity by delivering quality as a continuous process

For over 20 years Parasoft has been studying how to efficiently create quality computer code. Our solutions leverage this research to deliver automated quality assurance as a continuous process throughout the SDLC. This promotes strong code foundations, solid functional components, and robust business processes. Whether you are delivering Service-Oriented Architectures (SOA), evolving legacy systems, or improving quality processes, draw on our expertise and award-winning products to increase productivity and the quality of your business applications.

Specialised platform support: Access and execute tests against a variety of platforms (AmberPoint, HP, IBM, Microsoft, Oracle/BEA, Progress Sonic, Software AG/webMethods, TIBCO).

Parasoft's full-lifecycle quality platform ensures secure, reliable, compliant business processes. It was built from the ground up to prevent errors involving the integrated components – as well as reduce the complexity of testing in today's distributed, heterogeneous environments.

Trace code execution: Provide seamless integration between SOA layers by identifying, isolating, and replaying actions in a multi-layered system.

What we do
Parasoft's SOA solution allows you to discover and augment expectations around design/development policy and test case creation. These defined policies are automatically enforced, allowing your development team to prevent errors instead of finding and fixing them later in the cycle. This significantly increases team productivity and consistency.

End-to-end testing: Continuously validate all critical aspects of complex transactions which may extend through web interfaces, backend services, ESBs, databases, and everything in between.

Advanced web app testing: Guide the team in developing robust, noiseless regression tests for rich and highly dynamic browser-based applications.


Security testing: Prevent security vulnerabilities through penetration testing and execution of complex authentication, encryption, and access control test scenarios.


Continuous regression testing: Validate that business processes continuously meet expectations across multiple layers of heterogeneous systems. This reduces the risk of change and enables rapid and agile responses to business demands.

Multi-layer verification: Ensure that all aspects of the application meet uniform expectations around security, reliability, performance, and maintainability.

Policy enforcement: Provide governance and policy-validation for composite applications in BPM, SOA, and cloud environments to ensure interoperability and consistency across all SOA layers. Please contact us to arrange either a one to one briefing session or a free evaluation.

Application behavior virtualisation: Automatically emulate the behavior of services, then deploy them across multiple environments – streamlining collaborative development and testing activities. Services can be emulated from functional tests or actual runtime environment data.

Load/performance testing: Verify application performance and functionality under heavy load. Existing end-to-end functional tests are leveraged for load testing, removing the barrier to comprehensive and continuous performance monitoring.

Parasoft
Email: sales@parasoft-uk.com
Tel: +44 (0)7834 752083
Web: www.parasoft.com


Seapine Software

With over 8,500 customers worldwide, Seapine Software Inc is a recognised, award-winning, leading provider of quality-centric application lifecycle management (ALM) solutions. With headquarters in Cincinnati, Ohio and offices in London, Melbourne, and Munich, Seapine is uniquely positioned to directly provide sales, support, and services around the world. Built on flexible architectures using open standards, Seapine Software’s cross-platform ALM tools support industry best practices, integrate into all popular development environments, and run on Microsoft Windows, Linux, Sun Solaris, and Apple Macintosh platforms. Seapine Software's integrated software development and testing tools streamline your development and QA processes – improving quality, and saving you significant time and money.

TestTrack RM TestTrack RM centralises requirements management, enabling all stakeholders to stay informed of new requirements, participate in the review process, and understand the impact of changes on their deliverables. Easy to install, use, and maintain, TestTrack RM features comprehensive workflow and process automation, easy customisability, advanced filters and reports, and role-based security. Whether as a standalone tool or part of Seapine’s integrated ALM solution, TestTrack RM helps teams keep development projects on track by facilitating collaboration, automating traceability, and satisfying compliance needs.

TestTrack Pro TestTrack Pro is a powerful, configurable, and easy to use issue management solution that tracks and manages defects, feature requests, change requests, and other work items. Its timesaving communication and reporting features keep team members informed and on schedule. TestTrack Pro supports MS SQL Server, Oracle, and other ODBC databases, and its open interface is easy to integrate into your development and customer support processes.

TestTrack TCM TestTrack TCM, a highly scalable, cross-platform test case management solution, manages all areas of the software testing process including test case creation, scheduling, execution, measurement, and reporting. Easy to install, use, and maintain, TestTrack TCM features comprehensive workflow and process automation, easy customisability, advanced filters and reports, and role-based security. Reporting and graphing tools, along with user-definable data filters, allow you to easily measure the progress and quality of your testing effort.

QA Wizard Pro QA Wizard Pro completely automates the functional and regression testing of Web, Windows, and Java applications, helping quality assurance teams increase test coverage. Featuring a next-generation scripting language, QA Wizard Pro includes advanced object searching, smart matching, a global application repository, data-driven testing support, validation checkpoints, and built-in debugging. QA Wizard Pro can be used to test popular languages and technologies like C#, VB.NET, C++, Win32, Qt, AJAX, ActiveX, JavaScript, HTML, Delphi, Java, and Infragistics Windows Forms controls.

Surround SCM Surround SCM, Seapine’s cross-platform software configuration management solution, controls access to source files and other development assets, and tracks changes over time. All data is stored in industry-standard relational database management systems for greater security, scalability, data management, and reporting. Surround SCM’s change automation, caching proxy server, labels, and virtual branching tools streamline parallel development and provide complete control over the software change process.

www.seapine.com
United Kingdom, Ireland, and Benelux: Seapine Software Ltd, Building 3, Chiswick Park, 566 Chiswick High Road, Chiswick, London, W4 5YA, UK. Phone: +44 (0) 208-899-6775. Email: salesuk@seapine.com
Americas (corporate headquarters): Seapine Software, Inc, 5412 Courseview Drive, Suite 200, Mason, Ohio 45040, USA. Phone: 513-754-1655


Micro Focus

Deliver better software, faster

Software quality that matches requirements and testing to business needs. Making sure that business software delivers precisely what is needed, when it is needed, is central to business success. Getting it right first time hinges on properly defined and managed requirements, the right testing and managing change. Get these right and you can expect significant returns: costs are reduced, productivity increases, time to market is greatly improved and customer satisfaction soars.

The Borland software quality solutions from Micro Focus help software development organizations develop and deliver better applications through closer alignment to business, improved quality and faster, stronger delivery processes – independent of language or platform. Combining Requirements Definition and Management, Testing and Software Change Management tools, Micro Focus offers an integrated software quality approach that is positioned in the leadership quadrant of Gartner Inc's Magic Quadrant. The Borland Solutions from Micro Focus are both platform and language agnostic – so whatever your preferred development environment, you can benefit from world-class tools to define and manage requirements, test your applications early in the lifecycle, and manage software configuration and change.

Requirements

Defining and managing requirements is the bedrock for application development and enhancement. Micro Focus uniquely combines requirements definition, visualization, and management into a single '3-Dimensional' solution, giving managers, analysts and developers precise detail for engineering their software. By cutting ambiguity, the direction of development and QA teams is clear, strengthening business outcomes. For one company this delivered an ROI of 6-8 months, a 20% increase in project success rates, a 30% increase in productivity and a 25% increase in asset re-use.

Using Micro Focus tools to define and manage requirements helps your teams:
• Collaborate, using pictures to build mindshare, drive a common vision and share responsibility with role-based review and simulations.
• Reduce waste by finding and removing errors earlier in the lifecycle, eliminating ambiguity and streamlining communication.
• Improve quality by taking the business need into account when defining the test plan.

Caliber® is an enterprise software requirements definition and management suite that facilitates collaboration, impact analysis and communication, enabling software teams to deliver key project milestones with greater speed and accuracy.

Software Change Management StarTeam® is a fully integrated, cost-effective software change and configuration management tool. Designed for both centralized and geographically distributed software development environments, it delivers: • A single source of key information for distributed teams • Streamlined collaboration through a unified view of code and change requests • Industry leading scalability combined with low total cost of ownership

Testing Automating the entire quality process, from inception through to software delivery, ensures that tests are planned early and synchronize with business goals even as requirements and realities change. Leaving quality assurance to the end of the lifecycle is expensive and wastes improvement opportunities. Micro Focus delivers a better approach: Highly automated quality tooling built around visual interfaces and reusability. Tests can be run frequently, earlier in the development lifecycle to catch and eliminate defects rapidly. From functional testing to cloud-based performance testing, Micro Focus tools help you spot and correct defects rapidly across the application portfolio, even for Web 2.0 applications. Micro Focus testing solutions help you: • Align testing with a clear, shared understanding of business goals focusing test resources where they deliver most value • Increase control through greater visibility over all quality activities • Improve productivity by catching and driving out defects faster Silk is a comprehensive automated software quality management solution suite which enables users to rapidly create test automation, ensuring continuous validation of quality throughout the development lifecycle. Users can move away from manual-testing dominated software lifecycles, to ones where automated tests continually test software for quality and improve time to market.

Take testing to the cloud

Users can test and diagnose Internet-facing applications under immense global peak loads on the cloud without having to manage complex infrastructures. Among other benefits, SilkPerformer® CloudBurst gives development and quality teams:
• Simulation of peak demand loads through onsite and cloud-based resources for scalable, powerful and cost-effective peak load testing;
• Web 2.0 client emulation to test even today's rich internet applications effectively.

Micro Focus, a member of the FTSE 250, provides innovative software that enables companies to dramatically improve the business value of their enterprise applications. Micro Focus Enterprise Application Modernization, Testing and Management software enables customers' business applications to respond rapidly to market changes and embrace modern architectures with reduced cost and risk.

For more information, please visit www.microfocus.com/solutions/softwarequality


Original Software

Delivering quality through innovation

With a world-class record of innovation, Original Software offers a solution focused completely on the goal of effective software quality management. By embracing the full spectrum of Application Quality Management (AQM) across a wide range of applications and environments, we partner with customers and help make quality a business imperative. Our solutions include a quality management platform, manual testing, test automation and test data management software, all delivered with the control of business risk, cost, time and resources in mind. Our test automation solution is particularly suited to testing in an agile environment.

Setting new standards for application quality Managers responsible for quality must be able to implement processes and technology that will support their important business objectives in a pragmatic and achievable way, and without negatively impacting current projects. These core needs are what inspired Original Software to innovate and provide practical solutions for Application Quality Management (AQM) and Automated Software Quality (ASQ). We have helped customers achieve real successes by implementing an effective ‘application quality eco-system’ that delivers greater business agility, faster time to market, reduced risk, decreased costs, increased productivity and an early return on investment. Our success has been built on a solution suite that provides a dynamic approach to quality management and automation, empowering all stakeholders in the quality process, as well as uniquely addressing all layers of the application stack. Automation has been achieved without creating a dependency on specialised skills and by minimising ongoing maintenance burdens.

An innovative approach Innovation is in the DNA at Original Software. Our intuitive solution suite directly tackles application quality issues and helps you achieve the ultimate goal of application excellence.

Empowering all stakeholders The design of the solution helps customers build an ‘application quality eco-system’ that extends beyond just the QA team, reaching all the relevant stakeholders within the business. Our technology enables everyone involved in the delivery of IT projects to participate in the quality process – from the business analyst to the business user and from the developer to the tester. Management executives are fully empowered by having instant visibility of projects underway.

Quality that is truly code-free We have observed the script maintenance and exclusivity problems caused by code-driven automation solutions and have built a solution suite that requires no programming skills. This empowers all users to define and execute their tests without the need to use any kind of code, freeing them from the automation specialist bottleneck. Not only is our technology easy to use, but quality processes are accelerated, allowing for faster delivery of business-critical projects.

Top to bottom quality Quality needs to be addressed at all layers of the business application. We give you the ability to check every element of an application - from the visual layer, through to the underlying service processes and messages, as well as into the database.

Addressing test data issues Data drives the quality process and as such cannot be ignored. We enable the building and management of a compact test environment from production data quickly and in a data privacy compliant manner, avoiding legal and security risks. We can also manage the state of that data, so that it is synchronised with test scripts, enabling swift recovery and shortening test cycles.

A holistic approach to quality Our integrated solution suite is uniquely positioned to address all the quality needs of an application, regardless of the development methodology used. Being methodology neutral, we can help in Agile, Waterfall or any other project type. We provide the ability to unite all aspects of the software quality lifecycle. Our solution helps manage the requirements, design, build, test planning and control, test execution, test environment and deployment of business applications from one central point that gives everyone involved a unified view of project status and avoids the release of an application that is not ready for use.

Helping businesses around the world Our innovative approach to solving real pain-points in the Application Quality Life Cycle has been recognised by leading multinational customers and industry analysts alike. In a 2011 report, Ovum stated: “While other companies have diversified, into other test types and sometimes outside testing completely, Original Software has stuck more firmly to a value proposition almost solely around unsolved challenges in functional test automation. It has filled out some yawning gaps and attempted to make test automation more accessible to non-technical testers.” More than 400 organisations operating in over 30 countries use our solutions and we are proud of partnerships with the likes of Coca-Cola, Unilever, HSBC, Barclays Bank, FedEx, Pfizer, DHL, HMV and many others.

www.origsoft.com Email: solutions@origsoft.com Tel: +44 (0)1256 338 666 Fax: +44 (0)1256 338 678 Grove House, Chineham Court, Basingstoke, Hampshire, RG24 8AG


Green Hat

The Green Hat difference

In one software suite, Green Hat automates the validation, visualisation and virtualisation of unit, functional, regression, system, simulation, performance and integration testing, as well as performance monitoring. Green Hat offers code-free and adaptable testing from the user interface (UI) through to back-end services and databases. By reducing testing time from weeks to minutes, Green Hat gives customers rapid payback on their investment.

Green Hat's testing suite supports quality assurance across the whole lifecycle, and different development methodologies including Agile and test-driven approaches. Industry vertical solutions using protocols like SWIFT, FIX, IATA or HL7 are all simply handled. Unique pre-built quality policies enable governance, and the re-use of test assets promotes high efficiency. Customers experience value quickly through the high usability of Green Hat's software. Focusing on minimising manual and repetitive activities, Green Hat works with other application lifecycle management (ALM) technologies to provide customers with value-add solutions that slot into their Agile testing, continuous testing, upgrade assurance, governance and policy compliance. Enterprises invested in HP and IBM Rational products can simply extend their test and change management processes to the complex test environments managed by Green Hat and get full integration.

Green Hat provides the broadest set of testing capabilities for enterprises with a strategic investment in legacy integration, SOA, BPM, cloud and other component-based environments, reducing the risk and cost associated with defects in processes and applications. The Green Hat difference includes:
• Purpose-built end-to-end integration testing of complex events, business processes and composite applications. Organisations benefit by having UI testing combined with SOA, BPM and cloud testing in one integrated suite.
• Unrivalled insight into the side-effect impacts of changes made to composite applications and processes, enabling a comprehensive approach to testing that eliminates defects early in the lifecycle.
• Virtualisation for missing or incomplete components to enable system testing at all stages of development. Organisations benefit through being unhindered by unavailable systems or costly access to third-party systems, licences or hardware. Green Hat pioneered ‘stubbing’, and organisations benefit by having virtualisation as an integrated function, rather than a separate product.

• ‘Out-of-the-box’ support for over 70 technologies and platforms, as well as transport protocols for industry vertical solutions. Also provided is an application programming interface (API) for testing custom protocols, and integration with UDDI registries/repositories.
• Helping organisations at an early stage of project or integration deployment to build an appropriate testing methodology as part of a wider SOA project methodology.
• Scaling out these environments, test automations and virtualisations into the cloud, with seamless integration between Green Hat's products and leading cloud providers, freeing you from the constraints of real hardware without the administrative overhead.
• ‘Out-of-the-box’ deep integration with all major SOA, enterprise service bus (ESB) platforms, BPM runtime environments, governance products, and application lifecycle management (ALM) products.
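To make the ‘stubbing’ idea concrete: below is a bare-bones service stub built on the JDK's bundled HTTP server. It answers a dependent service's endpoint with a canned JSON payload so integration tests can run before (or without) the real system. The path and payload are invented for illustration; a product such as Green Hat's models far richer behaviour (state, latency, protocol variety) than this sketch.

    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;
    import com.sun.net.httpserver.HttpServer;

    // Serves a canned response in place of a real downstream service.
    public class QuoteServiceStub {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8089), 0);
            server.createContext("/quote", exchange -> {
                byte[] body = "{\"symbol\":\"TEST\",\"price\":42.0}"
                    .getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders()
                    .set("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start();
            System.out.println("Stub listening on http://localhost:8089/quote");
        }
    }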

Corporate overview Since 1996, Green Hat has constantly delivered innovation in test automation. With offices that span North America, Europe and Asia/Pacific, Green Hat’s mission is to simplify the complexity associated with testing, and make processes more efficient. Green Hat delivers the market leading combined, integrated suite for automated, end-to-end testing of the legacy integration, Service Oriented Architecture (SOA), Business Process Management (BPM) and emerging cloud technologies that run Agile enterprises. Green Hat partners with global technology companies including HP, IBM, Oracle, SAP, Software AG, and TIBCO to deliver unrivalled breadth and depth of platform support for highly integrated test automation. Green Hat also works closely with the horizontal and vertical practices of global system integrators including Accenture, Atos Origin, CapGemini, Cognizant, CSC, Fujitsu, Infosys, Logica, Sapient, Tata Consulting and Wipro, as well as a significant number of regional and country-specific specialists. Strong partner relationships help deliver on customer initiatives, including testing centres of excellence. Supporting the whole development lifecycle and enabling early and continuous testing, Green Hat’s unique test automation software increases organisational agility, improves process efficiency, assures quality, lowers costs and mitigates risk.

Helping enterprises globally Green Hat is proud to have hundreds of global enterprises as customers, and this number does not include the consulting organisations who are party to many of these installations with their own staff or outsourcing arrangements. Green Hat customers enjoy global support and cite outstanding responsiveness to their current and future requirements. Green Hat’s customers span industry sectors including financial services, telecommunications, retail, transportation, healthcare, government, and energy.


sales@greenhat.com www.greenhat.com


T-Plan

T-Plan has supplied best-of-breed solutions for testing since 1990. The T-Plan method and tools allow both the business unit manager and the IT manager to manage costs, reduce business risk and regulate the process. By providing order, structure and visibility throughout the development lifecycle, from planning to execution, acceleration of the "time to market" for business solutions can be delivered. The T-Plan Product Suite allows you to manage every aspect of the testing process, providing a consistent and structured approach to testing at the project and corporate level.

What we do

Test Management: The T-Plan Professional product is modular in design, clearly differentiating between the Analysis, Design, Management and Monitoring of the Test Assets. It helps answer questions such as:
• What coverage back to requirements has been achieved in our testing so far?
• What requirement successes have we achieved so far?
• Can I prove that the system is really tested?
• If we go live now, what are the associated Business Risks?

Test Automation: Cross-platform, Java-based test automation is also integrated into the suite via T-Plan Robot, creating a full testing solution. T-Plan Robot Enterprise is the most flexible and universal black box test automation tool on the market. Providing a human-like approach to software testing of the user interface, and uniquely built on Java, Robot performs well in situations where other tools may fail.
• Platform independence (Java). T-Plan Robot runs on, and automates, all major systems, such as Windows, Mac, Linux, Unix, Solaris, and mobile platforms such as Android, iPhone, Windows Mobile, Windows CE and Symbian.
• Test almost any system. As automation runs at the GUI level, via the use of VNC, the tool can automate any application – eg Java, C++/C#, .NET, HTML (web/browser), mobile and command-line interfaces – as well as applications usually considered impossible to automate, like Flash/Flex.
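The ‘human-like’ GUI driving that T-Plan Robot performs over VNC can be illustrated, in miniature, with the JDK's own java.awt.Robot, which synthesises native mouse and keyboard events on the local desktop. Coordinates and keystrokes below are invented for illustration; the real product adds image-based recognition and remote (VNC) control on top of this style of event injection.

    import java.awt.Robot;
    import java.awt.event.InputEvent;
    import java.awt.event.KeyEvent;

    // Clicks a screen location, types a word and presses Enter:
    // the same event-level driving that GUI automation tools build on.
    public class GuiDriveSketch {
        public static void main(String[] args) throws Exception {
            Robot robot = new Robot();
            robot.setAutoDelay(150);  // ms pause between synthetic events

            robot.mouseMove(400, 300);  // position over a text field
            robot.mousePress(InputEvent.BUTTON1_DOWN_MASK);
            robot.mouseRelease(InputEvent.BUTTON1_DOWN_MASK);

            for (char c : "hello".toCharArray()) {
                int code = KeyEvent.getExtendedKeyCodeForChar(c);
                robot.keyPress(code);
                robot.keyRelease(code);
            }
            robot.keyPress(KeyEvent.VK_ENTER);
            robot.keyRelease(KeyEvent.VK_ENTER);
        }
    }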

Incident Management: Errors or queries found during the Test Execution can also be logged and tracked throughout the Testing Lifecycle in the T-Plan Incident Manager.

“We wanted an integrated test management process; T-Plan was very flexible and excellent value for money.” – Francesca Kay, Test Manager, Virgin Mobile



the last word...

The end is nigh

Dave Whalen – once an avowed Agile hater – is coming to the end of his first Agile project...

We're coming to the end of my first real Agile project. We are in the final week of our final sprint for Release 1.0. I'm sure many of you are wondering (alright, maybe just a few of you) if the guy that hated Agile is still a convert. The simple answer is yes, but with a caveat. Time for the big retrospective; here are the highlights and lessons learned:

The process worked! If there is one thing the entire team will agree on, it is that the Agile process can and does work. That said, however, it works under specific conditions. It takes time to get the process down and to get into a groove.

You need a highly motivated team: Teamwork is the key to success. Collaboration is critical. You also need to build the team with the right team members. I'd even venture to say that attitude is more important than skill set. Skills are important, but someone with an amazing set of skills and a poor attitude can really kill an effective team. Everyone on the team was motivated to succeed – not as individuals, but as a team. Anyone would step up and help another team member, anytime. But we did have a small glitch. We brought in a new team member with highly needed skills and he wasn't a team player; he was a lone wolf. It affected the entire team and caused a huge distraction. Luckily for us his skills were more needed elsewhere, and others stepped up and picked up the slack.

Failure is an option: You will fail – a few times. What is important is why you failed and what you did about it. At first our failures were due to bad estimating. We missed our sprint goals. Over time we got better at it, but we still underestimated here and there. We also implemented a number of new technologies, and it took us some time to learn and implement them.


Change is inevitable: Expect it, embrace it. One of the key tenets of Agile is flexibility. We had lots of changes – some small, others not so small. We completely changed direction twice, but it was easy to refocus our efforts when we needed to. We could turn on a dime!

Calculating velocity: From our leader: “One of the best things about Agile is the ability to calculate a velocity for the team (and for individual scrums), as that helps plan, track progress and project completion dates. This is very helpful when working on date-driven efforts. If you know what the team/scrum velocity is, and you know what the due date is, you can easily tell if you have a shot at delivering on time. If the math adds up, off you go. But if things are off, you'll know early on. Then you can define your options and line up business unit support for the appropriate choice. This is a big benefit. But you need to have a time-tested velocity calculation to lend weight to this analysis.” (A back-of-the-envelope version of this check is sketched at the end of this piece.)

Estimation can be a challenge: It's something you learn and improve on over time. New technologies add time. At first we didn't factor in the time it takes to research a new technology, learn it, and finally implement it. You also need to consider Murphy's Law. Things can and will happen that you can't plan for and never envision.

Agile requires an entirely new management paradigm: Agile needs someone filling more of a leadership role than a management role. We had a great project manager. Not only did he help keep us focused, he gave us lots of leeway to try new things and never scolded us when we failed. Of course he showed the entire team his status slides every morning. If we weren't up to par or perhaps didn't record our tasks or time correctly, he would simply put a little red box around our faux pas. That was it, no judgement. We hated being on that slide! The motivation to avoid the red box was purely self-driven. A little friendly ribbing from team members helped too.

So I'm sure the question on everyone's mind is: am I now an Agile fan? Will I convert? Did I drink the Agile Kool-Aid? After all, I'm the guy that wrote the ‘I Hate Agile’ cover story for this very publication. The answer is (drum roll) yes! Well, a qualified yes. I loved the process as we implemented it. I'm actually looking forward to the next release. But, that said, I'm still not a fan of many Agile implementations, nor am I a fan of Agile consultants. There are some processes that still remain very rigid. There are still Agile consultants out there that see any deviation from their idea of Agile as failure. In fact, I'll bet most would look at what we've done, pick holes in our process and tell us that we're not really Agile. Whatever. I'll let our success stand as a measure of our Agility. Try the Kool-Aid!
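As promised above, here is a back-of-the-envelope version of the velocity check Whalen's leader describes, with every number invented for illustration:

    // Projects whether the remaining backlog fits the sprints left.
    public class VelocityCheck {
        public static void main(String[] args) {
            int remainingStoryPoints = 120;  // work left in the backlog
            double velocity = 18.5;          // average points per sprint
            int sprintsUntilDeadline = 6;    // sprints before the due date

            double sprintsNeeded = Math.ceil(remainingStoryPoints / velocity);
            System.out.printf("Need %.0f sprints of %d available%n",
                sprintsNeeded, sprintsUntilDeadline);
            if (sprintsNeeded > sprintsUntilDeadline) {
                System.out.println(
                    "At current velocity the date is at risk: flag it early.");
            }
        }
    }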


Dave Whalen

President and senior software entomologist, Whalen Technologies
softwareentomologist.wordpress.com



Subscribe to TEST free!

[Thumbnail covers of recent issues: Volume 3, Issue 5 (October 2011); Volume 3, Issue 6 (December 2011); Volume 4, Issue 1 (February 2012)]

Visit TEST online at www.testmagazine.co.uk

For exclusive news, features, opinion, comment, directory, digital archive and much more visit

www.testmagazine.co.uk

Published by 31 Media Ltd
Telephone: +44 (0) 870 863 6930
Facsimile: +44 (0) 870 085 8837
Email: info@31media.co.uk
Website: www.31media.co.uk



BORLAND SOLUTIONS FROM MICRO FOCUS DESIGNED TO DELIVER BETTER SOFTWARE, FASTER

Borland Solutions are designed to:

Align development to business needs

Strengthen development processes

Ensure testing occurs throughout the lifecycle

Deliver higher quality software, faster

Deliver stable mobile applications, even under peak loads

Borland Solutions from Micro Focus make up a comprehensive quality toolset for embedding quality throughout the software development lifecycle. Software that delivers precisely what is needed, when it is needed, is crucial for business success. Borland Solutions embed quality into software delivery from the very beginning of the development lifecycle, whichever methodology you use – traditional, Agile or a mixture – from requirements, through regression, functional, performance and load testing. The result: you meet both business requirements and quality expectations with better quality software, delivered faster.

Micro Focus Ltd. Email: microfocus.communications@microfocus.com © 2011 Micro Focus IP Development Limited. All rights and marks acknowledged.

