TEST Magazine – June/July 2011

INNOVATION FOR SOFTWARE QUALITY
Volume 3: Issue 3: June 2011

Banishing security bugs – Miia Vuontisjärvi and Ari Takanen on the power of fuzzing
Inside: Risk-based testing | Application development | Safety testing
Visit TEST online at www.testmagazine.co.uk


Who does the testing after the singularity?

I am a big fan of Ray Kurzweil. The man is a true visionary and a force for good in our age of increasing complexity and chaos. Kurzweil is one of the world’s leading inventors, thinkers and futurists, with a twenty-year track record of accurate prediction; he has been described as the “rightful heir to Thomas Edison”.


© 2011 31 Media Limited. All rights reserved. TEST Magazine is edited, designed, and published by 31 Media Limited. No part of TEST Magazine may be reproduced, transmitted, stored electronically, distributed, or copied, in whole or part without the prior written consent of the publisher. A reprint service is available. Opinions expressed in this journal do not necessarily reflect those of the editor or TEST Magazine or its publisher, 31 Media Limited. ISSN 2040-0160


One of his (and many others’) theories concerns the ‘Singularity’, which he describes thus: “The Singularity is an era in which our intelligence will become increasingly non-biological and trillions of times more powerful than it is today – the dawning of a new civilisation that will enable us to transcend our biological limitations and amplify our creativity.” Without getting into too much technical detail, the ‘Technological singularity’ page on Wikipedia has this: “If superhuman intelligences were invented, either through the amplification of human intelligence or artificial intelligence, they would bring to bear greater problem-solving and inventive skills than humans; such an intelligence could then design a yet more capable machine, or re-write its own source code to become more intelligent. This more capable machine could then design a machine of even greater capability. These iterations could accelerate, leading to recursive self-improvement, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in” (my italics). Sign me up!

One thing does occur to me though. With the computers effectively writing their own code, who does the testing? I suppose the simple answer is ‘the machines do the testing’. But could this lead to defects being multiplied and magnified – like pathological genetic mutations in inbred organisms – to make a very bad biological analogy? Who knows. To every argument that runs, “Humans need to assess the usability of the interface,” the answer could plausibly be, “the machines can mimic human interaction with a high degree of accuracy.” They could probably calculate that accuracy down to the last decimal place. I’m not fearing for the future of testers just yet. The singularity may come in the next ten years or the next 100, depending on who you believe. And when it does come, like other long-heralded quantum leaps forward, the reality may yet prove to be a long way from the predictions – good and bad.

As this will be my last issue in the editor's chair, I won't bid you my usual 'until next time...', but I will leave you in the capable hands of the new editor Mr John Hancock (editorial queries to john.hancock@31media.co.uk). It has been a fantastic couple of years launching and editing TEST, so I thank you all for reading as well as for your valuable help and assistance. I'm off to new challenges in the design engineering sector – they also know all about quality there! Good luck and best wishes

Matt Bailey, Editor

Editor: Matthew Bailey – matthew.bailey@31media.co.uk – Tel: +44 (0)203 056 4599
To advertise contact: Grant Farrell – grant.farrell@31media.co.uk – Tel: +44 (0)203 056 4598
Production & Design: Toni Barrington – toni.barrington@31media.co.uk; Dean Cook – dean.cook@31media.co.uk
Editorial & Advertising Enquiries: 31 Media Ltd, Three Tuns House, 109 Borough High Street, London SE1 1NL
Tel: +44 (0) 870 863 6930; Fax: +44 (0) 870 085 8837; Email: info@31media.co.uk; Web: www.testmagazine.co.uk
Printed by Pensord, Tram Road, Pontllanfraith, Blackwood NP12 2YA





Contents – June 2011

1 Leader column – Who will do the testing when computers write their own programming?

4 News

6 Cover story – Banishing security bugs: the power of fuzzing – Miia Vuontisjärvi and Ari Takanen explore the area of security testing from both a vulnerability management perspective, and by examining some of the most common testing techniques in use today.

10 A positive mindset – TEST editor Matt Bailey discusses the business of testing with Nic Goodall, an experienced tester working in the digital marketing sector.

12 Fitter, better, faster: what the future holds for application development – Kevin Parker has a look at what has changed around software development and how the increasingly complex IT world is impacting the software industry.

16 Farming the organisation – for process improvement – Farmers have to prepare a strategy and follow processes before they start the physical hard work of farming itself. Pratik Shah says testers should adopt the same approach.

18 Businesses are failing to support testers – Matt Bailey talks to Sogeti’s deputy CEO Richard Terry about what conclusions can be drawn from his company’s survey of testing and QA professionals at TestExpo 2011.

20 Testing virtual applications – Application virtualisation can be a tremendous aid to the production, testing and implementation of modern IT systems. Andres Thompson reports.

24 Smarter mobile devices – smarter testing – According to industry analysts, problems are likely to emerge in the smartphone market due to the increased importance of software, combined with a lack of adequate testing. Paul Beaver reports.

28 The five challenges to risk-based testing – Stuart Reid, CTO of Testing Solutions Group, highlights the obstacles to effectively implementing risk-based testing and suggests means of addressing them.

32 Perfect partnerships – The days in which an outsourced testing engagement consisted of a client mapping out a specification, passing it to the testing company and devolving all responsibility are gone. Yann Gloaguen outlines his vision of the perfect testing partnership.

35 The importance of testing for software safety standards – Good verification and testing practice is crucial, but nowhere more so than in safety-critical areas like aerospace and medicine. Ian Gilchrist, software products consultant at IPL, reports.

38 Climbing out of the testbox – In the first of her regular ‘Training Corner’ contributions, test training specialist Angelina Samaroo looks at the complex world of ICT and risk-based testing.

42 TEST Directory

48 Last Word – Dave Whalen returns to tell us how he has made his peace with Agile... Sort of.


News

Cloud offers a secure route to using personal IT devices at work

Cloud computing applications can provide employees with a secure and effective means to use their personal IT devices at work without compromising their organisation’s security levels, according to Advanced 365, a managed services and Cloud computing provider. With smartphones, tablet computers and non-PC notebooks now everyday items in many households, there is a growing reluctance amongst employees to use outdated IT systems in the workplace. This has led some organisations to allow authorised staff to use their own IT devices to perform their roles. While this shift in attitude is being welcomed by end users, it has understandably raised questions amongst IT managers.

“Many organisations still insist that employees only use recognised and secure IT devices on their company’s infrastructure. However, as tech-savvy office workers seek to use the latest mobile IT devices in preference to aging company PCs and laptops, network security concerns are inevitably resurfacing,” says Neil Cross, managing director of Advanced 365. “To address this challenge, it is essential that organisations ensure that their employees have authentication, identity verification and security applications installed on their IT devices before allowing them to access their corporate network.”

Developments in Cloud computing technology are now providing an opportunity for organisations to address these security fears without losing control of their desktop environment, even if they can’t directly control the device being used by the employee. These Cloud services, such as Microsoft’s Office 365 (currently in beta testing), allow users to access their company network and applications such as email over a secure internet connection as and when they require, for a monthly fee. Alternatively, organisations can opt for a more integrated approach by utilising applications such as VDI, Citrix or online portals such as Microsoft SharePoint to deliver their business applications via a secure controlled environment.

“While Cloud computing technology can reduce the pressure that IT teams are under to maintain hardware, and potentially reduce costs, working with a trusted partner that has the breadth of skills to understand these new and more complex environments is now more important than ever,” says Cross.

Automated software testing from the cloud

Web-based, automated software testing solutions provider Janova has launched what it says are powerful scriptless testing tools designed to meet the needs of organisations ranging from the single user up to the enterprise. According to Janova, the solution streamlines the often burdensome task of software implementation with efficient, easy-to-use tools that run tests up to 20 times faster, while decreasing implementation times by up to 40 percent. Janova’s simple project structure of Features, Pages and Flows allows English requirements to become automated tests that execute securely in the cloud. With nothing to download and no infrastructural investment required, users can access Janova from anywhere with an Internet connection and receive detailed reporting in seconds.

“Janova is redefining software testing with our revolutionary tools that aim to simplify a task many organizations struggle with on a daily basis,” said Jeff Lusenhop, founder and chief executive officer, Janova. “Janova’s solutions are different because they leverage the power of the cloud while being easy to understand and use for both business analysts and quality assurance personnel.”

EggPlant cracks US and China

London-based TestPlant has secured what it says is a global test industry first in securing a US patent for its automated software testing tool eggPlant. The company is now working with UK Trade & Investment (UKTI) to expand its business across the globe. TestPlant says eggPlant is now in use in more than 20 countries. Its application in the defence and security sector led the company to work with DSO, the defence and security arm of UKTI, and the company was named Best Newcomer at the London Export Awards 2010, organised by UKTI London. “The award has been an enormous help,” says co-founder and chief executive of TestPlant, George Mackintosh. “In fact, the buzz surrounding it helped us sell more in the domestic market.” Mackintosh emphasises the need to look at new markets and build a regional presence in order to secure future growth, and is working with UKTI to help develop the business in its target markets.

As part of this strategy, the company joined UKTI on a trade visit to China in July 2010 to conduct market research and meet potential partners and clients. “Our schedule was full of meetings with Chinese companies that we would not have been able to arrange without UKTI’s help,” says Mackintosh. “One of these meetings was with a senior vice-president of the giant Huawei telecoms business. As a direct result of this, we now have an active project with Huawei’s mobility and smartphone team involving our eggPlant mobile testing solution. And the UKTI staff based in China have been tremendously supportive throughout the whole process.”

Reflective Solutions signs US West Coast technology partner

Reflective Solutions says demand for its web application performance testing software, StressTester, is steadily increasing in the US; as if to prove the point, San Diego-based IT consultancy Norima Consulting has adopted StressTester as its performance testing tool of choice for all its consultancy projects. Reflective Solutions has a growing number of partnerships with consultancies on the East Coast of the US, but the relationship with Norima Consulting will be its first West Coast collaboration. Norima has selected StressTester to extend the range of professional IT consultancy services it currently offers. The company is embarking on an ambitious growth plan and aims to expand its team of 45 consultants significantly over the next three years. Providing application performance testing based on the StressTester platform is an important strategic move by the company and is seen as one of the keys to the success of its business development plans.

David Kuik, CEO at Norima Consulting, comments: “Performance testing is important to our growth strategy and we wanted to partner with the best available performance testing software, to reflect the quality of our own services. StressTester gives us all of that and more, enabling us to offer a highly competitive solution to our clients.”



Survey finds 88% of enterprises considering automation

AppLabs, which claims to be the world's largest software testing and quality management company, has published a survey of quality assurance analysts and IT decision makers from Global 2000 companies. The study was designed to assess the readiness of these organisations to adopt software test automation, and was conducted during a recent webinar AppLabs organised, A Structured Approach to Enterprise Test Automation, featuring a leading analyst firm. The results suggest that most organisations today understand the value software test automation can provide to an enterprise, including reuse, better labour utilisation and cost savings. When asked what percentage of their organisation’s application testing was automated, 65 percent of the participants indicated that test automation in their organisations was currently less than 50 percent, while the remaining participants indicated above 50 percent and below 75 percent automation. Answering another question posed during the webinar, on whether they would consider using application test automation within the next three quarters if they had not considered it yet, 88 percent of the participants answered in the affirmative.

The survey results indicate that even though most of the enterprises have not yet automated their application testing, they are in the process of doing so. For many enterprises 2011 seems the right time to establish an automation strategy for testing, and they are making sure such an initiative is part of their plans for 2011.

Sony in ‘database leak hell’

Reports that Sony is in database leak hell – with a further 25 million users of its online entertainment service having had their credentials compromised – are a serious blow to the Japanese IT giant's credibility, but the bigger question is what other database leaks are lurking in the electronic undergrowth. According to Andy Cordial, managing director of secure storage systems specialist Origin, with major database incursions taking place on an almost daily basis, it is clear that current corporate security defence strategies are no longer enough.

“Quite aside from the Sony double-whammy, there have been hacks of several corporates, including the Epsilon database cracking incident, in recent weeks. Regardless of what caused these incursions, it is now clear that the database security systems in active use on both sides of the Atlantic are no longer sufficient,” he said. “Most security professionals understand that a multi-layered approach can be the best option, but – until now – this was not always the most cost-effective approach. The $64,000 question, however, is what is the real solution to this pressing issue,” he added.

What we are seeing, says Cordial, is an obvious change in the modus operandi of hackers, who are intent on extracting user credentials from as many corporates as possible. Whatever their methodology, the fact is that IT managers need to raise the bar when it comes to protecting their data, and this can most cost-effectively be done using a mixture of security technologies. “It's very easy to lose sight of the fact that fraudsters will always tend to gravitate towards the easiest system to crack. Put simply, this means that, if you make it difficult enough for them on your own firm's IT systems, they will go elsewhere,” concludes Cordial.


Industrial strength debugger

Allinea Software, a leading supplier of software development tools for high performance computing (HPC), has released what it says is the first industrial-strength debugger for parallel computing on systems with hundreds of thousands of processor cores delivering Petaflops performance. According to Allinea, the totally new version of its DDT dramatically shortens debugging time and improves efficiency for small and mid-scale users; moreover, the new technology it incorporates makes debugging possible for the first time for users of the world’s largest supercomputers as they approach Petascale performance.

Allinea DDT 3.0 is based on a new tree-based architecture that provides a logarithmic improvement in scalability and response time. The enhanced user interface also makes it easy to scale from debugging at small to very large scales. Smaller system users can thus now benefit from the powerful new features in Allinea DDT, such as Smart Highlighting, enhanced C++ features and new support for the most complex and demanding codes. Large-scale users will now be able to manage huge programs which can run on very large systems. All these features together offer users dramatically improved debugging performance.

Mike Fish, CEO of Allinea Software, explains: “The release of DDT 3.0 marks a massive performance revolution for users, making possible tasks that were previously unattainable. As systems become ever larger, huge amounts of debug data are generated which – with conventional tool architectures – are almost impossible to present intelligibly to the user and create unacceptably long response times. Allinea’s partnerships with the largest and most demanding supercomputer centres around the world give it a unique insight on how to deliver the performance needed for Petascale debugging – as evidenced by the logarithmic degradation in response time demonstrated at Petascale on Cray XT5 supercomputers – making Allinea DDT the most user-friendly product for all types of user.”




Banishing security bugs – the power of fuzzing

For quality assurance people, security-related bugs have become a new challenge. Most organisations require some type of security testing from their development teams, but it is often left to the testers to choose which kind. In this article Miia Vuontisjärvi and Ari Takanen of Finnish security test specialist Codenomicon explore the area of security testing from both a vulnerability management perspective, and by examining some of the most common testing techniques in use today.

Imagine a house that has a bug problem. Every now and then, an exterminator shows up to help you get rid of certain bugs, but the bug problem remains, since the exterminator targets only one particular critter. Any other kind of nuisance is left untouched.

There are bugs in software. It is in the nature of the software development process: unwanted features end up in the implementation without anyone knowing they are there. These features may not all be harmful; sometimes they can even prove to be useful. Still, the vast majority of undesired features are seen as bugs. In the worst case these weaknesses cripple the robustness of the software or manifest themselves as exploitable vulnerabilities that hackers can make use of to compromise systems.

We know about some vulnerabilities and can get rid of them. Databases typically list thousands and thousands of them, often with patches which administrators can use to fix their systems if they find out they are impacted. In fact, it is fairly easy to protect systems against known vulnerabilities: just install all the latest patches and you should be safe.

Unknown vulnerabilities are a completely different matter. Unknown vulnerabilities are the bugs that the exterminators (the security experts) do not know to target. They are not listed anywhere, and there are no patches or workarounds available to protect the infected systems. These vulnerabilities are not detected by security scanners, because those can only find vulnerabilities that have already been reported. This means that reactive security tools such as anti-virus products and intrusion detection systems cannot detect if those weaknesses are being exploited. Traditional security measures have no insecticide against unknown bugs.

Unknown vulnerabilities are found by thorough testing

“Don’t worry! Your testing team can help you!” is what you should be saying to your security architects. Finding the unknown vulnerabilities can be a challenge, especially if you do not even know what you are looking for.




However, with the help of systematic test automation techniques, those nasty bugs in hiding can be found. A technique called fuzz testing has proven to be the most efficient method for the job. What security people call fuzzing is known to us quality assurance specialists as syntax testing, grammar testing, fault injection, robustness testing or sometimes even mutation testing. To find unknown security-related bugs, you need to go through the syntax and semantics of software communications and expose the software to a systematic set of unexpected inputs. For the people in your IT team, fuzzing might be a new technology, but for testing professionals it is not really anything new. Curious? Read on to learn how you can easily integrate fuzzing into your own verification and validation process.

Fuzz testing: Go hack yourself!

In fuzzing, unexpected data in the form of modified protocol messages and message sequences is fed as input to the system under test (SUT), and the behaviour of the system is monitored with various instrumentation techniques. If the system fails, for example by crashing or by failing built-in code assertions, then there is an exploitable vulnerability in the software. Because fuzzers do not focus on finding a specific set of known vulnerabilities, but rather look for new instances of typical problems, they can also discover previously unknown vulnerabilities. Fuzzing is also used by hackers to find unknown vulnerabilities for which they can then create exploits. Based on recent studies, all hacker-found zero-day threats were found using different fuzzing techniques. By hacking your own software instead of waiting for someone else to hack it, preferably before it is even released, you can protect your systems from zero-day threats before they are attacked.
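As a rough sketch of that loop, the Python fragment below mutates a seed message one byte at a time and watches for the connection dying. Everything specific in it is a placeholder assumption, not anything from the article: the seed message, the hypothetical target service on 127.0.0.1:9999, and the tiny anomaly set. Real fuzzers use far richer anomaly libraries and instrumentation.

```python
import socket

# A minimal mutation-based fuzzing loop (illustrative sketch only:
# the target address, port and seed message below are hypothetical).
SEED = b'HELLO version=1.0 user=test\r\n'
TARGET = ('127.0.0.1', 9999)

# A few classic "unexpected" byte values, applied one position at a
# time so each test case is a nearly-valid variant of the seed message.
ANOMALIES = [0x00, 0xff, 0x7f, ord('%'), ord('\n')]

def test_cases(seed):
    for pos in range(len(seed)):
        for value in ANOMALIES:
            yield seed[:pos] + bytes([value]) + seed[pos + 1:]

for number, case in enumerate(test_cases(SEED)):
    try:
        with socket.create_connection(TARGET, timeout=2) as conn:
            conn.sendall(case)
            conn.recv(1024)  # any answer at all means the target survived
    except (ConnectionRefusedError, socket.timeout) as failure:
        # Crude instrumentation: a refused connection or a hang right
        # after a test case suggests the input crashed or wedged the SUT.
        print('test %d triggered %r with input %r' % (number, failure, case))
        break
```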

Building and using fuzzers

In order to create fuzz tests, you first need to understand message structure and sequences. Only then do you have the knowledge needed to make the subtle but decisive changes which turn valid messages into robustness tests. In fuzz testing, you have to be capable of first creating valid message structures and message sequences, and then altering these to form nearly-valid messages that systematically anomalise some parts of the information exchange to test the target system for robustness.

A communication protocol is a formal system defining the digital message formats and the rules describing the exchange of messages. A protocol model consists of a formal syntax, which is used both to create valid messages and to confirm the validity of messages created by others. It can also be used to create and validate sequences consisting of several messages. Fuzzing tools use protocol models to form a structure of a protocol. The models that fuzzing tools use are typically variants of industry-standard protocol description languages, so we will start by explaining these standards.

Augmented Backus-Naur Form: The ‘Backus-Naur Form’ (BNF) is a popular method for describing individual messages. An extension to BNF, which can also contain message exchange descriptions, is the ‘Augmented BNF’, or ABNF.


ASN.1: Another common format used in fuzzing is the standard Abstract Syntax Notation One (ASN.1). Due to its complexity, it is primarily used to describe protocols whose specifications were originally written using ASN.1.


XML/Schema: For testing any interfaces that use the XML specification format, the most common language that fuzzers need to support is ‘XML/Schema’, a method for describing valid XML files.

PCAP with PDML: Fuzzing tools can also use the Packet Details Markup Language (PDML), a dissection of PCAP recordings, to create models directly from captured messages. PDML was originally designed to describe the structure of dissected packets.

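To make the idea of a protocol model concrete, the sketch below hand-writes a tiny HTTP-like model as named fields and anomalises one field at a time while keeping the rest of the message valid. Both the model and the anomaly list are made up for illustration; a real tool would derive the model from one of the description languages above rather than from a hard-coded list.

```python
# A toy "protocol model": an ordered list of (field_name, valid_bytes).
# Real fuzzers derive such models from ABNF, ASN.1, XML Schema or PDML;
# this hand-written HTTP-like model is purely illustrative.
MODEL = [
    ('method',  b'GET'),
    ('space',   b' '),
    ('path',    b'/index.html'),
    ('version', b' HTTP/1.1\r\n\r\n'),
]

# Field-level anomalies: emptiness, overflow, format strings, NUL bytes.
FIELD_ANOMALIES = [b'', b'A' * 65536, b'%s%n%x', b'\x00' * 16]

def model_based_cases(model):
    """Anomalise one field at a time, keeping every other field valid."""
    for target_index, (name, _) in enumerate(model):
        for anomaly in FIELD_ANOMALIES:
            message = b''.join(
                anomaly if index == target_index else valid
                for index, (_, valid) in enumerate(model))
            yield name, message

for field, message in model_based_cases(MODEL):
    print('anomalised field %-7s -> %r' % (field, message[:40]))
```

Because each test case stays valid everywhere except the targeted field, the message usually gets past the parser's outer layers and exercises the code that handles that specific field, which is what makes model-based fuzzing more efficient than random corruption.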

Protocol models in the Peach Fuzzing Framework

Peach is an open source fuzzing framework used for building fuzzing tools; it is available at http://peachfuzzer.com/. Peach uses an XML Schema augmented with BNF and ASN.1 functionalities.





Fuzzing tools with readymade interface models

Some fuzzing tools come with readymade interface models. It is much easier to use these tools than to describe the messages in a notation language. When using such tools, the only thing users need to do is configure the fuzzer to find the test target; this typically involves just typing in the IP address or physical MAC address of the target. A graphical interface also makes it easier to edit message structures and sequences.

Managing unknown vulnerabilities

Fuzz tests will usually detect at least some bugs. Without metrics, planning and follow-up, though, there is no guarantee of the quality of the tests. A prerequisite for effective and reliable fuzz testing is having a testing process in place. It helps target tests, and ensures that the vulnerabilities found are fixed. For this purpose we developed the process of unknown vulnerability management, which includes four phases:

1. Analyse
2. Test
3. Report
4. Mitigate

The purpose of network analysis is to identify critical network areas. A carefully conducted network analysis helps testers determine the areas that most need testing. More resources can be allocated to test these areas, while less critical areas can be covered with fewer resources and less time.

The second step is the actual fuzz testing. The execution of this phase is largely based on the findings of the network analysis, which reveals not only the open interfaces but also the protocols used on them. In the testing phase, the protocol implementations are tested using modified protocol messages, that is, fuzz tests. The best fuzzing results are achieved using model-based fuzzers, since they utilise protocol specifications to target the protocol areas most susceptible to vulnerabilities. As a result, the number of test cases needed is reduced without compromising coverage.

Fuzzing does not end with finding vulnerabilities. Actually, the most challenging tasks in the unknown vulnerability management process are related to handling the test results. Handling of the found vulnerabilities can be divided into two phases: reporting and mitigation. The reporting phase focuses on reproducing the vulnerabilities found and on communicating with developers. The log files and test case documentation help communicate the issues and are critical for getting the bugs fixed. The mitigation step is most relevant to your customers: while you are working on a fix for the critical bug, you immediately need to provide some workarounds to your customers, to help them protect themselves.
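The reporting phase lends itself to simple tooling. The sketch below shows one way findings might be recorded so that the exact failing input can be replayed and handed to developers; the file name and record layout are assumptions for illustration, not anything prescribed by the article or by any particular tool.

```python
import json
import time

# A sketch of the reporting step: persist each failing test case with
# enough context to reproduce it exactly. The file name and record
# fields here are illustrative assumptions.
FINDINGS_LOG = 'fuzz_findings.jsonl'

def report_failure(test_id, message, observed):
    record = {
        'test_id': test_id,
        'timestamp': time.strftime('%Y-%m-%dT%H:%M:%S'),
        'input_hex': message.hex(),  # the exact bytes, for reproduction
        'observed': observed,        # e.g. 'crash', 'hang', 'assertion'
    }
    with open(FINDINGS_LOG, 'a') as log:
        log.write(json.dumps(record) + '\n')

def replay_input(record):
    # A developer (or the fuzzer itself) can later replay the exact
    # failing input against a patched build to confirm the fix.
    return bytes.fromhex(record['input_hex'])
```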

Banishing the bugs

Security testing is about finding the bugs before hackers find and exploit them. Fuzzing is the most common technique for finding vulnerabilities in software, especially if you do not want to browse through all of the code with a magnifying glass. Fuzzing is not efficient, though, unless you know what to test and how. You must be aware of your network’s open interfaces, the protocols that are used, and the protocol message structure. Only then can you target the tests correctly and alter the valid protocol messages to create fuzz tests that bring out the vulnerabilities. Once the testing is done, you need to be prepared as an organisation for handling the critical issues found with fuzz testing. Testing as such is not enough; the results must also be followed through. It is a question of assuming responsibility for product security, and the test results from fuzzing are often the first step in that direction.

Ari Takanen, CTO, Codenomicon – www.codenomicon.com
Miia Vuontisjärvi, security analyst, Codenomicon – www.codenomicon.com






A positive mindset

TEST editor Matt Bailey discusses the business of testing with Nic Goodall, an experienced tester working in the digital marketing sector.

Nic Goodall is QA manager at digital marketing agency Agency Republic. The company lives and dies on the quality, usability and cutting-edge slickness of the digital marketing aids it creates for its list of famous brand-name clients. Clearly, in a fast-moving and dynamic sector like digital marketing there is no room for glitches, bugs and other embarrassing and potentially financially damaging coding errors and user interface problems. TEST editor Matt Bailey asked Nic Goodall some business-focussed questions about the company’s approach to software quality.

TEST: What does your company do and what, if anything, is its unique selling point in the testing world?

Nic Goodall: Agency Republic is a specialist in web-based marketing. Our offering covers creative, planning and media strategy. We make big ideas come to life as online ads, microsites, websites, interactive content, viral clips, applications, emails and mobile comms. Along the way we’ve also made an interactive shop front, TV and cinema executions and press ads. I am the QA manager, so I am responsible for all of these things being bug-free and easy to use.

TEST: What value does the software testing function add in your organisation?

NG: Software testing has become much more prevalent over the last three to four years, since the recession has really started to bite. Online marketing has really taken a front seat – for obvious financial reasons – and, together with the rise of social media, this has driven a lot of advertising budgets from other marketing methods into the digital arena. Most brands are now pushing towards the web. In this environment, quality has to be the highest priority; if anything goes down it’s instantly visible. But even with this crucial role, in the media industry as a whole testing is still often treated like an unwanted child or an afterthought, and the digital media sector is even more guilty of this – they don’t realise the impact of poor quality on brands. Most media agencies are fairly small. There are a couple of big ones that use high-end testing tools, but most don’t have the budget, so they tend to use off-the-shelf solutions.

The business case for more investment in testing is often difficult to get across. I’ve been at a few agencies and it’s always a problem, but the proof is in the pudding: it’s a case of maximising the KPIs and delivering above and beyond what the organisation needs. We need to start educating about the benefits of QA and testing and how they are a key way of driving quality. By sending out emails highlighting new testing technologies that may benefit the agency, slowly but surely we are getting the message across. OK, testing is expensive, but the consequences of not testing are even more damaging. We have to show that testing is good value. We have to make the case for quality. I try to use examples to highlight where testing could have improved processes and show how the results would have improved going forward. In previous jobs it has taken me ages to change the ingrained mindset, but this has changed a lot over the past year and is improving every day. Of course for me it’s second nature, a no-brainer, but for some... Many places still don’t have testers, and in digital media the developers are having to think a lot more out of the box. Many smaller developers have only just woken up to testing, and perhaps the developers are doing it all themselves. They tend to miss a lot and they have had to change!

TEST: How is the current economic climate affecting your business – if at all? What are the challenges and the opportunities?

NG: Outsourcing has its pros and cons, and many outsource companies’ services are too basic for what is actually needed. There are some outsource companies – AppLabs for one – that are forging closer relationships with their clients, however. But people are looking for cheaper ways to do things, and offshoring and automation make good sense in certain circumstances. Automation can provide a cheaper and faster solution, but most people don’t understand the intricacies of implementing and maintaining automated tools. You really have to keep on top of it. Crowd sourcing/testing may provide another solution if budgets are shrinking, but in my opinion smaller test teams are easier to handle and you can deal with the guys on a one-to-one basis. For me this is the best way forward for the next few years. It has to be at the right price though.


Outsourcing is far more expensive than many people realise, and good functional testers in digital media are hard to find – perhaps because the work is fast paced and doesn't conform to everyday testing structures.

TEST: What technologies are having an impact on what you do? How has the IT world changed recently? Are things like Web 3.0, the cloud, virtualisation etc having an effect?

NG: Web 3.0 will certainly change things. New technology means new testing strategies and new problems; there are definitely exciting times ahead. There will be a big drive for mobile too. It is still in its early stages for a lot of companies. Some of the bigger brands are really starting to use mobile technologies for marketing, but it hasn’t reached maturity yet. The competition between the three major mobile operating systems has already changed the way we use our mobiles and interact with each other. This is only going to grow, and will drive a lot of development and a lot of exciting new products. Coupled with how social media is shaping things at the moment, the interaction between the customer and the company is changing radically. If an advert goes ‘viral’ it really does change things fundamentally, and this will have an impact on testing.

The cloud is still an unknown entity to many people and security is a major issue, but it’s only going to get better as understanding and utilisation increase. It is already having a massive impact on testing, if not actually on the way we test. We now use a lot of cloud-based servers; with virtual servers they give a much more effective way of managing websites, and they are more cost effective. Before, we needed to have the full range of platforms physically available to test on, but now we can set the platform up virtually in the cloud in minutes. This is a major benefit.

TEST: How is testing perceived in the business?

NG: While it is still crucial to get more people to understand the value of testing, there is a better appreciation of what we are doing now; people are much more open, and engage and ask questions. There is a positive mindset about testing at the moment.

TEST: Nic Goodall, thank you very much.





Fitter, better, faster: what the future holds for application development

Kevin Parker of Serena Software has a look at what has changed around software development and how the increasingly complex IT world is impacting the software industry.




Developing software that meets the needs of users, whether it is a mobile app that will be used by consumers or a piece of business software, is not rocket science. Getting the basics right so that applications work well, are fit for purpose and come in on budget should be a given for any project. However, so many projects either fail to meet their needs, or the time taken to deliver the software stretches on much longer than anticipated or budgeted for. Why is this, when the industry as a whole has some of the smartest people around working in it?

The first thing to recognise is that processes around software development are changing. A recent survey of application development and IT professionals showed that application priorities are shifting, with 75 percent of respondents citing that managing application development as a business process was a key target for them in the future. The biggest priority in application delivery was delivering applications to the business faster, rather than cutting costs.

This change in mindset comes at the same time as new methods for delivering applications have come into play. The growth of software-as-a-service and cloud has an impact on how internal software is viewed: why do we have all this work going on around an application that has to be tested and is not simple to use, when we could buy something in from the cloud that not only fits our needs, but also looks cool on an iPad or smartphone?

The understanding of IT and what it can provide to the business has also been growing. Whereas before, the CEO or CFO only cared about the numbers and did not understand IT, now the average head of a company can probably not only follow a conversation about IT but understand when investments are not paying off enough compared to other approaches. This can lead to opportunities for the application development team, but also to more difficult questions being asked. For example, if you have a CEO who knows IT, or at least thinks they do, then you can have a better quality discussion around what application development can really deliver. However, it is also more difficult to shrug off questions with technology arguments by themselves: an IT-savvy CEO will not only ask why something can’t be done, but why other approaches won’t produce better results.

What does this mean for application development professionals as individuals, and for the development sector as a whole? Well, the biggest impact is around how application development is managed. Too often, the process itself is made up of silos requiring manual intervention and intensive work just to get decisions made and code handed over. Without working on these areas, businesses will not be able to improve their management processes and properly orchestrate how application development is carried out.

Best-in-class of one?

Part of this is to do with the myths that have developed around application development. The first myth that often affects the speed of application delivery is the best-in-class ideal. Each part of the Software Development Lifecycle (SDLC) has its own tools that are regarded as ‘best in class’. Regardless of an individual’s role – architect, designer, developer, tester, release engineer and so on – there are incredibly good point-solution tools on the market today to support them in delivering their part of the application.

However, getting any of these tools to work together in the wider context of application development can be as big a challenge as the actual development work itself. They rarely provide any automation of how work within one phase of the SDLC is handed on to the next. Often, different databases will be used, requiring some kind of data migration and transformation process. The tools don’t ‘talk’ to each other, so when data changes in one it is not necessarily reflected in other locations. At this point the tools themselves, while providing value to the individuals, can hinder the overall process and lead to additional costs being incurred. When more complex application development projects are going on, the pressure of connecting various disjointed tools together across the lifecycle of development acts to slow the process down and increase costs. The ultimate issue is when organisations end up throwing away their technology investments and effectively starting over from scratch in order to avoid these problems.

One tool does not fit all


The flip side of this is the ‘one-size-fits-all, one-stop-shop’ approach, where one vendor provides all the tools and solutions that an organisation might be using. The problem is that these one-size-fits-all tools never meet all the development needs but, instead, aim at some idealised, normalised IT that just doesn’t exist in reality. They drive toward bland, average solutions instead of allowing the freedom, creativity and innovation today’s business units demand. And, to add insult to injury, they frequently require an overhaul of the very development process that you have developed and want to automate, just to meet the needs of the tool.

For these solutions, the big selling point is often the ‘single repository’ database. While the concept might make sense at first, perhaps the most dangerous feature is that these tools are optimised for a single platform, forcing a commitment to one OS, platform, hardware or methodology. In reality, IT environments are more dynamic and heterogeneous. What might be the right platform for one development project or application might not fit others, leading to increased costs in support or additional expenditure. Secondly, migrating all of your existing data into a one-size-fits-all product can be expensive, time consuming and itself prone to errors. When you are serving a mass audience with more generic solutions that are entirely agnostic to the different roles in the SDLC, the typical Pareto principle comes into play: while 20 percent of the available functionality may cover 80 percent of your requirements, the remainder will still be uncovered.

Caught between the devil and the deep blue sea

In between these two extremes, we must be able to find a better approach. One that has been attempted is point-to-point integration – that is, linking one point tool to another. While this should meet the spirit of what application development requires, it does not actually deliver what it promises. Typically, point-to-point integrations are limited in functionality and little more than automated cut-and-paste. The second problem is the dependencies between tools. If integration is done on an impromptu basis, it leads to one-off integrations that are impossible to maintain. These tend to be brittle by nature, with upgrades to tools at either end of the integration causing the link to fall apart. If it becomes a long-term part of the workflow, then it will require active management and maintenance, adding another task when it should be taking work away. When looking at this integration work, it helps to consider how these links will be maintained, and how they will provide the kind of reporting that actually helps people to manage the business. Point-to-point integrations do not support this, and lack the controls needed to ensure the right project approval throughout the lifecycle.

So what is the answer?

Managing application development processes will have to consider how to remove those steps that act as roadblocks to progress. Removing manual work and automating where possible is one easy way to speed up delivery, but it does have some potential pitfalls to bear in mind. Crucial to success here is understanding where the different phases of application development sit – both within the application development function and in the wider business – and what information they need to process work faster.

Looking at the available options, businesses may want some best-in-class solutions, but the critical requirement is for all of your tools to actively integrate and talk to each other, so that the end-to-end process can be supported better. This provides the value of a ‘one-stop’ implementation, while still giving the benefits of best-in-class tools. This automation of processes around the lifecycle of the application also ensures that all the necessary steps are followed, and that designated individuals will be able to provide approvals where required. This is essential for accountability and traceability, which is an important requirement for businesses in some industries.

This also involves the business more directly in the workflow, both from a design and a process point of view. As the software development process begins with business demand and ends with delivery of the completed application to the organisation, looking at how this workflow can be orchestrated across different disciplines and sections of the company becomes a necessary requirement. Instead of simply linking tools together, looking at the overall process and merging tools into this can provide the necessary degrees of automation and management. This approach is designed to allow the most flexible implementation without requiring massive data migration or replacing all the tools that are in place.

The pressure on application development is to deliver better quality software faster, and more efficiently. This requires greater automation in order to be achieved, but it also means looking at the whole cycle that goes into defining requirements, developing software that fits these needs, and then delivering this out to the organisation. Without this understanding of the business side, application development will not be able to supply all the value that it can offer.

Kevin Parker, chief evangelist, Serena Software – www.serena.com





Farming the organisation – for process improvement

Farmers have to prepare a strategy and follow processes before they start the physical hard work of farming itself. Test location lead Pratik Shah* says testers should adopt the same approach.

‘It is thus with farming: if you do one thing late, you will be late in all your work’ – Cato the Elder

Every year farmers prepare a strategy and follow certain processes before they actually start planning for the farming. Across the world the weather varies, as do the soil and the water; moreover, the mentality of farmers is not the same everywhere. What is common worldwide is that farmers must upgrade their processes and working methods to get the best outcome. The same should be true for organisations worldwide, be they IT companies, manufacturing companies or banks. Their core business varies according to their domain; their processes are also different. The good part is they are all aware of the fact that they need to continually upgrade their existing processes; the not-so-good part is that they struggle. This becomes a trigonometric puzzle for them: they know what the answer is, but they struggle to get it. Sometimes, they do not realise that they have only partially achieved what they set out to do.


Sometimes, they do realise the partial achievement but do not want to go further due to a time crunch or resource crunch. Hence it is necessary for every organisation to understand clearly what processes they need to follow to get better and improved processes in place. Farmer or IT techie, process improvement – continuous process improvement – is a must. The simple principles farmers follow every year to improve the quality of future gain are similar to the process the IT industry follows for quality improvement. How are they similar?

1. Processes are human-driven

Farm workers know the soil. They know what is good for current seeds and how the current climate will affect the seeds. There is no harm in asking them or just taking their suggestions. Similarly, in an organisation, it is the employees who will actually use the processes. They are the best candidates to suggest what sort of change they expect: where they are expending more effort, whether they have a better way of doing the work, whether they think they are doing repetitive work which is just a waste of time and money. We must not forget that processes are human-driven. The suggestions might not all be worth implementing, but at least we can collect a number of inputs.

2. Picking up weeds – Identify unwanted processes

Weeds are always troublesome, hence farmers arrange the eradication of weeds from their fields. In an organisation there is a mixture of processes, some wanted and some unwanted. Unwanted processes may eat up shared time and resources and deflect attention from the wanted processes. And this is where organisations can start to lose their grip over their governance systems. Carefully identify what is needed and what is not. It might be that some processes are unwanted in some parts of the organisation but wanted in others. This needs a very thorough analysis that leaves no part of the organisation untouched.

3. Refining the field – Reorganise the existing processes

Once the weeds are picked, you can actually start the treatment you want to give to the field and seeds. Now you can be sure that the inputs only go to your wanted seeds. Similarly, in an organisation, once the unwanted processes are identified and removed, all the processes that are left are wanted. Restructure them and bring in any small changes that are required.

4. Choose a treatment – Market survey for technology support

Based on the current climate, soil and seeds, a fertiliser is selected and applied. Similarly, for an organisation to upgrade processes, you might need to explore the market for the most suitable options available. There could be some technology or software available in the market which could make your job easier and more efficient. Explore each option’s advantages and disadvantages.

5. Know the treatment – Know requirements and expectations

One must understand the expectations of the processes in place in general. Some of them may already be in place; some might be in progress. But before acting on any improvement, you must know the role of every new process or new change. What this change is actually expected to bring needs to be defined and visualised.

6. Maintenance

Processes, new and existing, are base-lined, restructured and now in place with the best treatment. What next? Keep an eye on them. Once the governance system in the organisation is up and running, it is equally important to keep it running in the same state consistently. There must be a regulatory body in the organisation that takes care of all the processes, makes sure they are being followed, and keeps control over them.

7. Improvement measurement

We have invested a lot to improve our processes; where is the proof that they are better? It is important to measure the improvement. For the organisation, once the new or improved process is up and running, keeping an eye on the outcome becomes essential. Measurement factors need to be decided: time, effort, accuracy and transparency of the work are key measurement factors. Compare them with the earlier results to keep track of movement and make sure it is in the right direction. These factors don’t always need to be numbers; they could be feedback from co-workers, or ease of use and effectiveness. If the work is organised in an easily measurable way, the processes become compatible and can be integrated with any other desired future governance systems.

Process improvement is not a one-time activity; it’s not something which can be done in a couple of days. It is an ongoing activity, and the vision needs to be there for future needs. If today’s small change is not compatible with tomorrow’s need, then something is wrong.

*Pratik Shah is a test location lead working in India for a UK-based software MNC. He has seven years’ experience in the software quality assurance industry in the manufacturing, health insurance, banking and retail e-commerce sectors. Working on and implementing process improvement methodologies within the organisation is something which interests him.

Pratik Shah Test location lead – India

Farmer or IT techie, process improvement – continuous process improvement – is a must. The simple principles farmers follow every year to improve the quality of the future grain are similar to the process the IT industry follows for quality improvement.


Businesses are failing to support testers
TEST editor Matt Bailey talks to Sogeti's deputy CEO Richard Terry about what conclusions can be drawn from his company's survey of testing and QA professionals at TestExpo 2011.

Cost and innovation were the pivotal themes presented to test and QA professionals attending TestExpo Spring 2011 last month, and a revealing survey conducted during the event confirmed that these are key factors affecting today's businesses and application development.

Regarding costs, the survey indicated that despite significant risks to operations, businesses are still failing to support testing adequately. Just 37 percent of respondents believed that their budgets and resources were being allocated correctly to address the significant and increasing risk of end user communities becoming more widely distributed geographically, despite nearly 83 percent highlighting ‘distributed users’ as a major new risk to application performance management.

As for innovation, when asked how important Agile development was to their organisation, a worrying 14 percent said it was “not important at all”, while 29 percent said it was “no more important than any other methodology”. More reassuringly, of the remaining respondents, 31 percent said Agile was important while 27 percent said it was vital. Indeed, 65 percent of respondents said they were adopting or starting to adopt Agile testing methods, while 18 percent said they had fully adopted Agile methodologies. Meanwhile 13 percent of respondents replied that they had chosen not to adopt Agile, and the remaining four percent did not know what ‘Agile’ was.

“This survey is a genuine snapshot of the real-life concerns of the UK’s leading testing and QA professionals. TestExpo is the only industry event that provides a true insight into the risks, challenges and issues that the lifeblood of our industry faces on a daily basis,” explained Sogeti deputy CEO Richard Terry. “Organisations should be making a stronger connection between the demands of the business and the support needed from IT, but some of the survey results prove that there is still a lag. For example, Agile is way beyond a buzzword now – there are more and more Agile projects every day – and yet while 59 percent of businesses realise Agile is vital or important, the rest are still some way behind. The old ways are being superseded and businesses must act now.”

Respondents’ views differed on which areas present the biggest performance challenges when testing and deploying applications. Almost half (49 percent) of respondents said their biggest performance challenge was distributed WANs, followed by a third (33 percent) stating that mobile was the biggest testing performance challenge when deploying applications. A quarter (25 percent) said that cloud testing was their biggest challenge in terms of performance. When asked about the trend for distributed users and application deployment, almost half (47 percent) of respondents said that getting performance testing processes to meet the challenges of deploying applications to geographically remote and distributed WAN locations was a “constant challenge”, while less than a fifth (19 percent) considered this “easy”. The remaining 34 percent said it was “nothing to worry about”.

Security also remains a challenge in these areas. “Testing code to make sure that the data is secure and cannot be easily stolen, misused or moved around is vital if people are to really trust cloud-based solutions,” explained Richard Terry. “Security and security testing have to be embedded as part of the testing cycle.”

More than 45 percent of respondents said they were able to quickly pinpoint the root cause of performance issues and take remediation steps to optimise the system; 20 percent said they could pinpoint issues but not resolve them; and the remaining 29 percent said they could neither pinpoint nor resolve them, admitting “it would be great if we could”. Over 26 percent of the professionals surveyed revealed that their testing processes either couldn’t or didn’t attempt to meet end user response time SLAs prior to production deployment. 60 percent said their performance testing process could “usually” predict how well applications meet end user response time targets and SLAs prior to production deployment, while 14 percent claimed this was always the case.

Richard Terry concluded: “The opinions of engaged test professionals must be taken extremely seriously by our industry. At Sogeti we listen and respond to these genuine concerns as part of a continual process of refining our services. The findings of this survey demonstrate to businesses that innovative technologies and methodologies must be budgeted and supported appropriately as part of the overall processes of maintaining and assuring the quality of the products they deploy. We will be looking for a steady improvement in the perceptions of test professionals in all areas of their work at each future TestExpo event.”


Testing virtual applications
Application virtualisation can be a tremendous aid to the production, testing and implementation of modern IT systems. Andres Thompson, managing director of Parasoft, explains.


Modern IT systems have long since ceased to be one big monolithic application. They are complex conglomerates of traditional applications such as CRM or ERP, of smaller applications dedicated to the specific needs of an industry, and of external services increasingly appearing in the cloud. Everything has to communicate with everything else, often in real time.

With this high degree of interdependence, appropriate testing of a new application becomes a huge challenge – consuming high costs, human resources and time. As a remedy to this issue, the concept of application virtualisation was created. Thanks to virtualisation, companies can save considerable sums otherwise spent on the development, testing and implementation of new components in their systems.

Airline ticket economics
It has always fascinated me how airlines set fares. It is so different from the pricing of the public transport tickets to which I’m accustomed. In the latter case, for a train connection in a given period of time, a rigid, predetermined price is offered. Meanwhile, an airline ticket purchase is a gambling game. Recently I had the opportunity to learn about some of this mystery. In one airline company, a ticket purchase over the Internet runs roughly as follows:
• First, the customer logs in to the portal. Having identified the customer, the CRM system provides information on the customer's purchase history, loyalty points, etc.
• Then the client specifies from where to where and when he wants to go.
• Next, the portal contacts the main system, perhaps based on a mainframe, which checks what options are possible. In the next step the portal contacts the external system for information on competitors’ prices for similar flights. Only after gathering all the pieces of information is the price for this specific flight offered to the customer.
• When the customer chooses the offer, a credit card authorisation is required to complete the transaction. Again, the authorisation is executed by another external system.
So we have five systems involved in the transaction:
1. Portal
2. CRM
3. Mainframe
4. World Span Pricing Engine
5. Common Payment Gateway
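To make the chain of dependencies concrete, here is a deliberately toy sketch of that five-stage flow. Every function below is an invented stand-in – no real booking system behaves this simply – but it shows how a single customer transaction threads through all five systems:

    def portal_login(customer_id):
        return {"customer": customer_id}                  # 1. Portal

    def crm_history(session):
        return {**session, "loyalty_points": 1200}        # 2. CRM

    def mainframe_options(session, route):
        return {**session, "options": ["0815 " + route]}  # 3. Mainframe

    def competitor_prices(route):
        return 420.0                                      # 4. Pricing engine

    def authorise_payment(card_number):
        return card_number.startswith("4")                # 5. Payment gateway

    session = crm_history(portal_login("C-1001"))
    session = mainframe_options(session, "GLA-TPE")
    fare = competitor_prices("GLA-TPE") - 10.0  # undercut the competition slightly
    if authorise_payment("4111111111111111"):
        print("ticket issued at", fare, "for option", session["options"][0])

If any one of the five stand-ins were replaced by a system that is down for upgrade, the whole flow stalls – which is exactly the airline's problem described next.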

However, the above-mentioned airline experiences a big problem with the high cost of changing the functionality of the portal. The portal itself is the part which the customer sees and works with, and which substantially affects the derived revenues. Each new version of the portal must be thoroughly tested. Unfortunately, this cannot be done in any other way than with the portal connected to the supporting systems. And this is a problem, since the supporting systems and external services are also subject to upgrade. The airline has calculated that the slowdown in the portal testing process is close to 50 percent when one of the supporting systems or services is unavailable or overloaded with a high number of transactions. And this happens often enough to become a great obstacle for the company. Building a completely separate test environment is not an option for financial reasons. Moreover, there are other problems with infrastructure, but those are beyond the scope of this article.

The problem described above is a real case, an example of the challenges faced by companies developing their IT systems – systems which are a key factor in their competitiveness. Each system element must be properly tested and, most essentially, tested in the target environment or a well-simulated one. Achieving this is becoming increasingly difficult, and therefore more expensive, labour- and resource-consuming, especially when the individual components of the system are more expensive and harder to configure.

Virtual testing
To help companies solve this problem the new concept of application virtualisation was invented. It is based on the observation that, for testing purposes, we don’t need the external services and supporting systems available on site; we need them only insofar as is required to simulate the transaction. So we can look at the communication between the elements of our system and learn how it should work, and then replace the real application with a ‘mock-up’ that can collect messages and respond in the same way as the original application does.

Let’s once again refer to the ticket ordering system mentioned above. A test case may cover the process of searching for a Glasgow-Taiwan connection, accepting money and then making a purchase. To carry out this case, the application working on the mainframe gets information on the dates for the beginning and end of the journey. Then it returns the list of connections. You can safely assume that in the short term the responses of this application to the same input data will not differ considerably. If the application is temporarily unavailable, we could use the stored results. Later, when the application is available again, it may be re-used for the test case. This simple concept allows testing to continue in the absence of access to the original application. You can even completely reproduce elements of the production environment for testing purposes. This avoids downtime in the tests, and considerably eases implementation.

Virtualisation can also be used in development. Developers can use a virtualised application in the development of their software. This is especially important for companies that outsource the development process. In such cases companies may provide their suppliers with a virtualised system to let them test the components under conditions as close as possible to the target ones.

The human factor
Of course, the issue is not trivial when it comes to the implementation of the relevant systems. We have to consider human interactions, productivity, the need to emulate more complex transactions, and so on. Building a customised virtual environment requires a lot of effort.


Fortunately, there are a few solutions available on the market. We should also note that application virtualisation is not a ready-to-use solution; rather, it’s a bespoke process. With an ‘out-of-the-box’ solution we may virtualise only some basic test cases. However, in order to virtualise the entire application behaviour we need to ‘teach’ our tool how the application itself works. So we have three phases of virtualisation:
1. Observing and learning how the original application behaves (Capture);
2. Tuning of the virtualised components (Provision);
3. Using the virtualised components for testing (Test).
You should be aware that the virtualised application is not the same as the original one. It’s just a ‘mock-up’, good enough to let you test the components, but not so good as to fully replace the original application. So this is not the same as hardware virtualisation, eg, VMware, where the simulation is complete.
With this in mind, it should be stressed that the concept of virtualisation fits the needs of modern enterprises perfectly. As mentioned above, today’s IT systems are often a key factor in gaining an advantage over the competition. Therefore, companies are constantly developing their IT systems. Application virtualisation accelerates the development of software both ‘in-house’ and ‘outsourced’. This allows organisations to test the components of their systems better and more quickly. Moreover, it reduces costs by eliminating unnecessary purchases of equipment and licences, and avoiding unnecessary extra charges for external services. Finally, it facilitates implementation by allowing the gradual connection of components to the application.
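The record-and-replay idea at the heart of this Capture/Provision/Test cycle can be pictured in a few lines of code. The sketch below is purely illustrative – the class and its methods are invented for this article, not any vendor's API:

    class VirtualisedService:
        # Stand-in for a supporting system: it records real responses while
        # the system is reachable (Capture), and replays them once it is not (Test).
        def __init__(self, real_service=None):
            self.real_service = real_service   # set to None to go 'offline'
            self.recorded = {}                 # request -> captured response

        def call(self, request):
            if self.real_service is not None:
                response = self.real_service(request)
                self.recorded[request] = response   # Capture
                return response
            if request in self.recorded:
                return self.recorded[request]       # Test: replay the recording
            raise KeyError("no recorded response for: " + request)

    # Capture while the real pricing engine is reachable...
    pricing = VirtualisedService(real_service=lambda req: "fare quote for " + req)
    pricing.call("GLA-TPE 2011-07-01")

    # ...then detach it (Provision) and keep testing against the recording (Test).
    pricing.real_service = None
    assert pricing.call("GLA-TPE 2011-07-01") == "fare quote for GLA-TPE 2011-07-01"

Once enough traffic has been captured, the stub stands in for the real system exactly as described above – imperfect, but good enough to keep the test suite running.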


Andres Thompson Managing director Parasoft


www.parasoft.com/virtualize


Smarter mobile devices – smarter testing
According to industry analysts, problems are likely to emerge in the smartphone market due to the increased importance of software combined with a lack of adequate testing. Paul Beaver, products director at Anite, reports.

Better, faster, smaller. This relentless advance is a fact of life in consumer electronics, and mobile phones are no exception. However, in recent years mobile has seen a major change: while the focus was once on creating smaller, more attractive handsets, the rise of the smartphone and the tablet has seen expectations shift radically. Steve Jobs’ comments at the launch of the iPad 2 about being in a “post-PC world” emphasised the degree to which wireless devices are taking centre stage in the lives of end users. And user expectations of device performance are higher than ever. According to industry analyst Gartner, smartphones now account for over half of mobile phone purchases in North America and Western Europe. In 2011 smartphone app purchases were added to the UK’s Consumer Price Index – the nation’s official measure of the cost of living. With mobile now an inherent part of modern life, expectations have grown accordingly, as have the costs of failure.

Device performance can impact the brand as a whole – leading to churn away from the manufacturer or network operator. The centrality of the phone in users’ lives, combined with the echo effect of social media, is turning issues like antenna performance or software glitches into international news. For device manufacturers, this problem is compounded by what Gartner describes as the “intense competition… at the top of the smartphone market”. Furthermore, the signs are that the lower end of the smartphone market is also growing. At a recent Anite event, industry analyst Ben Wood of CCS Insight said that 2011 was likely to see the first sub-£60 smartphones emerge. In other words, smartness is starting to come as standard. A consequence of the insatiable demand for devices is that vendors are relentlessly driving technology, working on increasingly squeezed margins, with time to market ever more important. This drive is likely to come at a real cost: according to Ben Wood, CCS Insight predicts that “at least three major phone-makers will encounter high profile technology problems in 2011”, with problems likely to emerge due to the increased importance of software, combined with a lack of adequate testing, caused by “competitive time to market considerations overtaking rational timetables for product launches.”

Tougher conformance testing and beyond
With the stakes so high, there is a clear imperative to comprehensively test devices before launch, but this needs to be achieved in a faster and more cost-effective way. Wireless device testing solutions from companies such as Anite test the software that is resident in the chipsets used by wireless devices, and specifically within the baseband processor that manages the radio and signalling with the communications network. This type of testing is fundamental to ensuring that a handset can perform on an operator’s network as expected and, crucially, to maintaining quality of experience for the end user.

Historically, quality issues have been addressed primarily through formal certification processes. Regulatory bodies such as the Global Certification Forum and PTCRB have prescribed a number of conformance tests that device manufacturers need their products to pass before being accepted for use. These tests can be seen as a set of minimum acceptance quality criteria without which a wireless device is likely to fail even simple device-network interoperability scenarios.

Conformance testing is a moving target, with each successive release from standards bodies like the 3GPP creating a need for new test cases. Additionally, new network technologies and standards add further testing requirements. Testing is additive: in the next few years the mobile industry will witness the global roll-out of networks using the LTE standard and its variants. Devices using LTE will also need to support the older standards, so they have to be assessed against the test cases developed for those standards as well. Where testing devices based on 2G and 3G technologies involved around 2,500 protocol test cases, LTE will add a further 500 tests.

This accumulation of testing requirements is one of the factors that makes creating leading-edge test software particularly challenging, with testing suppliers having spent hundreds of staff-years creating their libraries of test cases. It is also a fairly complex process to get new test cases approved by certification bodies, and testing suppliers need to collaborate closely with several device manufacturers to verify and validate tests to enable the roll-out of new product features.

In this context of collaboration, changing customer demand shapes the role of test providers and the nature of their solutions. While formal certification processes are becoming more demanding, device complexity has also led to an increase in testing throughout the entire development lifecycle, from the initial component and product design stages to further testing by mobile network operators before a product is launched. Increasingly, testing solutions have needed to scale beyond conformance testing to address these earlier and later stages.

Development testing
Development testing is a key area where increased testing has become vital. As well as validating a new design’s compliance with the technology standards and GCF and PTCRB requirements, the objective of development testing is to identify potential problems as early as possible in the process. The use of host-based protocol development tools is of particular importance, as it allows manufacturers to verify designs at the pre-silicon phase, and thereby avoid the punitively high costs – and delays – of remanufacturing flawed chipsets at a later stage in product development.

Far later in the product lifecycle, testing is also playing a role in de-risking new product launches, with mobile network operators increasingly pushing for a greater degree of quality control. In particular, the largest “Tier 1” network operators, which are most likely to deploy new handsets first, are insisting that devices undergo additional interoperability testing. Such “carrier acceptance” schemes go beyond standard conformance testing and assess device performance against additional tests and criteria specific to the desired network services. These include testing areas like data throughput, preferred network selection (roaming), and multiple handovers, or scenarios based on live cell configuration parameters. While some of these areas are covered by the standard conformance tests, operators’ schemes are each customised to their own specific services and network characteristics, incorporating real cell configuration parameters, and are also more likely to include comparative metrics that show not only that a device works but also how well it is performing.

A good example of where a more nuanced form of testing becomes important is in understanding the signalling performance of new devices. Recent years have seen smartphones impacting network performance through increased – and often unnecessary – signalling traffic, often as the device shifts in and out of dormant states to extend battery life. While the devices may be working perfectly well on their own terms, interoperability testing within a carrier acceptance scheme would also evaluate these behaviours in the context of how they may impact the network and services as a whole.

Field or “drive” testing, where prototype devices are tested on live operator networks, was once the main way for operators to conduct interoperability tests. However, the sheer volume of tests required and the need to conduct testing more quickly – and wherever possible at lower cost – is now leading to greater uptake of laboratory-based solutions that simulate networks. The use of network simulation means that scenarios of any type can be created and repeated time and time again on each new device, not only improving quality by offering greater statistical certainty but also achieving consistency in how all new devices are tested.
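The statistical point is easy to picture in code. The sketch below is invented from top to bottom – the profiles, probabilities and the scenario itself are illustrative, not any real lab platform's API – but it shows how a seeded simulation makes every run reproducible and lets pass rates be compared across network conditions:

    import random

    def handover_scenario(profile, seed):
        # Hypothetical stand-in for one lab run against a simulated cell;
        # seeding the generator makes the run exactly repeatable.
        rng = random.Random(seed)
        return rng.random() > profile["drop_probability"]

    profiles = {
        "strong_signal": {"drop_probability": 0.02},
        "cell_edge":     {"drop_probability": 0.15},
    }

    for name, profile in profiles.items():
        runs = 200
        passes = sum(handover_scenario(profile, seed) for seed in range(runs))
        print(name, "pass rate:", passes / runs)

The same seeds can be replayed against every new device, so a change in pass rate points at the device, not at random variation in the environment.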

Efficient, lower-cost testing platforms
The increased use of laboratory-based simulation is one of the main ways that manufacturers and operators are doing more testing while managing down the costs of doing so. Lab testing permits extensive automation of tests through modest investments in automation software and hardware. This can significantly reduce the time and manpower required to create and run tests. The deployment of more flexible, integrated and user-friendly test platforms is another important trend that is making testing more efficient and cost-effective. Increasingly, test equipment vendors are finding that they need to offer a portfolio of solutions across development, conformance and interoperability. Within this context, platforms that use common hardware and software for testing across these various stages are particularly valued – not only to deliver more functionality for a given investment, but so that those testing can apply a common discipline and skill set at different stages of the product lifecycle.

Paul Beaver Products director Anite www.anite.com




The five challenges to risk-based testing
Risk-based testing (RBT) has been around for over 20 years and accepted as a mainstream approach for over half that time; it is now an integral part of popular certification schemes, such as ISTQB, and the basis of the new international software testing standard, ISO/IEC 29119. Stuart Reid, CTO of Testing Solutions Group, highlights the obstacles to effectively implementing RBT and suggests means of addressing them.


We all use risk in our daily lives (eg, ‘should I walk the extra 50 metres to the pedestrian crossing or save time and cross here?’) and similarly many businesses are based on the management of risk, perhaps most obviously those working in finance, such as banks and insurance companies. Despite this, and the fact that risk-based testing (RBT) is not a complex approach in theory, it is still rare to see RBT being applied as successfully as it could be.

RBT in a nutshell
Before considering the obstacles, let’s first briefly describe the principles behind RBT. Risk analysis is used to identify and score risks, so that the perceived risks in the delivered system (and to the development of this system) can be prioritised and categorised. The prioritisation is used to determine the order of testing (higher priority risks are addressed earlier), while the category of risk is used to decide the most appropriate forms of testing to perform (eg, which test phases, test techniques, and test completion criteria to use). A valuable side-effect of using this approach is that at any point the risk of delivery can be simply communicated as the outstanding (untested) risks. The results of performing testing against perceived risks, and also the evolving business situation, will naturally change the risk landscape over time, thus requiring RBT to be treated as an ongoing activity. It is best practice to involve as wide a range of stakeholders as possible in these activities to ensure that as many risks as possible are considered and their respective treatments (or not – we may decide not to address some risks) are agreed.
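A risk register need be no more elaborate than the following sketch to support this way of working. The risks echo examples used later in this article; the scores themselves are invented:

    # Invented 1-5 scores; priority = likelihood x impact.
    risks = [
        {"risk": "accounting system miscalculates profits", "likelihood": 3, "impact": 5},
        {"risk": "web system discloses customer details",   "likelihood": 2, "impact": 5},
        {"risk": "slow response times at month-end peak",   "likelihood": 4, "impact": 3},
    ]

    for r in risks:
        r["score"] = r["likelihood"] * r["impact"]

    # Higher-priority risks are tested earlier; at any point, the untested
    # entries are exactly the 'outstanding risk' reported to stakeholders.
    for r in sorted(risks, key=lambda r: r["score"], reverse=True):
        print(r["score"], "-", r["risk"])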


RBT Challenge 1 – RBT should work at all test levels
Many test managers are initially introduced to RBT as testers and use it to prioritise and target their testing within a certain test phase, often system testing. The use of RBT by individual testers to manage their own testing is certainly a valid application of the approach, but its potential at the level of deciding test strategy for the complete project is even more powerful. At this level RBT is used to decide and justify which test phases to use (or not) and to define test completion criteria for each of these phases, thus addressing higher level risks. The use of RBT at even higher levels should not be ignored. It is also worthwhile identifying risks that apply across the whole programme or organisation and determining means of mitigating these risks via testing. Such an approach can lead to the mitigation of risks through the definition of an organisational test policy and an organisational test strategy that will define guidelines for testing across the whole organisation. One obvious way of ensuring RBT is used at all levels is to mandate its use in the organisational test policy, or if there is no test policy, include requirements for the use of RBT in the organisational test strategy.

RBT Challenge 2 – RBT should address testing across the whole life cycle
As a test manager, when writing the Project Test Plan there is a temptation to include only those testing activities that you control directly. The Project Test Plan, however, needs to address all the testing performed across the whole life cycle, whether it is carried out by testers or developers (or any other stakeholders). After all, if the test manager doesn’t take responsibility for testing, who else will? A common occurrence is that testing activities early in the life cycle, such as reviews of requirements and designs, which we know are extremely efficient in terms of defect detection and prevention, are not planned and executed effectively, if at all. A second is that the rigour of developer testing (typically limited to unit testing) is decided by the developers alone, which may lead to lower-quality code being passed into later testing phases. We need to ensure that the planned testing is targeted at mitigating risks as early as possible in the life cycle, even when that testing may not actually be performed by members of the testing team.

RBT Challenge 3 – RBT should not be a ‘stand-alone’ means of managing risk
Although RBT is a valid and valuable approach to managing testing, it becomes far more powerful when it is integrated with the risk management performed by the project manager and the developers. In this way, we can share a common understanding of the risks to the project (ideally documented in a shared risk register), how these risks interact (eg, poor development leads to higher testing costs), and how they should be mitigated. This allows us to ensure risks are mitigated in the most efficient manner by those best placed to do it; often prevention by developers is far more efficient than later detection by testers. If introducing RBT into an organisation, be sure to take advantage of any risk-based approaches to project management and development that are already there. Not only is it far better to end up with an integrated approach to risk management, but getting buy-in to risk management from all the relevant stakeholders is time-consuming, and it is far easier if they are already managing risk in other areas than if you are starting from scratch.

RBT Challenge 4 – RBT requires a ‘professional’ level of test maturity to work effectively



RBT will not work if those attempting to use it do not possess a high enough level of test maturity. To be able to select the testing that is most appropriate for a given risk, the tester or test manager must know the range of testing options available to them and how these options relate to the different risk types. This requires a practical level of familiarity with test case design techniques, the effectiveness of each at detecting different types of defects (and so mitigating risks), and the test phases in which they are most effective. This knowledge is the bedrock of the professional software tester. In an industry where many testing practitioners find it difficult to name more than one test case design technique (let alone apply it), it is not surprising that many struggle to apply RBT effectively. One specific group of testers that seems to have particular difficulty applying RBT are those who limit themselves to ‘requirements testing’, assuming all requirements are equal and performing no test prioritisation. Typically these testers are not aware of which test case design techniques they are using (if any), or that there are options open to them to vary the level of rigour of their testing for different levels of risk. This is one area for improvement where many testers could cost-effectively spend some time.
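By way of illustration only – no certification scheme prescribes this particular mapping – varying rigour by risk level can be as simple as agreeing which design techniques each level attracts:

    # An invented mapping: the point is only that higher-risk items
    # attract more (and more rigorous) test design techniques.
    techniques_by_risk = {
        "high":   ["equivalence partitioning", "boundary value analysis",
                   "decision table testing", "state transition testing"],
        "medium": ["equivalence partitioning", "boundary value analysis"],
        "low":    ["exploratory testing"],
    }

    for level in ("high", "medium", "low"):
        print(level, "->", ", ".join(techniques_by_risk[level]))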

RBT Challenge 5 – RBT should not only address risks in the deliverable product
Most new users of RBT tend to concentrate on the risks in the deliverable product (eg, the risk that the accounting system miscalculates profits or the risk that the web-based system discloses customer details); however, for RBT to work effectively we also need to consider the risks to the performance of testing on the project itself. This category includes risks such as the late delivery of code from developers and the lack of testing resources available to the test manager. For RBT to work most effectively, both product and project risks (and their mitigation) need to be considered together, as they can have an immediate effect on each other, and the optimal balance needs to be achieved. For instance, if there is a project risk that the available time for testing is shortened, it is not satisfactory to simply reduce the amount of testing, as this will typically result in fewer defects being detected and will nearly always result in a consequential product risk of a ‘buggy’ deliverable. In this case any mitigation needs to strike a balance between the two interacting risks to achieve an outcome that is acceptable to all stakeholders, and the ability to successfully identify and implement such compromises is the sign of a true professional test manager.

Best practice
RBT is known to be best practice for today’s professional testers but, despite the simple concept, many testers struggle to apply it in an effective manner. This is often due to the scope of RBT as applied being too narrow: not encompassing the higher levels of testing, such as the organisational test strategy, and not being used as the basis for test planning that covers the whole life cycle. RBT can also fail to fulfil expectations when it is not integrated into the wider risk management practices within an organisation, or when it is only used to address risks in the deliverable software rather than also considering the risks to the testing itself. These challenges can largely be addressed by the industry changing its perspective on RBT and widening its view. Probably the biggest challenge to the effective use of RBT is the lack of maturity of test practitioners; however, this should be seen as an opportunity for testers to ensure that they acquire the full ‘testing toolset’ to allow them to effectively mitigate risks with the right testing options.

Stuart Reid CTO Testing Solutions Group www.testing-solutions.com



Perfect partnerships
The days in which an outsourced testing engagement consisted of a client mapping out a specification, passing it to the testing company and devolving all responsibility are gone. Yann Gloaguen, global service delivery manager for SQS, outlines his vision of the perfect testing partnership.

While outsourced testing has delivered cost reductions and access to skilled technical resources, many companies today are seeking testing providers that can move beyond a simple transactional outsource engagement and work with them as true partners. In a partnership model testing providers take greater ownership of the outsourced work and deliver measurable results tied to business benefits rather than bodies and processes.

When client companies start to consider engaging an outsourced or offshored testing provider, the cost savings are often uppermost in their minds, but the savvier organisation will look at the bigger picture. Most people, when asked whether they want the cheapest solution, will answer in the affirmative initially; upon consideration they will also admit that quality factors are key – for example mitigating risks and hitting timelines. A trusted partner can help balance these criteria much better than an arm’s-length outsourcer.


Fostering partnership
Mutually resonant Service Level Agreements (SLAs) and output-based pricing are two keys to successful client/testing provider partnerships. The failure to structure SLAs correctly is still prevalent in the testing industry. While all would agree that effective SLAs are essential, few really ensure that an SLA is crafted in a way that motivates both parties to foster a partnership rather than a supplier-customer relationship. SLAs should include some element of risk and reward to encourage good performance as well as operational efficiencies and innovation, ensuring a true partnership, not a stale relationship.

SLAs, the carrot and the stick
In a real two-way business partnership there is an unspoken commitment to work through any difficulties that arise; nonetheless, good performance must be incentivised as well as bad performance penalised. A risk-reward agreement, where both parties have a measure of transparency, is far more likely to succeed than one where some issues are kept behind closed doors. Moreover, an outsourced testing partnership must be flexible, and in a target-based risk-reward partnership both savings and benefits are shared.

At SQS, when advising our clients, we often find that existing SLAs are not sufficiently structured towards this end, with many allowing risk-averse testing companies to wriggle out of accepting their due share of penalties while safeguarding their benefits. Often key performance indicators in testing SLAs focus on metrics and targets while ignoring the means of measurement, resulting in inconsistencies between the expected and actual outcomes of SLAs. The clear definition of exact metrics and measurements empowers enterprises to build better, more flexible and more effective governance models, which minimise communication issues when offshoring projects. However, this standardised approach can function only if everyone involved can agree that the measurements specified are a true indicator of performance and that the agreed targets are achievable.

The right metrics
From the business perspective, ‘percentage of milestones met’ is a metric of paramount importance. It is intended to be a true indicator of expected delivery time and a reflection of a tester’s responsiveness. Discussion of milestones as a metric is often stormy due to varying opinions on the definition of a milestone. We have seen milestones mistaken for tasks and vice versa. Milestones do not have duration; they are the product of multiple tasks delivering towards one project objective, especially in offshore testing environments. For example, ‘test approach analysis’ is not a milestone, but ‘test approach document signed off’ is. When tasks are called milestones, the result is to weaken and omit dependencies, and also to increase the number of ‘milestones’, which dilutes failures.

In addition, it is essential to consider which milestones to choose. They should always be hand-picked from project plans, ensuring that the ones selected will impact directly on the business if missed (although one could argue that failing to meet any of them will be detrimental). Selecting them all, however, will dilute the perceived impact of a missed major milestone, which may in fact be far more significant than many others combined.

The incentive model is not always greeted warmly in testing engagements. However, the alternative – the common head-count-based pricing model – works against the concept of partnership. So, is there a viable commercial model that encourages a long-lasting, effective partnership without compromising quality? Possibly the best answer to this is output-based pricing.
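Once the hand-picked list is agreed, the metric itself is trivial to compute – the hard part is the picking. A sketch, with invented milestones:

    # Each entry is a hand-picked, business-impacting milestone and whether it was met.
    milestones = [
        ("test approach document signed off", True),
        ("test environment accepted",         True),
        ("system test execution complete",    False),
        ("go-live readiness report issued",   True),
    ]

    met = sum(1 for _, was_met in milestones if was_met)
    print("milestones met:", met, "of", len(milestones),
          "=", round(100 * met / len(milestones)), "percent")

Note how adding tasks masquerading as milestones would inflate the denominator and dilute the single missed milestone that actually matters.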

Pricing for partnership
The biggest benefit of pricing on the basis of output is obvious – the customer pays only for what is delivered. This pricing structure often brings about a complete change of mindset in both the customer and the vendor. Output-based pricing shifts risk away from the customer. In contrast, in head-count-based pricing the risk lies predominantly with the customer, as the testing provider is paid for resources used on the project at a predetermined rate. There are of course SLAs to cover the risk, but on the ground it is extremely difficult to design and execute these SLAs. In an output-based pricing mechanism, the SLA is built in.

There is a more subtle, but extremely important, partnership benefit that output-based pricing provides. The head-count-driven vendor has no interest in increasing efficiency – the way to grow the business is to increase head count. Conversely, in output-based pricing, the testing provider is incentivised to increase efficiency. The testing provider aims to increase output with the same team size, or even reduce the team size by using automation and improving processes. Customers benefit too, as testing providers can offer year-on-year benefits, increasing their units of delivery per unit of cost. Output-based pricing really does present a win-win situation for both parties, which is very difficult, if not impossible, to achieve in a head-count-based engagement.

The requirements of output-based pricing
An initial calibration phase is critical for output-based pricing engagements. During this phase both parties agree on a unit of testing deliverable (at SQS, these units are known as Quality Points), and on the rate for each unit. To maximise benefits, the engagement needs to be planned over multiple years, giving the vendor the confidence to plan and invest in building efficiency in the service offered. By signing up for multiple years, customers get year-on-year efficiency benefits. Of course, reviews must be planned to track performance and ensure that the output-based pricing is in line with the objectives of the outsourcing.
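A toy calculation makes the year-on-year effect visible. All the figures below are invented for illustration – real unit rates are a matter for the calibration phase:

    # Invented figures: an agreed unit rate and a committed annual efficiency gain.
    rate_per_quality_point = 120.0   # agreed in the calibration phase
    points_per_release = 400         # the agreed unit of testing deliverable
    efficiency_gain = 0.08           # committed year-on-year improvement

    rate = rate_per_quality_point
    for year in (1, 2, 3):
        print("year", year, "- cost per release:", round(rate * points_per_release))
        rate /= 1 + efficiency_gain  # same output, falling unit price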


Output-based pricing, as applied to test automation
We recognised that test automation is an area where output-based pricing can be applied successfully. In 2010, we set up the Test Automation FaQtory, which combines best practice in testing with lean manufacturing to offer a simpler and more reliable way of testing large, business-critical systems. The key benefits of the FaQtory approach are:
• Transparent, output-based pricing;
• A shared-risk approach to delivery, based on test products delivered rather than head count or hours worked;
• Industrialised automated testing for the reliable testing of highly complex systems;
• Testing resource that can be scaled up or down as required to meet customer demand;
• Measurable assembly-line operation;
• Multi-lingual teams, fluent in English, French and German.
Using the Test FaQtory paradigm, our rate card only mentions prices for test cases delivered, executed and maintained. To be able to prepare the rate card, in the initial calibration phase a sizeable number of test cases of representative size and complexity are taken up for automation and delivered. The output-based rates are calibrated based on the effort taken on the automated test cases as well as the release plans of the application under test. The engagement necessarily runs over multiple years, covering many releases and thus repeated executions of the test automation suite.

Lead us into innovation
Output-based pricing and effective SLAs are key to creating the conditions for a successful outsourcing partnership. However, suppliers and customers seeking to move to the next level need to look beyond traditional approaches by encouraging innovation in outsourcing engagements. Innovation can add significant value, but it does require the allocation of time and effort from project leaders and senior architects within both partners – people who understand the organisation’s roadmap and who have the vision for how things could be done differently.
In conclusion, a true partnership approach is a fine balancing exercise which adds an element of mutuality – both partners seek to benefit each other as well as themselves. This in turn means sharing the risks as well as the rewards. The metrics by which a successful project is measured change in this environment too. Through initiatives such as output-based pricing, where the delivery responsibility lies completely with the testing provider, a true partnership-based engagement can exist.

Yann Gloaguen
Global service delivery manager, SQS, for Northern Europe, India & Africa
www.sqs-uk.com


The importance of testing for software safety standards
Good verification and testing practice is crucial, but nowhere more so than in safety-critical areas like aerospace and medicine. Ian Gilchrist, software products consultant at IPL, reports.

The current set of software safety standards exists mainly to ensure that software is developed to a ‘high’ standard. While there is agreement between the standards in some areas, there is also a significant lack of agreement in others. This suggests that it is still a matter of luck that more accidents don’t occur because of faulty software, and it also suggests that there may indeed be ‘too much’ testing going on, which carries a cost burden.

The standards
The software safety standards have a history going back about 25 years. Among the earliest are the IEC/EN 50128 standard for railway software, the RTCA DO-178 standard for civil avionics, and the generic IEC 61508 standard for all ‘programmable electrical and electronic systems’. There are also standards in use in many other industries, such as defence (ISO/IEC 12207), space (NASA 8719), medical (FDA Software Validation Guidelines and IEC 62304), automotive (MISRA Guidelines and ISO 26262) and nuclear (IEC 60880).


The purpose of these standards is to try to encapsulate best practice in a form which can be followed by software engineers and scrutinised by independent parties. The main aim of having such standards is of course to protect the public from injury and death which might result from faulty software. A secondary aim might be said to be the creation of a legal (‘state of the art’) defence behind which an organisation developing such software can shelter in the event that its practices are questioned. For example, if an aircraft accident were to be attributed to faulty avionics, then the fact that the software can be shown to have followed industry best practice would be a mitigating factor if the supplier were sued. There are many aspects to ‘good’ software development that these standards attempt to formalise, but software verification activities, especially testing, occupy a large part of the lifecycle for such safety-related work. All the applicable safety standards agree that these activities are an essential part of creating a certifiably safe system. This article highlights what is common and where less agreement is evident.



The available evidence suggests that, when followed properly, the current standards do work; only a small proportion of certified systems have failed due to a breakdown in the code verification process. A notable counter-example was the data conversion exception which crashed the first Ariane V rocket 40 seconds after launch in 1996. But given the lack of agreement on what constitutes a really ‘safe’ approach, it might be wondered whether this is just a matter of luck. There is the further question of whether projects are being required to do too much or too little verification. The consequences of too little testing are obvious and of great concern to all professionals involved in software testing. However, it is also reasonable to ask whether current practices veer too much in the other direction. Too much testing can involve a significant cost burden, which might be avoided if agreement could be reached on how best to achieve an acceptable level of confidence in safety-related software.

What broad testing principles are agreed?
Systems are assigned a ‘safety level’: Before starting a safety-related system development it is normal to assign a Safety Integrity Level (SIL). For most of the IEC standards this tends to be numbered 1-4, with SIL 4 representing the safety-critical end of the spectrum. These SILs are important to developers because they determine the level of rigour to be used in the various testing activities. Generally speaking, the higher the SIL the more demanding (and hence expensive) the verification.
Verification requires a combination of techniques: Developers should not rely exclusively on one technique to gain confidence that their software works properly. In very broad terms the main techniques recognised are code reviews, code analysis, and testing. For example, IEC 61508 says that, “It is the combination of code review and module testing that provides assurance…”

Testing on its own is not proof: The purpose of testing is “to discover errors, not prove correctness”. It is accepted that no amount of testing can conclusively show that there are no more bugs. Further, testing on its own cannot demonstrate that a given piece of code is safe. So testers need to create tests which will show up faults, and use this as evidence to support an overall safety case for the software.
Verification activities are best done by an independent team: The purpose of all verification activities is to show that the ‘implementers’ have correctly created what the designers intended. The MISRA Guidelines state: “It is recommended that Validation and Verification are carried out by a person or team exhibiting a degree of independence from the design and implementation function.”
Traceability: It is important that all requirements can be traced down through design and then into implementation and verification. This is the preferred route for ensuring that all requirements are actually correctly incorporated. The different levels of testing give a number of opportunities to verify that any specific requirement is included. Most of the standards ask for some kind of traceability matrix as part of the overall safety case.
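In its simplest form a traceability matrix is just a mapping from each requirement to the tests that cover it, which makes gaps mechanical to find. A minimal sketch, with invented requirement and test identifiers:

    # Invented requirement and test IDs: each requirement maps to covering tests.
    trace = {
        "REQ-001 brake command latency": ["TC-101", "TC-102"],
        "REQ-002 watchdog reset":        ["TC-110"],
        "REQ-003 sensor range check":    [],
    }

    for requirement, tests in trace.items():
        status = ", ".join(tests) if tests else "NOT COVERED"
        print(requirement, "->", status)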

What is less agreed?
Software professionals recognise different stages of testing in the development lifecycle: unit, integration and system/acceptance levels. This concept is widely endorsed by the standards, but in fact all they can agree on is that it is up to a project’s engineers to determine what lifecycle model they will use and consequently what levels of verification they will do.

Unit testing The most (labour) intensive area of software testing is that of unit or “module” testing. The IEC 61508 standard defines this: “Each software module shall be tested as specified during software design. These tests shall show that each software module performs its intended function and does not perform any unintended

www.testmagazine.co.uk


TEST data | 37

STANDARD           Functional   Structural   Equivalence    Boundary   Robustness   Error
                   Testing      Coverage     Partitioning   Values                  Seeding
DO-178B            Yes          Yes          Yes            Yes        Yes          -
IEC 61508          Yes          Yes          Yes            Yes        -            Yes
IEC 50128          Yes          Yes          Yes            Yes        -            Yes
IEC 60880          Yes          Yes          Yes            Yes        Yes          Yes
MISRA Guidelines   Yes          Yes          Yes            Yes        Yes          -

Test case definition: While there is good general understanding of what unit testing is, there is disagreement on how units should be tested to gain a degree of confidence that they work reliably. A wide range of techniques is mentioned in the various standards, but there is only moderate commonality on what these should be, and the theoretical basis for some of the techniques is questionable. The table above gives some idea of the commonality and the differences.
How much testing should be done? The FDA Software Validation Guidelines state this issue clearly: “A developer cannot test forever,” so, “it is a matter of developing an acceptable level of confidence.” The standards differ markedly on both the forms and the amount of testing suitable for safety-related software. The definitions provided are frequently vague and rely heavily on users’ engineering judgement. The most common recommendation is a combination of ensuring that all requirements have been met and the use of structural code coverage analysis to ensure that all the code has been tested. Code coverage has two advantages: it can be automated, and it is objective. However, beyond those points there is, again, little agreement.
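The difference in strength between coverage criteria is easy to demonstrate. In the small illustration below (invented for this article, not drawn from any standard), a single test executes every statement of the function, yet branch coverage still demands a second test for the untaken branch:

    def discount(total):
        rate = 0.0
        if total > 100:
            rate = 0.1          # only taken when total > 100
        return total * (1 - rate)

    print(discount(200))  # executes every statement: statement coverage already 100%
    print(discount(50))   # needed only for branch coverage: the if-test taken as false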

Integration and system testing
The disagreements between the standards are even more apparent at the next level of testing – integration testing. In simple terms this refers to the stage(s) of verification after unit testing, when real code and real hardware are brought together and re-verified against higher levels of specification. In fact there is little agreement between the standards as to what the different forms of integration testing should even be called. Further, when it comes to the specifics of what techniques should be used at both the integration and system testing levels, the divergences become so marked that you could be forgiven for wondering if the standards writers had any common understanding at all!


Ian Gilchrist
Software products consultant, IPL
www.ipl.com

Conclusions
There is a reasonable amount of agreement amongst the safety-related standards about the broad principles of good verification and testing practice, but there is variation regarding details. The objective evidence suggests that the standards 'work', as there have been relatively few failures involving software where a certified process was followed. However, the disparity of techniques, both mandated and recommended, suggests that a scientific basis for their use is still lacking. This can lead to two contrasting conclusions:
1. That the success so far achieved has been a matter of luck, and that it is just a matter of time before something does go wrong.
2. That over-engineering is being required, leading to an unnecessary cost burden.
A more scientific approach to determining 'value for money' on the verification side might allow costs to be cut, but it would take a brave person to start down that line.


Climbing out of the testbox
In the first of her regular 'Training Corner' contributions, test training specialist Angelina Samaroo looks at the complex world of ICT and risk-based testing.



esting isn’t rocket science. Neither is systems design; nor is systems programming, unless we’re at NASA. 132 years ago, a boy was born. His name was to become synonymous with genius. When Einstein declared, e=mc2, energy = mass x speed of light (in a vacuum) squared, the space-time continuum became destined for great things – someday, it would lead James T Kirk to boldly go…

At the turn of the 20th century, news couldn't travel at 300,000 km/s (the speed of light). To be an international hit then meant that your teachings were so good that audiences listened in awe, and left with a story to tell. Today, we have that technology, that ICT. Today, we need that communication more than ever. As the planet draws ever closer to that final tipping point, plunging us all into the world of hope, we will need the technology to work as never before. 24/7 will become 24/7/387 (the number of time zones in the world). Information, disseminated at the right time, at precise accuracy, to the right people, may well be the thing that saves our sanity. We may not be able to save lives, livelihoods, cities, or the planet, but we can at least know that the world cares.

Complexity
The world of ICT is a rather complex one: from waking us up in the morning through our smartphone (or not, if Daylight Saving Time kicks in unexpectedly), to checking train times, to filling in census forms, to shutting down nuclear reactors, to opening them up again to vent. Sometimes simple will do; many other times not. As the IT certification world in the UK marches on in its multiple-choice frenzy, because it's quick and cheerful and we all want it, let us remind ourselves of the testers' lot. At work our technical peers are the system designers and those who cut code. We belong to the DCT fraternity –


the design, code and test bods. Sure, business analysis, project management and service management all have their place around us. But we are the DCT lot – those who create and test systems. We can’t all be Einsteins, but we can give it our best shot; we may have gambled away the planet’s future, but we still have seven billion people to save. More people, more technology; fewer sustainable resources. So the job of today’s DCT community is to help fulfil that old doctrine, less is more. With this in mind, let’s have a reminder of the real complexities in the world of a serious tester.

Running the risk
We begin with risk-based testing. The theory says that we should get our stakeholders together to identify and analyse risks, then put a mitigation plan in place. Getting everyone together requires a coordinator. Who takes charge of the activity? Who gets invited, and how do we get them to turn up? If the test manager is leading, then he needs influencing, negotiation, presentation and report-writing skills. How does a test manager muscle in at the requirements definition stage in order to bring the risk management exercise up front? How does a test manager influence a development manager to release a developer or two for a risk identification exercise?

To make that (over)simplified theory happen, the test manager needs to be conversant with the business and technical domains. He needs to have significant communication skills in order to help those stakeholders to feel pain that hasn't happened yet. Note that our test manager here is the person leading from the inside – the test manager who knows his stuff. The test manager who just finds a man who can isn't a test manager here, he is a test management facilitator – he can talk the talk and tell them the boxes that need to be ticked, but he doesn't need to walk the walk.

Testing on the agenda
Having got everyone together we need to have a sensible agenda for our meeting. The agenda should be to highlight things that could go wrong with the product, as well as the usual delivering late or running out of money. To understand what could go wrong with the product requires real knowledge of the product – in development and in live.

Thus the people we need in our meeting include the DCT, and the users. The designers should be able to tell us which customer needs could be stretching our technical capabilities (eg, the proximity sensor on the iPhone 4), the developers will tell us which pieces of functionality could be difficult to implement (eg, the multitouch interface from screen to Retina display), the testers will tell us which areas of functionality may not work from an end-to-end perspective (the antenna death grip), and the users will tell us that dropping the phone causes the toughened glass to shatter. To be able to identify these possibilities requires significant knowledge of how the technical domain is expected to work together – in this case a glass-steel-gyro-software combo.

To know how to score each risk in terms of likelihood and damage requires both technical and business know-how. You'd imagine that the techies on the glass side would have known that glass is still glass, and while it may be scratch-proof, it will shatter on impact. So the likelihood we would assume to be high, but the damage to the business presumably low – the market is captive, so let them buy a wraparound, and insurance.

The next step in the theory of risk-based software testing says that you should create test strategies and test plans to show the why, what, when, who and how of testing required to mitigate the risks. A test strategy would typically include the tools (test management, test execution, performance etc), test design techniques (process cycles, decision tables, pairwise etc), and retest and regression test approaches. The test plan then drops down a level or two by breaking the above requirements into a set of tasks, and adds the people and timescales to the mix. You can refer to IEEE 829-2008 for more on this, but beware: it is a test documentation standard, and thus covers both strategies and test plans (as described above), both contained within a master test plan. That's the flat theory.
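Before leaving the theory, note that the scoring step above reduces to simple arithmetic: exposure = likelihood × damage, and the ranked list tells the test manager where to spend effort first. A minimal sketch, with invented risks and an assumed 1–5 scale for both factors:

# Hedged sketch of risk-based test prioritisation: exposure = likelihood x damage.
# The 1-5 scales and the example risks are assumptions for illustration only;
# real scores come out of the stakeholder workshop just described.

risks = [
    # (risk, likelihood 1-5, damage 1-5)
    ("Toughened glass shatters when phone is dropped", 5, 2),
    ("Antenna grip drops calls",                       4, 4),
    ("Proximity sensor fails during calls",            3, 3),
]

def exposure(risk):
    _, likelihood, damage = risk
    return likelihood * damage

for name, likelihood, damage in sorted(risks, key=exposure, reverse=True):
    print(f"{likelihood * damage:>2}  {name}")
# Highest exposure first: test effort (and mitigation plans) follow this order.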

Our round world is all about risk. The number of coalition governments around the world shows that as voters we're just not sure who can lead. Some assumed leaders just don't trust that we know how to use our hard-earned vote. The public uprisings in other parts show that the status quo just won't do – we know too much about how the other half live to just accept our lot in life. In business, the trick is to keep looking around you so you know where you might trip up – burying your head in the sand could stifle you. The wise ones will keep abreast of the news, the technical magazines, the competitive environment, the beat at work.

What's the strategy?
Back in our world, those in the know will ask neither for a test strategy nor a test plan. They will not dive for cover under the strategy created for the last project. They will know that it was out-of-date the day after it was written – and anyway they don't need the prop. Today's expert in testing will want to know what he can do about those risks – each and every day. Today's expert will know how to shine the spotlight on those sneaky larvae waiting to grow legs and start crawling. Today's expert will hunt them down until death or mitigation do them part.

He will have the strategy and test plan headings from the theory books open in front of him. That's the simple bit though. He will stare into space or a blank screen, grey cells churning, digging deep into his past experiences – first and foremost, what can he do about each risk in a fair world; who will he need, and how will he get them on board in the unfair world? His document will have a label something like 'Project X – release 1, risks and what we're doing about them' – long-winded, but you get the point; short and snappy is for the marketeers, us techies want it to do what it says on the tin. We want others to understand what's in our tin – so that they might open it and offer up more risks for us to fret over.

The test professional eats risks for breakfast, lunch and dinner. The test professional's dessert is yet another risk that has been identified – before go-live. The test professional's 'Project X – release 1, risks and what we're doing about them' is date and time stamped – breakfast, lunch and dinner for him are not the same each day. The document could be on file (traceable) and on the wall (visible). The test professional does not want his customer to know much about testing; the test professional just wants his customer to come back for more – his product, his pride. His pride is his unique selling point.

Angelina Samaroo
Managing director, Pinta Education
www.pintaed.com


Facilita
Load testing solutions that deliver results

Facilita has created the Forecast™ product suite, which is used across multiple business sectors to performance test applications, websites and IT infrastructures of all sizes and complexity. With this class-leading testing software and unbeatable support and services, Facilita will help you ensure that your IT systems are reliable, scalable and tuned for optimal performance.

Forecast, the thinking tester's power tool
A sound investment: A good load testing tool is one of the most important IT investments that an organisation can make. The risks and costs associated with inadequate testing are enormous. Load testing is challenging and, without good tools and support, will consume expensive resources and waste a great deal of effort. Forecast has been created to meet the challenges of load testing, now and in the future. The core of the product is tried and trusted and incorporates more than a decade of experience, but is designed to evolve in step with advancing technology.

Realistic load testing: Forecast tests the reliability, performance and scalability of IT systems by realistically simulating from one to many thousands of users executing a mix of business processes using individually configurable data.

Comprehensive technology support: Forecast provides one of the widest ranges of protocol support of any load testing tool.
1. Forecast Web thoroughly tests web-based applications and web services, identifies system bottlenecks, improves application quality and optimises network and server infrastructures. Forecast Web supports a comprehensive and growing list of protocols, standards and data formats including HTTP/HTTPS, SOAP, XML, JSON and Ajax.
2. Forecast Java is a powerful and technically advanced solution for load testing Java applications. It targets any non-GUI client-side Java API, with support for all Java remoting technologies including RMI, IIOP, CORBA and Web Services.
3. Forecast Citrix simulates multiple Citrix clients and validates the Citrix environment for scalability and reliability, in addition to the performance of the hosted applications. This non-intrusive approach provides very accurate client performance measurements, unlike server-based solutions.
4. Forecast .NET simulates multiple concurrent users of applications with client-side .NET technology.
5. Forecast WinDriver is a unique solution for performance testing Windows applications that are impossible or uneconomic to test using other methods, or where user experience timings are required. WinDriver automates the client user interface and can control from one to many hundreds of concurrent client instances or desktops.

6. Forecast can also target less mainstream technology such as proprietary messaging protocols and systems using the OSI protocol stack.

Powerful yet easy to use: Skilled testers love using Forecast because of the power and flexibility that it provides. Creating working tests is made easy with Forecast's script recording and generation features and the ability to compose complex test scenarios rapidly with a few mouse clicks. The powerful functionality of Forecast ensures that even the most challenging applications can be fully tested.
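To picture what simulating "from one to many thousands of users" involves, here is a deliberately tiny sketch of the underlying idea. It is not Forecast code and uses none of Facilita's APIs (Forecast scripts are written in Java, C# or C++ against the product's own framework); it is a generic Python illustration of concurrent virtual users, each running a scripted transaction with its own data. The URL and credentials are invented:

# Toy illustration of concurrent virtual users - NOT the Forecast API.
# A real tool adds ramp-up, pacing, assertions, and per-transaction
# response-time capture and reporting.

import threading, time, urllib.request

TARGET = "http://localhost:8080/login"   # assumed test endpoint

def virtual_user(user_id, password):
    started = time.perf_counter()
    body = f"user={user_id}&pw={password}".encode()
    try:
        with urllib.request.urlopen(TARGET, data=body, timeout=10) as resp:
            ok = resp.status == 200
    except OSError:
        ok = False
    print(f"user {user_id}: ok={ok} {time.perf_counter() - started:.3f}s")

threads = [
    threading.Thread(target=virtual_user, args=(f"vu{n:04}", "secret"))
    for n in range(50)                    # 50 concurrent virtual users
]
for t in threads: t.start()
for t in threads: t.join()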


Supports Waterfall and Agile (and everything in between): Forecast has the features demanded by QA teams, like automatic test script creation, test data management, real-time monitoring and comprehensive charting and reporting. Forecast is successfully deployed in Agile 'Test Driven Development' (TDD) environments and integrates with automated test (continuous build) infrastructures. The functionality of Forecast is fully programmable and test scripts are written in standard languages (Java, C#, C++ etc). Forecast provides the flexibility of open source alternatives along with comprehensive technical support and the features of a high-end enterprise commercial tool.

Flexible licensing: Geographical freedom allows licenses to be moved within an organisation without additional costs. Temporary high-concurrency licenses for 'spike' testing are available with a sensible pricing model. Licenses can be rented for short-term projects with a 'stop the clock' agreement or purchased for perpetual use. Our philosophy is to provide value and to avoid hidden costs. For example, server monitoring and the analysis of server metrics are not separately chargeable items, and a license for Web testing includes all supported Web protocols.

Services In addition to comprehensive support and training, Facilita offers mentoring where an experienced Facilita consultant will work closely with the test team either to ‘jump start’ a project or to cultivate advanced testing techniques. Even with Forecast’s outstanding script automation features, scripting is challenging for some applications. Facilita offers a direct scripting service to help clients overcome this problem. We can advise on all aspects of performance testing and carry out testing either by providing expert consultants or fully managed testing services.

Facilita
Tel: +44 (0) 1260 298109
Email: enquiries@facilita.co.uk
Web: www.facilita.com


Seapine Software™

With over 8,500 customers worldwide, Seapine Software Inc is a recognised, award-winning, leading provider of quality-centric application lifecycle management (ALM) solutions. With headquarters in Cincinnati, Ohio and offices in London, Melbourne, and Munich, Seapine is uniquely positioned to directly provide sales, support, and services around the world. Built on flexible architectures using open standards, Seapine Software’s cross-platform ALM tools support industry best practices, integrate into all popular development environments, and run on Microsoft Windows, Linux, Sun Solaris, and Apple Macintosh platforms. Seapine Software's integrated software development and testing tools streamline your development and QA processes – improving quality, and saving you significant time and money.

TestTrack RM TestTrack RM centralises requirements management, enabling all stakeholders to stay informed of new requirements, participate in the review process, and understand the impact of changes on their deliverables. Easy to install, use, and maintain, TestTrack RM features comprehensive workflow and process automation, easy customisability, advanced filters and reports, and role-based security. Whether as a standalone tool or part of Seapine’s integrated ALM solution, TestTrack RM helps teams keep development projects on track by facilitating collaboration, automating traceability, and satisfying compliance needs.

TestTrack Pro TestTrack Pro is a powerful, configurable, and easy to use issue management solution that tracks and manages defects, feature requests, change requests, and other work items. Its timesaving communication and reporting features keep team members informed and on schedule. TestTrack Pro supports MS SQL Server, Oracle, and other ODBC databases, and its open interface is easy to integrate into your development and customer support processes.

TestTrack TCM TestTrack TCM, a highly scalable, cross-platform test case management solution, manages all areas of the software testing process including test case creation, scheduling, execution, measurement, and reporting. Easy to install, use, and maintain, TestTrack TCM features comprehensive workflow and process automation, easy customisability, advanced filters and reports, and role-based security. Reporting and graphing tools, along with user-definable data filters, allow you to easily measure the progress and quality of your testing effort.

QA Wizard Pro
QA Wizard Pro completely automates the functional and regression testing of Web, Windows, and Java applications, helping quality assurance teams increase test coverage. Featuring a next-generation scripting language, QA Wizard Pro includes advanced object searching, smart matching, a global application repository, data-driven testing support, validation checkpoints, and built-in debugging. QA Wizard Pro can be used to test popular languages and technologies like C#, VB.NET, C++, Win32, Qt, AJAX, ActiveX, JavaScript, HTML, Delphi, Java, and Infragistics Windows Forms controls.
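Data-driven testing of the kind QA Wizard Pro supports keeps one test body and feeds it many data rows. A generic sketch of the pattern (not QA Wizard Pro's own scripting language; the login rows and stand-in function are invented for illustration):

# Generic data-driven test sketch - one test body, many data rows.
# Not QA Wizard Pro script; the rows and login() stand-in are invented.

import unittest

LOGIN_CASES = [
    # (username, password, should_succeed)
    ("alice", "correct-horse", True),
    ("alice", "wrong",         False),
    ("",      "anything",      False),
]

def login(username, password):
    """Stand-in for the application under test."""
    return username == "alice" and password == "correct-horse"

class TestLogin(unittest.TestCase):
    def test_all_rows(self):
        for username, password, expected in LOGIN_CASES:
            with self.subTest(username=username):
                self.assertEqual(login(username, password), expected)

if __name__ == "__main__":
    unittest.main()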

Surround SCM Surround SCM, Seapine’s cross-platform software configuration management solution, controls access to source files and other development assets, and tracks changes over time. All data is stored in industry-standard relational database management systems for greater security, scalability, data management, and reporting. Surround SCM’s change automation, caching proxy server, labels, and virtual branching tools streamline parallel development and provide complete control over the software change process.

www.seapine.com
United Kingdom, Ireland, and Benelux: Seapine Software Ltd, Building 3, Chiswick Park, 566 Chiswick High Road, Chiswick, London, W4 5YA, UK. Phone: +44 (0) 208 899 6775. Email: salesuk@seapine.com
Americas (Corporate Headquarters): Seapine Software, Inc, 5412 Courseview Drive, Suite 200, Mason, Ohio 45040, USA. Phone: 513-754-1655


Green Hat
The Green Hat difference

In one software suite, Green Hat automates the validation, visualisation and virtualisation of unit, functional, regression, system, simulation, performance and integration testing, as well as performance monitoring. Green Hat offers code-free and adaptable testing from the User Interface (UI) through to back-end services and databases. Reducing testing time from weeks to minutes, Green Hat customers enjoy rapid payback on their investment. Green Hat's testing suite supports quality assurance across the whole lifecycle, and different development methodologies including Agile and test-driven approaches. Industry vertical solutions using protocols like SWIFT, FIX, IATA or HL7 are all simply handled. Unique pre-built quality policies enable governance, and the re-use of test assets promotes high efficiency. Customers experience value quickly through the high usability of Green Hat's software. Focusing on minimising manual and repetitive activities, Green Hat works with other application lifecycle management (ALM) technologies to provide customers with value-add solutions that slot into their Agile testing, continuous testing, upgrade assurance, governance and policy compliance. Enterprises invested in HP and IBM Rational products can simply extend their test and change management processes to the complex test environments managed by Green Hat and get full integration. Green Hat provides the broadest set of testing capabilities for enterprises with a strategic investment in legacy integration, SOA, BPM, cloud and other component-based environments, reducing the risk and cost associated with defects in processes and applications.

The Green Hat difference includes:
• Purpose-built end-to-end integration testing of complex events, business processes and composite applications. Organisations benefit by having UI testing combined with SOA, BPM and cloud testing in one integrated suite.
• Unrivalled insight into the side-effect impacts of changes made to composite applications and processes, enabling a comprehensive approach to testing that eliminates defects early in the lifecycle.
• Virtualisation for missing or incomplete components to enable system testing at all stages of development. Organisations benefit through being unhindered by unavailable systems or costly access to third party systems, licences or hardware. Green Hat pioneered 'stubbing', and organisations benefit by having virtualisation as an integrated function, rather than a separate product.

• Scaling out these environments, test automations and virtualisations into the cloud, with seamless integration between Green Hat's products and leading cloud providers, freeing you from the constraints of real hardware without the administrative overhead.
• 'Out-of-the-box' support for over 70 technologies and platforms, as well as transport protocols for industry vertical solutions. Also provided is an application programming interface (API) for testing custom protocols, and integration with UDDI registries/repositories.
• 'Out-of-the-box' deep integration with all major SOA and enterprise service bus (ESB) platforms, BPM runtime environments, governance products, and application lifecycle management (ALM) products.
• Helping organisations at an early stage of project or integration deployment to build an appropriate testing methodology as part of a wider SOA project methodology.
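The 'stubbing' mentioned above is easy to picture in miniature. The following is a generic, hand-rolled Python stand-in for a missing back-end service – not Green Hat's product, where virtualisation is an integrated, configuration-driven function rather than hand-written code – returning a canned response so that components which depend on the service can be tested before it exists. The path and payload are invented:

# Minimal hand-rolled service stub ("virtualised" dependency) - a generic
# illustration, not Green Hat software. The canned payload is invented.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED = {"quoteId": "Q-123", "premium": 250.0, "currency": "GBP"}

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Whatever the real pricing service would compute, the stub
        # just replays a canned answer so callers can be tested now.
        body = json.dumps(CANNED).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 9000), StubHandler).serve_forever()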

Corporate overview Since 1996, Green Hat has constantly delivered innovation in test automation. With offices that span North America, Europe and Asia/Pacific, Green Hat’s mission is to simplify the complexity associated with testing, and make processes more efficient. Green Hat delivers the market leading combined, integrated suite for automated, end-to-end testing of the legacy integration, Service Oriented Architecture (SOA), Business Process Management (BPM) and emerging cloud technologies that run Agile enterprises. Green Hat partners with global technology companies including HP, IBM, Oracle, SAP, Software AG, and TIBCO to deliver unrivalled breadth and depth of platform support for highly integrated test automation. Green Hat also works closely with the horizontal and vertical practices of global system integrators including Accenture, Atos Origin, CapGemini, Cognizant, CSC, Fujitsu, Infosys, Logica, Sapient, Tata Consulting and Wipro, as well as a significant number of regional and country-specific specialists. Strong partner relationships help deliver on customer initiatives, including testing centres of excellence. Supporting the whole development lifecycle and enabling early and continuous testing, Green Hat’s unique test automation software increases organisational agility, improves process efficiency, assures quality, lowers costs and mitigates risk.

Helping enterprises globally Green Hat is proud to have hundreds of global enterprises as customers, and this number does not include the consulting organisations who are party to many of these installations with their own staff or outsourcing arrangements. Green Hat customers enjoy global support and cite outstanding responsiveness to their current and future requirements. Green Hat’s customers span industry sectors including financial services, telecommunications, retail, transportation, healthcare, government, and energy.


sales@greenhat.com www.greenhat.com


Micro Focus
Continuous Quality Assurance

Micro Focus Continuous Quality Assurance (CQA) ensures that quality assurance is embedded throughout the entire development lifecycle – from requirements definition to 'go live'.

CQA puts the focus on identifying and eliminating defects at the beginning of the process, rather than removing them at the end of development. It provides capabilities across three key areas:

Requirements: Micro Focus uniquely combines requirements definition, visualisation, and management into a single '3-Dimensional' solution. This gives managers, analysts and developers the right level of detail about how software should be engineered. Removing ambiguity means the direction of the development and QA teams is clear, dramatically reducing the risk of poor business outcomes.

Change: Development teams regain control in their constantly shifting world with a single 'source of truth' to prioritize and collaborate on defects, tasks, requirements, test plans, and other in-flux artefacts. Even when software is built by global teams with complex environments and methods, Micro Focus controls change and increases the quality of outputs.

Quality: Micro Focus automates the entire quality process from inception through to software delivery. Unlike solutions that emphasize 'back end' testing, Micro Focus ensures that tests are planned early and synchronised with business goals, even as requirements and realities change. Bringing the business and end-users into the process early makes business requirements the priority from the outset, as software under development and test is continually aligned with the needs of business users.

CQA provides an open framework which integrates diverse toolsets, teams and environments, giving managers continuous control and visibility over the development process to ensure that quality output is delivered on time. By ensuring correct deliverables, automating test processes, and encouraging reuse and integration, Continuous Quality Assurance continually and efficiently validates enterprise critical software.

The cornerstones of Micro Focus Continuous Quality Assurance are:
• Requirements Definition and Management Solutions;
• Software Change and Configuration Management Solutions;
• Automated Software Quality and Load Testing Solutions.

Requirements
Caliber® is an enterprise software requirements definition and management suite that facilitates collaboration, impact analysis and communication, enabling software teams to deliver key project milestones with greater speed and accuracy.
• Streamlined requirements collaboration;
• End to end traceability of requirements;
• Fast and easy simulation to verify requirements;
• Secure, centralized requirements repository.

Change
StarTeam® is a fully integrated, cost-effective software change and configuration management tool. Designed for both centralized and geographically distributed software development environments, it delivers:
• A single source of key information for distributed teams;
• Streamlined collaboration through a unified view of code and change requests;
• Industry leading scalability combined with low total cost of ownership.

Quality
Silk is a comprehensive automated software quality management solution suite which:
• Ensures that developed applications are reliable and meet the needs of business users;
• Automates the testing process, providing higher quality applications at a lower cost;
• Prevents or discovers quality issues early in the development cycle, reducing rework and speeding delivery.

SilkTest enables users to rapidly create test automation, ensuring continuous validation of quality throughout the development lifecycle. Users can move away from manual-testing dominated software lifecycles, to ones where automated tests continually test software for quality and improve time to market.

Take testing to the cloud
Users can test and diagnose Internet-facing applications under immense global peak loads on the cloud, without having to manage complex infrastructures. Among other benefits, SilkPerformer® CloudBurst gives development and quality teams:
• Simulation of peak demand loads through onsite and cloud-based resources for scalable, powerful and cost-effective peak load testing;
• Web 2.0 client emulation to test even today's rich internet applications effectively.

Micro Focus Continuous Quality Assurance transforms 'quality' into a predictable managed path; moving from reactively accepting extra cost at the end of the process, to confronting waste head on and focusing on innovation. Micro Focus, a member of the FTSE 250, provides innovative software that enables companies to dramatically improve the business value of their enterprise applications. Micro Focus Enterprise Application Modernization and Management software enables customers' business applications to respond rapidly to market changes and embrace modern architectures with reduced cost and risk.

For more information, please visit http://www.microfocus.com/cqa-uk/


Original Software
Delivering quality through innovation

With a world-class record of innovation, Original Software offers a solution focused completely on the goal of effective quality management. By embracing the full spectrum of Application Quality Management across a wide range of applications and environments, the company partners with customers and helps make quality a business imperative. Solutions include a quality management platform, manual testing, full test automation and test data management, all delivered with the control of business risk, cost, time and resources in mind.

Setting new standards for application quality
Today's applications are becoming increasingly complex and are critical in providing competitive advantage to the business. Failures in these key applications result in loss of revenue, goodwill and user confidence, and create an unwelcome additional workload in an already stretched environment. Managers responsible for quality have to be able to implement processes and technology that will support these important business objectives in a pragmatic and achievable way, without negatively impacting current projects. These core needs are what inspired Original Software to innovate and provide practical solutions for Application Quality Management (AQM) and Automated Software Quality (ASQ). The company has helped customers achieve real successes by implementing an effective 'application quality eco-system' that delivers greater business agility, faster time to market, reduced risk, decreased costs, increased productivity and an early return on investment. These successes have been built on a solution that provides a dynamic approach to quality management and automation, empowering all stakeholders in the quality process, as well as uniquely addressing all layers of the application stack. Automation has been achieved without creating a dependency on specialised skills and by minimising ongoing maintenance burdens.

An innovative approach
Innovation is in the DNA at Original Software. Its intuitive solution suite directly tackles application quality issues and helps organisations achieve the ultimate goal of application excellence.

Empowering all stakeholders
The design of the solution helps customers build an 'application quality eco-system' that extends beyond just the QA team, reaching all the relevant stakeholders within the business. The technology enables everyone involved in the delivery of IT projects to participate in the quality process – from the business analyst to the business user, and from the developer to the tester. Management executives are fully empowered by having instant visibility of projects underway.

Quality that is truly code-free
Original Software has observed the script maintenance and exclusivity problems caused by code-driven automation solutions and has built a solution suite that requires no programming skills. This empowers all users to define and execute their tests without the need to use any kind of code, freeing them from the automation specialist bottleneck. Not only is the technology easy to use, but quality processes are accelerated, allowing for faster delivery of business-critical projects.

Top to bottom quality
Quality needs to be addressed at all layers of the business application. Original Software gives organisations the ability to check every element of an application – from the visual layer, through to the underlying service processes and messages, as well as into the database.


Addressing test data issues
Data drives the quality process and as such cannot be ignored. Original Software enables the building and management of a compact test environment from production data, quickly and in a data-privacy-compliant manner, avoiding legal and security risks. It also manages the state of that data so that it is synchronised with test scripts, enabling swift recovery and shortening test cycles.
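Building a compact, privacy-compliant test environment from production data usually combines subsetting (take only what you need) with masking (pseudonymise what is sensitive). A rough, generic sketch of the idea follows – it is not Original Software's tooling, which does this without code; the file and column names are invented:

# Hedged sketch of test-data subsetting and masking.
# Real tools do this declaratively; names below are invented examples.

import csv, hashlib

def mask(value, keep=2):
    """Deterministic pseudonym: stable across runs so joins still work."""
    digest = hashlib.sha256(value.encode()).hexdigest()[:8]
    return value[:keep] + "-" + digest

with open("customers.csv", newline="") as src, \
     open("customers_test.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for i, row in enumerate(reader):
        if i >= 1000:                       # subset: first 1,000 rows only
            break
        row["name"] = mask(row["name"])     # assumed sensitive columns
        row["email"] = mask(row["email"])
        writer.writerow(row)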

A holistic approach to quality
Original Software's integrated solution suite is uniquely positioned to address all the quality needs of an application, regardless of the development methodology used. Being methodology neutral, the company can help in Agile, Waterfall or any other project type. The company provides the ability to unite all aspects of the software quality lifecycle. It helps manage the requirements, design, build, test planning and control, test execution, test environment and deployment of business applications from one central point that gives everyone involved a unified view of project status and avoids the release of an application that is not ready for use.

Helping businesses around the world
Original Software's innovative approach to solving real pain-points in the Application Quality Life Cycle has been recognised by leading multinational customers and industry analysts alike. In a 2010 report, Ovum stated: "While other companies have diversified into other test types and sometimes outside testing completely, Original has stuck more firmly to a value proposition almost solely around unsolved challenges in functional test automation. It has filled out some yawning gaps and attempted to make test automation more accessible to non-technical testers." More than 400 organisations operating in over 30 countries use Original Software solutions. The company is proud of its partnerships with the likes of Coca-Cola, Unilever, HSBC, FedEx, Pfizer, DHL, HMV and many others.

www.origsoft.com
Email: solutions@origsoft.com
Tel: +44 (0)1256 338 666
Fax: +44 (0)1256 338 678
Grove House, Chineham Court, Basingstoke, Hampshire, RG24 8AG


Parasoft
Improving productivity by delivering quality as a continuous process

For over 20 years Parasoft has been studying how to efficiently create quality computer code. Our solutions leverage this research to deliver automated quality assurance as a continuous process throughout the SDLC. This promotes strong code foundations, solid functional components, and robust business processes. Whether you are delivering Service-Orientated Architectures (SOA), evolving legacy systems, or improving quality processes – draw on our expertise and award-winning products to increase productivity and the quality of your business applications.

What we do
Parasoft's SOA solution allows you to discover and augment expectations around design/development policy and test case creation. These defined policies are automatically enforced, allowing your development team to prevent errors instead of finding and fixing them later in the cycle. This significantly increases team productivity and consistency.

Parasoft's full-lifecycle quality platform ensures secure, reliable, compliant business processes. It was built from the ground up to prevent errors involving the integrated components – as well as reduce the complexity of testing in today's distributed, heterogeneous environments.

End-to-end testing: Continuously validate all critical aspects of complex transactions, which may extend through web interfaces, backend services, ESBs, databases, and everything in between.

Advanced web app testing: Guide the team in developing robust, noiseless regression tests for rich and highly-dynamic browser-based applications.

Security testing: Prevent security vulnerabilities through penetration testing and execution of complex authentication, encryption, and access control test scenarios.

Trace code execution: Provide seamless integration between SOA layers by identifying, isolating, and replaying actions in a multi-layered system.

Continuous regression testing: Validate that business processes continuously meet expectations across multiple layers of heterogeneous systems. This reduces the risk of change and enables rapid and agile responses to business demands.

Multi-layer verification: Ensure that all aspects of the application meet uniform expectations around security, reliability, performance, and maintainability.

Policy enforcement: Provide governance and policy-validation for composite applications in BPM, SOA, and cloud environments to ensure interoperability and consistency across all SOA layers.

Application behavior virtualisation: Automatically emulate the behavior of services, then deploy them across multiple environments – streamlining collaborative development and testing activities. Services can be emulated from functional tests or actual runtime environment data.

Load/performance testing: Verify application performance and functionality under heavy load. Existing end-to-end functional tests are leveraged for load testing, removing the barrier to comprehensive and continuous performance monitoring.

Specialised platform support: Access and execute tests against a variety of platforms (AmberPoint, HP, IBM, Microsoft, Oracle/BEA, Progress Sonic, Software AG/webMethods, TIBCO).

Please contact us to arrange either a one-to-one briefing session or a free evaluation.

Email: sales@parasoft-uk.com
Tel: +44 (0)7834 752083
Web: www.parasoft.com

Spirent Communications plc
Email: Daryl.Cornelius@spirent.com
Tel: +44 (0) 208 263 6005
Web: www.spirent.com


31 Media
31 Media is a business-to-business media company that publishes high quality magazines and organises dynamic events across various market sectors. As a young, vibrant, and forward-thinking company we are flexible, proactive, and responsive to our customers' needs. www.31media.co.uk

T.E.S.T Online
Since its launch in 2008, T.E.S.T has rapidly established itself as the leading European magazine in the software testing market. T.E.S.T is a publication that aims to give a true reflection of the issues affecting the software testing market. What this means is that the content is challenging but informative, pragmatic yet inspirational, and includes, but is not limited to: in-depth thought leadership; customer case studies; news; cutting-edge opinion pieces; and best practice and strategy articles. The good news is that the T.E.S.T website, T.E.S.T Online, has had a root and branch overhaul and now contains a complete archive of previous issues as well as exclusive web-only content and testing and IT news.

At T.E.S.T our mission is to show the importance of software testing in modern business and capture the current state of the market for the reader. www.testmagazine.co.uk

VitAL Magazine
VitAL is a journal for directors and senior managers who are concerned about the business issues surrounding the implementation of IT and the impact it has on their customers. Today senior management are starting to realise that implementing IT effectively has a positive impact on the internal and external customer, and that it also influences profitability. VitAL magazine was launched to help ease the process. www.vital-mag.net

VitAL Focus Groups
VitAL Magazine, the authoritative, thought-provoking, and informative source of information on all issues related to IT service, IT delivery and IT implementation, is launching a specifically designed programme of Focus Groups that bring together senior decision makers for a series of well thought out debates, peer-to-peer networking, and supplier interaction. Held on the 21st June 2011 at the Park Inn Hotel, Heathrow, the VitAL Focus Groups promises to be a dynamic event that provides a solid platform for the most influential professionals in the IT industry to discuss and debate their issues, voice their opinions, swap and share advice, and source the latest products and services.

For more information visit www.vitalfocusgroups.com or contact Grant Farrell on +44 (0) 203 056 4598.

31 Media Limited
www.31media.co.uk
info@31media.co.uk
Media House, 16 Rippolson Road, London SE18 1NS, United Kingdom
Phone: +44 (0) 870 863 6930
Fax: +44 (0) 870 085 8837


Hays
Experts in the delivery of testing resource

Setting the UK standard in testing recruitment
We believe that our clients should deal with industry experts when engaging with a supplier. Our testing practice provides a direct route straight to the heart of the testing community. By engaging with our specialists, clients gain instant access to a network of testing professionals who rely on us to keep them informed of the best and most exciting new roles as they become available.

Our testing expertise
We provide testing experts across the following disciplines:
• Functional Testing: including System Testing, Integration Testing, Regression Testing and User Acceptance Testing;
• Automated Software Testing: including test tool selection, evaluation and implementation, and creation of automated test frameworks;
• Performance Testing: including Stress Testing, Load Testing, Soak Testing and Scalability Testing;
• Operational Acceptance Testing: including disaster recovery and failover;
• Web Testing: including cross-browser compatibility and usability;
• Migration Testing: including data conversion and application migration;
• Agile Testing;
• Test Environments Management.

The testing talent we provide
• Test analysts;
• Test leads;
• Test programme managers;
• Automated test specialists;
• Test environment managers;
• Heads of testing;
• Performance testers;
• Operational acceptance testers.

We build networks and maintain relationships with candidates across these areas, giving our clients access to high calibre jobseekers with specific skill sets.

Tailored technical solutions
With over 5,000 contractors on assignment and thousands of candidates placed into very specialised permanent roles every year, we have fast become the pre-eminent technology expert. Our track record extends to all areas of IT and technical recruitment, from small-scale contingency through to large-scale campaign and recruitment management solutions.

Our expert knowledge of the testing market means you recruit the best possible professionals for your business. When a more flexible approach is required, we have developed a range of creative fixed price solutions that will ensure you receive a testing service tailored to your individual requirements.

Unique database of high calibre jobseekers
As we believe our clients should deal with true industry experts, we also deliver recruitment and workforce related solutions through the following niche practices:
• Digital;
• Defence;
• Development;
• ERP;
• Finance Technology;
• Infrastructure;
• Leadership;
• Public, voluntary and not-for-profit;
• Projects, change and interim management;
• Security;
• Technology Sales;
• Telecoms.

Our specialist network
Working across a nationwide network of offices, we offer employers and jobseekers a highly specialised recruitment service. Whether you are looking for a permanent or contract position, across a diverse range of skill sets, business sectors and levels of seniority, we can help you.

To speak to a specialist testing consultant, please contact:
Sarah Martin, senior consultant
Tel: +44 (0)1273 739272
Email: testing@hays.com
Web: hays.co.uk/it


the last word...
'I hate Agile' revisited...
Flying in the face of his 'I hate Agile' stance, Dave Whalen is now up to his neck in the SCRUM version of Agile, and loving it!

For the record, I'm still not an Agile fan. Ever since I wrote the 'I hate Agile' article I have paid for my opinion in one way or another, usually in job interviews. It seemed every company I interviewed with was either Agile or going Agile. "So you're the I hate Agile guy?" Yeah, that's me, but let me explain: it's not really Agile that I hate, it's the way that it is sold by authors and consultants as the only way to do software development; it's the rigid "my way or the highway" approach to Agile that I despise.

As luck would have it, I found a company willing to take a chance on me in spite of my opinions. So I am now deeply involved in the development of a new product using an Agile process. They have opted to implement the SCRUM version of Agile. Believe it or not – I'm absolutely loving it! Bottom line – they are doing Agile right! It's a large company with lots of projects and lots of teams. Some are doing well, others not. We are doing it the way I've always envisioned it: embracing flexibility and adaptability. Have there been some growing pains? Absolutely.


Are we doing it perfectly every sprint? Nope. In fact we have yet to complete the planned work in any of the five sprints to date. We're adapting though. We'll get there. Here's how we do it:

We have a core team of six to eight people from various disciplines – development, test, and business analysts. We have a customer representative who visits during sprint planning and during demonstrations. The team is small and highly cohesive. We really enjoy working together. I can't stress enough how important that is. We all know that our individual success depends on the team's success. No one wants to let anyone down. Are we all the best in our disciplines? Not necessarily. I've had to swallow a humble pill now and then, but we're not idiots.

We operate in two-week sprints. At first we didn't even come close to getting everything done in two weeks. We estimated really badly, but as a result, over five sprints the estimation process is getting much better. We found that we were giving best-case estimates. The new system relies on us using a lot of new processes and new technologies. We have learned to better factor the unknowns into our estimates – things like researching new tools or technologies, and contingencies if something goes wrong or an idea doesn't pan out. We also learned that we were trying to do too much in one sprint. We have learned to cut back and to prioritise our work, and the team is much more comfortable speaking up if we think anyone's estimate may be a bit unrealistic.

Location, location, location
Our team works from a shared workspace. Even though we all have desks or personal workspaces throughout the building, we all sit together in the work room. While we are working, we can overhear conversations between other team members. They may be discussing a coding problem, or a configuration problem; if it sounds like something I need to know, I'll pay attention, and if not, I focus on the task at hand. I've learned more from these informal discussions than I have ever learned in any meeting.

You have to trust your team members. Working with them in close proximity can really foster trust. Part of that trust is admitting that you may not know something or may not have the answer. Be honest. Admit you may not know something, but be willing to learn it. Chances are, someone on the team knows it. Use them, and be willing to be used. We are all focused on the same goal: successful, working software! We want this product to not only be state of the art, but darn near perfect – something we can all be proud of. It's not a written goal, it's just something we all feel; it shows in our work. We're all proud to be part of this team.

We are constantly making adjustments. It may be something as simple as monitoring adherence to coding standards or changing our approach to how we plan to attack a problem. From a test perspective, while we are still doing our traditional testing tasks, we are doing things and using tools as we have never used them before. We are always looking for ways to do things better, faster, and smarter. For example, I recently mentioned an issue that may affect my future testing. There are some things that the developers can do to help me better automate a test. The customer will never see it; it will exist only to facilitate my testing. I merely mentioned it to the team and we had a discussion on what we could do to resolve the issue, and a plan to do it. No hesitation from anyone. The attitude was "What can I do to help?"

So do I still hate Agile? No, not really. We have proved it can work. Now Agile authors and consultants, that's a different story – I reserve judgement on them.

Dave Whalen

President and senior software entomologist, Whalen Technologies
softwareentomologist.wordpress.com
