TEST: THE EUROPEAN SOFTWARE TESTER
INNOVATION FOR SOFTWARE QUALITY
VOLUME 7: ISSUE 1: FEBRUARY 2015
www.testmagazine.co.uk
INSIDE: ISOFIXED – DO WE NEED A STANDARD? | THE CYBORG TESTER – WHERE MAN MEETS MACHINE | RIP TESTING 2018
GOODBYE TESTERS? CHRIS AMBLER ON THE FUTURE OF OUR INDUSTRY
CONTENTS
INSIDE THIS ISSUE
8. The Digital Default
14. RIP Testing 2018
20. Rise of the Machines?

THOUGHT LEADERSHIP
8 The Digital Default
Siva Ganesan, Vice President and Global Head of Assurance Services at Tata Consultancy Services, analyses the new world in which digital delivery is the default mode.

NEWS
10 Infosys Spends $200 Million on Cloud Testing Provider
The Indian tech giant swoops for an Israeli cloud testing company.
11 Airline Blames Software Provider for Cheap Fares
Customers of a US airline rushed to exploit a mistake.
12 UK Releases Report on Automated Vehicles
A consultation with profound implications for software testing.

EDITORIAL
13 Why We Talk
The mission behind our conference.

COVER STORY
14 RIP Testing 2018
TEST questions Chris Ambler, Head of Testing for Capita Customer Management, over his prediction that software testers are about to vanish.

CAREER PATHING
18 Defining the SDET Role
Shane Kelly, Head of QA and Test at William Hill, describes how his team changed job descriptions and roles to reflect blurred boundaries.

FUTURE TECH
20 Automatic Software Testing: Rise of the Machines?
Sheffield University’s Mathew Hall considers where to draw the line between human testers and algorithms.
22 The Developing World of Software Testing
Babuji Abraham, Senior Vice President and CTO at ITC InfoTech, counsels on how to handle future-shock.

FEBRUARY 2015 | www.testmagazine.co.uk
PAGE 3
CONTENTS
INSIDE THIS ISSUE
34. Standard Deviation
26. The Burning Question
30. Baby Duck Syndrome

TESTERS AS CUSTOMERS
24 The Never-Ending Story
SEGA’s Director of Development Jim Woods shows how in the gaming industry, testers adopt the role of customers.

THE CLOUD
26 The Burning Question
Rajesh Sundararajan, Practice Head of Testing Services at Marlabs Software, advises on the upsides and downsides of the cloud.

MANAGING CHANGE
30 Baby Duck Syndrome
Evgeny Tkachenko, Test Manager at Innova Systems, gives a brutally honest account of change-management pitfalls.

COUNTER POINT
34 Standard Deviation
Iain McCowatt, Director and Head of Testing, Treasury, Barclays, and independent specialist Stuart Reid go head-to-head over ISO 29119, intended as the new standard for all software testers.
38 Do We Need a Software Testing Standard?
Kieran Cornwall, Head of Testing at ITV, weighs in on the ISO debate.

NEW PROCESS
40 Rise of the Integrator
Julia Liber, Head of Web Applications and Telecom Testing for A1QA, explains the role of an Integrator.

EFFECTIVE COMMUNICATION
44 Building Bridges
Rajini Padmanaban, Senior Director, Engagements, at QA InfoTech shows how to build effective global dialogue between testers and their stakeholders.
45 Future of Communication
Graeme Harrison, Executive Vice President of Marketing at Biamp Systems, says that global communications are let down by inferior technology.

LAST WORD
46 I’ve Been Framed!
Dave Whalen dusts off his coding hat.
LEADER
DRAWING THE BOUNDARIES
Hello, and welcome to the February 2015 issue of TEST Magazine.

As we warm up for our 2015 National Software Testing Conference in May, this month’s TEST tackles one of the most contentious issues facing our industry. Is a common set of standards for software testing desirable – or even possible – when the nature of the task is changing so rapidly?

In our featured article Chris Ambler, Head of Testing at the $5 billion-a-year Capita, reaffirms his view that in a couple of years there will be no more software testers. Nor should this be a cause for alarm, he adds. Instead, Chris argues that software testers will evolve rapidly into business transformers, aligning products with the demands of the marketplace but also refining how consumers view their own needs.

If he’s right, then the International Organization for Standardization (ISO) is trying to standardise a process that is about to disappear. It has released ISO 29119 precisely to establish a common set of standards applicable to our ever-changing – and ever more important – industry.

The ISO’s move is not universally welcomed. In this issue Iain McCowatt, Director and Head of Testing, Treasury, for Barclays, mounts an eloquent assault on the need for the standard. His position is supported by Kieran Cornwall, Head of Testing at ITV. In response Dr Stuart Reid, a member of the panel that drafted the ISO standards, tackles these criticisms, setting out why he believes that standardisation will build upon the best practices of the software-testing industry. Readers will doubtless draw their own conclusions, though it is an argument that will run and run.
Both sides can find ammunition in this month’s news. For the nay-sayers, a report that India’s HCL Technologies plans to replace its junior testers with automation, upskilling employees with domain experience, supports the idea that our industry is just too fluid to accommodate a one-size-fits-all way of doing things.
Do you want to write for TEST Magazine? Email james.brazier@31media.co.uk
Pointing in the opposite direction is a consultation by the British government into the regulations needed for self-driving cars. Software failures in such vehicles are likely to attract the same legal liability as hardware failures, implying the same kind of quality standards for software that apply to physical components – and few question the worth of ISO metrics for those.

These two concepts, of standardisation on the one hand and flux on the other, flow together in the theme for the National Software Testing Conference: “Breaking Today’s Boundaries to Shape Tomorrow.” To what extent must boundaries be broken, and to what extent must they be upheld to assure quality? These are critical questions we will seek to explore, with able assistance from an excellent array of speakers. For more information, see the conference website, www.softwaretestingconference.com.

Food for thought!
James Brazier Editor
© 2015 31 Media Limited. All rights reserved. TEST Magazine is edited, designed, and published by 31 Media Limited. No part of TEST Magazine may be reproduced, transmitted, stored electronically, distributed, or copied, in whole or part without the prior written consent of the publisher. A reprint service is available. Opinions expressed in this journal do not necessarily reflect those of the editor, TEST Magazine, or its publisher, 31 Media Limited. ISSN 2040-0160
EDITOR
James Brazier
james.brazier@31media.co.uk
Tel: +44 (0)203 056 4599

TO ADVERTISE CONTACT:
Sarah Walsh
sarah.walsh@31media.co.uk
Tel: +44 (0)203 668 6945

PRODUCTION & DESIGN
JJ Jordan
jj@31media.co.uk

EDITORIAL & ADVERTISING ENQUIRIES
31 Media Ltd, 41-42 Daisy Business Park, 19-35 Sylvan Grove, London, SE15 1PD
Tel: +44 (0) 870 863 6930
Email: info@31media.co.uk
Web: www.testmagazine.co.uk

PRINTED BY
Pensord, Tram Road, Pontllanfraith, Blackwood, NP12 2YA
THOUGHT LEADERSHIP
SIVA GANESAN, VICE PRESIDENT AND GLOBAL HEAD, ASSURANCE SERVICES, TATA CONSULTANCY SERVICES (TCS)
THE DIGITAL DEFAULT
With debate ever more focused on the risks and rewards of a world where digital has become the default mode of delivery, Siva Ganesan, Vice President and Global Head, Assurance Services at Tata Consultancy Services (TCS), analyses what this means for the software testing industry.
Whether we like it or not, digital technology permeates our existence. It keeps us connected, 24 hours a day, seven days a week. It straddles all industries, and it allows us to transact online. It affords us almost inconceivable possibilities, and its implications are profound.

The eradication of latency and the ability to close transactions almost in real time strips away inefficiencies, but it also carries risks. The instantaneous nature of digital commerce minimises the margin for error and can render the costs of failure exponentially high. Brands are made overnight, but the churn-rate is frenetic.

The digital imperative links human algorithms – software – to hardware. The Internet of Things means that entities in the real and virtual worlds are now interconnected seamlessly, and that the boundaries between them are ever more blurred. This has profound implications for what we mean by privacy. What data is private, and what is public? Demarcating this line is critical to ensuring client confidentiality and the security of data. Part of the answer is self-regulation by companies, to ensure that clients’ data are protected at all costs. For any IT firm, the protection of customers’ information should be top priority (if it isn’t already!). Customers expect complete confidentiality, to the point that this has become second nature to all who work in this business.

So long as the need for privacy and confidentiality is respected, the digital revolution promises a more open, more democratic society. The power of social media to demolish barriers to participating in public conversations is one that cannot be underestimated. Ultimately, it promises a brighter future.

The digital mode of delivery also levels the global playing-field. Once, the existence of fixed-line telephone networks was a huge advantage for some markets over others; today cellular networks, which have rapidly penetrated both developed and developing markets, have leapfrogged the fixed-line advantage. This profusion of mobile devices is part of a broader trend of technology becoming cheaper and more widespread. Google’s Project Loon, for example, aims to use high-altitude balloons to bring connectivity to remote rural areas, a venture that could have a tremendous developmental impact. By breaking down barriers to commerce, the digital sphere promises to provide from all parts, to all parts. Successful businesses will be those that operate globally but are sensitive to local cultures and nuances.

Today, there are five digital forces driving change. The first is Mobile and Pervasive Computing. These small devices liberate us from the desktop and allow us to remain connected and transact wherever we are on the planet, whether socially or with the market. Generations of children are being born who will know nothing else. Providing assurance for this new world is
challenging, because there is no tolerance for failure. With everything online, batch and offline processes have become redundant.

This leads to the second of the five forces, the Cloud. The Cloud means that we have access to storage and processing anywhere on the planet, subject to jurisdictional differences relating to data-privacy. Distributed computing has shaken up data centres and shifted the business model for software and hardware services to that of a utility, like water or electricity. Providing assurance for the Cloud means tackling the challenges of reliability, availability, security and integrity, to ensure that transactions over a large distributed network are not dissipated, corrupted or diluted.

The third force is Big Data and Analytics. Big Data is about the volume and variety of data. Companies can leverage the power of the data generated by the multiplicity of mobile devices. Analysing this data provides the enhanced powers of differentiation that yield a competitive advantage. Assurance, in this context, means sustaining brand reputation and therefore customer loyalty.

Artificial Intelligence and Robotics is the fourth of the five forces. Self-learning systems and robotics can deliver a highly efficient and automated transactional capability. When human activity is replaced with an algorithmic or robotic process, the onus of ensuring that the automation does not malfunction falls on the assurance provider. Managing this risk requires calibrating the correct balance between the probability of an error, and the negative impact that error would have.

The fifth force is Social Media. Many companies have taken advantage of social media for advertising. Assuring the success of such campaigns necessitates a return-on-investment (ROI) calculation and the end-to-end assurance of ROI. Setting the metrics for the campaign; collecting feedback from customers; and building correlations from that data against sales, while accounting for exogenous factors, is key to measuring the success of an advertising campaign. End-to-end assurance means that success is no accident, but by design.

These forces coalesce. The Internet of Things will dovetail with the Cloud to make connected objects ubiquitous. Physical safety will be a primary concern, but interconnected systems of things can be protected against multiple points of failure. The exponential rise in bandwidth and velocity is a huge engineering challenge, but in response assurance is rapidly evolving into the creation of algorithms to manage the speed and volume of information.
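The ROI calculation described above can be sketched in a few lines. This is an illustrative sketch only: the function name, the baseline-subtraction approach to exogenous factors, and all figures are assumptions for the example, not anything TCS has published.

```python
# Illustrative sketch: a minimal campaign-ROI calculation of the kind the
# article describes. All names and figures are hypothetical.
def campaign_roi(attributed_sales: float, baseline_sales: float,
                 campaign_cost: float) -> float:
    """ROI as (incremental revenue - cost) / cost.

    baseline_sales approximates what would have sold anyway, which is one
    crude way of accounting for exogenous factors.
    """
    incremental = attributed_sales - baseline_sales
    return (incremental - campaign_cost) / campaign_cost

# A campaign costing 20,000 that lifts sales from 150,000 to 200,000:
print(campaign_roi(200_000, 150_000, 20_000))  # 1.5, i.e. a 150% return
```

In practice the hard part is the baseline itself, which is exactly why the article stresses correlating campaign data against sales rather than taking gross revenue at face value.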
NEWS
TESTING TIMES FOR F-35 STRIKE FIGHTER
The US Department of Defense has issued a new report highlighting difficulties with the software testing for the F-35 fighter jet, a single-seat, single-engine strike aircraft intended for the US military and its allies.
Michael Gilmore, the Director of Operational Test & Evaluation (DOT&E) at the Pentagon, reported that progress had been slowed by deficiencies uncovered during testing of the Block 2B software, developed for US Marine Corps aircraft.

The F-35’s avionics rely on mission data-loads, which are compilations of mission data files produced by the U.S. Reprogramming Lab, a government body. They work with the system software to define radar search parameters and detect hostile radar signals. Mission data-loads undergo a three-phased lab development and test regimen, and then are flight-tested in the aircraft itself.

The first two data loads for the Marine Corps variant of the F-35 were planned for July. However, DOT&E reported that because the lab received its equipment late from a contractor, these loads would be delayed until November 2015. This is later than demanded by the Marines, but Gilmore warned that speeding up the process would “create significant operational risk to fielded units”, since the loads will not have completed the lab tests and because the test infrastructure on the open-air range could verify only a small portion of the mission data. His report noted that the software continues to “exhibit high false-alarm rates and false target tracks, and poor stability performance, even in later versions of software”.

Over its 55-year lifetime, the F-35 is expected to be the single most expensive defence project in history, costing more than $1 trillion. Three variants will be produced, with the US Air Force, Marine Corps and Navy each receiving their own specialised versions. Close US allies, including the UK, are also participating in the development programme.

In related news, the US Navy awarded Raytheon a $270 million contract to support the systems, testing and software behind the V-22 Osprey, an aircraft that uses a helicopter-like configuration for vertical take-off and landing, but whose wings tilt to a turboprop configuration to fly like a fixed-wing aeroplane. The contract will be fulfilled by Raytheon Intelligence, Information and Services at its facility in Indianapolis. The work is planned to conclude in December 2019.
INFOSYS SPENDS $200 MILLION ON CLOUD TESTING PROVIDER
Infosys, a major consulting and technology business, is to spend $200 million buying Panaya, an Israeli cloud-services firm that uses automation and Big Data analytics to test software. The Bangalore-based Infosys announced the news on 16 February.

A statement from the company said the acquisition would enhance its competitiveness in automation, innovation and artificial intelligence. Panaya’s CloudQuality suite will be used to support several Infosys service lines via an agile software-as-a-service model. Currently, 1,220 companies in 62 countries use the suite, according to Panaya.

“This will help amplify the potential of our people, freeing us from the drudgery of many repetitive tasks, so we may focus more on the important, strategic challenges faced by our clients,” said Vishal Sikka, CEO and Managing Director of Infosys. Panaya’s CEO Doron Gerstel said the deal will “position Infosys as the services leader in the enterprise application services market”. The transaction is expected to complete by 31 March 2015.
HCL TECHNOLOGIES TO REPLACE JUNIOR TESTERS: REPORT
A corporate vice-president at HCL Technologies, one of India’s largest software companies, has apparently indicated a remodelling of the HCL workforce, with software testers working at the “lowest layer of the pyramid” being replaced by automation.

C Vijay Kumar, corporate vice-president for infrastructure services delivery, told India’s Economic Times newspaper in January that HCL was shifting towards an “hourglass structure” with a “flattening of the pyramid” and a greater emphasis on engineers with “domain-work experience”. “The volume of work which is being done at the lowest layer of pyramid is getting automated,” he said.

Another HCL Technologies executive, who spoke to the paper anonymously, said that it was inevitable the company’s headcount would fall over the next three to five years. TEST contacted HCL to verify the paper’s interpretation of Vijay Kumar’s comments, but had received no response as of going to press.

HCL Technologies continues to expand internationally. It opened two new service delivery centres in Oslo, Norway, and Frisco, Texas, on 3 February. One reason the company chose Frisco was its proximity to Texas universities with highly regarded computer science faculties, and the corresponding concentration of skilled graduates in the area.
NEWS
GROUP ATTACKS UK GOVERNMENT FOR IT ‘WASTE’
A lobby group has accused the British government of wasting millions of pounds on failed software projects. The TaxPayers’ Alliance, which campaigns for lower rates of taxation, outlined roughly £100 million-worth of software-related losses among the published accounts of UK ministries for 2013/14.

The cancellation of projects was a major source of losses. The heaviest was incurred at the Ministry of Justice, whose decision to switch from an in-house to an outsourced Enterprise Resource Planning (ERP) model cost taxpayers £56 million. France’s Steria was awarded the outsourced contract. The “reappraisal” of another software system at the ministry cost a further £1.7 million.

The Highways Agency recorded a loss of £2.23 million due to the early termination of an IT contract. At the Ministry of Defence, issues discovered in testing meant that a software integration tool was cancelled at a cost of £1.52 million, as it could not be delivered in a “timely manner”. (The MoD did not provide further details.) The purchase of a SATCOM airtime service unsuitable for UK conditions represented a loss of almost £1 million.

The Department for Work and Pensions (DWP) cancelled a programme designed to allow benefits recipients access to claims and information online, leading to a loss of £27 million. The DWP’s IT reporting functionality project was also terminated early, at a cost of almost £1 million, while software licences purchased to support the development of a redundant system wasted a further £777,000.

The Department of Health’s aborted contract with Computer Sciences Corporation (CSC) to deliver electronic patient records was highlighted. The reduction of service charges, and thus revenue, when websites were decommissioned resulted in a loss of £4.7 million to the exchequer – a sum dwarfed by the losses to CSC itself in the cancellation of the giant contract.

British taxpayers are not the only ones suffering as a result of these aborted programmes. In a blow to the industry, last year Francis Maude, the UK’s Cabinet Office minister, pledged to end all IT contracts worth more than £100 million. The May 2015 general election offers little encouragement: Jon Cruddas, policy strategist for the opposition Labour party, said this month that a Labour government would not outsource to companies that lack a “social purpose”. His comments were criticised by the National Outsourcing Association.

AIRLINE BLAMES SOFTWARE PROVIDER FOR CHEAP FARES
A glitch on United Airlines’ website allowed passengers to buy a first-class ticket from London to New York for the equivalent of £50 ($76).

The exploit was discovered by a website called DansDeals. It instructed readers to visit the airline’s website but not to log in to their frequent-flyer accounts (doing so would produce an error message when the exploit was attempted). Clicking the flag in the top-right corner of the page allowed visitors to change their country of origin and billing to Denmark. Through this means, it was possible to find round-trip, first-class flights from the UK to New York priced at 497 Danish kroner, equivalent to £49. Flights from London to Honolulu, Hawaii – a 7,000-mile journey – were priced at the equivalent of £126. Flights had to originate from the UK for the trick to work.

Several thousand United customers took advantage of the fault, according to the airline. However, United voided the bookings. It blamed the glitch on a third-party software provider, saying the supplier had applied an incorrect currency exchange rate between the British and Danish currencies on United’s Denmark site. The airline itself had filed the fares correctly, it claimed. Despite this assertion, on 12 February the US Department of Transportation said that it was investigating whether United was legally obligated to honour the fares. It added that it was reviewing customer complaints about the voided bookings.
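United never disclosed the technical detail of its supplier’s fault, but an inverted conversion rate is one classic way such a mispricing arises. The sketch below is purely hypothetical: the fare figure, the exchange rate, and the assumption that the rate was applied upside-down are illustrative, not reported facts.

```python
# Hypothetical sketch of how an inverted exchange rate produces a 497 DKK fare.
# The rate and fare below are assumptions, not figures United published.
DKK_PER_GBP = 10.12  # approximate GBP->DKK rate in early 2015 (assumption)

def to_dkk_correct(fare_gbp: float) -> float:
    """Convert a GBP fare to DKK with the rate applied the right way round."""
    return fare_gbp * DKK_PER_GBP

def to_dkk_buggy(fare_gbp: float) -> float:
    """Hypothetical defect: the rate is applied inverted (divide, not multiply)."""
    return fare_gbp / DKK_PER_GBP

fare = 5031.0  # hypothetical first-class London-New York fare in GBP
print(round(to_dkk_correct(fare)))  # 50914 DKK, the intended price
print(round(to_dkk_buggy(fare)))    # 497 DKK, roughly £49 at the true rate
```

A boundary test comparing a converted price against the filed price in the home currency would catch a two-orders-of-magnitude discrepancy like this before it reached production.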
NEWS
UK RELEASES REPORT ON TESTING AUTOMATED VEHICLES
The UK government has published a consultation regarding the testing of automated vehicles. Roadworthiness regulations are decided at a European Union level, but the results nevertheless made interesting reading for software testers.

The “Pathway to Driverless Cars” review examined approaches being taken in North America, Europe and Asia, while gathering the views of UK stakeholders. It said there was a “general agreement” that vehicle manufacturers must accept liability for the software in their vehicles. One stakeholder noted that UK law does not provide for strict liability for software, unlike case law surrounding physical products, and recommended that software on automated vehicles attract the same liability as physical components.

Addressing concerns that the software could be hacked to cause an accident, the report suggested that a “state of the art” defence could protect software suppliers, if they proved that the hack could not have been anticipated. There was a suggestion that manufacturers should comply with the British Standards Institution (BSI) PAS 754 on software trustworthiness, governance and management.
The review raised the possibility that the person who owned the vehicle be legally obliged to ensure that its software is up to date. Failure to do so would void their insurance policy. Another suggestion was that until the software was updated, the self-driving functionality should be automatically disabled and only manual driving be allowed.

The issue of updates appeared to be a difficult one. The report noted that software would have to be updated constantly to keep pace with the ever-changing road network, so these updates would need to be more regular than the UK’s current annual roadworthiness test. Moreover, the software would also need frequent updates to protect against cyber-security risks. Given the onerous legal and professional pressure such an update cycle would place on software developers and testers, some stakeholders raised the prospect that the updates would need to be a paid-for service. They also raised the prospect of older vehicles losing software support and therefore the ability for automated driving.

TESLA ACCELERATES VIA SOFTWARE UPDATE
Tesla, a US manufacturer of electric vehicles, this month released a software update that appreciably improved the acceleration of its Model S P85D supercar. In a Tweet Elon Musk, Tesla’s high-profile CEO, said that the software update would improve the car’s 0-60mph acceleration by 0.1 of a second, from 3.2 to 3.1 seconds. This was achieved by an over-the-air update to the car’s inverter algorithm, the inverter being the device that changes direct current to alternating current.
US MANDATES CREATION OF INFORMATION SHARING PANELS
On 13 February US President Barack Obama signed a new Executive Order to tackle cyber-security risks. The order provides for the creation of Information Sharing and Analysis Organizations (ISAOs) to bring together the public and private sectors. These ISAOs will liaise with the US government’s National Cybersecurity and Communications Integration Center to share information related to cyber-security risks and incidents, on a confidential basis, and to discuss means of securing critical infrastructure.

In addition, the US will ask a non-governmental body to serve as the ISAO Standards Organization, which will lay down a common set of standards and guidelines against which the ISAOs will (voluntarily) operate.
The title of the Order, ‘Promoting Private Sector Cybersecurity Information Sharing’, may have sent the odd shudder through Silicon Valley, where executives are caught between ensuring confidentiality for their customers and the demands of the US security establishment. Evidence that they are not entirely comfortable with Obama’s agenda came at a ‘summit’ of technology leaders the president addressed on 13 February. Apple’s Tim Cook was the only chief executive from a major tech company to attend the event, with Facebook’s Mark Zuckerberg, Yahoo’s Marissa Mayer and Google’s Larry Page and Eric Schmidt all reportedly declining their invitations.

“We only have to think of real-life examples – an air traffic control system going down and disrupting flights, or blackouts that plunge cities into darkness – to imagine what a set of systematic cyber attacks might do,” Obama told delegates. “American companies are being targeted, their trade secrets stolen, intellectual property ripped off.”

The president likened the situation to the Wild West and spoke of a “cyber arms race”. He cited December’s cyber-attack on Sony Pictures, which he attributed to North Korea. Preventing such intrusions had to be a “shared mission”: “So much of our computer networks and critical infrastructure are in the private sector, which means government cannot do this alone. But the fact is that the private sector can’t do it alone either, because it’s government that often has the latest information on new threats.”

Obama urged Congress, now dominated by his Republican opponents, to take even tougher measures, such as forcing companies to inform customers within 30 days of a security breach. Whether this will materialise remains to be seen: Congress remains heavily polarised on many issues, although national security is less contentious than most.
EDITORIAL
JAMES BRAZIER, EDITOR, TEST MAGAZINE

WHY WE TALK
As an industry, software testing sometimes struggles with communication. Too many testers plough isolated furrows. They view their daily tasks through the prism of their company or industry, rather than from the wider perspective of quality assurance as a discipline. Opportunities to build bridges with other testers are scarce, and when they arise it is all too easy to let the pressure of a deadline, or even our own natural hesitance, stop us from taking them.
That is why we stage the National Software Testing Conference. Many senior QA professionals and heads of testing realise that their professional experience is now so extensive that really, the only way of gleaning new insights is by listening to their peers, and by talking to them. Getting a feel for the obstacles that arise in the real world, and speaking to those who found ways around them, builds the knowledge to overcome similar problems when they manifest themselves.

That knowledge also builds confidence. Some testers worry that they are simply a cost-centre – a necessary evil to keep ‘creatives’ on track and allow them to pick up the plaudits at the end of the day, to be automated out of existence by the next clever algorithm. Believing this can do much to limit the horizons of an industry that, in reality, has a gripping future. The technological white-knuckle ride on which we all now find ourselves means that none of us can afford to tread water. Improving skills and learning from one another is the key to thriving in a fast-changing sector.

Understanding the direction in which we are travelling together is an organising principle behind the National Software Testing Conference. None of us knows where we will be in five years’ time. Identifying the ways in which our industry is changing, and what core skills will
stay in demand regardless, is why bringing senior QA professionals together, under one roof, is so vital. It is in this crucible of discussion that you uncover new insights. It is the difference between knowing something and realising it.

These are the reasons that the 2015 Conference has attracted a Headline Sponsor of the calibre of Borland, which is supporting the meeting for the second consecutive year. It is why the speakers include the QA Directors and Heads of Testing at some of the UK’s, and indeed the world’s, most recognised and innovative companies. They understand that in order for us to thrive as an industry we need to come together to celebrate what we have achieved and to consider where we go next. Just as other groups of workers unite to forge strong professional bodies, so too must the world of software testing come together in order to understand our own discipline, the better to understand those whom we serve.

These are the reasons why the two days at the British Museum on 19-20 May promise to be so valuable. Of last year’s delegates, 86% described the presentations as either ‘good’ or ‘fantastic’. We hope to do better still this year, but the mission remains the same: to engage, to inform, to challenge received wisdom but to preserve those principles that hold true for all of us. We hope you will join us.
COVER STORY
CHRIS AMBLER, HEAD OF TESTING, CAPITA CUSTOMER MANAGEMENT

RIP Testing 2018
In 2013, Chris Ambler, Head of Testing at Capita Customer Management, predicted the demise of testing as we know it within five years. Two years on, how does this prediction tally with the trajectory of events? He speaks to TEST.
With a degree in Computer Science, and as a Fellow of the British Computer Society, Chris is a senior testing professional with over 30 years’ experience in the IT industry. He is passionate about product and process quality, although he understands that this comes at a price, and tries to adopt the concept of ‘good enough’ testing, to ensure value for money, and to quantify return on investment.

TEST: You said back in 2013 that you thought testers wouldn’t exist in five years’ time. Two years on, what’s your feeling now?

Chris Ambler: Some thought it was a brave statement, and others thought I was mad, but I stick by that prophecy. Five years is a long time in our evolutionary journey to create a consistently successful development and deployment process. Looking at the way the industry has developed over the years, and where things are going, I firmly believe that by 2018 we will have seen a sea-change in our roles as testers, and it would surprise me if the development discipline hasn’t changed massively also.

TEST: What do you mean by a consistently successful development and deployment process?

CA: Project failure, system outages and software glitches are constantly and consistently part of the modern business world, and are usually very public. Customers, stakeholders, users and shareholders can all be badly affected by these disruptions and failures, which result in lost revenue, and potential damage to company and individual reputations. The way we develop and configure our systems and applications, and the methods we use to deploy them, are absolutely key to their success. How we build our architectures and infrastructures, and define our data needs, also adds to the overall impact. It’s really important that we figure out why things go wrong on such a regular basis and, once we have figured it out, understand how we can stop it from happening again.
ABOUT CHRIS AMBLER Now Head of Testing for Capita Customer Management, Chris has previously led testing teams in a multitude of industries, locally, nationally and internationally. He has developed world-class QA testing facilities across Europe for both Electronic Arts and Microsoft Game Studios. He has defined the strategy and delivery methodologies for a number of high-value customers in defence, testing combat systems. He has undertaken test management and consultancy, working with banks, insurance companies, and the likes of Orange, Seeboard, Transport for London, Lombard, the Home Office, Sky Television, Sega and Maersk, to name a few. Chris is passionate about his craft, and has spoken at conferences all over the world. He introduced the first games testing track within one of Europe's main testing conferences. He was a judge at TESTA 2014, and writes a blog at www.chrisambler.net
TEST: So, why is it going wrong?
CA: To explain this, I need to refer to testing history. If I go back 35 years, to the start of my career, everything was done on 'big booming mainframes', in computer rooms the size of sports halls, running dumb terminals around the business. Links between what was then called 'data processing' (DP, today known as IT) and the business community were formal and documented, with business or system requirements (the 'what') being produced completely and signed off before being 'thrown over the wall' to be developed. Once developed, testers made sure that the requirements were met, before throwing it back over the wall for the users to deploy. At least the business ended up with what they asked for, even if it took a long time to deliver. Change was a formal and slow process. The tester's role in all this was to ensure the agreed requirements were met, and that there weren't any defects within the system.

Then client/server was born. This allowed the business and user community to be more involved in the development of systems and applications, controlling the client end more, and defining the 'how' part of the requirement.

Next, the advent of DIY tools, such as spreadsheets and database tools, allowed users to become creative around solutions – previously the domain of the DP community – which started to shift creative control to the users, although DP at least still had control over the infrastructure and networks.

But then the web came along, taking away those last strands of DP control, and turning the user community into IT experts, who stopped telling us why they wanted a particular application and what they wanted it to do, and started telling us how they wanted it developed. Instructions like, 'Build me something like this spreadsheet'
became commonplace. The problem was that development and testing didn't move with this evolution, and still tried to deliver in the old fashioned way.
TEST: But now methods have changed?

CA: Yes, they have. In an attempt to work with the user community, and allow them to be involved in the 'how' part of the process, development came up with agile methodologies. The theory was that you could develop applications a bit at a time, giving the user community and business a fast turnaround for their ideas. While this was all well and good, it's not easy to see the bigger picture and simultaneously work toward a recognised goal. And the bigger the organisation, the greater the need for a recognised goal. Where there are complex user experiences, poorly defined by the business, and badly designed architectures and infrastructures, some of the older methods are still needed. Don't get me wrong: used properly, the agile approach has its place in the development lifecycle, but on larger, more corporate applications and systems, it can be left wanting. The application of agile process to larger projects where a traditional approach would have been more suitable, coupled with the incorrect notion that agile means less planning and less documentation, has meant that some projects have been incorrectly defined, badly planned, and poorly executed, leading to the need for interpretation and assumptions, and creating the failures we have seen too often. From a testing point of view, the failure at a corporate level to see the bigger picture and work toward an end goal makes it difficult to prove that success is being achieved. Testers try hard to adapt, but it's nearly impossible. It's easy to see how testers can be blamed for failure.
TEST: So, going back to the idea of RIP testing, what does this mean to testers?

CA: Currently, testing focuses on development. It's used to try and prove that development assumptions and interpretations are accurate, against badly defined user requirements and infrastructures. Attempting to be a bridge between business stakeholders and the creativity of developers, we try to improve development techniques and understanding of the business, while trying to build better, more effective and efficient testing methods to improve accuracy.

When a project goes wrong, delivery is blamed, whereas all too often failure is in fact due to unclear instructions from stakeholders, who often aren't even sure themselves what they want.

Better requirements lead to better applications, so let's address the cause, not the symptom. The root cause of our problems is inaccurate, incomplete requirements and specifications, both functionally and technically.

To fix the cause, we need to work with stakeholders to specify their needs in a way that makes sense to the delivery process, and thus obviate the need for assumption, interpretation and unnecessary creativity from developers. Get it right first time and get rid of rework, saving time and money. Companies' requirements are currently defined by business analysts, but not necessarily with the delivery process in mind. This is where testers can fit in: instead of trying to deal with partial, ambiguous, and sometimes plain wrong requirements, let's work on getting them right in the first place, to start the delivery process on a sound footing, from both a functional and an architectural perspective.

TEST: This is all well and good, but what do testers need to do?

CA: For many years, we have said that testing needs to be involved earlier in the lifecycle, but never really understood or defined what that meant. Testers will become 'business transformists', replacing traditional business analysts, and coaching users in defining their needs properly – with the effort going into getting it right, rather than putting it right. To take a simple example: you ask a user what they want and they say 'a shape'. You go away and create a circle. Show it to the user, who says, 'I wanted a shape with corners', so you go away and produce a square. Take it back to the user, who says, 'That's the right shape, but I wanted a 3D shape, and this is 2D.' Take it away again, and come back with a cube. The user says, 'Right shape, but it's blue and I wanted yellow', so you take it away and produce a yellow cube. This sums up the agile process. Wouldn't it have been easier to ask the user what they wanted in a way that made them respond, 'I want a yellow cube'? Imagine the time saved.

Speed and cost are the key drivers in this age, so it's imperative that we speed up this process, allowing testers to 'drive' requirements from start to finish, defining them, making sure they are correct and complete, and delivering what the customer wants.

TEST: So, it's not really RIP testing then?
CA: No, not really, but our role will change massively. Someone in a very senior position once told me that testing will never be seen as a profit maker while it's called 'test', which, in some circles, is seen as a four-letter word. Business transformist is a positive title, with connotations of building and growth, rather than of breaking and delay. We need to concentrate on developing our analytical skills, breaking away from the notion that we're attached to development, and showing that we are dedicated to delivering what the business wants. Our role will become more important and pivotal to the process. We have a bright future!
CAREER PATHING SHANE KELLY HEAD OF QA AND TEST WILLIAM HILL
DEFINING THE SDET ROLE As we moved ever more toward agile, and attempted to 'Automate All The Things' (Rackspace.com), the skills that testers needed began to change. Traditional test career titles, such as Test Analyst, QA Specialist, Technical Tester, Performance Tester, and Automation Tester, no longer seemed to fit what was required of someone working within an agile environment, where role boundaries are blurred, but where software still needed to be adequately tested.
In an attempt to define what's required, and to simplify understanding of the tester's role, we at William Hill investigated how to better align test functions with engineering principles and career aspirations. Following research undertaken by Google and Microsoft, we looked at how the Test Team within our organisation could be changed to better incorporate the new test paradigm.
CHANGING NAMES

The titles we came up with were Software Test Engineer (STE) and Software Development Engineer in Test (SDET). Where the STE focused on testing best practice and test coordination, the SDET focused more on automation, performance, and non-functional testing.
While we spent some time debating what the name of the role should be, one thing we agreed upon was the word ‘engineer’. By definition, testing is an engineering discipline, and should be considered as such. People who test need a good understanding of the software under test, but also a detailed technical knowledge of the systems upon which the software sits. Another key factor was to ensure that those testers creating the automation framework were both seen as developers, and possessed that tester gene of ‘we love to break stuff’.
As with the roles they replaced, the two continued to work hand-in-hand as a single test function, defining the test approach, coverage and execution. But now, we were able to build toward a technically defined test framework, which incorporated automation as standard, rather than something considered best endeavours but never actually achieved.

DEFINITIONS

The first challenge was to define these roles, both for use within the organisation and for explaining to the market what type of person we wanted to recruit. The test career path needed to be clearly defined for both STE and SDET roles, and it was important that they should match the other disciplines in a technology career path, such as Development.

We also wanted to ensure that people could see how they might progress within the Test Team, if they had the correct skillset (see Figure 1). Defining this career path along engineering roles showed how the company was willing to invest in testing, and that, by advancing the team, we would ultimately provide a better experience for our customers.

We agreed that the inherent ability to 'just break stuff' was, as ever, very important, but that in addition the SDET would now have to develop test automation harnesses and frameworks for test execution within a Java development environment, and so would need good programming skills. They would also need to use or develop specialist test tools for performance, load and security testing, and still participate in test case and bug reviews. As a key technologist within the agile team, they would be providing input to design and code reviews, too.

AGILE

By this point, the majority of teams had really started to embrace agile as their delivery framework. This was made easier with great support and drive from senior management. As a consequence of embracing agile, the requirement for early automation was now even greater, and so the importance of the SDET role was commensurately greater, too.

The SDETs meanwhile, under the influence of continuous integration, pushed for early automation, and the concept of 'Green Dot Addiction' (the idea that, in a test environment, automation scripts should always execute without fail; Jenkins uses green dots to signify that scripts have passed correctly, and we try to ensure that the whole team signs up to keeping these 'dots' green after each deployment of new code).

Having shown the importance of left-shift automation, the SDETs then started to work more closely with the STEs and analysts, agreeing on a behaviour-driven development (BDD) approach and beginning to move toward test-led engineering, where we were thinking of (and even automating) the test cases before coding had even started. By utilising a Given-When-Then (GWT) approach, we were able to work with the analyst to define the acceptance criteria within a tool such as Cucumber, which allowed us to build test cases quickly, utilising the framework the SDETs had been building. We also developed in-house Java training for the STEs, so that they could better understand this approach, and help shape future improvements to the framework and requirements. While we continue to look for both new and experienced SDETs, we still rely on the skills of good STEs, ensuring we have an all-round balanced test team. STEs can build their skills in programming, and some have moved along the career path to the role of SDET. All in all, this has led to a very productive team, which has the ability to drive advances in automation and to build a really good quality test automation framework.

The SDET faced the challenge of not only meeting the requirements of the current sprint, and ensuring that the user story was correctly automated, but also of ensuring they were building toward a maintainable toolset that would benefit other teams within the company further down the line. By ensuring we kept the concept of a test function (or 'guild', as Spotify has called it), we could continue to work together as a group of SDETs, defining best practice, and commit fully to the cell and channel ('squad' and 'tribe', as per Spotify).

The newly implemented role of SDET has advanced the Test Team, and helped ensure we're able to meet the complex test requirements of true agile delivery. We are testing more products, more quickly, and have removed most of the monotonous, repetitive, soul-sapping work that the STEs used to have to do. They can now focus on those hard-to-reach edge cases and off-the-cuff tests, which ultimately lead to a better product, providing a better experience for our customers.
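The Given-When-Then structure mentioned above maps directly onto executable acceptance tests. The sketch below is illustrative only: the `BetSlip` class and its settlement rule are invented for the example (William Hill's real framework and APIs are not public), and a real team would bind plain-English Gherkin steps to step definitions through Cucumber rather than hand-roll the structure in a `main` method.

```java
public class GwtSketch {
    // Hypothetical domain object, invented purely for illustration.
    static class BetSlip {
        final double stake, decimalOdds;
        BetSlip(double stake, double decimalOdds) {
            this.stake = stake;
            this.decimalOdds = decimalOdds;
        }
        // Winner's returns at decimal odds = stake * odds.
        double settleAsWinner() { return stake * decimalOdds; }
    }

    public static void main(String[] args) {
        // Given a bet slip with a 10.00 stake at decimal odds of 3.0
        BetSlip slip = new BetSlip(10.0, 3.0);
        // When the bet is settled as a winner
        double returns = slip.settleAsWinner();
        // Then the returns are 30.00
        if (returns != 30.0) throw new AssertionError("expected 30.0, got " + returns);
        System.out.println("Then: returns = " + returns);
    }
}
```

The point of the GWT shape is that the analyst writes the three comment lines, and the SDET-built framework supplies the code underneath them, so the acceptance criteria and the automated check are the same artefact.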
FUTURE TECH MATHEW HALL RESEARCH ASSOCIATE IN THE DEPARTMENT OF COMPUTER SCIENCE UNIVERSITY OF SHEFFIELD
On its maiden flight in 1996, after a decade of development costing $7 billion, the European Space Agency's unmanned Ariane 5 rocket exploded just 40 seconds after lifting off from Kourou, French Guiana.
An investigation traced the failure to a software component reused from Ariane 4: a 64-bit floating point value was converted to a 16-bit signed integer, and the conversion overflowed because it had never been tested against Ariane 5's different flight path. Simulations demonstrated the failure, but too late to save the rocket's $500 million cargo.
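That class of fault is easy to demonstrate. The snippet below is a Java illustration of the same failure mode — an unchecked narrowing conversion — and not the actual Ariane code, which was written in Ada and raised an unhandled Operand Error rather than wrapping silently:

```java
public class NarrowingOverflow {
    // Convert a 64-bit double (think: a velocity reading) into a
    // 16-bit signed integer, whose range is -32768..32767.
    static short toSigned16(double value) {
        return (short) value; // silently keeps only the low 16 bits
    }

    public static void main(String[] args) {
        System.out.println(toSigned16(100.0));   // fits the 16-bit range: 100
        System.out.println(toSigned16(64000.0)); // overflows: wraps to -1536
    }
}
```

A value that fitted comfortably within Ariane 4's flight envelope overflowed on Ariane 5's steeper trajectory, which is exactly why the untested reuse was fatal.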
AUTOMATIC SOFTWARE TESTING: RISE OF THE MACHINES? Mathew Hall, Research Associate in the Department of Computer Science at the University of Sheffield, discusses why computers should test software, how to prioritise what they search for, and how automatic search-based techniques can complement human testers.
Testing isn’t easy. In his book Normal Accidents, Charles Perrow analysed the failures of complex safety-critical systems. He demonstrated that failure is endemic to complex systems, often due to unpredictable interactions between management, operators, and the system itself. ‘Normal accidents’ continue to plague technological endeavours. The Fukushima Daiichi nuclear disaster in 2011 was arguably a similar ‘normal’ accident: a failure triggered by a cascade of faults.
COMPLEX INTERACTIONS The show must go on. Most of our actions aren’t risk free, after all, and testing reduces risk, ideally by identifying the defects when they’re least troublesome – before the rocket is launched – and building confidence that the product will behave as it should. Failures are often found in complex interactions between various components. One of software testing’s key aims is to provide assurance that these interactions won’t cause failures when the system is deployed. As systems become more complex, guessing which interactions might fail (and therefore need to be tested) becomes more difficult.
While the amount of testing that can potentially go into managing the risk is open-ended, the budget to pay for it isn't. Testing must be delicately prioritised to eliminate as much risk as possible within budget. This might mean forgoing, or reducing the depth of, tests for less critical components, to ensure that the most significant failures are protected against and that common functionality is available.
CREATIVITY VS METHODOLOGY Applying human creativity to the search for bugs can be very effective. Given sufficient time, humans have a good chance of finding all the failure modes in a piece of software. Unfortunately, relying on exploratory testing alone is too risky. There are tasks for which humanity is not well suited. One crucial testing activity is the repetition of processes over multiple configurations and scenarios. Unlike people, computers do not get tired or bored; hence the increase in automation for software testing and QA, part of the ‘shift-left’ approach to match available resources to the complexity of the issues needing resolution.
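That kind of repetition over configurations is exactly what data-driven or parameterised tests mechanise. The sketch below is illustrative only — the `supports` check and the configuration lists are invented for the example, and in practice a parameterised test runner (JUnit's, for instance) would do the bookkeeping:

```java
public class ConfigSweep {
    // Hypothetical check, for illustration: pretend the system under
    // test must handle every supported locale/currency pairing.
    static boolean supports(String locale, String currency) {
        return locale.contains("-") && currency.length() == 3;
    }

    public static void main(String[] args) {
        String[] locales = {"en-GB", "de-DE", "fr-FR", "ja-JP"};
        String[] currencies = {"GBP", "EUR", "USD", "JPY"};
        int checked = 0, failures = 0;
        // A machine repeats the same scenario over every combination,
        // without getting tired or bored.
        for (String loc : locales) {
            for (String cur : currencies) {
                checked++;
                if (!supports(loc, cur)) failures++;
            }
        }
        System.out.println(checked + " combinations checked, " + failures + " failures");
    }
}
```

Adding a platform to the sweep is one line; asking a human to re-run every combination by hand after each build is where boredom, and missed defects, creep in.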
Automated tests can be used to deliver 'quick wins' by reducing cycle times and freeing up testers to spend more time on exploratory testing. Automatic (as opposed to automated) software testing has long been a dream of industrial practitioners and researchers alike, with the promise of further freedom for creative QA activities and improved assurance. Unfortunately, the dream has yet to be realised, although researchers are getting closer. The main barrier to the adoption of test automation tools is the need for a human to tell a computer what to do. Scripting is a technical skill that many testers don't need, so it's often unreasonable to ask them to don a developer's hat and write some code (code they'll undoubtedly have to maintain). Automatic testing removes the need to explain how the computer should carry out a task, instead focusing on what it should try to achieve.

The creative application of knowledge, and the ability to ask questions such as, 'What should the program do?', rather than simply, 'What does this program do?', are the things that are hard to automate. This is where a complementary relationship between human and computer is needed for software testing. Computers can be used to answer queries of the latter type easily, but only QA specialists are able to answer both questions. Computers can nevertheless add value to QA by tirelessly characterising the program in an attempt to improve 'coverage' of the software's behaviour. In the academic software-testing field, work is often evaluated in terms of its ability to maximise coverage. When we talk about coverage, we often use it to mean specific things, such as code coverage for test adequacy. More broadly, however, coverage represents our level of confidence that the system is unlikely to exhibit faulty behaviour in the real world, because our test suite has (or should have) already allowed us to observe it.

With an appropriate method of measuring the coverage of a test suite, we can build a feedback loop in which an algorithm constructs some tests (checks), then tweaks them in response to the coverage they attain. Fuzzing works on this principle. A 'farm' of machines automatically generates random inputs and looks for crashes. The 'American Fuzzy Lop' evolutionary fuzzer has been used successfully to find bugs in over 50 open source projects, and it needs no human input other than setup. This loop is fundamental to many automatic systematic testing tools, such as afl-fuzz, EvoSuite, KLEE, and Smart Unit Tests (aka Code Digger/Pex). EvoSuite and Smart Unit Tests aim to automatically generate a unit test suite for code in Java and .NET languages respectively, making it easier for developers to scrutinise their software with near-instantaneous results, allowing them to spot anomalous behaviour or regressions before code even reaches the QA team.

FAULT FINDING

Most tools work out of the box with a simple goal: discover inputs that reach as many different parts of the program as possible. Results from afl-fuzz show how effective this process can be. With further guidance, computers can produce test cases that are hard for humans to foresee, such as scenarios that trigger crashes in an automated car parking system.

These evolutionary techniques in testing represent only a small portion of the advances being made in the search-based software engineering field. A growing community of researchers has discovered new applications for search algorithms to solve industrial software problems. Algorithms have been used successfully to design antennas for spacecraft. Researchers have developed techniques to automatically optimise software, and even to write programs. Unlike humans, it is feasible to ask machines to spend hours trying different inputs to reach different parts of the code. Combined with code analysis, for a small investment in computing time, testing tools can automatically find inputs that exercise the hard-to-reach corner cases in the program. Does all this mean that manual QA, or software engineering, will be replaced by robots? Thankfully not, but we can look forward to a time when computers help us build better software and save us time doing it.
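The construct-measure-tweak loop that drives coverage-guided fuzzers such as afl-fuzz can be shrunk to a toy. The sketch below is nothing like a real fuzzer's implementation — the target program, its branch ids and the one-byte mutation strategy are all invented for illustration — but it shows the principle: mutate inputs at random, and keep any input that reaches a branch not seen before.

```java
import java.util.*;

public class MiniFuzzer {
    // Toy program under test, invented for illustration: one common
    // branch, one rare branch, and a simulated crash behind the rare one.
    static int runTarget(byte[] in) {
        if (in[0] == 'F') {
            if (in[1] == 'U') throw new IllegalStateException("simulated crash");
            return 2; // rare branch
        }
        return 1; // common branch
    }

    // Coverage-guided loop: mutate corpus members at random and keep any
    // input that reaches a branch not seen before.
    static Set<Integer> fuzz(int iterations, long seed) {
        Random rnd = new Random(seed);
        Set<Integer> covered = new HashSet<>();
        List<byte[]> corpus = new ArrayList<>();
        corpus.add(new byte[] {0, 0}); // seed input
        int crashes = 0;
        for (int i = 0; i < iterations; i++) {
            // Pick a parent, then flip one byte to a random value.
            byte[] child = corpus.get(rnd.nextInt(corpus.size())).clone();
            child[rnd.nextInt(child.length)] = (byte) rnd.nextInt(256);
            try {
                if (covered.add(runTarget(child))) corpus.add(child);
            } catch (IllegalStateException crash) {
                crashes++; // no human had to guess the magic input
            }
        }
        System.out.println("branches covered: " + covered + ", crashes: " + crashes);
        return covered;
    }

    public static void main(String[] args) {
        fuzz(200_000, 1);
    }
}
```

The key design choice is the feedback: a purely random fuzzer almost never composes the two-byte 'magic' prefix, but keeping the input that reached the rare branch turns an improbable joint event into two cheap single-byte steps, which is the evolutionary insight behind afl-fuzz.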
FUTURE TECH BABUJI ABRAHAM SENIOR VICE PRESIDENT AND CHIEF TECHNOLOGY OFFICER ITC INFOTECH
THE DEVELOPING WORLD OF SOFTWARE TESTING
We have come a long way in a relatively short time, so what does the future hold for software testing? As the testing environment becomes more complex Babuji Abraham, Senior Vice President and Chief Technology Officer at ITC InfoTech, assesses how best to tackle this changing situation.
Every year, software testers face the challenges of radical innovation and new complexity. The pace of change is constantly accelerating. Just think of the world eight years ago. No smartphones. No tablets. No apps. Now imagine what the world will look like eight years from now. It's easy to be overwhelmed by the possibilities.
IT TRENDS AND THEIR IMPLICATIONS

As testing as an activity within the software development life cycle reaches another level of maturity, it is becoming more service-driven and at the same time more collaborative. Continuous testing in an agile framework will continue to be a popular approach, as companies today are more focused on reducing lead times. Other headline factors in the progress of testing reflect the general trend for overall IT modernisation, and include process optimisation and further automation.

Cloud computing is certainly still a major trend. By the end of this year, 26% of software applications will be hosted in the cloud. While cloud-based test environments provide a whole new level of convenience and cost-effectiveness compared to traditional models, there are still concerns about data security and performance in cloud environments. The move towards web-scale IT, starting with DevOps, has provided challenges for the tester, not least because it is the user experience that is key to these new systems, so the interface has to be right, and it has to be right across an ever-wider range of platforms – fixed and mobile – and operating systems. Software-defined networking, storage, data centres and security are maturing. Cloud service software is configurable thanks to rich application programming interfaces (APIs). Computing is moving away from static models to deal with the changing demands of digital business.
CONTEXT-DRIVEN TESTING

One-size-fits-all is no longer an effective approach for testing. The most successful testers in the future will be the ones that can bring the most skills to the table for any given context or business situation. With an explosion in ever more complex and ambitious software, testing requirements are becoming more complex too. Software development companies often find that they must look outside their own organisations for partners to help them test their products in a cost-effective way. Outsourcing the testing, or parts of the process, to contractors, consultants, Testing Centres of Excellence (TCoEs) or external organisations which specialise in specific areas of testing is among the more successful options used by these organisations. Often, the outsourced testers need specific domain knowledge and must be schooled to a certain degree in the client's business. They also need access to the right equipment, the right tools and the right infrastructure to get the job done properly, on budget and within the timescales specified.
GOING SOCIAL

There seem to be few areas of our personal and business lives that are not touched by ubiquitous social media these days. And social media are also fundamentally changing the testing world. Software is now integrated into virtually everything, and with the growth of SoLoMo (social, local, mobile) the focus has increasingly shifted to security and reliability, making the role of the tester ever more important. This new paradigm requires a special set of technical skills on the part of testers, and with sensitive and critical data being generated and used by SoLoMo technologies, apps have to be reliable, scalable, private and secure across an ever-broader range of platforms, operating systems, browsers and locations. Business solutions are being tested with real-world data, using multiple apps and high-end mobile platforms, from the cloud. The testing framework needs to move towards an integrated solution that comprises device, security and business solution testing deployed from the cloud, with a judicious mix of functional, automated, performance, security and other 'ility' testing disciplines.
THE POWER OF THE TECH TREND

Of course, software testing itself is not immune to the power of these tech trends. Crowd-sourcing is coming to prominence in a number of areas. With crowd-sourcing, testing is carried out by groups of testers, often in different locations, potentially scattered across the globe, rather than by hired professionals. The software is tested by the 'crowd' on a diverse range of realistic platforms. This makes the process quicker, more reliable and more cost-effective, and leads to better, less buggy software, but it does add a whole new layer of complexity. In these cases, it is vital to have a management system that provides a common platform, bringing together the test process, the artefacts used, and work items from groups of testers working in different geographical locations and time zones. The scope and range of new fronts opening up every day in the fight for software quality bears testimony to the enormous pace of change in software development. Of course, the key is that every new process, approach and method has to add explicit value at every stage of the Software Development Lifecycle.
TESTERS AS CUSTOMERS JIM WOODS DIRECTOR OF DEVELOPMENT SERVICES SEGA WEST
THE NEVER-ENDING STORY Jim Woods, Director of Development Services at Sega West, explains why testing within the games sector is especially challenging and has to be especially rigorous, and underlines the role that both testers and customers play after release.
With games, probably more than with most other software applications, the end-user experience is ultimately what the customer pays for. Rather than the experience being a means to an end, as when using a holiday booking site, it is an end in itself: it's the end-user experience that makes the game. A tester in the gaming sector must therefore not only verify that the software performs as intended, and undertake the enormous task of destructive testing to see how robust the code is, but also play a much more significant role: ensuring that the customers' experience is actually enjoyable. This is where the 'quality' part of QA really applies. Those responsible for this part of the process – normally the most experienced on the testing team – are required to use not just their knowledge of the product and franchise, but also their understanding of current and emerging gaming trends, to provide constructive feedback about players' experiences.
unfinished project, usually at no initial cost. In return, they are encouraged to provide feedback to the developer, both with respect to the bugs they fund, also also about the whole enduser experience, the theory being that the people interested in doing this are likely to be those most interested in that particular game, and therefore its key target audience. They will spend more if the game is tailored to their requirements, and so will everyone else. This is the game industry applying a crowdsourcing model to many aspects of game design and user experiences.
LOCALISATION Localisation can have a huge impact on the quality of the end-user experience. Once again, a certain allowance should be made, depending on the cost of the product. The quality of localisation can vary tremendously, dependent on the complexity of the project and the ability of the translator. Cost was the overriding factor in the early days of budget mobile games, and even web-based games. As a result, some game developers tried to find cheap ways to translate their products. As translation software really isn’t suited to this kind of use, this resulted in some real horror stories.
And it doesn't end there; the customer has a vital role to play in this process, too.
Even within translation agencies, the quality of work can vary hugely. To achieve the best end-user experience – to capture the nuance, humour, or horror of particular situations in the way that was intended when it was written in its native language – requires a team with a deep understanding and passion for a particular genre or game.
PRICE DIFFERENTIALS
If we look at the customer experience from a price perspective, it's easy to understand that anyone who walks into a shop and pays upwards of £50 for a game has the right to expect a high-quality experience, whereas those who pay under £10 should have a lower expectation. This is reasonable: the lower-priced game may be far less polished, less complex, or simply a shorter experience than the more expensive one.

Normally, price is related to the volume of content rather than its quality, and customers will usually expect to pay for additional content. This is where the biggest difference occurs, as it allows the developer or publisher to engage with customers on an ongoing basis post-launch, in order to continue to meet their needs.

This dialogue provides an opportunity for customers to have a direct impact on the direction of the game's ongoing development. It also means that there needs to be a commitment to maintain, and even improve, the quality of the end-user experience, as this is what will determine how successful the game is in terms of customer investment.

INTERNATIONALISATION
In addition to localisation is the issue of internationalisation. This is where fundamental changes are required, over and above localisation, in order for a game to provide an acceptable experience for the end-user in a particular territory. This can cover anything from unique spellings (eg tyre/tire) and words (eg rubbish/trash) to visuals and gameplay-affecting content, such as, to take a very simple example, driving on the left or the right.

Full internationalisation of a game can have a far-reaching impact on the way it is played, which can also affect the game balance. This makes it an incredibly expensive process, meaning that it doesn't happen very often.
BETA RELEASES
The increased use of 'closed beta' and 'open beta' releases takes this whole process a step further. In each case, the customer is knowingly agreeing to take on what is effectively an unfinished product.
THE ROLE OF THE TESTER
All of this means that the role of the tester suddenly becomes far more open-ended, and is likely to continue as long as there is a demand for more content for the game. The other effect of this process, certainly when it comes to the end-user experience, is to turn customers into testers. So we start to see a blurring between the role of the tester and feedback from customers.

Of course, customers don't normally have the same discipline as professional testers to document accurately the steps to reproduce complex issues, but more and more developers are building agents into their games that are capable of recognising unexpected outcomes and reporting these back to a server automatically.
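One pattern for such an in-game agent can be sketched as follows (a hypothetical illustration, not any studio's actual implementation): wrap game callbacks so that unexpected exceptions are captured with a stack trace and queued for automatic reporting, instead of crashing the player's session.

```python
import json
import traceback

class TelemetryAgent:
    """Collects unexpected outcomes for automatic reporting (hypothetical sketch)."""

    def __init__(self):
        self.pending_reports = []

    def guard(self, func):
        """Wrap a game callback so unexpected exceptions become bug reports."""
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception as exc:
                self.pending_reports.append({
                    "callback": func.__name__,
                    "error": repr(exc),
                    "trace": traceback.format_exc(),
                })
                return None  # degrade gracefully instead of crashing
        return wrapper

    def flush(self):
        """In a real client this would POST the payload to a collection server."""
        payload = json.dumps(self.pending_reports)
        self.pending_reports = []
        return payload

agent = TelemetryAgent()

@agent.guard
def open_inventory(slot):
    if slot < 0:
        raise ValueError("bad slot")
    return f"slot-{slot}"
```

Here a bad call to `open_inventory` returns `None` to the game loop while the agent quietly queues a structured report, which is exactly the kind of customer-side signal the article describes.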
FEBRUARY 2015 | www.testmagazine.co.uk
THE CLOUD
RAJESH SUNDARARAJAN, PRACTICE HEAD, TESTING SERVICES, MARLABS SOFTWARE
THE BURNING QUESTION
The burning question of the day, says Rajesh Sundararajan, Practice Head of Testing Services at Marlabs Software, is how you can increase the effectiveness of your software testing by leveraging cloud infrastructure.
As a software testing manager or engineer, you face the constant challenge of managing the key dimensions of quality, cost and testing cycles. For software development and testing organisations, improved quality, lower cost and reduced cycle times are all areas for continuous improvement.
Popular commercial tools are available on the cloud from vendors including HP, IBM, Neotys, and Micro Focus. There are also various performance-testing tools and frameworks which can provision load generators and execute distributed performance testing over the cloud.
While teams adopt myriad strategies to achieve these goals, the burning question of today is how you can increase the effectiveness of your software testing by leveraging cloud infrastructure.
WHY YOU SHOULD MIGRATE TO THE CLOUD
The question of why you should migrate testing to the cloud is answered by the three key business drivers: quality, cost, and time. Before moving part or all of the test environment to the cloud, a reduced total cost of ownership and a positive return on investment are prerequisites. The worldwide cloud testing and automated software quality (ASQ) software-as-a-service (SaaS) market has grown 37.7% since 2011, and International Data Corporation projects an annual growth rate of over 30% for the period 2013-17, giving projected 2017 revenue of $1 billion from this service alone.

TECHNICAL CONSIDERATIONS
Technical considerations include application constraints, hardware dependencies, and types of testing. At this stage, you need to evaluate the various cloud models, the main ones being public, private, and hybrid. The choice of model will be based on the testing context, business drivers, security and compliance requirements, and existing infrastructure.

MIGRATING TEST RESOURCES TO THE CLOUD
Organisations commonly start off with some portion of their test infrastructure on the cloud. However, those serious about the cloud need to take a systematic approach to moving their test environments to it. This is in some ways similar to migrating the production environment, but because of lower business risks, relatively simpler configuration, and lower utilisation, testing/QA is often seen as one of the stronger use-cases for the cloud. This means that QA environment migration can serve as a pilot before migrating live production environments.
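As a rough sanity check of the IDC figures quoted above (my arithmetic, not IDC's): compounding 30% annual growth backwards from the projected $1 billion in 2017 implies a 2013 base of roughly $350 million.

```python
# Back out the implied 2013 market size from the projection quoted above:
# 30% annual growth over 2013-17 (four compounding periods) to $1bn in 2017.
growth = 1.30
revenue_2017 = 1_000_000_000
implied_2013 = revenue_2017 / growth ** 4
print(round(implied_2013 / 1e6))  # → 350 (million dollars)
```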
MOBILE TESTING
With its diversity of devices, networks and geographies, mobile testing provides a compelling case for using the cloud. Due to device fragmentation and the constant upgrade to new models, it is very difficult to maintain the requisite mobile devices for testing, so using the cloud becomes a beneficial proposition for comprehensive mobile testing. Apart from early movers, such as Keynote DeviceAnywhere and Perfecto Mobile, others, like Experitest and Xamarin, have started providing cloud-based mobile testing solutions.
Once the decision and design are complete, the actual migration to the cloud needs to be executed as a project, with a comprehensive plan and schedule, and resource and risk management built in.
Along similar lines, there are tools which provide cloud-based desktop OS and browser platforms for testing – Browsera and BrowserStack, to name a couple.
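Cloud grids of this kind are typically driven from a matrix of browser/platform combinations requested by the test suite. A minimal sketch of building such a matrix (the names are illustrative, not any vendor's actual API):

```python
from itertools import product

def capability_matrix(browsers, platforms):
    """Enumerate browser/OS combinations to request from a cloud grid."""
    return [{"browser": b, "platform": p} for b, p in product(browsers, platforms)]

# An illustrative cross-browser run: three browsers across two desktop platforms.
matrix = capability_matrix(
    ["chrome", "firefox", "safari"],
    ["Windows 8.1", "OS X 10.10"],
)
```

Each entry in the matrix would then be handed to the grid as the capabilities for one remote test session, which is how a small suite fans out across many environments without owning any of them.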
THE TEST ENVIRONMENT
Although it doesn't change basic test methodologies, use of the cloud alters the landscape of test infrastructure resources, enabling better test coverage at lower cost, in a shorter timespan. The resources used in testing environments can be broadly grouped into:

• infrastructure (servers, network, storage) hosting the system under test, whether a scaled-down test environment or a production-scale performance testing environment;
• infrastructure on which the testing tools are deployed;
• software (tools) used for testing;
• other resources, such as the mobile devices and desktop browsers used for mobile and cross-browser testing.

One, several or indeed all of these resources can be provisioned on, or accessed from, the cloud. Hardware server resources can be quickly and economically provisioned from providers such as Amazon EC2, Microsoft Azure, and VMware. Where large environments are used for short periods, as in the case of performance testing, the benefits can quickly be seen.
TRADITIONAL ENVIRONMENTS VS THE CLOUD
Test organisations have strong business drivers for seeking alternatives to traditional environments. For instance:

• The utilisation of test environments on dedicated on-premise infrastructure is usually low.
• A number of environments (and servers) are required for different test types.
• Setting up new environments entails long lead times (weeks or even months), which impact the software release schedule.
• Errors occur due to the incorrect configuration of environments.
Most commercial software tools now have a cloud version, which offers pay-per-use pricing and is delivered across the internet; SOASTA was among the pioneers here.
RISKS AND KEY CONSIDERATIONS
Despite its advantages, the cloud comes with its share of risks. These need to be evaluated before migrating your test environment:

• Loss of control over data. Organisations face a risk in handing control of their QA assets to an external entity. With the cloud, data and resources aren't in your physical control: responsibility for the data and its safety rests with someone outside your organisation.

• Security, data compliance, and performance guarantees. While cloud vendors define service-level agreements (SLAs), most of their focus is on availability, with buyers receiving service credits for the provider's deviations from availability SLAs. It is necessary to read the fine print, clarify how availability SLAs are actually defined, and establish how the other service parameters critical to the business – security, data compliance, and performance – are guaranteed.

• Multi-tenancy. The cloud is a shared resource, and there are risks, both known and unknown, of data leakage to other users sharing CPU, memory, servers, and so on.

• Virtualisation. While vulnerabilities in the underlying physical resources persist, the virtualised environment adds a further layer of risk.

• Cyber attack. Data stored on the internet, especially in large volumes on the cloud, is prone to cyber attack. Having said that, most cloud providers have stringent security measures in place.

• Costs. Users need to understand that, apart from hardware, they will also be paying for network usage, storage and data transfer, and plan accordingly.
BENEFITS OF MIGRATING YOUR TEST RESOURCES TO THE CLOUD
Depending on the testing context and constraints, use of the cloud brings various possibilities. Cloud-based infrastructure enables you to achieve greater test coverage at lower cost, and to test across more devices and browsers. Lower costs meanwhile encourage teams to carry out fully fledged performance testing, stimulate real-time testing from multiple geographic locations, and enable the accurate and repeated setup of complex integrated environments.

A dedicated physical infrastructure takes a long time to procure and set up, while the cloud offers quick provisioning of the required environment – a set configuration that can be recorded and applied at short notice. It is also very easy to scale up the environment and provision additional resources whenever required.

There is value for money with the cloud. You pay only for the resources you consume, and the large upfront capex costs, such as servers, other hardware and testing tools, are replaced by opex, spread out over time. Being intermittent, the testing lifecycle is a good candidate for the cloud: at the end of testing, cloud-based resources can be stopped, and restarted when testing resumes, to avoid accumulating costs.
MIGRATION TO THE CLOUD:
• Converts capital expenditure (capex) to operational expenditure (opex), thereby spreading your costs.
• Means you pay only for what you use.
• Provisions resources quickly, on demand.
• Gives you the ability to test against multiple configurations.
• Is especially useful where large amounts of resources are required in short cycles.
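The capex-to-opex argument can be made concrete with a toy cost model (all figures invented for illustration): dedicated hardware is paid for up front and sits idle between test cycles, while cloud resources are billed only for the hours the environment is actually running.

```python
def on_premise_cost(hardware_capex, monthly_upkeep, months):
    """Dedicated kit: pay everything up front, plus upkeep, used or not."""
    return hardware_capex + monthly_upkeep * months

def cloud_cost(hourly_rate, hours_used):
    """Cloud: pay only for the hours the environment is actually running."""
    return hourly_rate * hours_used

# A performance-test rig used for one 40-hour week per quarter, over a year.
dedicated = on_premise_cost(hardware_capex=20_000, monthly_upkeep=500, months=12)
cloud = cloud_cost(hourly_rate=25.0, hours_used=40 * 4)
```

With these (made-up) numbers the dedicated rig costs £26,000 for the year against £4,000 on the cloud; the point is not the exact figures but that intermittent workloads are where the pay-per-use model wins.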
MANAGING CHANGE
EVGENY TKACHENKO, TEST MANAGER, INNOVA SYSTEMS
BABY DUCK SYNDROME
Keeping existing users on board is crucial when making interface changes to an established website. Evgeny Tkachenko, Test Manager at Innova Systems, reports on the lessons his company learnt when users rejected a new site en masse, and the strategies it put in place to get it right the next time.
Innova Systems in Russia has an impressive track record of publishing and localising online multiplayer games such as Lineage 2, Aion, and Point Blank. But we learnt a salutary lesson when we pulled the plug on our previous website and moved our army of users, all at once, to a single access program named Zapuskator.

Up until this point, users had accessed a subscription centre to service their needs, and every game had its own launcher and required a separate password. An ever-growing number of games had made the centre unwieldy, requiring players to enter an endless string of passwords. It was time for a change. But despite the new system's obvious advantages, it was given a very chilly reception: a staggering 93% of users demanded a return to the previous system.

The experience was an eye-opener, revealing our flawed strategic planning, lack of knowledge of our users, and failure to take into consideration what is known as 'baby duck syndrome': the tendency of computer users to 'imprint' on the first system they learn, and then judge subsequent systems by their similarity to it. The result is that users generally prefer systems similar to those they learnt on, and dislike unfamiliar ones.
So when, three years after releasing Zapuskator, we decided to attempt migrating our users to a new online web platform, 4game, we were keen to learn the lessons of our previous launch and ensure that this transition would go as smoothly and seamlessly as possible.

ADDITIONAL FUNCTIONALITY
In order to cushion the impact of the transfer, we didn't want the transition to 4game to feel forced or imposed upon players. Although they were in fact compelled to move, we wanted users to feel that they were transferring of their own volition. We planned to achieve this by delivering compelling features in 4game that hadn't been available in Zapuskator.

Initially high on our list of such features was a global leaderboard – something that provides much more satisfaction and value for players than reporting their scores on Facebook or Twitter. However, we had to drop this, simply for lack of time: it would have thrown us considerably off schedule when we were already stretched thin and had a huge backlog. It was a great pity. Granted, the development of a leaderboard would have held us back by a couple of months, but the payoff would have more than offset the delay, making for an even softer transition and perhaps eventually attracting an influx of new players. (We are currently working hard to release this feature in the near future.)

DEDICATED VOLUNTEERS
We assembled a group of external volunteers, who set about testing the application and new services, and gave us an incredible amount of valuable feedback. Every new version was first tested by in-house testers, then by the outside testers. Only after that did the official version get its upgrade.
The value of the work these volunteers put into 4game is hard to overstate: it made possible a stable, feature-packed version shaped by user contributions. Thanks to their generous and rapid input, the interface now communicates with users rather than instructing them.
Instead of trying to do everything at once, we took a series of incremental steps. Taking baby duck syndrome into account from the outset, we aimed to make our games accessible to users on any system, setting our sights on providing a single interface for games – installation, updates and launch – straight from the browser. 4game runs smoothly through a plugin and WebSockets, and supports all popular browsers, meaning users can install, update, and launch multiplayer online games directly from the website.

We set up a User Research Lab, and made a call for all who might be interested in sampling the platform. We watched, analysed and dissected users' reactions as they pointed out the cracks and bumps for us to iron out.
RAISING AWARENESS
We put the word out on all fronts – social networks, forums, and word of mouth. We used a ticketing system, UserVoice, to collect feedback and fix bugs online. We set up a 'Move to 4game' notification, which appeared every time a user launched the old program, and became more frequent as 4game neared launch. If the user agreed to move from the previous program, the new application was installed, with the previous one automatically uninstalling itself. We also set up a website to pitch the advantages of 4game, and sent messages to our mailing lists with a link to the site.

PROMOTIONAL VIDEO
Our previous promo video was a bit dry, so to make a new one we hired a studio that had gained tremendous popularity in Russia with its tongue-in-cheek voiceover for the sitcom The Big Bang Theory. They nailed it for us, too.

FOCUS TESTING
We mocked up a crude prototype of the site, with icons, the video, and quite a lengthy description of every game on the Play page. We then rigged up a usability lab in a meeting room. The testing showed that users needed neither the icons nor the text that accompanied them.

There were conflicting responses to the video, and none of our original ideas went down well. We cut and chopped descriptions, replaced icons with game logos, and put it all on a tile interface. The video now lasts no more than five seconds.
While some of our volunteers reported being startled by the brevity of the video teaser, this response was usually knee-jerk: once the initial surprise dissipated, players wanted to trigger the video again. We decided that the freshness and novelty of the experience outweighed the risk of discomforting a few players.
Focus testing was a learning curve for us, and one of the key experiences that shifted our approach to user-centric design; it is now a standard part of our development process.
A GRUELLING TESTING CRUNCH
As the launch date neared, we decided to test the water by launching the new site with our European audience first. Our Russian audience, being disproportionately larger, was too much of a risk. This turned out to be fortuitous. Just at the point when we were ready to pat ourselves on the back and pop open a magnum of Champagne, nearly every part of the system began to crumble. Integration of the website with the application faltered; the Play button would stubbornly disappear due to glitches in the client architecture; there were page-load and billing-system problems; gaps and holes appeared in the integration with social networks; there were registration and login performance problems … the list went on.
This all took its toll on the testing people, who had to put in intense all-nighters, only to be faced the next day with a plethora of new bugs cropping up unexpectedly elsewhere. We called daily meetings, and put up a bug-fix chart that reflected our progress every few hours. To keep everyone in sync, bug lists were regularly distributed to every member of the team.

This period made us re-evaluate our approaches and adjust our processes to incorporate unit testing and test-driven development (TDD). Although writing automated tests using TDD is time-consuming, it helps catch bugs in advance. Nowadays, we use automated tests to reproduce bugs, so while we still experience occasional crunches, we are better equipped to handle them. After a few intense months and all kinds of tweaks, we had a stable version of our platform, along with some clarity as to where we were going with it. It was time to introduce a multimillion horde of players to uncharted terrain.

LESSONS LEARNT
It is clear that creating a test production site is essential, while warning users of pending changes when adding new functionality ensures a smoother transition. Thanks to intense work with users, the work of external testers, usability testing, and a strategy of transferring users smoothly, we achieved amazing results, and were able to overcome the notorious baby duck syndrome!

Looking forward, we will do more focus testing. But the bigger challenge remains: how to keep the majority satisfied without overlooking the minority, so that every player can get the most out of the site, whether it's support, interaction with other players, or the stunning visual aesthetics of Lineage 2. Right now, we're still struggling to figure out the magic formula that will allow us to gauge and reconcile the preferences of everyone who comes to our site!
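The practice of reproducing bugs with automated tests can be sketched as follows (the bug and fix here are invented for illustration): when a report arrives, first write a failing test that reproduces it, then fix the code until the test passes, leaving the test behind as a regression guard.

```python
def parse_version(tag):
    """Turn a launcher version tag like 'v1.12.3' into a comparable tuple.

    A classic bug of this kind: comparing version strings lexically,
    so 'v1.9.0' sorts after 'v1.12.3'. Parsing to integer tuples fixes it.
    """
    return tuple(int(part) for part in tag.lstrip("v").split("."))

def regression_test_version_ordering():
    """Reproduces the reported bug; fails on the lexical comparison, passes now."""
    assert parse_version("v1.12.3") > parse_version("v1.9.0")

regression_test_version_ordering()
```

Once such a test exists, the crunch-time question "did this break again?" is answered by running the suite rather than by another all-nighter.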
OUTCOME
Introducing the website took about four months, and has allowed our players to enter a game in a couple of clicks, follow the news on the website, and share and comment. After the transition, we worked feverishly to cushion and minimise its impact by fixing bugs, tweeting, and taking up concerns that users raised on forums. There is no doubt that the effort paid off tremendously: our experiences of launching the two sites could barely have been more different. A poll conducted on VKontakte showed that the changes were well received by 50% of users, with 30% neutral and 20% greeting them unfavourably. As you usually get the reverse picture when accommodating users to dramatic change – and considering the 93% disapproval rating we had last time – we were happy with our performance.
COUNTER POINT
IAIN McCOWATT, DIRECTOR AND HEAD OF TESTING, TREASURY, BARCLAYS
DR STUART REID, SOFTWARE TESTING SPECIALIST
STANDARD DEVIATION
Discord surrounds the introduction by the International Organization for Standardization (ISO) of a set of five standards for software testing, ISO/IEC 29119. Its stated purpose is to define an internationally agreed set of standards that can be used for any type of software testing.
An independent, non-governmental membership organisation made up of 165 member countries, the ISO is the world's largest developer of voluntary international standards. Says the ISO: "International standards make things work. They give world-class specifications for products, services and systems, to ensure quality, safety and efficiency. They are instrumental in facilitating international trade.

"ISO standards impact everyone, everywhere. ISO has published more than 19,500 standards over the years, covering almost every industry, from technology, to food safety, to agriculture and healthcare.

"ISO standards ensure that products and services are safe, reliable and of good quality. For business, they are strategic tools that reduce costs by minimising waste and errors and increasing productivity. They help companies to access new markets, level the playing field for developing countries, and facilitate free and fair global trade."

All of which sounds great, so why the controversy around this particular set of standards?

In August 2014, an online petition was created by Iain McCowatt, Director and Head of Testing, Treasury, at Barclays, and author of the software testing blog Exploring Uncertainty, calling for the withdrawal of the ISO 29119 standards. It states: "It is our view that significant disagreement and sustained opposition exists amongst professional testers as to the validity of these standards, and that there is no consensus as to their content."
Here, Iain details his reasons for opposing the standards, while below Dr Stuart Reid, a software testing specialist and convenor of the ISO Software Testing Working Group (WG26) that devised ISO 29119, explains why he feels their implementation is important.
IAIN McCOWATT
Standards in manufacturing make sense. The variability between two widgets of the same type should be minimal, so acting in the same way each time a widget is produced is desirable. This does not apply to services, where demand is highly variable, or indeed to software, where every instance of demand is unique. Attempting to act in a standardised manner in the face of variable demand is an act of insanity: it's akin to being asked to solve a number of different problems yet merrily reciting the same answer over and over. Sometimes you'll be right, sometimes wrong, sometimes you'll score a partial hit.

In this way, applying the processes and techniques of ISO 29119 will result in effort being expended on activities that do nothing to aid the cause of testing. And in testing, that's a major problem. When we test, we do so with a purpose: to discover and share information related to quality. Any activity, any effort that doesn't contribute to doing so is waste. As 'complete' testing is impossible, all testing is a sample. Any such waste reduces the sample size, so it equates to opportunity cost: an opportunity lost to perform certain tests. For a project constrained by quality, this translates into increased time and cost. For a project constrained by time or money, it translates into a reduction in the information available to stakeholders, and a corresponding increase in risks to quality.

Those in favour of ISO 29119 might tell you that the standard encourages you to tailor its application to each project, and that it is sufficiently comprehensive that you need only select the processes and techniques that apply. This is the Swiss Army knife fallacy: if you have one, you'll never need another tool. But one of the problems with a Swiss Army knife is that it's not much use if you need a pneumatic drill, or an ocean liner. Training testers to use a standard in this way has a tendency to frame their thinking.
Rather than trying to solve testing problems, they instead seek to choose from a set of ready-made solutions that may or may not fit. When released, and if widely adopted, part four of the 29119 standard (on test techniques) will give rise to a generation of testers locked firmly inside the 29119 box, without the ability or freedom to solve the problems they need to solve.

And that's not the worst of it. For my sins, I spent a number of years as an ISO 9000 (quality management) auditor. It seemed like a great idea at the time: understand the system, monitor the system, improve the system. But I gradually realised that this wasn't the reality. People were documenting the system, documenting their reviews, documenting their responses to audit findings, and doing very little by way of improving the operation of the business. What was going on? Goal displacement, that's what. We'd created a machine geared toward complying with the standard, demonstrating conformance to the satisfaction of an assessor, and maintaining our registration once obtained. Somewhere along the line, the goal of improving our business had been forgotten.
This phenomenon isn’t limited to organisations that seek compliance with external standards. Not so very long ago, I watched an organisation doing much the same while attempting to standardise its testing processes. Significant effort was directed to completing templates for test documentation, reporting metrics and self-assessing vs the internal standard – all with no regard for the relevance or value of doing so for the projects that testing was meant to be serving. So when standards proponents tell you that following ISO 29119 will improve the efficiency or effectiveness of your processes, call them out: far from making testing more efficient or effective, conformance will have the opposite effect. The text of ISO 29119 claims that it is ‘an internationally-agreed set of standards for software testing’. This agreement is meant to be the product of consensus, defined by ISO as, ‘general agreement, characterised by the absence of sustained opposition to substantial issues by any important part of the concerned interests, and by a process that involves seeking to take into account the views of all parties concerned and to reconcile any conflicting arguments’ (ISO/IEC Guide 2:2004). There is no such consensus. Instead there is a small group of members of a working group who claim to represent you. Meanwhile, hundreds of testers are calling for the withdrawal of ISO 29119.
Here, Stuart, an independent consultant and convenor of ISO JTC1/SC7 WG26 (Software Testing), which is developing the ISO 29119 software testing standards, replies to the petition.

STUART REID
While there are existing standards that touch upon software testing, many of these overlap and contain what appear to be contradictory requirements, with conflicts in definitions, processes and procedures. Given these conflicts and gaps, developing an integrated set of international software testing standards with far wider coverage of the testing discipline provided a pragmatic solution to help organisations and testers. Ideally, this initiative would not reinvent the wheel, but build upon the best of the available standards; thus the motivation for the ISO/IEC/IEEE 29119 set of standards.

Members of the ISO Software Testing Working Group (WG26) are well-versed in the definition of consensus. The six years we spent gaining consensus on the testing standards published so far gave us all plenty of experience in the discussion, negotiation and resolution of technical disagreements; if nothing else, we are now experts at compromise and reaching consensus. However, as a Working Group (WG), we can only gain consensus when those with substantial objections raise them via the ISO/IEC or IEEE processes.

The petition talks of sustained opposition. A petition initiated a year after the publication of the first three standards (after over six years' development) represents input after the fact, and such input can now only be included in future maintenance versions of the standards as they evolve. However, the petition comments raise a number of issues which deserve a considered response:

• The standards are not free. Most ISO/IEC/IEEE standards cost money (ISO is partially funded by the sale of standards), and the testing standards are no different in this respect. Personally, I would prefer all standards to be made freely available, but I am not in a position to make this change – and do not know where the costs of development would otherwise come from.

• The standards 'movement' is politicised and driven by big business to the exclusion of others. A large proportion of the members of WG26 are listed on our About WG26 page, along with their employers. This list does not support the assertion in the petition. The seven editors (who do the majority of the work) are from a government department, a charity, two small testing consultancies, a mid-size testing consultancy, and a university, and one is semi-retired. All WG members give their time freely, and many use their own money to attend meetings. As all received comments have their resolution fully documented, anyone who submits a comment on a draft standard can easily see how their suggested change was handled – so even those who cannot afford the time to attend WG meetings can easily influence the content of the standards.

• The methods advocated haven't been tried and the standards do not emerge from practice. The number of years' and range of testing experience on the WG (and the number of countries represented) shows that a wide range of experiences has been drawn on to create the standards. Early drafts were widely distributed, and many organisations started (and continue) to use the standards, providing important feedback that has allowed improvements to be made.
• The standards represent an old-fashioned view and do not address testing on agile projects. The standards were continually updated until 2013, and so are inclusive of most development life cycles, including agile. As an example, the test documentation standard (ISO/IEC/IEEE 29119-3) is largely made up of example test documentation, and for each defined document type, examples are provided for both traditional and agile projects. The standards will be regularly reviewed, and changes based on feedback from use have already been documented for the next versions.

• The standards require users to create too much documentation. Unlike the IEEE 829 standard, which it replaces, the test documentation standard, ISO/IEC/IEEE 29119-3, does not require documentation to follow specific naming conventions, nor does it require a specific set of documents – so users can decide how many documents to create and what to call them. The standard is information-based, and simply requires the necessary test information to be recorded somewhere (eg on a wiki). It is fully aligned with agile development approaches, so users taking a lean approach to documentation can be fully compliant with the standard.

• The existence of these standards forces testers to start using them. According to ISO, standards are 'Guideline
FEBRUARY 2015 | www.testmagazine.co.uk
COUNTER POINT documentation that reflects agreements on products, practices, or operations by nationally or internationally recognised industrial, professional, trade associations or governmental bodies’. They are guideline documents, therefore they are not compulsory unless mandated by an individual or an organisation - it is up to the organisation (or individual) as to whether or not following the standards is required, either in part or in their entirety. • The Testing Standards are simply another way of making money through certification. There is currently no certification scheme associated with the testing standards, and I am not aware of one being developed. There is also no link between the ISO/IEC/IEEE Testing Standards and the ISTQB tester certification scheme. • The Testing Standards do not allow exploratory testing to be used. Exploratory testing is explicitly included as a valid approach to testing in the standards. The following is a quote from part 1: ‘When deciding whether to use scripted testing, unscripted testing or a hybrid of both, the primary consideration is the risk profile of the test item. For example, a hybrid practice might use scripted testing to test high risk test items, and unscripted testing to test low risk test items on the same project.’
• No one outside the WG is allowed to participate. There are a number of options for those interested in contributing to the standards - and these are all still open. See below.

• Context-driven testing isn't covered by the standards. I fully agree with the seven basic principles of the context-driven school. To me, most of them are truisms, but I can see that they are a useful starting point for those new to software testing. Jon Hagar, the Project Editor for the ISO/IEC/IEEE 29119 set of standards, is a supporter of the context-driven school and he, along with the rest of the WG, ensured that many of the context-driven perspectives were considered. For instance, the following statement appears early in part 1: 'Software testing is performed as a context managed process.' The standards do, however, also mandate that a risk-based approach is used, as can be seen in the following quote (also from part 1): 'A key premise of this standard is the idea of performing the optimal testing within the given constraints and context using a risk based approach.' I see no problem in following both risk-based and context-driven approaches simultaneously, as I believe the context and the risk profile to be part of the same big picture which I would use to determine my approach to the testing.
• No one knew about the standards, and the WG worked in isolation. From the outset, the development of the testing standards has been well publicised worldwide by members of the WG at conferences, in magazine articles and on the web. Early in the development, in 2008, workshops were run at both the IEEE International Conference on Software Testing and the EuroSTAR conference, where the content and structure of the set of standards were discussed and improved. The WG also went to great lengths to invite the broader testing community to comment on the standard and to voice their opinions, at meetings of our software testing standards WG, and by placing information on softwaretestingstandard.org as to how individuals can get involved in the development of the standard (see our How to Get Involved page). Testing experts from around the world were invited to take part in the development of the standards - these included a number of prominent members of the AST who were personally approached and asked to contribute to the standards early in their development, but they declined to take part. Other members of the AST have provided input, such as Jon Hagar, who is the IEEE-appointed Project Editor - and he has presented on the standards at the CAST conference.

GETTING INVOLVED
So, what should you do about ISO 29119? Get involved. Apprise yourself of the available information. Says the ISO: "The most significant way you can get involved is to join ISO/IEC JTC1/SC7 WG26 Software Testing - the working group that is developing the standard. To do this, you first need to become a member of your national standards body.
"Being a member of the working group requires attendance at six days of meetings, every six months, at various locations around the world. This may make it sound glamorous, but it also requires significant effort and commitment, both at the meetings and throughout the year. Also, contributing to international standards is a voluntary position, and ISO/IEC do not provide funding to members of working groups to attend meetings.
"The second option is to become a reviewer of the standard via the mirror working group within your national standards body. Requiring less commitment than the first option, this nonetheless still contributes wonderfully to the development of the standards. To do this, contact your national standards body and request to join the local mirror working group (or start one up if it doesn't exist in your country yet!) and start reviewing copies of the standards.
"The third option is to become a reviewer of the standard via an industry standards body such as the IEEE Standards Association, offering another avenue for individual experts to participate internationally, without involvement in national organisations."
You can find a petition addressed to the President of the ISO, calling for the suspension of publication of parts of ISO 29119 and the withdrawal of other parts, at www.ipetitions.com/petition/stop29119
Read more from Iain McCowatt at http://exploringuncertainty.com/blog/
PAGE 37
COUNTER POINT
KIERAN CORNWALL
HEAD OF TESTING, ITV
DO WE NEED A SOFTWARE TESTING STANDARD?
The scale and pace of change in the software engineering world is immense. It encompasses hardware - local, hosted and cloud - with limitless combinations of operating systems, versions and configurations, and thousands of languages to generate applications and services.
Software engineering practice has moved on from the old days of large-scale sequential deliveries, where testing was a phase in the middle (or at the end!), and documents needed to be generated and scripts prepared. We now work under models of continuous integration, delivering small, iterative, functional pieces of code, regularly deployed. If we want ways of working that support quick change, testing can no longer be a phase. There's no need to document a test plan: your tests are part of the product and code base. Automation is key, because products are now supported across limitless combinations of environments that are updated regularly. Testing has adapted to the dynamic nature of this type of industry; can testing support a potentially static standard?

In my view, testing standards cannot reflect the dynamic approaches that testing needs in an ever-changing environment. Since May 2007 it has taken six years to reach agreement on three of the five parts of ISO-29119, and in that time we have seen six generations of the iPhone, and Netflix has gone from posting out DVDs to streaming across a multitude of devices. If ISO-29119 is to make any positive impact on this industry it will either have to be extremely generic or be updated on a regular basis. If it is too generic, it devalues its worth. If it changes regularly, companies will constantly need to alter their approach to stay in line with the standard. Considering the time taken to get agreement on 29119 this far, I don't believe that changes will be implemented with any great speed.
Testers have spent many years reaching the conclusion that the test strategy doesn't answer all problems. We can no longer hide behind test plans just because business stakeholders signed them off. We have a higher responsibility to pragmatism and innovation in order to help our customers reach their goals with the highest possible level of quality. As testers, we need to think for ourselves and adapt to our environment. We need to be dynamic, and I don't believe there is such a thing as a dynamic standard. Testing has been shackled before with IEEE-829, and this could be history repeating itself. We cannot stop the standard from reaching the market. What we can do as a community is continue to educate, innovate and adapt to the ever-changing software world.
This brings us to a strong point of contention on the standard. ISO-29119 does not reflect the views and opinions of the wider testing community. Only full ISO members, it seems, can directly influence the standards. To be a full member involves investing one’s personal time. I’m sure the majority of members are individuals who sacrifice a great deal to have an input. Then again, larger organisations with plentiful personnel resources will find it easier to donate resources. This could create a perception that powerful companies have undue influence over the process.
PAGE 38
TESTING & PERFORMANCE. CONNECTED.
Tech Mahindra's Global Test Practice is a next-generation concept in software testing, providing a superior and flexible alternative to the standard off-shoring model. It is a multi-location testing centre of excellence that combines best-in-class test processes, people, techniques, facilities and methodologies. It also offers a full range of scalable testing services and cutting-edge innovation, along with an optimal unit cost of testing.
Connect with us: TestingPreSales@TechMahindra.com
NEW PROCESS
JULIA LIBER
HEAD OF WEB APPLICATIONS AND TELECOM TESTING, A1QA
RISE OF THE INTEGRATOR Julia Liber, Head of Web Applications and Telecom Testing for A1QA, explains the concept of integration in testing, and discusses its merits relative to classic testing scenarios.
PAGE 40
The process of OSS/BSS* testing requires preparation of project infrastructure, including deployment of several project environments. This implies not only the production environment (where the software will run in real operation) but also test environments for artificial tests of the system. As a rule, a test environment is deployed either on the software development company's side (for internal testing) or on the software consumer's side (for external and acceptance testing). In the case of OSS/BSS solutions, the software consumers are telecom operators.
CLASSIC VERSUS INTEGRATED SCENARIOS
For successful operation of the test environments, there are two possible scenarios for interaction between testers and administrators:
• Classic scenario: the testing and administration teams are separate. They interact via standard ITIL processes, raising and processing requests in an incident-tracking system such as Jira.
• Integration scenario: a technical specialist, known as an integrator, is incorporated directly into the testing team to resolve issues within the test environment.
THE FOLLOWING CRITERIA ARE USUALLY APPLIED TO TEST ENVIRONMENTS:
Hardware and software resources must meet the minimum requirements of the execution environment for the developed software. They should have a structure and architecture equivalent to the production environment and - for certain types of tests - be completely identical to the production environment.

THE INTEGRATOR
The integrator is a dedicated technician whose responsibilities include system administration to ensure the stable functioning of the system. Their competences must be somewhat broader than this, however: unlike a simple system administrator, they must know all the details of the system's business processes and the correct behaviour of the system. For efficient interaction with the testing team, and to configure the system with options for testing purposes, the integrator requires a deep understanding of the core business processes of the supported systems.
ADVANTAGES OF INTEGRATION
The integration scenario brings with it several advantages:
• It allows the testing team to perform a wider range of tests, by applying not only a 'black box' but also a 'grey box' approach.
• It reduces test environment downtime, due to more rapid reporting of problems by the testing team.
• It reduces the average tester's idle time, due to faster processing of the operation requests that can block testing.
• It raises the technical skills of the testing team, and their knowledge of the architecture and capabilities of the system.
• It reduces the number of FAD (function as designed) defects rejected by the development team, because the integrator pre-filters defects before they are entered.
STRATEGIES FOR PREPARING INTEGRATORS TO BE 'LITERATE' AND EFFECTIVE
The integrator should be somebody within the testing team who is both technically competent and who understands the underlying business logic. The integrator's education must be 'hands on' and informed by processing requests during the active phase of testing. It should follow the principles of apprenticeship: the integrator should learn from a more experienced person while performing production tasks - this is much more effective than simple examination of technical documentation (theoretical training is acceptable only for general technical questions).
* Operations Support System / Business Support System
PAGE 41
ANALYSIS
Having had the opportunity to evaluate the work of testing teams using both classic and integration strategies, I have concluded that the integration approach works best in cases with an active phase of product testing by a team of dedicated testers. However, in cases where people other than technical experts are working with the system, such as business representatives and (in particular) customer service employees, the classic strategy may be preferable, in that it will tend to reduce the number of similar requests to the administration team and thereby make the interaction more efficient.

Thus, the classic strategy is good in the later stages of a project, when the general quality of the system is already quite high, and a large number of specialists with a good knowledge of core business processes, but a rather poor understanding of the technical implementation side, are joining the project. In turn, the integration strategy is most useful in the early stages of a project, when those involved are mainly qualified technical experts in testing, with a clear insight into the internal implementation of the software under test. Conversely, the integration strategy is not suitable, even in the early stages, in cases involving a large number of requests from users. Implementing it in these instances runs the risk that the integrator won't be able to process these requests or prioritise them correctly. In such cases, the advice would be to use the classic strategy.

Choosing the correct interaction strategy between testing and administration teams can significantly improve efficiency and quality when testing complex software solutions, resulting in a higher-quality system and reducing the likelihood of end users suffering losses associated with any defects.
PAGE 42
ABOUT THE AUTHOR: Julia Liber is Head of Web Applications and Telecom Testing for A1QA. In this role, she manages the internet applications and telecom systems testing team and provides consulting for wireless operators. She also assists with organising the testing process and the acceptance phase for modifications or new billing solution implementations.
THE GATEWAY TO SOFTWARE EXCELLENCE
FOR EXCLUSIVE NEWS, FEATURES, OPINION, PRODUCT NEWS, EVENTS, DIGITAL AND MUCH MORE VISIT: www.softwaretestingnews.co.uk
Published by 31 Media
www.31media.co.uk
Telephone: +44 (0) 870 863 6930 Email: info@31media.co.uk Website: www.31media.co.uk
EFFECTIVE COMMUNICATION
RAJINI PADMANABAN
SENIOR DIRECTOR, ENGAGEMENTS, QA INFO TECH
BUILDING BRIDGES In an environment where the stakeholders in a project may be scattered to all corners of the globe, effective communication in testing is more vital than ever, says Rajini Padmanaban, Senior Director, Engagements, at QA Info Tech.
In the past, it was common to have centralised operations, with a few offshore service providers operating remotely, and vendors helping with development and testing. Today, global centres of development have become common, whether for reasons of attracting local talent, taking advantage of time zones for enhanced productivity, or establishing closer proximity to end-users.
With this expansion, and with people able to communicate online at any hour of day or night, the concept of a working day has almost ceased to exist. Organisations are increasingly comfortable handling worldwide communications. Communication technologies underwent a revolution over the past decade, which has eased the process. Even now, however, communication is not wholly without its challenges.

Effective communication is critical to any engineering assignment, given its volume, complexity and ambiguity. This is especially the case for testers, because much of what they do depends on information they gather by talking to various stakeholders (business and marketing entities, the development team, the operations team, end users, and so on), all of whom can be found almost anywhere on the planet.

Testers need to talk to people to get the information they require, but they must also strike a balance to ensure that such communication is not seen as an overhead. This can be tricky, especially when they work with remote groups they don't see on a daily basis. A reluctance to communicate excessively could at times hold testers back from submitting questions to stakeholders, which then has a direct impact on their testing effort. Let us take an instance where a defect has been filed. If there are issues or disagreements around its behaviour, repeatability, resolution, regression and so on, the communication process can become long-winded, even for a single defect. To be confident that the quality of the product isn't compromised, a tester needs to ensure that every defect receives commensurate attention.

A simple rule of thumb is to build trust and rapport. That remains true in the testing environment, as elsewhere, no matter how technology and communications develop in the future. Testers must convince stakeholders that they will do everything in their power to undertake due diligence and homework and that, when they contact someone, the tester's inputs are well-researched and well-grounded. Establishing this trust goes a long way towards ensuring that professional interactions are positive, with the added advantage of saving time. It doesn't follow that all a tester's communication must be pointed and focused, though. When informal conversations are interspersed with professional communication in an appropriate way, they help to make the relationship even tighter.

To build such trust and rapport, many organisations even encourage periodic onsite visits to promote in-person relationship building. While this can certainly be valuable, it should not be regarded as the only way in which one-to-one relationships can be promoted. Today, advances in technology have made remote communication almost able to mimic real-time, in-person interactions. A combination of making intelligent use of remote communication technology and doing one's homework beforehand can resolve the common dilemma of whether 'to ask or not to ask' in remote testing assignments. Understanding this communication balance comes with experience, as testers mature in their role. Nevertheless, taking the time to establish a rapport with your offsite stakeholders through the use of communications technology will always pay dividends.
PAGE 44
GRAEME HARRISON
EXECUTIVE VICE PRESIDENT OF MARKETING, BIAMP SYSTEMS
FUTURE OF COLLABORATION With businesses looking to increase their global reach, colleagues are not simply separated by desks or cubicles but by oceans, mountains and time zones. The traditional office is disappearing, writes Graeme Harrison, Executive Vice President of Marketing at Biamp Systems.
Businesses still need to ensure that teams can work effectively together. As a result, communication technologies - including conference calls, e-mail, instant messenger and telepresence - have become integral to the modern office. Research Biamp conducted earlier this year found that 60 per cent of companies have noted an increased reliance on communication technology over the past year alone.

Unfortunately, the available technology often doesn't live up to our requirements. Many conference calls are plagued by static, dropped dialogue or unclear audio. Others have complex dial-in procedures or distracting background noises. These issues add up, and organisations lose time and money as a result of low-quality communication systems. Businesses need to realise that an investment in better communications technology doesn't simply mean clearer conference calls: it will help improve collaboration and productivity across the organisation, offering a real impact on the bottom line.
TALKING SENSE
Virtual offices are becoming more common, with nearly two thirds of companies offering teleworking facilities to their employees. Attempting to communicate across continents is a challenge that low-quality conference solutions simply can't meet - poor sound and unreliable systems mean teams spend calls straining to hear the person rather than actually listening to and understanding what is said. Although many of us are comfortable using solutions like Skype™ or FaceTime to keep in touch with friends and family, they're not always suitable for business use. Factors such as fluctuations in bandwidth, low-quality speakers and security concerns in enterprise environments make multi-participant calls problematic at best. These lowered expectations mean that businesses are more likely to fire off huge volumes of email, or spend significant amounts of money on business travel, rather than tapping into the power of a good, clear conferencing solution.
When used correctly, communication technology can make collaboration with colleagues and clients easier, less expensive and more productive. However, this relies on coupling the technology with investment in the appropriate solutions. One of the biggest arguments against virtual meetings is the difficulty of building a rapport between participants. I believe this is more a reflection of people's current experiences than a criticism of all collaboration technologies. Businesses succeed based on their relationships, and it's vital that organisations have the technology that allows colleagues to communicate most efficiently. Having the right telepresence solution and being able to hold a virtual meeting that is 'face-to-face' means you can build a richer relationship with clients whilst being available for meetings at times that work for them, without needing to schedule travel.

Cisco CEO John Chambers has said that the technology giant closed its 2007 acquisition of WebEx in just three weeks, largely thanks to the Cisco TelePresence™ high-definition meeting system. Cisco and its advisors were able to move so fast not just because they're all technology-literate people, but because the technology itself successfully minimised the 'distance' between them. Key to Cisco's success was not simply that the user experience was far richer, but that it replicated the experience of a face-to-face meeting faster than business travel could allow.

Once concerns over unreliable technology are removed from the equation, you're left with a safe space where teams can collaborate and come up with the best creative ideas, all while bonding as a team. With the right tools, a virtual meeting should give you the same environment and many of the results you'd expect from a face-to-face meeting. Real investment in collaboration technology can make doing business easier in many ways.
While the industry is plagued by myths around its effectiveness, it is possible to not only save money with conferencing tools but to also increase both productivity and creativity, as well as improving team and client relationships – all of which will ultimately impact the bottom line.
PAGE 45
LAST WORD
DAVE WHALEN
PRESIDENT AND SENIOR SOFTWARE ENTOMOLOGIST, WHALEN TECHNOLOGIES
http://softwareentomologist.wordpress.com
I'VE BEEN FRAMED!
In this case, being framed is actually a good thing. When I began working with test automation, I quickly realised why I never wanted to write code for a living. Nothing personal, my development friends. I love you all. For me, it just seemed boring and repetitive. Yawn.
But I wrote code. Was it pretty? Nope. Was it efficient? Nope. Was it effective? Yes. Was I totally humbled by the experience? Absolutely! I found a new appreciation for my development teammates.
Now, I'm no slouch when it comes to writing code. I've done it in the past. I also taught it at one time. That time was aeons ago. At the current rate of advances in technology, an aeon is about a year. Back then, all lines of code had line numbers. There were no tools to identify errors, like we have today. You ran the program. If it didn't work, you could literally spend hours trying to find the problem. In my case, I typically misspelled something or didn’t capitalise something. Silly stuff. Amazingly, I exited the situation with all of my hair intact. It was a totally frustrating experience. I hated it! But I love building stuff. You’d think that writing a program would satisfy my need to build and create. Not so much. So I turned to testing. Telling someone, ‘You’re doing it wrong’, was a bit fun. It was also very rewarding. I had a major role in the final quality of the product, which I loved. Dave was a happy tester. Manual testing soon got boring, though. Dave doesn’t handle boredom well. Then along came the first test automation tools.
FIRST TOOLS
The first tools were mostly record-and-playback tools. Most had a way of validating things on the screen or the page. We could record the boring typing and clicking parts of testing. As the tools advanced, we could take on more advanced tasks, like data-driven tests. But there were still things we couldn't do - decisions, branching and looping - so there were limits on what we could automate. The next evolution included a coding interface, which allowed us to overcome these limitations. Since I had some coding experience, I became the person responsible for writing these little code snippets. Yippee! The tool evolution continued. Fast forward to today, when we have some very powerful test automation tools. But, to really exploit this power, coding is required. Time for Dave to brush off his coding skills. Piece o' cake!
A NEW WORLD
Not so fast. All the languages that I'd learnt were now obsolete. I had to relearn a few things. A whole lot of things. It was a completely new way of thinking. 'What's this object-oriented gibberish?' These days, object-oriented languages like C# and Java are the norm. No line numbers? How does that work? No worries. I'm wicked smart. I'll figure it out. I bought some books, and began to absorb all of this newfangled programming. It was, and still is, a totally humbling experience. What I don't know constantly amazes me.

The one thing I quickly learnt is that I seemed to be duplicating a lot of things. I knew this was bad. The code that implemented the functionality of a page was intermingled with the code used to test it. It was hard to separate the tests from the things that defined the elements on the page, like text boxes or buttons. I was smart enough to know there had to be a better way: a way to isolate the manipulation of the page elements from the tests, and a way to avoid duplicating effort. I thought back to my previous coding experience, where we could define subroutines and functions. Back then, rather than repeating the code, I could call one of these functions.

Take a login test, for example. I'm repeating the same function over and over - entering user credentials, and then clicking a button. The only difference is the data I'm entering, and the results based on that data when I click the button. For a happy path, I'd enter a valid username and password, click the login button, and evaluate the response. The process was the same for the negative tests. I'd send bad values, still click the same button, then evaluate the result. I was using the same functionality, the same login page, entering data in the same text boxes, and then clicking the same button. In the old days - five years ago - I'd build a login function and call it multiple times. Each time I called it, I'd supply different values. Surely that's still possible? My first automated tests were terrible. They worked, but they were a nightmare to maintain. Any little change on a page could cause multiple test changes. Ugh!
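That reusable login function might look something like the following rough sketch in plain Java. Every name here (the class, the hard-coded credentials, the boolean return) is invented for illustration; a real suite would drive the actual login page through a tool such as Selenium rather than a stand-in method.

```java
// Hypothetical sketch of a single reusable login function, called
// with different data for happy-path and negative tests.
public class LoginHelperSketch {

    // Stand-in for the application's real credential check.
    static final String VALID_USER = "alice";
    static final String VALID_PASSWORD = "s3cret";

    // One function, many calls: enter the credentials, "click" the
    // button, and report whether login succeeded.
    public static boolean login(String username, String password) {
        return VALID_USER.equals(username) && VALID_PASSWORD.equals(password);
    }

    public static void main(String[] args) {
        System.out.println(login("alice", "s3cret")); // happy path: true
        System.out.println(login("alice", "wrong"));  // bad password: false
        System.out.println(login("", ""));            // empty fields: false
    }
}
```

The point is the call pattern, not the implementation: the same function, supplied with different values each time, covers both the happy path and the negative cases.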
A BETTER WAY
There had to be a better way. Enter the page framework, or page model. I could have a set of page classes that define the elements on every page in my application, and a set of test classes that call these page classes to manipulate the various page elements. I can define the page functionality once, and then reuse it over and over. Maintenance time is drastically reduced. If a page changes, I update the page class - not 20 different test classes. The tests themselves are completely isolated from the functionality of the page. Perfect!

I still can't write code that well. I write what I like to call 'street Java'. But now I can get my development team to write or assist with the page classes. My test team, also not coders, only need to learn how to write much simpler test classes. Even I can teach them to do that! Am I ready to become a full time coder now? No way!
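A toy illustration of that split, with all class and method names invented for the sketch and an in-memory map standing in for a browser driven by something like Selenium: the page class is the one place that knows the page's elements and actions, and the test class only calls the page class.

```java
import java.util.HashMap;
import java.util.Map;

// Page class: the single place that "knows" the login page.
class LoginPage {
    private final Map<String, String> accounts = new HashMap<>();
    private String usernameBox = "";
    private String passwordBox = "";

    LoginPage() {
        accounts.put("alice", "s3cret"); // stand-in back end
    }

    void typeUsername(String value) { usernameBox = value; }
    void typePassword(String value) { passwordBox = value; }

    // "Clicking" the button reports whether login succeeded.
    boolean clickLogin() {
        return passwordBox.equals(accounts.get(usernameBox));
    }
}

// Test class: knows nothing about the page's elements, only its API.
public class LoginTests {
    static boolean happyPath() {
        LoginPage page = new LoginPage();
        page.typeUsername("alice");
        page.typePassword("s3cret");
        return page.clickLogin();
    }

    static boolean badPassword() {
        LoginPage page = new LoginPage();
        page.typeUsername("alice");
        page.typePassword("nope");
        return page.clickLogin();
    }

    public static void main(String[] args) {
        System.out.println(happyPath());   // true
        System.out.println(badPassword()); // false
    }
}
```

If the login page later gains, say, a 'remember me' checkbox, only LoginPage changes; the test methods stay untouched - which is exactly the maintenance win the page model buys you.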
THE EUROPEAN SOFTWARE TESTER
EXECUTIVE DEBATES Offering you the key to successful solutions
• One-day event • Monthly • Lunch & refreshments provided • Central London venue • Network with like-minded individuals • Cutting edge content
For more information, contact Sarah Walsh on +44 (0) 203 668 6945 or email: sarah.walsh@31media.co.uk
Organised by 31 Media, Publishers of VitAL Magazine www.31media.co.uk