TEST Magazine - April 2015

Page 1

TEST: THE EUROPEAN SOFTWARE TESTER

INNOVATION FOR SOFTWARE QUALITY VOLUME 7: ISSUE 2 APRIL 2015 THE EUROPEAN SOFTWARE TESTER

www.testmagazine.co.uk

NOT-SO-SMART CITIES CRITICS ARE PREDICTING A SMART-METER FIASCO IN THE UK. TEST ASKS THE TESTERS. INSIDE: WHAT WE LEARN FROM HARDWARE TESTING WEARABLE Q.A.




CONTENTS

INSIDE THIS ISSUE 8. What assurance means in the Internet of Things

14. Not-so-smart cities?

18. Wearables, Things and the User Experience

COVER STORY

THOUGHT LEADERSHIP

8

What Assurance Means in the Internet of Things Siva Ganesan, Vice President and Global Head of Assurance Services at Tata Consultancy Services, contemplates the nature of risk and reassurance in this new world.

NEWS

10

10 Nationwide buys eggPlant tools
10 A Balanced View
11 Dev complacency ‘a risk’

Not-so-smart cities?

Great Britain has embarked on an ambitious plan to install smart meters in every home, but the programme has its critics.

WEARABLE RISKS

18

Wearables, Things and the User Experience

Perfecto's Eran Kinsbruner gives some insights on how to test wearable software.

10 Fiat recalls electric cars due to software incompatibility

10 Tricentis announces new partnerships


INTERVIEW 20

The case for testing, re-made

Two years ago, Karen Thomas, Lead User Acceptance Test Manager at Barclays, wrote an article for TEST in which she made the case for testers in financial services. So what's changed?

11 Survey: less than 50% of mobile apps are in users’ own language

HARD LESSONS

22

What we learn from hardware testers

Dr Mike Bartley, the founder and chief executive of TVS, explains why software assurance can learn from the processes used to verify the designs of processors.

NEWS

12

The Measure of Man

The UK's Science and Technology Committee has raised concerns over the testing of biometric systems.

APRIL 2015 | www.testmagazine.co.uk



CONTENTS

INSIDE THIS ISSUE

30. Introducing a method to the madness

34. Are we providing the best testing environment for our applications?

36. Conference 2015: at the British Museum, 19-20 May 2015

MANAGE YOUR DATA

24

How to manage what testing generates

Kyle Hailey, a performance architect at Data as a Service provider Delphix, explains the way forward.

EDUCATION

32

The Testing Universities

Professor Mike Holcombe of the University of Sheffield examines how the UK's centres of higher education handle testing and quality assurance, and how it fits into the broader IT curriculum.

SOFTWARE-DEFINED STORAGE

26

Soft borders

James Brazier caught up with Steve Costigan, Solutions Architect for Zadara Storage, to discuss how software-defined storage supports testing.

HYBRID TESTING

28

Testing as a Service - a New Way

Kevin Hahn, owner of a newly launched business, explains his hybrid business model.

MOBILE COMPATIBILITY

30

Introducing a method to the madness

Amitava Sanyal, Senior Manager at Maveric Systems, suggests a methodology for managing the many challenges of compatibility testing for mobile applications.

NEW RESEARCH

34

Are we providing the best testing environment for our applications?

The testing landscape has changed significantly over the past few years. Archie Roboostoff, product director at Micro Focus, picks out the findings of some recent research.

CONFERENCE

36

The National Software Testing Conference 2015

...at the British Museum, 19-20 May 2015


LAST WORD 46

I Don't Know And I Don't Care

Dave Whalen on everyday risks.




LEADER

THE INTERNET OF APPLE THINGS Hello, and welcome to the April 2015 issue of TEST Magazine.

This month was an important milestone for the Internet of Things, for one reason: Apple finally launched its Watch. This was by no means the first smart watch on the market, any more than the iPhone was the first mobile phone, but in both cases Apple’s aspirational magic altered the perceptions and expectations of consumers. After years of anticipation, and the missteps of the Google Glass, the world’s most valuable company had finally launched some wearable tech.

The Internet of Things (IoT), and how those Things interact with each other, is a theme of this month’s edition of TEST. In the UK, the rollout of smart meters has raised fears of a new IT fiasco in the making. We spoke to QA experts involved in testing the meters to get their view. What we found is that for all the potential pitfalls, smart meters could become the foundation of Smart Cities, giving householders and power generators the information to yield significant efficiencies. Nevertheless, the IoT multiplies the already daunting challenges of testing software across the many permutations of browser, operating system and device. And with wearable tech straying into areas that overlap with medical devices, its makers may find themselves facing the vertiginous wall of regulation that governs systems monitoring human health.

This edition also looks at the other end of the IoT, where data is centralised and stored. Software-defined storage and virtualisation are trends that offer considerable advantages to testers, but the business models and technologies are changing quickly. We ask the experts to explain.

Of course, this is also our conference edition. Following the success of last year’s inaugural National Software Testing Conference, the meeting convenes once again at the British Museum on 19-20 May, bringing together the cream of the testing community. Read on to acquaint yourself with the excellent array of speakers and sponsors who will be leading this year’s event. We look forward to seeing you there!

Do you want to write for TEST Magazine? Email james.brazier@31media.co.uk

James Brazier Editor

© 2015 31 Media Limited. All rights reserved. TEST Magazine is edited, designed, and published by 31 Media Limited. No part of TEST Magazine may be reproduced, transmitted, stored electronically, distributed, or copied, in whole or part without the prior written consent of the publisher. A reprint service is available. Opinions expressed in this journal do not necessarily reflect those of the editor or TEST Magazine or its publisher, 31 Media Limited. ISSN 2040-0160


EDITOR James Brazier james.brazier@31media.co.uk Tel: +44 (0)203 056 4599 TO ADVERTISE CONTACT: Gavin de Carle gavin.decarle@31media.co.uk Tel: +44(0)203 668 6946 PRODUCTION & DESIGN JJ Jordan jj@31media.co.uk

EDITORIAL & ADVERTISING ENQUIRIES 31 Media Ltd, 41-42 Daisy Business Park, 19-35 Sylvan Grove, London, SE15 1PD Tel: +44 (0) 870 863 6930 Email: info@31media.co.uk Web: www.testmagazine.co.uk PRINTED BY Pensord, Tram Road, Pontllanfraith, Blackwood, NP12 2YA



THOUGHT LEADERSHIP

WHAT ASSURANCE MEANS IN THE INTERNET OF THINGS

SIVA GANESAN VICE PRESIDENT AND GLOBAL HEAD, ASSURANCE SERVICES, TATA CONSULTANCY SERVICES (TCS)

TESTING MUST TAKE PLACE BOTH IN THE OLD WORLD OF ISOLATION, AND IN THE NEW CONTEXT OF IMMERSION

The Internet of Things is already with us. Connected watches, cars, and thermostats are the most visible tip of an iceberg that is penetrating ever deeper into industry and commerce. In the words of the 19th-century writer Nathaniel Hawthorne, the “world of matter has become a great nerve, vibrating thousands of miles in a breathless point of time”. Here, Siva Ganesan contemplates the nature of risk and reassurance in this new world.

PAGE 8

The old, monolithic environment of desktop and mainframe computing offered some comforting certainties. There was a discrete, finite number of variables, and a limited number of points of failure. Staging environments could reflect the insulated and stationary context within which the systems and devices were found.

Today, we face potentially billions of inter-connected devices operating in any number of unexpected environments; in the gym and the passenger lounge, naturally, but also on cruise ships and mountain ranges, down mineshafts and in thunderstorms. The scattered distribution of the Internet of Things (IoT) implies an exponential rise in the number of points of failure. One might almost describe it as a Pandora’s Box.

The defects of the old, static world must still be managed, of course, but it is possible we are about to see entirely new genres of risk arise. No device, no cloud, no data centre exists in isolation. The traffic that passes between the devices lends itself to vulnerabilities and to interception, and to the loss or dilution of data along new vectors and tangents.



How do we assure complexity, when the volume of change, the magnitude of change, the sheer frequency and velocity of change is spiralling into a disruptive vortex? Herein lies the IoT’s challenge to assurance and risk management.

The basic premise is to ensure that interconnectivity is adequately tested. When different and diverse devices are interconnected, it is important to test all the permutations of failure, well in advance. Rather than testing simply for feature functionality, it is important to identify all the ‘break points’, to make sure they are resilient and, if they do fail, that they can recover gracefully.

Testing must take place both in the old world of isolation, and in the new context of immersion. In isolation, automated tests ensure that the software works to specification. In immersion, the system or device is tested in the wider world of the IoT, and its performance monitored.

Monitoring cuts two ways. Prior to launch, the device must be crunched through various scenarios to make sure it doesn’t break, using automation to replace repetitive testing. After launch, the devices must be constantly monitored, in what is a new and very interesting aspect of assurance: one that is commensurate with the process of running the business itself, in terms of measuring and responding to customer needs. Here, assurance acts as a safety net, allowing the business to act pre-emptively in support of its customers.

The first priority must be confidentiality and controlled, secure access to consumer and corporate information. Many smart devices store sensitive information about their owners which is transmitted locally, for instance via Bluetooth or Wi-Fi between a smart watch and a smart phone. As the IoT ecosystem includes so many potential vulnerabilities, ever more rigorous diagnostics are needed to assure privacy and confidentiality while preserving accessibility.
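The idea of enumerating failure permutations across interconnected devices can be sketched in a few lines. This is a minimal illustration rather than TCS's actual methodology; the device inventory and failure modes are invented for the example:

```python
from itertools import combinations, product

# Hypothetical device inventory and failure modes -- illustrative only.
devices = ["smart_watch", "smart_phone", "thermostat", "cloud_gateway"]
failure_modes = ["link_dropped", "stale_data", "unauthorised_access"]

def failure_scenarios(devices, failure_modes):
    """Pair every two connected devices with each failure mode,
    i.e. enumerate the 'break points' to exercise in advance."""
    for pair, mode in product(combinations(devices, 2), failure_modes):
        yield {"devices": pair, "mode": mode}

scenarios = list(failure_scenarios(devices, failure_modes))
# 6 device pairs x 3 failure modes = 18 scenarios to test.
print(len(scenarios))
```

Even this toy inventory yields 18 scenarios; real IoT estates make the combinatorial pressure, and the case for automation, obvious.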
This is not only a customer requirement. Increasingly, regulators and legislators demand it. They pass rigorous standards and statutes to ensure that networked objects do not physically damage people or property, and that they handle data lawfully. This has elevated assurance beyond pre-empting faults and averting defects. Today, assurance is confluent with compliance and risk management. Its overall aim is superior risk management and brand assurance, protecting the loyalty that is generated by persistent quality.

Within our vortex of technological change, assuring quality requires the judicious deployment of upgrades. The hardware of the IoT changes every year, and its firmware refreshes almost monthly, so backwards and forwards compatibility is essential. Users must be confident that their devices will continue to function despite the rapidity of change.

When faults occur in the IoT, it is vital they are swiftly corrected with software updates. Assurance must not only guarantee that any failure points are resilient and self-healing, but that regression testing of each fix confirms it addresses the issue rather than exacerbating it. Rather like a pit stop in a Formula 1 race, the tyres have to be on and off in seconds. There is zero tolerance for failure, and the propensity to switch supplier overnight, or even intra-day, is very high.


Nor can testing be purely technical; it must also be culturally aware. Given the geographic dispersal of the new devices, and sometimes their limited ability to convey information, it is essential to adapt them to the context in which they function. Two trends define the IoT approach to localisation and internationalisation.

First, global supply chains have resulted in the creation of true multinational businesses. They must discover integrated, holistic ways of engaging with customers who are themselves truly multinational. Easy assumptions about the needs of one market over another will not serve IoT-enabled businesses, which must instead pioneer universally applicable devices and systems.

Secondly, however, the touch points between devices and users must be carefully localised. Building confidence in the IoT requires the connected devices to be intimately aligned with the cultural context and preoccupations of their users - or, in many cases, their wearers. Assuring cultural consonance becomes all the more important as the spread of 3D and holographic visualisation heightens perceptual immersion.

The demographics of consumption are changing very quickly. Adding to the challenge, the IoT has enabled new forms of marketing that are more efficient and more disruptive than ever before. Its insights are gleaned from the Big Data collected from geo-located IoT devices, data that is aggregated in the cloud and then analysed using computational processes of unprecedented power.

Traditionally, the role of assurance and testing was to harvest client requirements and then to validate them. Now, the contagious nature of electronic marketing means that those requirements are no longer necessarily predictable, either in terms of the functions or the demand that is initially anticipated. The viral propagation of brands, and the demand spikes it generates, place engineering demands on the architecture, the concurrency of services and the umpteen devices themselves.
With internet-enabled sensory devices, the need of the hour is to test not only for functionality but also for engineering. Assurance plays a role not just in testing software but also in the proofing of devices and architecture.

All these observations cohere at a single point. To assure the IoT, it is necessary to understand its business models and the psyche of the consumers themselves. This is not just the art of prioritising needs; automation has to be the default approach to ensuring that software works to specification. Quality specialists must understand not only their clients, but the clients of their clients: the users, owners and wearers of connected objects. Taking a holistic approach to risk management means understanding risk not only from a technological standpoint but from a statutory and reputational one. Only by expanding our gaze can we understand the interconnected risks of an interconnected world.



NEWS

FIAT RECALLS ELECTRIC CARS DUE TO SOFTWARE INCOMPATIBILITY

Italian carmaker Fiat is recalling thousands of electric Fiat 500 EV models in the US due to a bug introduced by a software update. In a filing with the US National Highway Traffic Safety Administration on 24 March, the company said that a software incompatibility between the car’s Electric Vehicle Control Unit (EVCU) and Battery Pack Control Module (BPCM) could cause the electric motor to shut down.

The accompanying documentation went into detail. In May 2014, Fiat released a software update intended to improve the cars’ charging systems, diagnostics and range. A ‘limp home’ mode, which ensures that the vehicle remains drivable in the event of a serious malfunction, was loaded onto the older vehicles’ systems as part of the update. This introduced the incompatibility.

When the BPCM places the battery in ‘limp home’ mode, by design there should be no requests for the battery to accept regenerative current. However, as a result of the software incompatibility, the EVCU does not recognise the ‘limp home’ mode and directs current to the battery regardless, forcing the vehicle to conduct a general shutdown. The result was what in a conventionally fuelled car would be called a ‘stall’.

Dealers will now update the vehicle software to ensure compatibility between the components. The recall is expected to begin by 15 May 2015. According to the company, 5,660 cars are affected.
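The interaction Fiat described can be modelled as a toy state check. Everything below is illustrative (the names and logic are ours, not Fiat's firmware), but it captures why a controller that does not recognise 'limp home' mode forces a shutdown:

```python
# Toy model of the mismatch described in the NHTSA filing -- all names
# and logic are hypothetical, not Fiat's actual control software.
LIMP_HOME = "limp_home"
NORMAL = "normal"

def evcu_requests_regen(battery_state, recognises_limp_home):
    """Return True if the EVCU directs regenerative current to the battery."""
    if recognises_limp_home and battery_state == LIMP_HOME:
        return False  # intended behaviour: no regen requests in limp-home
    return True       # the buggy update: regen requested regardless

def vehicle_shuts_down(battery_state, recognises_limp_home):
    # By design, regen current into a limp-home battery forces a shutdown.
    return battery_state == LIMP_HOME and evcu_requests_regen(
        battery_state, recognises_limp_home)

print(vehicle_shuts_down(LIMP_HOME, recognises_limp_home=False))  # buggy update
print(vehicle_shuts_down(LIMP_HOME, recognises_limp_home=True))   # after the fix
```

The point testers will recognise: each module behaved to its own specification, and the defect only exists in the interaction between the two.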

A BALANCED VIEW

At the Cloud Expo in London’s docklands in March, TEST caught up with Ed Martin, the UK country manager for KEMP Technologies. KEMP is the third-biggest supplier of load balancers by volumes shipped. Typically, these devices sit in a data centre, behind the firewall, balancing traffic between servers to prevent one from being overwhelmed with requests (familiar to end users as a ‘503 Service Unavailable’ error).


TRICENTIS ANNOUNCES NEW PARTNERSHIPS

NATIONWIDE BUYS EGGPLANT TOOLS

India’s Cigniti Technologies, an independent provider of software-testing services, has entered a strategic partnership with Austria’s Tricentis. The partnership will allow Cigniti to benefit from Tricentis’s Tosca Testsuite, an end-to-end continuous testing suite. Cigniti’s Executive Director Srikanth Chakkilam said that the partnership would unite Tricentis’s “unique model-based approach to test automation” with Cigniti’s IP-led tools and technologies.

TestPlant, the maker of the eggPlant range of software quality tools, announced in April that it has signed a deal with Nationwide, one of the UK’s largest financial institutions, to support the company’s digital transformation.

Tricentis also announced a tie-up with Automic, a US-based specialist in DevOps and continuous integration, to produce what the companies called “next-generation testing”. “Our customers have been very clear about the need to move faster and thus adopt Agile and DevOps methodologies,” said Tricentis CEO Sandeep Johri, explaining the link-up. Todd DeLaughter, Automic’s CEO, added that the DevOps space was changing rapidly, but that automated testing remained essential and that partnering with Tricentis ensured Automic’s testing remained robust and thorough.

Ed flagged two aspects of KEMP’s work that may be of interest to software testers and developers. First, advanced load balancers come with features that can reduce the coding burden for developers. “For instance, developing session persistence in a new application is unnecessary because that's something the LB offers,” Ed pointed out. “I've spoken to developers who spent months coding that into their application and who have kicked themselves when I told them that the load balancer does it automatically!”
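Session persistence of the kind Ed describes can be sketched generically. This is a simplified illustration, not KEMP's implementation: the load balancer hashes a session identifier so the same client always reaches the same server, with no persistence code in the application itself.

```python
import hashlib

# Simplified sketch of load-balancer session persistence -- backend
# names are invented for the example; real appliances offer several
# persistence methods beyond cookie hashing.
backends = ["app-server-1", "app-server-2", "app-server-3"]

def pick_backend(session_id):
    """Hash the session cookie so a given session always lands on the
    same backend, sparing the application from tracking state itself."""
    digest = hashlib.sha256(session_id.encode()).digest()
    return backends[int.from_bytes(digest[:4], "big") % len(backends)]

# Repeated requests for one session are routed consistently.
assert pick_backend("sess-42") == pick_backend("sess-42")
```

This is precisely the "months of coding" Ed says developers can skip: the routing layer, not the application, guarantees the stickiness.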

eggPlant Functional is a test automation tool that uses image recognition technology to “look at” and interact with the screen as a real user would. In addition to eggPlant Functional and eggPlant for Mobile, Nationwide will also use eggCloud to centralise access to mobile devices for testing, as well as eggMan and eggBox to support manual testers and help speed deployment of eggPlant.

“The rapid growth in digitalisation of our services needs to be supported by testing systems that will ensure the quality of our services and applications, and that is why we have selected TestPlant,” said Andrew Young, Head of Testing Services, Nationwide.

Nationwide is the world’s largest building society, as well as one of the largest savings providers and a top-three provider of mortgages in the UK. The company has made a significant investment in information technology transformation since 2008. eggPlant will become the testing platform for all of Nationwide’s mobile and multi-browser testing.

Secondly, KEMP offers the use of its load balancer free of charge below 20 megabits of throughput, for instance in a test suite. KEMP issues major and minor releases each year to address new threats to security and to introduce new features, for which it does not charge supported customers. KEMP also runs a programme for service providers whereby the load balancer is zero-cost up to 10 megabits; customers can then change their licence according to changing usage, so there is no need to build for peak demand.



DEV COMPLACENCY ‘A RISK’

A survey of software developers found that 93% regarded the quality of their code as ‘good’ or ‘great’. This prompted the company that commissioned the survey, New Zealand-based Raygun, to warn that over-confidence posed a risk to businesses relying on software to produce sales.

“With 70% of software developers using a basic in-house reporting system or not using anything at all to detect errors in their applications, it’s easy to see why many may feel like they’re on top of things,” Raygun CEO John-Daniel Trask said. “The simple facts are that many developers assume their software is free of issues because their users aren’t reporting them, or their current systems did not have the smarts to discover an underlying issue.”

The survey also found that fixing errors consumes expensive developer time: on average, developers spend seven hours per week addressing software bugs and making fixes. A Cambridge University study published in 2013 estimated the global cost of fixing software bugs at £205 billion ($312 billion) annually.

Raygun provides error-tracking services to software developers, so it has a stake in highlighting this issue. “Many developers try out a product like Raygun, and discover problems that they didn’t know existed, which have been present for months, even years,” Trask said. “Generally speaking, only 1% of users will ever take the time to report problems, so your software quality is probably 100x worse than you actually think it is.” The company cited research showing that 84% of users will give up on an app if it crashes more than twice.

Trask, who set up Raygun with co-founder Jeremy Boyd in late 2012, said that he has seen his fair share of customers who under-estimate the number of bugs their software contains. “Almost everyone that signs up for Raygun believes they have far fewer bugs in their software than Raygun identifies. It can be quite surprising for many developers how many things they didn’t know about, or continue to be notified about when customers are interacting with their software.”

SURVEY: LESS THAN 50% OF MOBILE APPS ARE IN USERS’ OWN LANGUAGE

At least half the planet is being forced to use mobile apps that are not available in their native language, according to the mobile division of One Hour Translation, an online translation agency. A survey conducted by the company showed that only 49.7% of respondents reported that all the apps on their device were in their own language.

Unsurprisingly, there were large differences between countries. In the United States, 78% of respondents answered that all apps on their mobile device were in their native language; the UK was close behind on 71%, and in Canada and Australia the figures were 66% and 65% respectively. The picture in non-English-speaking countries was very different. In Japan, only 37% of respondents said that all their apps were in their native language, with similar levels in Italy (34%) and Germany (33%). In the Netherlands, 16% answered either that “none” or “only a few” of their apps were localised into Dutch.

A second survey conducted by the company showed a general demand for mobile games and news apps in the native languages of users. “The results of our two new surveys are extremely clear: apps should be more effectively targeted to each specific market,” said Ofer Shoshan, CEO of One Hour Translation. “The first survey shows that mobile application developers can win additional users by leveraging localisation. The second survey strengthens the insight that people clearly prefer apps in their native tongue.”

The two surveys, conducted in February 2015, were carried out jointly with Google Consumer Surveys, based on two representative samples of 800 respondents each. Each survey asked 100 respondents from each of the following countries: the US, the UK, Australia, Canada, Italy, Germany, the Netherlands and Japan.




JAMES BRAZIER, EDITOR, TEST MAGAZINE

THE MEASURE OF MAN The Science and Technology Committee of the UK’s House of Commons published a report on biometric software and systems on 25 February. The panel noted the lack of any standardised way of testing, and voiced concerns about the lack of rigour in the testing of major systems that were already operational. James Brazier reports.

The panel’s report, ‘Current and future uses of biometric data and technologies’, found that identities were valuable and that preventing them from being stolen was important. Biometrics offers a potentially more secure way of doing so than do passwords, ID passes, and mothers’ maiden names. For this reason, the biometric technologies market is likely to grow quickly: the report quoted analysts who project it to expand from $8.7 billion in 2013 to nearly $27.5 billion by 2019.

In its testimony, Innovate UK, formerly known as the government’s Technology Strategy Board, described the testing of biometrics as “difficult”. A biometric algorithm must be tested before launch against an artificial or simulated database containing biometric data samples. Several witnesses pointed to the difficulty of building a comprehensive test database from an unbiased population sample. They noted that biometrics was a “probabilistic science” that was unlikely ever to yield full accuracy, and said it was up to the end user how many false positives they were willing to tolerate.
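The witnesses' point about tolerating false positives can be made concrete with a toy matcher. The scores below are invented for illustration; the mechanism, choosing an acceptance threshold that trades false accepts against false rejects, is the general one:

```python
# Toy match scores -- invented for illustration, not from any real system.
genuine = [0.91, 0.84, 0.77, 0.95, 0.66]   # same-person comparisons
impostor = [0.42, 0.58, 0.71, 0.30, 0.49]  # different-person comparisons

def error_rates(threshold):
    """False accept rate and false reject rate at a given threshold."""
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

# A stricter threshold lowers false accepts but raises false rejects;
# the end user decides which error matters more for their application.
print(error_rates(0.6))  # looser threshold
print(error_rates(0.8))  # stricter threshold
```

On these invented scores, the loose threshold accepts one impostor in five but rejects no genuine users, while the strict one accepts no impostors but rejects two genuine users in five: there is no setting that makes both errors vanish.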


Ben Fairhead from 3M likened the process to a drugs trial. He said that to test biometrics effectively required the participation of a large group of people, and that this tended to be expensive. This observation was supported by Marek Rejman-Greene of the UK Home Office and by Erik Bowman from the US defence contractor Northrop Grumman.

Others questioned the value of pre-launch testing altogether. Some experts argued that developers must be on hand to implement changes post-launch, when the software was exposed to market conditions. In the words of Lockstep Consulting: “...testing on artificial or simulated databases tells us only about the performance of a software package on that data. There is nothing in a technology test that can validate the simulated data as a proxy for the ‘real world’.”

The report noted that previous reviews of the technical literature on biometric device testing had highlighted a “wide variety of conflicting and contradictory testing protocols”, including “single organisations” producing



“multiple tests, each using a different test method”. It noted the disappearance of the Biometrics Assurance Group, which formerly interpreted the outcomes of such tests, concluding that: “When testing does occur, the continued use of a variety of testing protocols by suppliers makes it difficult to analyse and compare, with any degree of confidence, the performance of different systems.”

There was also a serious concern that untested systems had been put into operation. Some UK police forces are already harvesting the photographs of those taken into custody, including those who were not subsequently charged, and storing them on a database. The police force in Leicestershire, for instance, had piloted the ‘NeoFace’ system that matched facial measurements against 92,000 images in the force’s database.

Many of the witnesses pointed out that public confidence in and enthusiasm for biometrics was low, and that this reflected concerns over personal privacy and the security of data centres after a number of high-profile hacks. Large, centralised databases are essential if biometrics are to be used for the purposes of identification. However, the report said they are prone to ‘function creep’, defined by the European Commission as “technology and processes introduced for one purpose [and] extended to other purposes which were not discussed or agreed upon at their implementation”.

Nevertheless, industry witnesses were optimistic; 3M reported that “staggering increases in the speed and accuracy of automated biometric search engines” had allowed the science of biometrics to advance rapidly. Others said that it required more investment by early-adopter industries such as banking before biometrics would become ‘second nature’. Alastair MacGregor, the UK’s Biometrics Commissioner, Barclays bank, for instance, this year plans to launch a ™ reported that a “searchable national database biometric reader that scans a finger for its unique of custody photographs” had “been vein patterns. Some mobile phones, put into operational use” by notably Apple’s iPhone 6, and laptops police forces in “the apparent Using the power of ultrasonic sound waves for a new generation of mobile fingerprint authentication. now have fingerprint readers. absence of any very rigorous

QUALCOMM SNAPDRAGON SENSE ID 3D FINGERPRINT TECHNOLOGY ®

testing of the reliability of the facial matching technology that is being employed”. He added that the Home Office’s Centre for Applied Science and Technology was currently “looking at the algorithm applied to images on the police national database” and that, at the moment, the software was being used for “investigatory purposes only”: “No one is being prosecuted simply on the basis of, ‘We’ve got an automatic match’”, he noted.

Security & Privacy Fingerprint data stays on-device while maintaining security across services.

Accuracy & Consistency

Convenience A password-free experience offers ease of authentication in real-world situations.

Highly detailed, consistent image quality, even through contaminants, like lotion, oil, sweat and sunblock.

Ecosystem Opportunity

Elegant Design

Utilizing the FIDO UAF biometrics protocol, an open industry standard for online devices, designed to enable secure biometric authentication.

Sensor detects through cover glass, plastic, certain metals, and sapphire, creating opportunities for differentiated device designs.

QUALCOMM SECURE MSM TECHNOLOGY ®

The report said that despite the abandonment of the UK’s planned national ID card scheme in 2010 (such cards are common in other countries), the use of biometric identification by the British state had increased. The panel was particularly shocked by the government’s failure to publish a promised strategy on such law-enforcement tactics.

The hardware-based foundation of all Qualcomm Security Solutions.

“Despite undertaking to publish this document at the end of 2013, we were dismayed to find that there is still no Government strategy, no consensus on what it should include, and no expectation that it will be published in this Parliament. This is inexcusable,” the authors wrote. They demanded a “comprehensive, cross-departmental forensics and biometrics strategy” to be published by the end of this year.

Nevertheless, the panel said it was “imperative” that biometric systems be tested before launch and that rigorous testing and evaluation must take place both before and afterwards. The report said it was “highly regrettable” that the police had gone ahead before this had taken place, and it recommended that the government ascertain how biometric testing aligned with current UK software testing standards.

More generally, the report voiced concern about the growth of ‘unsupervised’ biometric systems on mobile devices and, in particular, the ‘second generation’ biometric technologies that can authenticate identities covertly and link to other types of ‘big data’ in order to profile individuals. Witnesses noted that the private sector could feasibly grab images taken from surveillance cameras and match them against social media profiles in order to profile footfall customers for sales targeting.

COMPREHENSIVE AUTHENTICATION

On 2 March the US chipmaker Qualcomm announced the world’s first ultrasonic fingerprint recognition ID system. Its new Snapdragon Sense ID platform incorporates algorithms managed by the Qualcomm SecureMSM foundation, the Qualcomm Biometric Integrated Circuit and custom sensor technology, as well as Nok Nok Labs’ S3 Authentication Suite and the FIDO (Fast IDentity Online) Universal Authentication Framework (UAF) protocol for biometrics and online security. The sensor captures three-dimensional acoustic details within the skin of the user. According to the company, this means it can detect more subtle features than can the current generation of capacitive-based sensors, which use an electric current to create an image of the thumbprint. Qualcomm said this enhanced the biometric information and made it harder to ‘spoof’ the thumbprint. More noticeable to users will be that the system works through glass, aluminium, steel and plastics, much extending the areas of a mobile device on which it can be placed.


APRIL 2015 | www.testmagazine.co.uk



COVER STORY

NOT-SO-SMART CITIES?

Great Britain has embarked on an ambitious plan to replace the electric and gas meters in people’s homes and businesses with ‘smart’ devices that monitor and transmit data about energy consumption. Critics, however, are predicting a fiasco in the making, flagging a host of potential problems with the scheme. Here, James Brazier sets out the criticisms and speaks to SQS, one of the companies involved in testing some of the new systems.




JAMES BRAZIER EDITOR TEST MAGAZINE

JACK COXETER LEAD CONSULTANT OF POWER AND COMMUNICATIONS SQS

Smart meters have the potential to be the building blocks of smart cities. In theory, by monitoring energy usage, they can ensure that homeowners and businesses use resources in the most efficient way possible, bringing down utility bills and emissions, expediting customer service, and even improving the balance of payments for energy-importing countries. Yet the technology is being rolled out during a period of breathtaking technological change, and there are mounting questions over whether the smart meters can keep up.

Latest among the sceptics is the Institute of Directors (IoD), a body that represents British bosses. It released a report in March 2015 entitled ‘Not too clever: will Smart Meters be the next Government IT disaster?’, which pulled together the main strands of criticism levelled against the smart meters. The IoD report said the UK scheme, which involves the installation of over 100 million devices, was an ‘immensely complex programme’, and it queried whether the international experience of smart meters suggested that they were a worthwhile investment. The IoD highlighted the case of Germany, where a study commissioned from consultants EY concluded that the smart meters did not withstand a cost-benefit analysis. The report described the energy savings as ‘paltry’.

The report noted that the success of the project depended on data transmitted not by WiFi or Bluetooth, but by a little-known suite of wireless protocols called ZigBee, designed to send data long distances and through solid objects. The UK’s Department of Energy & Climate Change (DECC) and the smart meter manufacturers are developing a new ZigBee standard that will operate at the lower frequency of 868 MHz. The IoD report claimed the lifetime of the ZigBee chips was unlikely to be longer than four years, and that ZigBee’s UK-only specification will need years of debugging after its launch.
Regarding the software, the report questioned whether Elexon, the company that handles the UK’s current daily volume of 1.25 million meter readings, was prepared for the 20-fold increase in readings that the smart devices are expected to generate. “It is far from clear that the utilities will have the scalable IT infrastructure in place and in time to cope with this new flood of data,” it suggested. “That would suggest they may choose only to collect enough data for automated meter reading (AMR), rather than delve into consumption analytics.”

Security was another major concern. According to the report, “many argue” that the security dimension was mistakenly omitted at the beginning of the development process and then “bolted on at a later date”. It argued that a hacked smart meter would tell cyber-criminals when a homeowner was out


of his or her house; or allow homeowners to doctor their own readings to reduce their bills; or even allow intruders to switch off the electricity or gas supply. Were a Trojan horse to switch off the power en masse, it would do great damage to the national grid, the report noted.

Interoperability was another issue flagged: “It has emerged that an undisclosed number of the first generation of smart meters, known as SMETS 1 (Smart Metering Equipment Technical Specifications), will not be interoperable with Smart Meter infrastructure to be built by 2020, meaning a large number of them will have to be replaced with SMETS 2-compliant meters, due to obsolescence after just a few years of service.” In summary, the report said that Britain faced a potentially never-ending cycle of replacing obsolete meters.

Seeking an answer to the IoD’s software-related criticisms, James Brazier spoke to Jack Coxeter, SQS’s Lead Utilities Consultant. SQS has extensive experience in testing smart-meter technology and has worked with the UK Department of Energy and Climate Change (DECC) to help structure the testing regime and regulation.

James Brazier: To what extent was testing a part of the software development life cycle for SMETS?

Jack Coxeter: Testing is a key element. The amount of integration, the number of vendors and the use of devices mean it takes significantly longer than normal IT delivery, and the challenges can be much greater.

JB: How did testing fit in with the rest of the production schedule?

JC: Testing in smart metering needs to run in parallel with supporting production and has longer stability periods. In SMETS 1 (Smart Metering Equipment Technical Specifications, version 1), many of the changes and elements delivered were not part of the core applications, so some were easy to schedule. Other, more back-end changes typically need to conform with very fixed release schedules, and these required more planning and added constraints to delivery.

SQS POLL QUESTION: Thinking about your current energy supply, which, if any, of the following would you like to see happen in the future? (Please select all that apply)

• Power cuts in my house and/or area to be identified and acted on faster: 19%
• New services, tariffs and offers that reflect how I actually use energy: 52%
• Improved customer service based on my energy supplier knowing more about me: 22%
• Billing that I feel I can trust to reflect my energy usage: 48%
• Being able to understand what uses up the most or least amounts of energy in my home: 44%
• Being able to switch supplier more easily: 33%
• None of these: 7%
• Don't know: 8%



JB: Did any particularly difficult or unanticipated problems come up in testing?

JC: In short, end-to-end integration took significantly longer than expected, and I suspect the same can be expected with SMETS 2 (version 2) as was encountered with SMETS 1. Setbacks were common, and often delivery was held back by late delivery from third-party vendors.

JB: Adam Afriyie, a member of parliament, has argued that it would be better to attach a sensor to the old mechanical readers and then transmit the data via a smartphone app, as a much cheaper and more future-proof solution. Is this a good idea? Some people argue that peering at a little screen fixed to the wall is outdated and unlikely to result in consumer satisfaction, when they could be using a phone app.

JC: I think suppliers are already considering how consumers will interface with the data, such as apps in addition to in-home energy displays (IHDs). However, my understanding is that if IHDs were to be dropped completely, it would require regulation change. I’d also say that lots of the ‘dumb’ metering equipment currently in use is very old, so needs updating anyway.

JB: Critics also say that the industry isn’t ready for the 20-fold increase in daily meter readings generated by the smart devices. Is scalability an issue?

JC: Essentially, meter reads will predominantly become half-hourly per consumer, so yes, they will increase significantly with smart. But compared to other industries – in-home video streaming, for instance – the quantity of transactions is not massive; there are systems that require far more data and far more storage.

WHAT DOES THE PUBLIC THINK? In November 2014 a YouGov survey commissioned by SQS found some scepticism among the British public about the new meters. At the same time, however, the survey revealed a desire for better energy control and personalised treatment from suppliers, of the kind which can be enabled by smart metering. Of those sampled, 27% thought their energy supplier’s record of inaccurate billing, poor customer service and delays in fixing problems did not augur well for the success of smart meter implementation, but 41% agreed that new suppliers – who are eager to prove themselves and aren’t held back by old technologies – could provide a better service. Just over half would welcome services, tariffs and offers that reflected how they actually use energy, while 22% would like improved, personalised customer support. 48% said they would like accurate billing they can trust, and 44% wanted a clear understanding of what uses up the most and least energy.

JC: Nevertheless, scalability is an area that will be tested extensively, and solutions are being designed to enable future scaling.

JB: Is interoperability between SMETS 1 and SMETS 2 a worry, given their differing requirements?

JC: One of the key elements of SMETS 2 is that it defines a protocol standard, and there is certification to ensure the meters meet the standard, which will help interoperability. The rules governing SMETS 1, on the other hand, were looser. It’s also important to remember that there are only about 1.5 million SMETS 1 meters installed at the moment, so the vast majority will be SMETS 2.

JB: The security of the smart meters was another major worry. The IoD says that the security elements were “bolted on” at a later stage.

JC: There were good reasons why DECC improved the security model. Security is certainly very high on everyone’s agenda and there will be significant decisions on how systems are hosted and connected to the DCC. Those using the Data and Communications Company (DCC, the body which manages the SMETS communications infrastructure) will need to be ISO 27001 compliant, which is a high bar to jump over, and suppliers and other users are likely to bring in specialist security experts to help in this area. Security is certainly an area that DECC, the Communications-Electronics Security Group, the DCC, suppliers and data network operators will be taking very seriously, and testing will play a very important role in de-risking the delivery. The meters themselves are also undergoing certification, and this will include protocol (ZigBee and DLMS) certification. In addition there will be penetration testing and testing of the DCC key infrastructure. Security in SMETS 2 will be at levels not seen before in smart metering. This presents challenges to the delivery in terms of complexity, but because this is critical national infrastructure it’s essential that security is at the heart of testing. Certainly the meters, the DCC and its users will be expected to test security very thoroughly and, understandably, it’s built into the SEC regulatory requirements.

JB: When smart meters go ‘live’, will testing continue? How will this work?

JC: Testing in smart metering doesn’t end at go-live, mainly because the live environment throws up new challenges that are hard to replicate in test. An enduring test process will need to be created to support this, and it will include the ability to test meter firmware updates effectively ahead of deployment.

JB: What about the IoD’s comparisons to other countries? They would seem to cast doubt over the claims being made for SMETS’ impact on demand and supply. And why would privatised energy companies want people to consume less energy, which in the end will damage their own profitability?

JC: I’m not sure you can make a direct comparison with Germany and Italy. What the UK is attempting is significantly different from most other countries in terms of how advanced it is. It is far more ‘all singing, all dancing’ than has been attempted elsewhere, and encompasses both gas and electricity. As for whether private energy companies have an incentive to persuade people to spend less on energy, it’s worth remembering that smart meters are an opportunity to develop new services around them, such as boiler installation and maintenance. And as more and more wind and solar enter the energy mix, and electricity generation becomes more weather-dependent, smart meters could again help to manage loads.




MOBILE TESTING ERAN KINSBRUNER TECHNICAL EVANGELIST PERFECTO MOBILE

WEARABLES, THINGS AND THE USER EXPERIENCE

The mobile experience has changed end-user expectations for good. Many reports claim that if an app crashes or acts buggy more than twice, the abandonment rate for that app is more than 80%. End users have become what we might call a high-expectations market – apps must work all of the time in order to gain usage. As the market for the Internet of Things (IoT) and wearables grows, this has never been truer. Here Eran Kinsbruner, Technical Evangelist at Israel’s Perfecto Mobile, explains why the user experience is paramount.

Let’s reflect on the IoT market. While ‘smart roads’, ‘smart parking’ and ‘smart lighting’ can greatly improve our lives (and commute!), let’s take a minute to focus on the most relevant and widespread extension of IoT – wearables.

WHAT WE CAN LEARN FROM WEARABLES I tend to consider wearables the first wave of widely available IoT products. There’s certainly fragmentation in the wearables market; just look at smart watches, for instance. User-interface (UI) requirements vary for each device – we’ve got round screens, square screens and rectangular screens to choose from. When Apple Watch is launched this month, the fragmentation of smart watches will take on a whole new meaning, as apps designed for smart watches must now support multi-OS. It’s great to see companies like Salesforce



and Nike already on board with Apple Watch-specific apps, but what’s even more interesting is to see the evolution of these apps as they adjust to a new device. The user experience on a smart watch is very different to the user experience on a tablet. An airline’s smart watch app most likely won’t be used for browsing flights, but instead for gate updates and eventually check-ins. Nike’s app isn’t a retail app, but a running tracker. This is what we must take away from the spread of wearables – the user experience is again different to the full mobile user experience. Limited hardware, memory, battery life and other challenges force wearable app designers to narrow the focus of an app and concentrate on the quality. Wearables are relatively new, and if they don’t work it will take much longer for the evolution to catch on. (Never forget Google Glass.)

Apply this to the greater market of IoT. If a “thing” doesn’t work, work well, and work all of the time, its value is shot to pieces. As new requirements arise and devices expand to meet them, such as smart street lights that use sensors to adjust to the sunrise or sunset in order to save on energy costs, the value they provide is contingent on quality.

QUALITY GETS COMPLICATED AS FRAGMENTATION GROWS The mobile era introduced device fragmentation, and as IoT evolves, fragmentation only grows. A large portion of IoT-related devices are related to mobile apps, like Nest and Sonos. How do you prepare for the next generation of mobile when establishing IoT products? Device fragmentation is a nightmare for anyone putting together a testing environment. There are new devices hitting the market daily, and each one’s relevancy period is shrinking as release cycles become shorter. Yes, real devices are a requirement for testing. Emulators may work well at the beginning of a test cycle, but they don’t offer a true end-user experience. As mentioned above, mobile app end users are a high-expectations group, and the holes that emulators miss just won’t do when releasing final products. Here are some questions to consider when building a test environment that will have a positive impact on a quality strategy:

• What percentage of device coverage is enough for you to ship an app? We at Perfecto Mobile recommend 95% device coverage, but for most that may be unrealistic. Will 30% coverage do? Probably not, but it’s a start. How about 50%, or 80%? It depends how important your mobile or IoT users are to you.

• What matrix of devices is most relevant for your end users? Poll them! You might be surprised to find that many of them are operating on older legacy devices, or prefer tablets vs. phones when using your specific app.

• What about networks? Does network compatibility impact your app? Does it need wifi? Thinking about this will strengthen the conditions of the test environment that you build.

• Who’s testing? Dev/test teams are located all over the world. To support them, build a cloud-based test environment. It facilitates sharing and collaboration much faster, cheaper, easier and more securely than pre-cloud methods, like shipping phones around or travelling with them.
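The device-coverage question can be made concrete with a little arithmetic: rank devices by their share of your user base and keep adding devices until the cumulative share clears your target. A minimal Python sketch – the device names and percentages are invented for illustration, not real market data:

```python
def devices_for_coverage(usage_share, target=80):
    """Pick the fewest devices whose cumulative usage share (in %) meets the target.

    usage_share: dict mapping device name -> percentage of your user base.
    """
    chosen, covered = [], 0
    for device, share in sorted(usage_share.items(), key=lambda kv: -kv[1]):
        if covered >= target:
            break  # target already met; stop adding devices
        chosen.append(device)
        covered += share
    return chosen, covered

# Hypothetical usage analytics gathered by polling your end users:
share = {"Phone A": 40, "Phone B": 25, "Tablet C": 15,
         "Phone D": 10, "Legacy E": 6, "Watch F": 4}

devices, covered = devices_for_coverage(share, target=80)     # three devices suffice
devices95, covered95 = devices_for_coverage(share, target=95) # five of the six needed
```

The jump from an 80% bar to the recommended 95% illustrates the cost curve: the last few percent of coverage come from the long tail of legacy and niche devices.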

In addition, we’re seeing the rise of a multi-screen trend. Transactions are started using a laptop, for example, then continued on a phone, and finished on a smart watch screen. The user experience needs to be flawless across ALL channels and the UI needs to self-adjust. Consider this as part of the user experience as well.

FORESEEABLE CHALLENGES WITH IOT We’re smarter now than we were when mobile came onto the scene a while back. We’ve learned from it, adjusted, and innovated well beyond. The challenges that arise with IoT are similar to those of mobile, but with more complexity. For instance, everything is connected. Look at a smart watch again. It connects to your phone and then becomes an extension of you. If something doesn’t work, it interrupts the entire connectivity process. There’s risk involved in not sufficiently testing the quality of connectivity throughout all devices.

Again, similarly to mobile, the question posed is: does an IoT device or app work, does it work well, and does it work all of the time? This is so crucial to the end-user experience that the answer needs to be YES to all three questions. Take a smart home, for example. If my garage door is part of my smart home and it works well 50% of the time, to me it doesn’t work at all. The purpose has been defeated if the answer to all three scenarios isn’t YES.

Making sure IoT apps undergo sufficient functional and performance testing, then post-release monitoring, is the answer to this challenge. Functional testing will determine if it works. Performance testing will tell how well it works – how quickly it responds, how it performs under varying network conditions, and so on. Monitoring will show the stability of the app, because successful repeated usage is how the user experience will grow.
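The works/works-well/works-always questions map naturally onto monitoring output. A minimal sketch, assuming probes are recorded as simple pass/fail results and a 99% availability bar – both the probe format and the threshold are illustrative assumptions, not a real monitoring API:

```python
def availability(probe_results):
    """Fraction of successful pass/fail probes of a device or app."""
    return sum(probe_results) / len(probe_results)

def verdict(probe_results, threshold=0.99):
    """For IoT, 'mostly works' is failing: below the bar, it doesn't work."""
    return "works" if availability(probe_results) >= threshold else "doesn't work"

# The garage-door example: succeeding half the time means it doesn't work at all.
flaky = [True, False] * 50          # 50% availability
solid = [True] * 99 + [False]       # 99% availability
print(verdict(flaky))               # doesn't work
print(verdict(solid))               # works
```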

KEEP UX ON THE MIND Next generation apps that will incorporate IoT are clearly on the rise. We have learned a lot from mobile, whether it’s how end-user expectations have become so high or how to navigate device coverage. While the concept of IoT presents its own set of challenges with varying complexity levels, it’s smart to take what we’ve learned from mobile and apply it to a more innovative phase of digital connection, IoT. The user experience will make or break advancements in this field, and it’s wise to always keep that ‘UX’ on the mind when building connected devices and IoT.



INTERVIEW

THE CASE FOR TESTING, RE-MADE

KAREN THOMAS LEAD USER ACCEPTANCE TEST MANAGER BARCLAYS BANK

In April 2013, Karen Thomas, Lead User Acceptance Test Manager at Barclays bank, wrote an article for TEST in which she made the case for testers in financial services. Two years on, James Brazier catches up with her and asks what’s changed.

James Brazier: Two years ago you said that project managers had a tendency to encroach on the testing window, after letting other parts of the process overrun. Are you seeing any improvement?

Karen Thomas: Not really, no! Within my peer network we are still seeing project managers work backwards from the live date, and when other phases of the project overrun it’s the testing that suffers. The upshot is either a greater risk of things going wrong post-launch, or throwing contractor bodies at the test phase. The issue with that, of course, is that the external contractors don’t always fully understand what the solution is meant to look like, which reduces their effectiveness and the chances of the end product being as good as it could be.

It still surprises me how little value is placed on the testing processes. I think it’s the way we measure success. I totally understand the whole project life cycle. There’s that “push” to finish, and people tend not to hear if you’re calling out that there are issues. Of course, there are always compromises – things that could be de-scoped and delivered later.

JB: Last year’s meltdown at RBS and its subsidiaries was a good example, perhaps?

KT: I’ve read and heard from my network that the RBS incident was purely down to the fact that they off-shored the function whose people used to know how things worked. The offshore workers didn’t understand the implications of the changes they were making, which is why things went wrong. “Cut the bottom line” is the mantra, but ultimately the costs can end up outweighing whatever savings were identified in the first instance.

Testers need to be perceived differently. We don’t want to “break things”. We’d love to be in that situation where it’s just a tick-box exercise. But when you have functional testers who don’t understand the product, we end up in a situation where we have to accept some change that doesn’t fit the customer’s needs. Companies need to value the knowledge of testers if they’re to create a great quality product.

JB: Have working practices like agile improved the situation at all?

KT: Not all projects lend themselves to an agile methodology.
This is particularly true of those involving legacy systems and regulatory changes, which are a big issue for us in the financial sector. Regulators want a great deal of control over the development process and the


changes to be delivered, and when this information is generated by legacy systems this limits the scope for agile working. Nevertheless, it’s still possible to work with an agile mindset. Scrums, war rooms and co-location are all good practices and they can help. In financial services, it’s important to understand the criticality of what we’re doing. OK, so no one will die as a result of our software, but if you get it wrong, ultimately the customer suffers, as in the RBS instance: they can’t access their money, or they could end up with a bad credit rating.

JB: You’re a great believer in talking to people in person. Why?

KT: Face-to-face communication is really key to understanding people’s perspectives. Getting the parties together, co-located in the same room, works really well. Of course, communication tools such as Webex or video conferencing can be useful. But really, to understand how the business works, you have to be located with them. It’s the business that has the day-to-day contact with customers. They understand what that means for the customer at the end point. That’s absolutely critical.

Customers no longer have time to go through lengthy processes to do things. They have to do more online, as do we. Customers should be the priority, and their feedback is crucial. Prototyping is a good method of receiving instant feedback, and more customer target groups are being used in financial services. Data capture, for instance, is often quite standard when it comes to collecting a customer’s basic details, but when there are additional elements that need to be gathered, for instance in a loan-approval process, it’s really helpful to see how this works from the customer’s viewpoint, rather than that of a company insider. As a customer myself and a tester, I’m sure I can be a bit of a nightmare when it comes to the customer experience!




METHODOLOGY MIKE BARTLEY FOUNDER AND CHIEF EXECUTIVE TVS

WHAT WE LEARN FROM HARDWARE TESTERS

It is sometimes easy to forget that computer hardware requires testing just as much as software. The methods of hardware testing, such as constrained random stimuli testing, offer lessons for software assurance and verification. Here Dr Mike Bartley, the founder and chief executive of TVS, explains what software assurance can learn from the processes used to verify the designs of processors.

Using random stimuli to verify software may at first seem like an odd idea. It is, however, a technique that is used extensively for hardware designs. To verify a processor design from, say, Intel, ARM or MIPS, we will execute millions or even billions of programs made up of random instructions. In effect, hardware designs are written using a programming language, with extensions for describing timing delays and parallelism, so applying this technique to software is not such a giant leap.

AN INTRODUCTION TO CONSTRAINED RANDOM TEST BENCHES Figure 1 gives a simplified overview of the essential elements of a constrained random test bench used in hardware testing:

• We need some way to generate stimuli for the design under test, and we need to ensure that these are legal. So, following the processor example above, this might mean we only generate valid instructions. This can be more difficult than first meets the eye. For instance, a branch instruction needs a valid target address.


• We need some way to apply the stimuli. For the processor, we need to compile the program and load it into memory.

This explains the ‘active’ elements of the test bench, which stimulate the design, but we also need ‘passive’ elements which check what is happening when we run the tests. Remember, the test bench runs without human intervention, so we need to check what it is doing. We use the following three test bench elements:

• The ‘Response’ block collects the response of the design under test to the stimuli. This can vary from collecting memory contents after applying the stimuli to collecting all architectural state and memory at the end of every instruction.
• A checker which checks those responses. More on this below.
• A ‘coverage’ block which measures what actually happened. Again, more on this below.

So, what is the advantage of this approach? Well, the test bench is independent. We can run it without any intervention and, because the stimulus generator is random, the tests generated should all be unique. But this is only useful if the tests are continuing to hit interesting corner-cases and can identify failures which can then be debugged.

The ability to hit those corner-cases depends on the quality of the generator. We need some way to measure this: hence the coverage block. This could be measuring structural coverage (such as statement coverage) or ‘functional coverage’. The latter is a technique to measure which functions of the design have been executed. For example, for the processor we might want to measure the distance jumped to the relative address of the branch target (minimum and maximum jump distances might be interesting corner-cases).

Identifying failures is the responsibility of the checker. It must try to detect when the design under test acts incorrectly, but it must also try to do this in a way that aids debug. So, for example, we might check memory contents at the end of the stimulus (i.e. at the end of the execution of the randomly generated program). However, this might miss bugs (e.g. if the program did not perform any writes to memory, or the cache was not flushed at the end of execution) and it might also be hard to debug. This is because we have to understand a randomly generated program that we didn’t write; identify the point at which it wrote to the corrupt memory location; and then identify why it wrote the wrong value to that memory location. So another checker might check the content of architectural state (e.g. register content), cache and memory after every instruction completes. Such a checker will fail very close to the cause of the failure, thus reducing debug time. However, such a checker is very complex to write (although companies such as ARM do produce ‘fast’ models of the processor for software developers that can double as checkers for the test bench).
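The active and passive elements described above can be sketched end to end. The following Python analogue is illustrative only – a toy stand-in for the hardware flow, not the C++ libraries discussed later in this article – with a constrained stimulus generator, a design under test, a checker, and a coverage block:

```python
import random

def gen_stimulus(rng, max_len=8):
    """Stimulus generator, constrained so every stimulus is 'legal':
    a list of small integers of bounded length."""
    return [rng.randint(-10, 10) for _ in range(rng.randint(0, max_len))]

def design_under_test(data):
    """A sort routine stands in for the design being verified."""
    return sorted(data)

def checker(stimulus, response):
    """Passive checker: no data lost or invented, and output in order.
    (A real checker would use an independent reference model.)"""
    assert sorted(stimulus) == list(response), "wrong contents or order"

coverage = {"empty": 0, "repeats": 0, "already_sorted": 0}
rng = random.Random(42)          # seeded, so any failure is reproducible
for _ in range(1000):            # the bench runs with no human intervention
    stim = gen_stimulus(rng)
    resp = design_under_test(stim)
    checker(stim, resp)
    # Coverage block: record which interesting corner-cases were hit.
    coverage["empty"] += not stim
    coverage["repeats"] += len(set(stim)) < len(stim)
    coverage["already_sorted"] += stim == sorted(stim) and len(stim) > 1

print(coverage)  # each corner-case should be hit many times over 1000 runs
```

If a coverage bin stays at zero, the generator’s constraints need loosening – exactly the feedback loop the coverage block exists to provide.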



METHODOLOGY COVERAGE

STIMULUS GENERATOR

STIMULUS

DESIGN UNDER TEST

RESPONSE

• Testing can be done in a more stringent and random way. This can address any corner cases arising from regular tests not covering certain features. This also ensures that the tester does not miss out any testcase set or features.

INFRASTRUCTURE

The following infrastructure can be used to build constraint-driven random testing for software programs:

• Constraint-based randomisation is enabled through the use of an external randomisation library called CRAVE, available via public repositories on GitHub. Alternatively, other modes of randomisation, including C++’s built-in randomisation, can be used.

• TVS has developed a C++ functional coverage library to enable the coding of functional coverage points in test benches. By calling functions from the library, the user can implement coverage points based on variables in the program being tested. The functions automatically track the values taken by those variables and match them against those specified in the coverage points. At the end of the test, a report can then be produced on the coverage points hit during the test.

• The functional coverage achieved is reported in both CSV and XML format (using the "Tiny xml" library add-on). This format is compatible with the TVS asureSIGN™ tool, so that coverage achieved can be viewed against requirements.

• Note that hardware verification engineers typically build test benches according to a well-defined methodology such as the Universal Verification Methodology (UVM). TVS has implemented a counterpart of the UVM in C++ (a library of base methodology classes including factory constructs, agents, monitors, scoreboards, drivers and sequencers) known as TVM, which is freely available for download. If you want to investigate this approach to software testing, you can download the TVM for free and re-run the bubble sort case study, or try it on your own software.

Figure 1: Constrained Random Test Bench

A SOFTWARE CASE STUDY

As a case study, we used a bubble sort program which can take lists containing data all of the same type (integer, short, long, char or string) as input. The sorting can be done in ascending or descending order. We implemented functional coverage points in the test bench – the following partial list gives some examples:

1. Type of data: integer, short, long, char, string.
2. Data to be sorted in ascending or descending order.
3. Repeated data elements in the input list.
4. Data already sorted in ascending or descending order.
5. An empty list.

It is also possible to perform ‘cross-coverage’ by taking cross-products of the above. For example, by crossing 1 and 3 we want to see repeated data elements in lists containing integers, shorts, longs, chars and strings respectively.

Code coverage was also measured. Code coverage allows the user to analyse which portions of the code in the program are executed, and how many times. For example, the Gcov utility provides information on statement coverage (i.e. how many times each line of a program is executed by the tests), branch coverage, and so on.

CHECKERS

The checker was implemented by adding a number of checks to the output of the bubble sort. For example:

• The output list contains exactly the same data as the input list.
• The data in the output list is sorted in the correct order.

ADVANTAGES

For software testing, hardware concepts such as constrained random stimulation and functional coverage offer the following advantages:

• Constrained random testing will hit corner cases which manual testing might miss – or which testers might not even think of.
• The testing of architectural or design features can be measured through functional coverage.
• Management gets a better idea of the current status of the software, as a ratio of tested-and-implemented features vs. non-tested features.
• Code coverage ensures that all code in the program under test is exercised.
• Testing can be set up so that coverage is automatically collated after every set of regressions to give the current status. This also gives a history of increasing coverage, which helps to predict completion times.

HIGH FIBRE

TEST caught up with Jonathan Lewis, Market Manager at Huber+Suhner, a Swiss manufacturer of fibre-optic cables, to discuss other lessons software testers can learn from their hardware counterparts. Fibre-optic cables are encased with copper wires, which carry a current to modules along the route, but fibre is a better communications carrier than copper because it is less length-dependent. Sheer physical durability is an important consideration: the lines are designed to work between -40 and 60 degrees Centigrade, for use in hot data centres, and H+S even produces fibre-optic cables that are parakeet-resistant, for use in Australia. In terms of testing, it is mainly about ensuring that the fibre is working as promised. It is manufactured to a range of precision grades, with the most demanding customers being the carriers themselves. Companies such as Fluke and Ideal Networks supply equipment to test the line, for use by field testers, and the lines are also bench-tested in the factory by H+S themselves.
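To make the coverage-point idea from the bubble sort case study concrete, here is a small stand-in class we wrote for illustration — it is not the TVS library’s actual API — alongside the two output checks the case study lists. A coverage point records which declared values (bins) a variable actually took during the run, and reports the fraction of bins hit:

```cpp
#include <algorithm>
#include <map>
#include <string>
#include <vector>

// Stand-in for a functional coverage point: tracks how often a
// variable hits each declared bin (e.g. "ascending"/"descending").
class CoverPoint {
    std::map<std::string, int> bins_;
public:
    explicit CoverPoint(std::vector<std::string> bins) {
        for (const auto& b : bins) bins_[b] = 0;
    }
    void sample(const std::string& bin) {
        auto it = bins_.find(bin);
        if (it != bins_.end()) ++it->second;
    }
    int hits(const std::string& bin) const { return bins_.at(bin); }
    // Fraction of declared bins hit at least once.
    double coverage() const {
        int hit = 0;
        for (const auto& kv : bins_) if (kv.second > 0) ++hit;
        return bins_.empty() ? 1.0 : double(hit) / bins_.size();
    }
};

// The two checks from the case study: the output holds exactly the
// same data as the input, and it is sorted in the requested order.
bool check_sorted_output(std::vector<int> input, std::vector<int> output,
                         bool ascending) {
    if (ascending ? !std::is_sorted(output.begin(), output.end())
                  : !std::is_sorted(output.rbegin(), output.rend()))
        return false;
    std::sort(input.begin(), input.end());
    std::sort(output.begin(), output.end());
    return input == output;   // same multiset of data
}
```

At the end of a regression run, the coverage figures from such points are what gets exported (in the article’s flow, as CSV/XML) and compared against requirements.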

APRIL 2015 | www.testmagazine.co.uk

PAGE 23


DATA MANAGEMENT

HOW TO MANAGE TEST DATA

KYLE HAILEY PERFORMANCE ARCHITECT DELPHIX

Test data management is difficult, time consuming and expensive. Getting it wrong can lead to incorrect implementation and significant financial losses, from high quality assurance (QA) costs to bugs in production. Kyle Hailey, a performance architect at Data-as-a-Service provider Delphix, explains the way forward.

Dependable QA testing requires code to be tested on data that represents the data encountered on the final production system. The data must respect the business rules and correlate between one rule and another. For example, a customer order record has to correlate to existing order items and to an existing customer. The records also have to cover the correct date ranges: if the test code searches date ranges outside of the test data, then the test code won’t even touch any data. It is difficult, in fact nearly impossible, to create from scratch a full set of data that represents all the possible combinations and permutations of data that will be encountered in production.

Test data management is an enormous time drain for application development. According to Infosys, “up to 60% of application development and testing time is devoted to data-related tasks, making it cumbersome and time consuming”. In many cases, “system functionalities are not adequately tested due to required test data not being available”. If test data is not managed correctly, that data itself cannot be trusted. Poor data quality in testing increases the likelihood of bugs finding their way into production, and this in turn has a significant impact on the bottom line.
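The date-range pitfall is worth making concrete. In this invented sketch (the record fields and helper names are ours, for illustration only), a “no bad rows” check passes vacuously when the queried range misses the test data entirely — the test never touches any data:

```cpp
#include <vector>

// Minimal stand-in for an order record.
struct Order {
    int id;
    int day;       // days since some epoch
    double total;
};

// Return the orders that fall inside the queried date range.
std::vector<Order> in_range(const std::vector<Order>& orders,
                            int from_day, int to_day) {
    std::vector<Order> out;
    for (const auto& o : orders)
        if (o.day >= from_day && o.day <= to_day) out.push_back(o);
    return out;
}

// A naive check: "no order in the range has a negative total".
// If the range misses all the test data, this is vacuously true —
// the test exercised nothing.
bool no_negative_totals(const std::vector<Order>& orders,
                        int from_day, int to_day) {
    for (const auto& o : in_range(orders, from_day, to_day))
        if (o.total < 0) return false;
    return true;
}
```

A green run here tells you nothing unless you also confirm the query actually returned rows — which is exactly why stale or unrepresentative test data is so dangerous.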

PATCHY COVERAGE

After talking to hundreds of companies about their test data management, we at Delphix found that QA systems typically use data that is partial: it either represents a subset of the data in production, or is synthetic and has been generated to simulate data in production. In both cases, the data isn’t sufficient to cover all the data possibilities that are encountered by the production system(s). In order to address these missing cases, the testing plan typically



includes a QA cycle on a full copy of the production system towards the end of the development cycle. At this point many bugs are found, leading to the project either being delayed or being released on time but with bugs. This raises a question: why isn’t the code tested on a full copy of production earlier in the development cycle? That way, bugs could be found earlier, fixed earlier, and fixed with less code rework.

However, the problem is that production systems usually run on large, complex databases. Making copies of these databases is difficult, time consuming and resource intensive. Also, as data is created, modified and deleted, data sets often change during the testing cycle. As a result, using the same test data for successive QA tests will lead to diverging results. This means data needs to be reset to its original state before the QA cycle can be re-run. With large, complex data sets, the reset process can be prohibitively expensive and time consuming. This has led the QA industry at large to believe that testing on full-size production data is not feasible.

“The absence of proper test data causes nearly one-third of the incidents we see in QA/nonproduction environments and is a major reason why nearly two-thirds of business applications reach production without being properly tested. The resulting application failures cause significant amounts of downtime, with an average price tag of $100,000 per hour for mission-critical applications, according to industry estimates.” – Cognizant

A FALSE ASSUMPTION

In fact, this is not true. By capitalising on Data as a Service (DaaS), it is possible to use production data at every stage of testing. The underlying technology, data virtualisation, allows the almost instant cloning of data using almost no storage. Naturally there is a storage requirement for the original source data, but that can be reduced to almost a third of its original size through compression.

Once Data as a Service is linked to the source data – in other words, once there is an initial copy of the source data – new virtual databases can be made from it. These require almost no storage because they are not actual copies but share existing data blocks. The beauty is that these virtual databases are full read-and-write copies. When data is modified, the changes are stored separately from the original data and are only visible to the virtual databases that made them. These are sometimes called ‘thin clones’ because they don’t initially take up additional storage.

Now, instead of teams using a few shared test environments, testers can have their own – or even multiple – environments, enabling them to test in parallel, increase productivity and increase the number of test cycles.

Whilst the storage saving is significant, there are many more benefits to the testing process. DaaS continually collects changes to the source data in an automated way, creating a timeline of changes. This means that a full new copy or snapshot of the source data is never required again – only the changes are collected, and changes older than a specified time window (usually a couple of weeks) are purged. This means virtual databases can be provisioned from any point in this time flow, down to the second.
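A thin clone can be pictured as an overlay on shared blocks: reads fall through to the shared source, while writes land in a private map visible only to the clone that made them. The sketch below is our own illustration of the general copy-on-write idea, not Delphix’s implementation:

```cpp
#include <cstddef>
#include <map>
#include <memory>
#include <string>
#include <vector>

using Blocks = std::vector<std::string>;

// A 'thin clone': shares the source blocks read-only and keeps its
// own writes in a small overlay, so cloning costs almost no storage.
class ThinClone {
    std::shared_ptr<const Blocks> source_;        // shared, never modified
    std::map<std::size_t, std::string> overlay_;  // this clone's writes only
public:
    explicit ThinClone(std::shared_ptr<const Blocks> src)
        : source_(std::move(src)) {}
    std::string read(std::size_t i) const {
        auto it = overlay_.find(i);
        return it != overlay_.end() ? it->second : (*source_)[i];
    }
    void write(std::size_t i, std::string data) {
        overlay_[i] = std::move(data);            // source stays untouched
    }
    std::size_t overlay_blocks() const { return overlay_.size(); }
};
```

Because each clone only pays for its own overlay, dozens of testers can each hold a full read-write “copy” of the same source at negligible storage cost — the property the article describes.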


With DaaS, as the environments are ‘light’, they can also be shared. For example, if a bug is found in QA and the developer has already moved on to a new project, the tester can share their environment with the developer. The developer can bring up the environment in parallel to anything they are already working on. Triaging defects becomes easier, as you can rewind the data to just before the bug was found without resorting to backups and archives.

Perhaps one of the biggest test data challenges is integration testing. With Data as a Service, you can synchronise multiple data sources, provisioning virtual copies of them all from any fixed point in time. This can be done in a matter of minutes, meaning that wait, rest and reset times are massively reduced.

When it comes to security concerns, Data as a Service also enables the inclusion of data masking in data delivery. This not only simplifies the masking process but also improves consistency and, of course, saves time.

Perhaps one of the most empowering elements of DaaS is that testers can now self-serve their own environments. Provisioning a running database on a target machine, and refreshing, resetting or rewinding that data, is a matter of three mouse clicks and can be done in a few minutes, without involving anyone from the infrastructure team.

Data as a Service means faster, simpler and cheaper access to test environments. Tests can be run on full production data much earlier in the development and production process, weeding out bugs and issues far sooner and resulting in better-quality code.



INTERVIEW

STEVE COSTIGAN SOLUTIONS ARCHITECT ZADARA STORAGE

SOFT BORDERS

The decoupling of cloud storage from physical hardware has changed the game for software development. At Cloud Expo in London, James Brazier caught up with Steve Costigan, Solutions Architect for Zadara Storage, to discuss how software-defined storage supports testing.

Software-defined storage is an important trend, but one that is evolving quickly in terms of its technology and business model. To understand what it means for testers, we asked the experts: Zadara Storage, a company which offers software that defines storage within multi-tenancy data centres.

James Brazier: What’s the guiding philosophy behind Zadara? What brought you to this point?

Steve Costigan: If you look at the history of Zadara, the founders and key stakeholders have been around each other for about 15 years. It was an Israeli start-up for storage delivered on a very futuristic platform. People have taken away the barrier that was traditionally there, this idea that ‘I need to own everything’. That’s changed, but the emergence of those services has meant that there’s a new order. With 30 years of legacy CIFS/NFS protocols, what do you do with those in a cloud provider? How can you run a test and dev environment into those that mirrors your production


environment, if the cloud doesn’t support them? So Zadara was born to address the enterprise needs in the cloud, but also the cloud needs in the enterprise, to provide a true scale-up model, a true pay-per-use model, in a co-location facility.

JB: We hear a lot about the cloud. What advantages does it offer testers?

SC: Consider time. Time is the most valuable asset in terms of a test. If you’ve got to do a restore of data and that takes 10 hours, and your test team sits for 10 hours waiting for that to happen, what if five minutes into that test it goes horribly wrong? The tester’s got to reload, return that data, and there’s another 10 hours wasted. But if you take the concept of spin-up cloud, on demand, and you can spin up your storage on demand, your storage takes 90 seconds to become available. Then you’ll find that snapshot pops up, whereby you make that available to the test rig. If you make a mistake five minutes in, you’ve got a snapshot; you can create a new file or volume that’s instantly available. So you’ve just saved yourself 10 hours. If you have a staged test where you have Test One, Test Two, Test Three, Test Four, Test Five, what happens if a mistake happens in Test Six? Do you have to go all the way back? You did in the old world of store and recovery, but now with snapshots after each test you go from snapshot files. Now you can roll back and say: actually, I’m going to go back 10 minutes, I’m going to go back an hour, and I’m going to re-run from there.

JB: And what if you want to do a different test based on different criteria, with two options for Test Five?

SC: You can apply both of them, and do them at the same time based on the same storage. So this is where applying enterprise rules into a cloud environment delivers what test really needs. Then you look at the next stage. You’ve done your test, you’ve done your pre-production, how do you go into production?
If you take Zadara’s replication technology, that snapshot technology, and run your final pre-production test on what should be your live platform, you can take that final snapshot without breaking the mirror, do your final tests against your acceptance criteria and then go live. Then, if you really want to, you can just flick the mirror back so you’ve got a disaster recovery (DR) replica back where you’ve got your original source. So from test, dev and production to DR, if you use the functionality in the right way – using replication, using snapshots – you can cover not just the test and dev needs but apply them to the production environment.

JB: And when you are live, what if you do want to run some other test?

SC: So if you’ve gone to the public cloud, spin it up to another instance, another two instances, present that data that is five minutes away from live, but not impacting the live environment, proving that that service pack or that change is actually functioning properly. How much does that save in terms of time – the biggest commodity which we need to save?

JB: Say I’m testing for a bank, and they say they can’t have their data leaving their environment…

SC: Fine. Take Zadara as a service into your data centre. Take it the same way, as a service, on a pay-per-use model, and you can use it for all projects. Because it’s a VPSA [Virtual Private Storage Array] and dedicated to a tenant with true multi-tenancy, a bank – Barclaycard for example, or Visa – can work on the same infrastructure on two or three completely separate VPSAs for completely separate organisations.

JB: You talked about the shift towards computational power in the cloud, away from storage and data recovery. What are the actual computational trends, and what sort of processes are you starting to see in that environment that you wouldn’t have seen five years ago?
SC: I think even if you look four years ago, when Zadara started in the US, attached to Amazon and Dimension Data, you would say that the majority of the early customers looked at our larger 3TB repository drives then (5TB now) as a backup mechanism to recover data. And then we started to see a trend. Suddenly customers were saying ‘We need more SAS drives’. So suddenly there was that switch from test and dev, to test and dev and throughput. They wanted to test the performance of their test and dev environment, not just ‘does it work’. And now we’re starting to see another shift, which is towards flash storage, SSDs. So people are now pushing their applications into that environment.

What does that mean for a customer? Well, he may do his test and dev on SATA drives, but then he’s got his performance and pre-production suite on SAS, and then he wants to go into production and he may need SSD or SSD acceleration. So your storage vendor’s got to be able to support that migration, and that’s where Zadara really comes into its own, because we can do online migration of that data from SATA to SAS to SSD and back down again if need be. When you have workloads that are seasonal – Christmas, Easter, holiday seasons – well, no problem: move that data, move the tier, increase the cache, change the flavour based on the dynamics of the business.


JB: There’s an awful lot of talk about automation, algorithmic testing, and the extent to which that can replace human testing.

SC: When I think about automation, we’re fully REST API. We publish our API, so a lot of our customers automate quite a lot of what they do. You want to create a snapshot? Automate it – REST API. You want to create a volume, expand a volume – all of that’s available from our API. We do it in the right way: we develop the API before we develop what we’re doing in the GUI [Graphical User Interface]. So, when you want to integrate with us – Perl, Python, Ruby, PowerShell – you can do that with us securely. So I would say we’re helping people provide a 24/7 global service.

JB: What are the key priorities for the software that defines the storage?

SC: The key priority is our service-level agreement (SLA). We deliver a 100% uptime SLA, with penalty clawbacks to customers if we don’t deliver on it. So our number one priority is customer service: keeping everything up, delivery; if we promise something, we deliver on that promise, because we’re 90 seconds away from a customer walking away and using someone else. We can’t say ‘sorry, we’ve made a mess of this, but we’re not going to do anything about it and we’ll see you in three years’. That’s the difference between delivering a product and a service.

Internally, when we roll out new environments for testing, before they are handed over to ops, the ops team needs to know what dev have done, what it means, what the upgrade process is – all of those things. Strong communication is essential. We’re no different from any other global organisation in that respect.

JB: How limited are you by the quality of communications infrastructure in the countries into which you’re selling?

SC: I don’t think it constrains our model at all, because we deliver on premise and off premise – see yesterday’s announcement with Telecity, who have 38 data centres across Europe. We’ve been very clear, certainly across Northern Europe, to partner with key vendors who can deliver the bits that we desire. Telecity’s Cloud-IX provides a European backbone for us; we don’t have to worry about when and where, as that’s our partners’ responsibility and they take care of it. We are able to concentrate on delivering an enterprise storage software environment and storage solution that enables us to quantify and qualify what our customers need: storage today, snapshots tomorrow, and at some point in the future they may need something different, like object-based storage. By virtualising our stack we’re providing dedicated resources, so we’re not dependent on the network vendor providing the services we’re attached to. So if a customer wants to take Storage as a Service (STaaS) within a Telecity environment, so long as they’ve got a network connection of four milliseconds or so, that’s fine. Outside of that, we’d recommend that we deliver the same within the customer’s own data centre: start small, start with two nodes and scale up. So the infrastructure, in terms of its broadband and all that, doesn’t really affect us, because customers are building on top of us.



BUSINESS MODELS

TESTING AS A SERVICE - A NEW WAY

KEVIN HAHN COMPANY OWNER FREELANCE SOFTWARE TEST

The US-based Freelance Software Test (freelancesoftwaretest.com) launched in February 2015 with the goal of changing the way software testing is done. Here, Kevin Hahn, the company’s owner, explains his hybrid business model.

The testing of software is a core challenge for most technology companies. The demands of the market change as frequently as the technology itself, and testing is often the hardest skill-set to keep current. At present, there are two very different strategies for outsourcing software quality work.

The ‘traditional’ strategy generally involves hiring contractors and generating large, detailed contracts, or hiring long-term employees and keeping them trained and current. Either approach requires significant time and dedication from all sides. It incurs a substantial overhead in the management of human and computer resources. On the plus side, it ensures that the company is in full control of the process and that it understands its workforce.

A second, ‘modern’, strategy targets a single resource or small team with specific skills to accomplish a very specific goal. This execution model is commonly referred to as ‘crowdsourcing’ or ‘freelancing’. It can be highly efficient and is growing in popularity, but it presents a fundamental downside for corporate clients. The resources are transitional; they move from task to task. Their transitory nature often results in domain knowledge loss, a lack of training for company employees, and no access to the resource when re-work or updates are needed. In my experience of crowdsourcing, I have come to realise that while on paper the model sells glamour and efficiency, the reality is that “the devil is in the details”. You may not get all the benefits that you believe you are getting, and these resources can be difficult to manage.

This is where Freelance Software Test comes in. Testing as a Service (TaaS) is an overused term; it means more than simply ‘testing in a cloud’. Launched in February, Freelance Software Test uses a hybrid business model to tackle the pitfalls of the ‘traditional’ and ‘modern’ methods. Our approach to TaaS is to manage clients’ testing activities from cradle to grave, striving for a happy marriage of freelance-style execution with industry-standard software testing principles and processes.

The model works as follows. A certified Test Engineer Representative (TER) handles each client’s needs.
We are a global corporation that recruits worldwide, but to become a TER the recruit must be certified by a testing standards organisation, meet the qualifications of a Senior Test Engineer, and have a record of successfully leading large-scale projects. We have a network of TERs across several different technical disciplines: enterprise web applications, mobile, and gaming.

This approach protects information and individual identity by channelling all communication through a single point of contact, the TER, providing a level of anonymity and continuity for clients regardless of the software development model or test approach. The TER uses video-conferencing, file-sharing, screen and desktop sharing, web chat, email and, yes, the telephone to keep in touch with clients, the clients’ functional teams, and also Freelance Software Test’s own technical team.


A regular drumbeat of updates, technical reporting, and Q&A sessions reduces the scope for miscommunication and misinterpretation. When a customer’s data is critically sensitive in nature, we give special consideration to that client; all their work is handled by Freelance Software Test’s direct employees.

Our team approaches software testing challenges with a Tools, Process, and Technology methodology, which considers every problem or task from a business point of view in order to apply the appropriate testing tools, industry-standard processes, and alignment to specific technology stacks.

For small-footprint applications, like today’s mobile computing platforms and client-side applications, we test on specific device hardware. This produces more ‘true to life’ results, reducing the risk of surprises in the field, and is always our preferred method. In the rare cases where specific hardware procurement is not an option, or the client’s needs require testing on a wide range of devices, we use Android and iOS emulators in conjunction with mobile automation platforms to support the variety of devices.

For large or even unknown target footprints, emulation is a proven method of assuring that applications will perform under most circumstances. For instance, a business might have a large-scale application that will be distributed across Android, iOS, BlackBerry, and Amazon’s Fire OS. These larger-scale applications are strong candidates for emulation, to ensure expected operation across more distributed platforms. Virtualisation can reproduce the immense number of possible configurations that most companies struggle to maintain. It is a hardware-agnostic means of rapidly deploying, testing, and decommissioning environments.

A hybrid approach has benefits for both the client and the freelancers. Companies no longer need to send out different threads to multiple freelancers or domain experts. One thread gets the same work accomplished without the overhead and oversight. We manage the devilish details to produce consistent results. The direct efficiencies of this business model are its minimal hardware setup and the near-immediate testing of web applications, mobile (Android and iOS) applications, mobile web sites, and client-side applications.



Software Testing Network Strength in numbers www.softwaretestingnetwork.com

Membership benefits include:
• Series of one-day debate sessions
• High-brow webinar streams
• Research & industry findings
• Exclusive product discounts
• Peer-to-peer networking
• Annual gala dinner
• And so much more...

Becoming a member of the Software Testing Network connects you with like-minded professionals who are all striving for technical excellence and championing best practice and process.


MOBILE TESTING

AMITAVA SANYAL SENIOR MANAGER MAVERIC SYSTEMS

Amitava has over 13 years of experience in service development, product solution sales, alliance management and project management. He is focussed on next-generation technologies that will drive uptake of digital channels, and is responsible for the creation and go-to-market of an integrated quality assurance platform that enables financial organisations to embrace new digital technologies. To exchange ideas, connect with him at amitavas@maveric-systems.com

COMPATIBILITY TESTING FOR MOBILE – INTRODUCING A METHOD TO THE MADNESS

The fragmentation of mobile devices and operating systems poses a daunting challenge to quality assurance. Here, Amitava Sanyal, Senior Manager at Maveric Systems, suggests a methodology for managing the many challenges of compatibility testing for mobile applications.

As the Internet of Things (IoT) gathers momentum, businesses are leveraging the pervasiveness of customer devices not only to roll out innovative features for existing clients but also to reach out to a new set of customers who would otherwise have been inaccessible. While the opportunity is immense, the combination of mobile channels and smart devices presents a unique set of challenges: a need to focus on constant innovation, multiple-channel management, and strong and secure infrastructure, all backed by a comprehensive testing and assurance plan.

One of the key aspects that mobile and smart device application developers and testers need to assure is that their application works seamlessly on all kinds of devices used by their customers, taking into account the various network conditions under which they are used and the experience users derive while accessing those applications. This is what is referred to as the compatibility of an application, and the assurance process to make sure it is achieved is commonly referred to as compatibility testing.

In most cases the difference between a successful application and a not-so-well-received application lies not in its functionality but in its compatibility and usability. Since the end-customer is the final frontier when it comes to mobile applications, their instantaneous feedback on an application can damage not only the hours of hard work and money invested by the developers and testers but also can be detrimental to the


brand and the competitiveness of the business. This article focuses on how compatibility testing can be made more methodical, scalable, agile and future-ready while at the same time ensuring that the cost of quality is optimal.

THE KEY CHALLENGES TO COMPATIBILITY TESTING

A typical business application is bound by technology boundaries within the institution – hardware, operating system, middleware, databases and interfaces – and has only very limited external dependency. When it is exposed to the mobile world, however, it has to tune itself to an ever-evolving external ecosystem. Unless the impact of that ecosystem is factored in, it is impossible to assure the quality of the application. Three factors add to the complexity of compatibility testing:

1. The constant launch of new device models incorporating new mobile technologies, in the middle of application releases, makes docking very difficult.

2. Testing needs to move beyond just browser-layer validations to include additional layers such as operating systems (OS), manufacturer skins and technology features.

3. Increasing dependency of application functionality on device hardware features necessitates an understanding of how those hardware features impact the functionality. Examples: machine-to-machine (M2M) technologies



impacting location services, or near-field communication/host card emulation (NFC/HCE) impacting wallet-related services.

The key question, then, is how I as a tester can make my compatibility testing more effective by ensuring that I am able to:

1. Cover the maximum number of devices and come close to 100% of my end-customer device base.

2. In my business application, test for those functionalities which are more likely to fail in the event of a technology upgrade or a new device roll-out in the market.

3. Become agile in terms of regression testing and risk-based testing during incremental releases and keep pace with new technology developments in mobility, while at the same time hitting the market faster than the competition.

4. Easily trace an issue found in production to the appropriate technology layer – operating system, browser, skins or device features.

AN APPROACH TO HANDLING COMPATIBILITY TESTING

The figure below highlights a six-step process that might be followed to make your compatibility testing more methodical, scalable, agile and future-ready.

Step 1 – Create a device compatibility library

• For each device/model available in the market, capture the following information in a structured manner. Most of this information should be available through multiple sources such as manufacturer websites, product release notes and independent analysts:

• Platform details – the operating systems and versions supported by the devices, and the browsers and browser versions supported by the devices.

• Technology features supported by the devices, e.g. audio/video formats, image and document formats, file handling, password management.

• Hardware features included in the device, across various parameters such as connectivity, M2M etc.

• Network and other technology features supported by the device.
Step 2 – Create a reusable model from the library

• From the library of information created in Step 1, use various modelling techniques to enable analytical queries that produce logical reports.

• These reports should enable the tester to shortlist devices based on various combinations of input parameters – i.e. by providing input about the business application and the features the application is expected to support.

• These reports should also allow a tester to compare versions of operating systems, versions of browsers and various device models, to be able to arrive at an effective risk-based testing strategy and regression pack.

Step 3 – Shortlist the device list based on the region/country of operations – i.e. devices that cover 95% to 99% of the end users in the region.

• Reports available from market analysts such as IDC and Forrester provide the device market share across regions, the shipment figures across various manufacturers and the usage patterns across various customer profiles (demographics, age etc.).

Step 4 – Use the device compatibility model to arrive at two lists of devices:

• A list of devices that are fully compatible – i.e. compatible with all the technology features required to make all the mobile channel functionalities work seamlessly.

• A list of devices that are partially compatible – i.e. they may be incompatible with one or more technology features required to make all the mobile channel functionalities work seamlessly.

Step 5 – Testing on the list of devices which are fully compatible

• Test 100% of application functionality on selected devices from this list, in such a way that testing on the latest and most-used OS versions is covered.

• Test 100% of functionality for at least one model from each manufacturer.

• Test for downward compatibility of browsers, screen size and screen resolution.

Step 6 – Testing on the list of devices which are partially compatible

• Test only the particular application functionality which might be impacted by the technology feature with which this set of devices is incompatible.

This article does not intend to provide a definitive approach towards addressing all the challenges that have been witnessed in mobile channel compatibility testing over the last few years.
However, it is intended to generate discussion around a set of approaches and methods that could help us get on top of the challenges generated by the fast-changing mobility landscape.
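Steps 4 to 6 above amount to partitioning the device library: given the technology features an application requires, split devices into fully and partially compatible lists and scope the testing accordingly. A minimal sketch of that partition, with hypothetical device data and feature names:

```ruby
# Partition a device library into fully vs. partially compatible devices
# (Steps 4-6 of the process above). Devices and features are hypothetical.
REQUIRED_FEATURES = %w[nfc gps camera]   # features the mobile app depends on

devices = {
  "A1" => %w[nfc gps camera mp4],
  "B7" => %w[gps camera],                # missing NFC
  "C3" => %w[nfc gps camera],
}

fully, partially = devices.partition do |_model, features|
  (REQUIRED_FEATURES - features).empty?
end

# Fully compatible devices get the full functional pack (Step 5);
# partially compatible ones get only the tests that touch the
# features they lack (Step 6).
fully.each { |model, _| puts "#{model}: run full regression pack" }
partially.each do |model, features|
  missing = REQUIRED_FEATURES - features
  puts "#{model}: test only functionality touching #{missing.join(', ')}"
end
```

The set-difference against the required-feature list is what makes the risk-based scoping of Step 6 mechanical rather than judgement-by-spreadsheet.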

[Figure: a six-step compatibility testing strategy – 1. create a device compatibility library; 2. create a reusable model from the library; 3. start with the device list for the region (devices covering 95–99% of users); 4. use the model to derive lists of fully vs. partially compatible devices; 5. test fully compatible devices (100% functionality on select devices on the latest and most-used OS versions, at least one model per manufacturer, downward compatibility for browsers, screen size and resolution); 6. test only the functionality impacted by the non-compatible feature on partially compatible devices.]


EDUCATION MIKE HOLCOMBE PROFESSOR UNIVERSITY OF SHEFFIELD

THE TESTING UNIVERSITIES Although most universities teach computer science in one form or other, the way this is done varies and the topics covered are exceptionally diverse. Here, Professor Mike Holcombe of the University of Sheffield examines how the UK’s centres of higher-education handle testing and quality assurance, and how it fits into the broader IT curriculum.

There are many types of universities in the UK, ranging from the ancient academic institutions, through the Russell Group of research-led universities, to the newer universities and former colleges. The former tend to attract students with the best examination marks and offer courses that are more academic and theoretical than the others.

Which universities teach testing? If you look for explicit courses or modules on software testing you will be disappointed. There are very few. Often these are in universities where there is a research interest in testing. In the UK, the main ones are Brunel, Hull, King's College London, Reading, Sheffield, Strathclyde, Swansea, University College London, and York. Here, advanced testing courses are provided, often in the 3rd or 4th year, and these may be taken by a majority of the students.

Testing will be covered at some universities in general courses on software engineering or in courses on programming. The extent to which these courses go into the details will vary. In a Java programming course one would expect to see some mention of JUnit or similar, but this is often in passing. The details of courses in testing vary between universities. At my institution, Sheffield, there is a strong emphasis on the practical aspects, often in the context of a real development project. In the first year, Sheffield has a software engineering module where students work to develop a web application written in Ruby with the Sinatra framework. Fourth-year students act as ‘clients’, with a brief that describes the project, from which they are allowed to add in variations. Students work in teams to extract the requirements, and then develop the application in a series of iterations, according to an agile methodology, as taught in lectures. Students write stories for each of the requirements, and list acceptance criteria. Later in the course, they learn about testing, and how to reformulate these acceptance criteria into Cucumber tests. They also learn about code coverage, how to monitor the coverage of their tests, and how to interpret the coverage figures they obtain – that is, not to slavishly write tests just to obtain higher coverage levels, but to figure out which tests are missing.

In year 4 (MEng students) there is a specialist testing course (COM4506) and the innovative Genesys programme. This module involves students participating in the running of the Genesys Solutions software company, which offers software development services to outside organisations. The emphasis of the work is on learning how small IT companies are operated, the frameworks within which they work and their practical management processes. The basis for the module is the development of medium-to-large software projects for external organisations, including activities such as documentation, coding, testing, maintenance and deployment. The students work with professionals from the epiGenesys software company (www.epigenesys.org.uk) and use the company's professional design environment and tools.

In Genesys, students again work in teams, developing software for real clients. The key difference between the Software Hut and Genesys is that in Genesys, students are mentored by epiGenesys staff. They review their code, provide regular feedback, and assist with project difficulties that can range from client issues to getting a Ruby gem working properly. As the students form part of the epiGenesys company for the year, the work they do transcends their individual teams. They take part in company meetings and are expected to assist other teams whose projects share commonalities (for example, implementation of a similar type of subsystem).


Sheffield is fairly unusual in that practical projects form a core part of its degrees, and testing is central to all of this. With so many of the projects involving external businesses and organisations, quality is paramount.
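The move the first-year module makes — from written acceptance criteria to executable Cucumber scenarios — can be illustrated in miniature. Cucumber itself needs its own runner and a separate Gherkin feature file, so this sketch shows the same Given/When/Then structure in plain Ruby; the basket behaviour and all names are invented for the example:

```ruby
# Acceptance criterion: "an item added to the basket appears in the basket".
# In Cucumber this would be a Gherkin scenario plus step definitions; here
# the Given/When/Then steps appear as comments over plain Ruby so the
# sketch runs stand-alone. All names are hypothetical.
class Basket
  def initialize
    @items = []
  end

  def add(item)
    @items << item
  end

  def include?(item)
    @items.include?(item)
  end
end

# Given an empty basket
basket = Basket.new
# When the shopper adds a satchel
basket.add("satchel")
# Then the basket contains the satchel
raise "acceptance criterion failed" unless basket.include?("satchel")
puts "scenario passed"
```

The pedagogical point is the mapping: each line of the written acceptance criterion becomes one executable step, so the criterion itself becomes part of the regression suite.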

COM4506 TESTING AND VERIFICATION IN SAFETY-CRITICAL SYSTEMS
• The nature of safety-critical systems and software
• Safety management: hazards, risks, risk assessment and reduction, software reliability
• Finite-state machine models and predicates
• Software engineering lifecycles, processes and activities; the role of verification and validation
• Specifying control systems
• Introduction to testing: forms of testing and approaches to it
• Test methods and derivation of test cases
• Hazard analysis, HazOp and FMEA
• Programming practices for safety-critical software




RESEARCH FINDINGS ARCHIE ROBOOSTOFF DIRECTOR MICRO FOCUS

ARE WE PROVIDING THE BEST TESTING ENVIRONMENT FOR OUR APPLICATIONS? The testing landscape has changed significantly over the past few years. Archie Roboostoff, product director at Micro Focus, picks out the findings of some recent research.

These days, businesses must consider how their applications can run on varying devices and platforms while meeting the high expectations of end users, who are themselves adopting the latest technology trends thanks to BYOD (Bring Your Own Device) and the ease of cloud computing. According to Gartner, the global enterprise software market is expected to grow by 5.5% in 2015 and is worth an eye-watering $335bn. More organisations are realising the importance that testing plays in producing quality software, faster, so it should be no surprise that software testing has found its place on every CIO's agenda, given the need to keep pace with ever-changing technology trends and developments. However, according to the Borland Successful Software Delivery survey, which polled more than 100 key technology decision makers, more than a third of respondents (34%) said that the biggest difficulty in getting their application to run on multiple platforms, browsers and devices was obtaining the necessary hardware and infrastructure. This was followed by a lack of time (21%) and a lack of testing tools (18%). Despite needing their applications to run on a multitude of platforms, 61% of respondents continued to test manually.

THE IMPACT OF POOR QUALITY APPLICATIONS

These testing obstacles can lead to failure and poorly performing applications – in fact, in some cases applications may not make it to the delivery stage, costing businesses thousands of pounds in wasted resources.

Business requirements enable developers to produce an application to the business's original brief. Of the respondents, 67% were manually asking the end user for feedback to help refine business requirements.

However, this feedback can be open to interpretation, which makes collaboration between developers and other stakeholders challenging. A poor requirements brief can lead to extended project time and costs – a particular pain-point for businesses that have needed to outsource elements of their application development and testing. Worryingly, measuring application quality and delivery readiness is also seen as challenging, with 12% of key technology decision makers waiting for customers to provide feedback and 34% stating that if the application is fit for purpose (in other words, that it ‘does what it says on the tin’) they have met expectations. But is this enough?

While just 21% of respondents felt that application performance testing was an integral part of quality testing, 16% reported that performance testing was confined to major releases only. This is despite many citing poor quality of the tests (40%) and a lack of test data availability (41%) as the main reasons for a testing failure.

Without being able to provide real-life test case scenarios using strong test data, businesses cannot predict how their application will fare with users. Almost half of respondents (49%) felt they provide some test coverage to match customers' usage scenarios, but 27% have limited coverage or no coverage at all – jeopardising their application testing and development efforts as well as raising the risk of performance failure.

MANUAL VS AUTOMATED TESTING

According to the findings, businesses continue to test the majority of their applications manually (58%), and while 20% of businesses split testing evenly between manual and automated approaches, as few as 12% use automated testing alone. There has been much debate over whether software testing should be done manually or be automated, but businesses must assess how they can best deliver high-quality software and reduce time to market to remain competitive. Automating the testing process can help achieve this. Automated testing uses scripts to run a set of procedures on the software under test automatically, checking that the steps coded in the script work.

Test automation can be of particular benefit in regression testing – rerunning a test against a new release to ensure that behaviour remains unbroken, or to check that a bug has been fixed. Rerunning load and performance tests to simulate hundreds or even thousands of virtual users across multiple devices is another way in which automation can optimise the performance of business applications. Additionally, automated testing requires less expertise and fewer resources, so the test team can deliver faster.

EQUIPPING DEVELOPERS WITH THE RIGHT TESTING TOOLS

Of those polled, 40% felt that testing tools which afforded a centralised view of metrics and analytics, pooled from multiple sources of information, were highly important to the success of their testing process. A further 81% said that business requirements should be a key information source to be considered in combination with this centralised vantage point. But just under a third (31%) could confidently confirm that 61%-80% of their application tests were tied directly to their original requirements.

With constantly changing expectations and new technologies being introduced into the market, the race to deliver game-changing applications has never been so competitive. Failure to keep up with the competition could cost the business. When delivering a high-performing, quality application, a development team, whether outsourced or internal, must ensure that granular testing of the application is part of the overall software supply chain. A robust and portable test automation solution for web, native, and enterprise software applications that enables users to test applications more effectively with lower complexity and cost is the way forward. The right solution will help improve collaboration between business stakeholders, QA engineers, and developers, enabling them all to contribute to the automation testing process and increase the effectiveness of software testing. This, combined with strong business requirements from the get-go, can help businesses assure that the application delivered meets the performance and quality standard they expect.

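The scripted regression checks the research article describes — a fixed set of assertions re-run against every release — can be as small as this sketch. The `discount` function stands in for the application under test; all cases and values are illustrative:

```ruby
# A toy regression pack: scripted checks re-run against each release to
# confirm behaviour remains unbroken. `discount` stands in for the real
# application code; cases and values are illustrative.
def discount(total)
  total >= 100 ? total * 0.9 : total
end

REGRESSION_CASES = [
  { input: 50,  expected: 50.0  },   # below threshold: no discount
  { input: 100, expected: 90.0  },   # boundary: 10% off
  { input: 200, expected: 180.0 },
]

failures = REGRESSION_CASES.reject do |c|
  (discount(c[:input].to_f) - c[:expected]).abs < 1e-9
end

puts failures.empty? ? "regression pack passed" : "#{failures.size} failure(s)"
```

The value of the pack is exactly the survey's point about automation: the same checks run identically on every release, so a behaviour change surfaces immediately instead of waiting for customer feedback.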


British Museum 19-20 May 2015

PROGRAMME After the unparalleled success of the 2014 conference, the team behind the event has been working enthusiastically to once again bring you an array of esteemed industry experts, academics and high-profile speakers, as well as a series of executive debate forums and an industry-leading exhibition.

HEADLINE SPONSOR

GOLD SPONSOR

EVENT PARTNERS

EXHIBITORS

www.softwaretestingconference.com



Two Days at the Museum! We are very pleased to welcome delegates to the 2nd National Software Testing Conference, convening once again at the British Museum.

A glance at this year's speakers testifies to the extraordinary range of industries that require quality-assured software in the 21st Century. Every year, boardrooms find new ways of dragging products and services out of the old analog world and into the digital one. This type of innovation is now the primary mode of entrepreneurship in the Western world. In the first three months of this year alone, investors poured venture capital of $686 million into London's tech start-ups, smashing by two-thirds the record set in the same quarter of 2014. The sheer diversity of the problems being addressed by QA raises some interesting questions. Is it possible for this enormous variety of industries to learn from one another, from an assurance perspective? Does testing software for televised entertainment have anything in common with QA at an investment bank? Are there common threads of best practice that unite testers, irrespective of their sector and project?

The answer to these questions is an emphatic ‘yes’. In fact, the entrepreneurial process makes this mutual learning absolutely critical. Tech start-ups are not merely shifting business models onto digital platforms; they are mashing up old business models to create new ones. Uber blends a taxicab dispatch office with a social network and a mapping service. Shazam is your friend with the compendious music knowledge, crossed with the friend who knows where the shops are.

Hence the conference's theme, "Breaking Today's Boundaries to Shape Tomorrow." How far one goes in the pursuit of common excellence is a matter of opinion: the need for universal standards is by no means universally accepted, and it is a debate we shall be hearing at the conference, alongside many other topics. We shall be hearing of the challenges presented by the multiplicity of new devices and systems, and how to ensure that they connect reliably to the mainframes and data centres that continue to guard our most sensitive data. We shall be discussing how to ensure that software works well in all markets, not just the West; how to analyse data in order to identify risks; how to assure against heavy traffic and what to do when problems manifest themselves; and how to respond to the changing world of agile and DevOps in the face of rapid technological change.

Interaction is a watchword of this conference. It is not a question of experts talking down from a podium, but rather of hundreds of peers, all experts in their own right, gathering to forge new insights into the technical and practical challenges of testing software, and to grow stronger collectively by making new connections. Alongside the plenary sessions are workshops and executive debates hosted by our sponsors, where everyone in the room contributes.

Take a moment to read about the speakers and their topics over the next few pages; they are a fantastic range of senior professionals with diverse points of view. Some of them are winners of last year's TESTA Awards, selected for their remarkable contributions to the testing world. Do also read about the sponsors of the National Software Testing Conference, who themselves represent the most advanced industry knowledge in the field of QA. Enjoy!

[Floor plan: exhibitor stands 01–15 (each stand 3m × 2m) arranged around the reception and catering areas, with the Raymond and Beverly Sackler Seminar Rooms, the Hugh and Catherine Stevenson Theatre, the Claus Moser Seminar Room, The Studio and the BP Lecture Theatre, plus servery, cloakrooms and lifts.]

The National Software Testing Conference 2015, British Museum, 19-20 May 2015. The British Museum, Great Russell Street, London WC1B 3DG



NATIONAL SOFTWARE TESTING CONFERENCE 2015

Helen Byrne CIO Testing Services Direct Line

Culture and the Change in Mindset Needed for Testers to Operate in Today’s Landscape A look at how the industry has evolved in recent years, with agile becoming the norm rather than the exception, and the focus shifting to test driven and behaviour driven development. How different generations of testing professionals rise to these challenges. How the culture you grow up in generates different responses to industry changes.

Following the successful implementation of the transformation programme which delivered an industry standard testing operating model across Direct Line Group, Helen is currently heading up the Testing Services Department at Direct Line Group, which comprises onshore portfolio test managers, combined with offshore delivery of testing services through a long term strategic partner.


Kieran Cornwall
Head of Testing, ITV

Are Standards Stifling Innovation? A discussion about how, while developers get the creator status, project managers are the controllers, and business analysts are the fount of knowledge, software testers have had a raw deal over the years, and it's time that changed.

Kieran joined ITV in March 2013, and is currently Head of Testing. His test management experience spans eight years, primarily in the finance industry, including central banks, asset management and private wealth. Now working in an agile continuous-integration delivery environment, Kieran aims to increase the rollout of pragmatic test functions within ITV.

Dominic Assirati
QA Director, King

Quality Assistance, not Assurance Quality Assistance is about helping software delivery teams to take ownership of quality and achieve accelerated development and rapid releases of high quality products. We do this by keeping the number of testers in each scrum team to a minimum, coaching the team on agile testing principles, and using testing tools to make things move faster.

Dom has over 15 years' industry experience, working with companies ranging from start-ups to multinationals, including internet betting pioneers Betfair, and then mobile app innovators Shazam. Dom is currently working with casual game industry leader King Digital, where he is Senior Director of Quality Assistance, focusing on advancing the methods and practices of software testing.

Jonny Wooldridge
Chief Technology Officer, The Cambridge Satchel Company

Behaviour Driven Development Testing The practicalities of rolling out BDD testing while Head of Web Engineering at Marks & Spencer, and since then at The Cambridge Satchel Company. Among other aspects of BDD, the presentation will discuss how the implementation of BDD pulls in DevOps.

As Head of Web Engineering at Marks and Spencer, Jonny was part of the senior management team working with onshore and offshore teams to combine a set of best-of-breed ecommerce systems to replatform the M&S.com website. He was responsible for setting up the development capabilities and processes for the £150m, three-year programme, resulting in the formation of a shared DevOps practice for core capabilities in enterprise build and deployment automation.



SPEAKERS AT THE CONFERENCE

Dan Cuellar
Head of Testing, Foodit

How Appium Disrupted Mobile App Automation Through Collaboration
• What is open source software?
• How OSS disrupts the balance of power, by having everyone work together to solve common problems, and provides solutions to organisations of all shapes and sizes.
• The story behind the decision to open-source Appium.
• How Appium has disrupted mobile automation through collaboration.

Dan Cuellar is the creator of the open source mobile automation framework Appium, and Head of Testing at Foodit. Previously, he headed the test organisation at Zoosk in San Francisco, and worked as a software engineer on Microsoft Outlook for Mac, and other products in the Microsoft Office suite. He is an advocate of open source technologies and technical software testing.

Chris Ambler
Head of Testing, Capita Customer Management

R.I.P. Testing 2018 In 2013, Chris made a statement that he thought testers would not exist in five years' time. Currently, testing focuses on development, and is used to try to prove that development assumptions and interpretations are accurate against badly defined user requirements. The cause of all our problems is inaccurate, incomplete requirements and specifications – the symptoms are project or product failure, loss of reputation and loss of revenue.

With a degree in Computer Science, and as a Fellow of the British Computer Society, Chris is a senior testing professional with over 30 years' experience in the IT industry. While passionate about product and process quality, he understands that this comes at a price, and tries to adopt the concept of ‘good enough’ testing, to ensure value for money and quantify return on investment.

Chris Young
Portfolio Test Manager for Retail Systems, John Lewis

Hector Lira
Portfolio Test Manager, John Lewis Partnership

Introducing a Testing and Environment Practice Team within an Omni-channel Retail Environment The demands placed on IT to support continuous growth within John Lewis as an omni-channel business mean that system changes are increasingly complex, as solutions have more touch points and more levels of integration, thus increasing the potential points of failure. John Lewis has introduced a Testing and Environment Practice Team to de-risk IT change through the delivery of software testing and test-environment provisioning. At the 2014 TESTA Awards, John Lewis won the Best Overall Testing Project in the Retail sector, in partnership with Cognizant Technology Solutions. The judges felt this entry provided a clear explanation of the business and technical challenges involved. It was a compelling case, showing how the test and quality approach was a significant enabler in the project reaching its goals.

Chris Young is a Chartered IT Professional member of the British Computer Society with over 40 years' experience of project delivery in various project roles. He has over 25 years' experience of working within software testing, and during a 34-year career with Post Office Ltd he worked on a number of major projects.

With over 20 years of experience as an information systems professional in the UK and America, Hector has held technical and managerial positions in application development, test automation and software testing. In the last decade he has worked at John Lewis, where he participated in the creation of a Testing Centre of Excellence and is currently a Portfolio Test Manager.




Iain McCowatt
Director & Head of Testing, Treasury, Barclays

ISO 29119 – Standards Tie Our Hands Standards in manufacturing make sense. The variability between two widgets of the same type should be minimal, so acting in the same way each time a widget is produced is desirable. This does not apply to services, where demand is highly variable, or indeed to software, where every instance of demand is unique.

Iain McCowatt, Director and Head of Testing, Treasury, at Barclays, is author of software testing blog Exploring Uncertainty and, in August 2014, the creator of an online petition calling for the withdrawal of the ISO 29119 standards. He has almost two decades of experience as a tester, automator, test manager, and software testing consultant.

John Stinson
Test Automation Lead, Investment Banking

Test Automation in Finance – How to Survive the Next Five Years The finance industry is now under tougher regulatory pressures than ever before. At the same time, systems complexity and trading volumes continue to increase. Automation is no longer an option, but how can it be implemented when budgets are under pressure and teams are globally dispersed?

John Stinson has worked in software development and test automation in investment banking for the past 20 years. He has held lead roles across derivatives risk management, equity research and treasury departments. He has experience at Morgan Stanley, Deutsche Bank, JP Morgan and Barclays Capital.

Richard Kemp
Solicitor, Kemp IT Law

Legal Liability for Software – What Changes as We Move into the Cloud?
1. The basics from the legal perspective
2. Why all this matters more now
3. Managing risk in our evolving software world

With over thirty years' experience at the leading edge of technology law practice, Richard is widely recognised as one of the world's top IT lawyers. He has built an outstanding reputation for advice that combines commerciality and client service with innovative legal solutions to the business challenges of technology development, deployment and regulation.

Amit Shekhar
Chief Strategy Officer, Birlasoft

Assuring Smart Cities – Testing in an Era of Convergence Navigant Research forecasts global smart city technology revenue to grow from $8.8 billion annually in 2014 to $27.5 billion in 2023, while Frost & Sullivan expects the global smart city market to be valued at $1.565 trillion in 2020.

As Chief Strategy Officer of Birlasoft, Dr. Amit Shekhar is responsible for identifying growth areas, developing long-term strategic planning, and M&A. He is also responsible for incubating and developing new technology innovations in areas like cloud, predictive analytics and the Internet of Things by creating IP and solution frameworks, and building the right partnerships across business, technology, and data.




Sanjay Garg
Global Leader – QA Practice, Birlasoft

Smart Cities, Testing and Analytics – A Growing Priority A smart city will necessarily have smart meters, sensors, cameras and many other features, all generating huge amounts of data. Analytical skills combined with appropriate testing skills are absolute imperatives for converting this accumulated data into useful information.

As Global Leader of the Testing & QA Practice at Birlasoft, Sanjay is responsible for anticipating future trends in testing and positioning the practice accordingly. He casts a tester's sceptical glance, mixed with heavy doses of realism, while developing the strategic plan to position the company's testing services.

Nadège Josa
Senior Project Manager, Sony Computer Entertainment Europe

The Future of Localisation and Localisation Testing in Games Nadège Josa's talk looks at how localisation is approached at Sony PlayStation. It questions the traditional sequential localisation models and shares what Sony has been doing to change the nature of its localisation testing efforts without compromising on quality.

With over 15 years of experience, Nadège has worked in various localisation roles within Sony PlayStation. After studying Applied Foreign Languages, specialising in translation, she started her career in the newly created Sony Localisation Quality Assurance group before moving into Localisation Project Management. A Senior Project Manager since 2011, Nadège strives to find smart solutions to the industry's everyday new challenges.


Archie Roboostoff Director, Borland Portfolio, Micro Focus

Building for Responsiveness QA teams need to guide developers to build web applications that minimise loading times and therefore reduce the scope for performance issues before they occur. Quality used to mean “does it work” but now it encapsulates everything from functionality to performance. A slow loading, non-responsive application is just as critical as one that doesn’t function properly.

With more than 18 years of experience in the enterprise software industry, Archie is currently the Borland Solutions Portfolio Director responsible for the direction and strategy of testing and quality products within Micro Focus. Previously, Roboostoff was responsible for the terminal emulation products at Micro Focus. He has held various development, management and engineering positions at NetManage Inc, EDS, e1525, Maxtor, Komag, and AvantCom Network.

Martin Wrigley Executive Director, AQuA

Why the World of Testing Needs Standardisation Without industry wide standards we sit in a land of chaos, confusion and the inability to clearly communicate to those outside our speciality who regard it as a ‘black art’. This is the sure way to ensure that a skilled profession is dumbed down under financial pressure. Businesses rarely ever pay for something they don’t understand – until they learn the hard way.

Martin has more than 25 years of experience in telecoms and IT, with a wide background of IT development, solutions architecture and delivery and is now an independent consultant and Executive Director of AQuA.




Céline Pasty
Manager, BGAN Quality Assurance, Integration & Testing at Inmarsat

How it all BGAN: Ten years of testing BGAN, from the first lines of code to the provision of complex global communication services

Céline will share her experience and explain how testing BGAN has evolved from the early factory integration days in 2005 to the provision of complex communication services operating via three geostationary satellites today.

After graduating from Télécom ParisTech and Paris Dauphine University, Céline joined the mobile satellite communications company Inmarsat and participated in the design, commissioning and introduction of the global satellite communications network BGAN, responsible for the end-to-end integration and testing of new products and services. Recently she has been leading the Product Assurance and Safety tasks in a new satellite-based communication system for Air Traffic Management in Europe.

Neal Hardwick
Engagement Manager, Tech Mahindra

Neal Hardwick was named Testing Innovator of the Year at the 2014 TESTA Awards. The prize recognises the individual who has pushed the boundaries of innovation and employed new methods and tools in the testing field. Neal collected his award from Rob Jacobs, TESTA judge and Head of Testing & Assurance at the Royal Mail Group. The judges were particularly impressed by how Neal had replaced outdated ways of working with improved procedures that work for all involved.

Neal is currently working as Engagement Manager for Tech Mahindra's Test Factory at a large UK telco, with a focus on customer relationships and transformation. He also leads the Tech Mahindra Testing IP Tools development function, innovating and introducing tools that improve and accelerate testing engagements globally.

Samir Sinha
Director - Test Practice, Tech Mahindra

Improving Business Outcomes: Predicting outcomes and assessing test risk with data and analytics techniques

Currently working as Director - Test Practice for Tech Mahindra Test Engagements, Samir has an extensive background in leading on test solutions and test competency, as well as specialising in end-to-end transformation programmes.

Chris Livesey
Vice President of Worldwide Sales, Borland

Introduction

Chris's keynote speech will introduce the conference and highlight some of the key trends facing quality assurance.

Chris has over 20 years of experience in the software industry – as a practitioner and consultant, in sales and business development, and in senior management. He worked for many years as a developer, test manager and project manager with end-user clients in financial services, telecoms and manufacturing. This has been followed by a successful career with several software companies, in technical and sales roles, and more recently in executive management and leadership.




SPONSORS AND EXHIBITORS

Borland - Headline Sponsor Founded in 1983, Borland Software Corporation is now a Micro Focus company; the Borland brand identifies the requirements, test and change management solutions that help companies build better software, faster. Our world-class software development products work across the entire application development lifecycle to transform good software into great software. Uniquely, our tools are Open, Agile, and fit for Enterprise. E: info@borland.co.uk W: www.borland.com

Birlasoft

Zephyr

Gold Sponsor

Event Partner

It is not just the office: we are living, travelling, shopping, wearing and "experiencing" an increasingly digitised world. Business advances driven by software innovation have brought ever-higher intensities of digitisation at the individual level.

Zephyr is a leading provider of on-demand, real-time enterprise test management solutions, offering innovative applications, seamless integrations and unparalleled, real-time visibility into the quality and status of software projects.

E: dreamteam@birlasoft.com W: www.birlasoft.com

E: sales@getzephyr.com W: www.getzephyr.com

Tech Mahindra

Planit Software Testing

Mobile Labs

Sogeti

Event Partner

Event Partner

Exhibitor

Exhibitor

Tech Mahindra represents the connected world, offering innovative and customer-centric information technology services and solutions, enabling Enterprises, Associates and Society to Rise™.

Planit is a world leader in quality assurance with more than 600 permanent software testing consultants. Our unrivalled experience in software testing enables us to consistently deliver high quality in software development and integration projects.

At Mobile Labs, we know that mobility can be a challenge throughout the enterprise. For software developers and testers in particular, the growing complexity of mobility and its impact on mobile app testing is caused by many factors.

Part of the wider Capgemini Group, with over 20 years of experience in delivering structured testing solutions, Sogeti UK helps transform the IT testing and quality assurance performance of organisations in both the public and private sectors.

E: connect@techmahindra.com W: www.techmahindra.com

E: testing@planittesting.co.uk W: www.planittesting.com

E: info@mobilelabsinc.com W: www.mobilelabsinc.com

E: enquiries.uk@sogeti.com W: www.uk.sogeti.com

Neotys

InfoStretch

ACRC

Amdocs Testing

Exhibitor

Exhibitor

Exhibitor

Exhibitor

Since 2005, Neotys has helped more than 1,000 customers in more than 60 countries enhance the reliability, performance, and quality of their web and mobile applications.

InfoStretch is a leading provider of mobile and enterprise QA services and solutions. Our offerings range from enterprise QA, mobile application development, testing, and automation to certification and sustenance

The Advanced Computing Research Centre was established in 2013 by HEFCE and the University of Sheffield.

Amdocs Testing is the world's leading testing services provider for the communications industry. We offer our customers years of testing experience in the communications domain.

E: sales@neotys.com W: www.neotys.co.uk

E: info@infostretch.com W: www.infostretch.com

E: info@acrc.com W: www.acrc.com

E: testing@amdocs.com W: www.amdocs.com






LAST WORD DAVE WHALEN PRESIDENT AND SENIOR SOFTWARE ENTOMOLOGIST WHALEN TECHNOLOGIES HTTP://SOFTWAREENTOMOLOGIST.WORDPRESS.COM

I DON'T KNOW AND I DON'T CARE A few years ago, a lady in the United States successfully sued McDonald's after she spilled coffee in her lap, causing serious burns. Her argument: she didn't know the coffee was hot. Seriously?

I have been noticing a disturbing trend in software development lately. I've run into a few situations recently that just caused me to shake my head. Maybe I'm just getting old and cranky. I am, but that's not why. I brought the subject up at a recent software testing conference. Sadly, I heard the same stories repeatedly. Basically, code is being developed with a complete lack of common sense.

I was floored last week in a bug triage meeting while we were discussing an obvious bug (that I had logged). The developer's defence: "The requirement didn't define that." Really?! That's like saying that spelling errors are justified because there isn't a requirement on how to spell correctly. Now, I'll admit, the matter was somewhat trivial. Given the sheer complexity of the application we're building, it wasn't a big deal. Fine! We can always defer it and fix it later. But justifying the bug with "no one told me"? Sorry, that argument isn't going to pass muster.

Being the smart aleck that I am, I offered a similar scenario that everyone could understand. Let's say we manufacture Bingo cards. Because we didn't have a requirement that specifically addressed the correct way to spell Bingo, we released thousands of cards to an unsuspecting public where the "o" in Bingo appears between the "B" and the "i". Now in Bingo parlours across the country, little blue-haired old ladies are jumping from their seats and yelling "BOING!!!"

Cute example. Funny? Yes. Personally I'd love to see it happen. But that's not the point. Let's look at something a bit more serious, like security.

If you have been reading my columns for a while, you know I used to be in the military – the US Air Force, to be specific. If you didn't, now you do. Throughout my 20-plus-year career, security was a constant in everything we did, at every level, from the lowest enlisted person to the highest officer. We were constantly trained on matters of security. We were all expected to be on the lookout for holes in security and to report them immediately. Fast forward a few years.


Now you're a civilian working for a company that specialises in security. During a code review you notice a potential security flaw in the system. Remembering your military training, you speak up. Everyone acknowledges that there may be a hole in the system. It probably has a low likelihood of occurrence, but it could happen, and if it did it could harm not only the customer but the company's reputation as well. You would expect someone to at least acknowledge the risk. If, following a thorough risk assessment, the company decides it is all right to proceed, then fine: I did my part and brought it up, the risk was analysed, and at least we discovered and documented it so we can fix it later.

In a perfect world, that's what should happen. Reality: "We don't have a requirement to do that, so we're not going to fix it. It operates as designed!" My response: "Maybe we should tell someone?" "That's not our job." Shut down again.

So what to do? I'm contemplating a few ideas. First: write an article. Check. Second: escalate it. That's where I'm struggling. I probably will. At least if I'm ignored and it does happen, I can always say "I told you so!" Childish, yes.




