NOVEMBER 2016
SATISFYING END USERS
OBSERVATIONS ON AN EVER‑CHANGING INDUSTRY
WILL THE ‘CONNECTED CAR’ EVER BE SAFE ENOUGH?
THE EUROPEAN SOFTWARE TESTING AWARDS SPECIAL
CONTENTS
COVER STORY: TELECOMMUNICATIONS SECTOR
NEWS
Software industry news ................................... 5
INNOVATIONS IN SOFTWARE TESTING
Observations on an ever‑changing industry ................................. 10
PERFORMANCE TESTING
Quality from all angles ................................... 14
AGILE
Continuous communication (part one) ........ 18
TELECOMMUNICATIONS SECTOR
Satisfying end users ....................................... 22
APPLICATIONS TESTING
Moving into the digital sphere ....................... 26
MOBILE TESTING
Go forth and be brave ................................... 28
TEST AUTOMATION
The craft and tools of test automation: a disparity ........................ 32
A whole new world ........................................ 36
EMBEDDED TESTING
Will the ‘connected car’ ever be safe enough? .............................. 38
ENTERPRISE IT
#EpicFail ......................................................... 44
TEST PROCESS IMPROVEMENT
Not just a quick fix ......................................... 48
BIG DATA
Big data applications ...................................... 52
SPECIAL
The European Software Testing Awards 2016 ................................ 56
Test Data in a way that business testers can read!
60% of IT projects fail. Why? Almost 50% of defects come from ambiguous requirements, which causes 80% of defect costs. Testers spend 50% of their time looking for data, and more than 70% use production data for testing.
Test Data Generator is an advanced business intelligence tool which delivers the right data, in the right place, in real time, reducing project delays by efficient provisioning of quality data:
• Shift testing left with faster time to market.
• Reduce the time and resources required to provision “fit for purpose” data by 50%.
• Automatically generate richer sets of synthetic test data and reduce maintenance costs and execution times.
• Create system data with the perfect fit to your test cases with data generation functions, default values, etc.
• Maintain compliance with all new regulations.
• Cloud‑first, mobile‑first solution available on Microsoft Azure.
• Fully integrated with Validata Quality Suite.
Don’t risk your Data Security! Don’t waste time and money searching for data manually!
For more information call +44 020 7698 2731, email info@validata-software.com or visit www.validata-software.com
New Thinking | Instant Answers
EDITOR'S COMMENT
INTERNET OF TURBULENCE
CECILIA REHN, EDITOR OF TEST MAGAZINE

Last month's major distributed denial‑of‑service (DDoS) attack against Dyn, a major provider of internet infrastructure, has left the internet abuzz. Dyn has confirmed that it was the victim of an unprecedented cyber attack on the 21st of October. The attack came in two waves, with hackers using a network of everyday devices such as webcams and digital recorders infected with malware, known as a ‘botnet’, to swarm Dyn with data requests. As a result, Dyn's systems were overwhelmed and its clients, some of the biggest names on the internet, were taken down. High‑profile sites affected include Twitter, Reddit, Spotify, GitHub, The New York Times, The Guardian, Amazon and Netflix.

In a statement, Dyn said it is “analysing the data but estimate...up to 100,000 malicious endpoints.” The company also confirmed “that a significant volume of attack traffic originated from Mirai‑based botnets.”1 Late last month, the unknown developer of Mirai released its source code to the hacking community, meaning it is freely available to hackers across the globe. The malware spreads to vulnerable devices by continuously scanning the internet for IoT systems protected by factory‑default or hard‑coded usernames and passwords.

Dyn resolved the issues and restored services to normal the same day, but the attack has sparked numerous questions and concerns around internet security and volatility, especially the vulnerabilities in the security of IoT devices. Information security has, evidently, not been high enough on device manufacturers' list of concerns, probably due to high costs and inexperience. Some have argued that paying for additional secure development, pen testing and more slows down time to market – an argument many testers and QAs are, sadly, all too familiar with! There's also the user experience angle: most users do not expect their smart kettle to have a password, let alone want to go through the effort of resetting it. One can understand why, perhaps, a device manufacturer isn't too keen to overburden
their customer with installation steps.

Reports show that many of the hijacked DVRs and web‑enabled cameras in the Dyn attack contained circuit boards and software manufactured by the Chinese tech firm Hangzhou Xiongmai. The company has since announced recalls for 4.3 million circuit boards used in cameras, but it has also blamed users for not changing the default passwords on its devices. However, the Chinese firm has also pledged to improve the way it uses passwords on its products and will send customers a software patch to harden devices against future attacks. We can only hope that other manufacturers are watching, learning and updating their product lines to ensure stronger password security in future. The IoT industry is pushing ahead, much faster than any industry body could begin to regulate it. It is therefore critical that experts get involved and share industry best practices with manufacturers as early as possible. For more on this story and other cybersecurity and pen testing matters, please check out www.softwaretestingnews.co.uk/category/security.

The November issue kicks off with interviews on ‘innovations in software testing’ with senior managers on p. 10. Read on for coverage on agile, performance testing, big data and test automation. A special supplement on p.56 highlights the 2016 European Software Testing Awards finalists. I'm looking forward to hosting the Awards Gala on the 16th of November, and I hope to see many of you there! If you can't make it in person, follow along on social media: #SoftwareTestingAwards.
NOVEMBER 2016 | VOLUME 8 | ISSUE 5
© 2016 31 Media Limited. All rights reserved. TEST Magazine is edited, designed, and published by 31 Media Limited. No part of TEST Magazine may be reproduced, transmitted, stored electronically, distributed, or copied, in whole or part without the prior written consent of the publisher. A reprint service is available. Opinions expressed in this journal do not necessarily reflect those of the editor of TEST Magazine or its publisher, 31 Media Limited. ISSN 2040‑01‑60
GENERAL MANAGER AND EDITOR: Cecilia Rehn, cecilia.rehn@31media.co.uk, +44 (0)203 056 4599
EDITORIAL ASSISTANT: Jordan Platt, jordan.platt@31media.co.uk
ADVERTISING ENQUIRIES: Anna Chubb, anna.chubb@31media.co.uk, +44 (0)203 668 6945
PRODUCTION & DESIGN: JJ Jordan, jj@31media.co.uk
31 Media Ltd, 41‑42 Daisy Business Park, 19‑35 Sylvan Grove, London, SE15 1PD, +44 (0)870 863 6930, info@31media.co.uk, www.testingmagazine.com
PRINTED BY Pensord, Tram Road, Pontllanfraith, Blackwood, NP12 2YA
softwaretestingnews
cecilia.rehn@31media.co.uk
@testmagazine TEST Magazine Group
1. ‘Dyn Analysis Summary Of Friday October 21 Attack’, www.dyn.com/blog/dyn-analysis-summary-of-friday-october-21-attack/
MAKE SOFTWARE TESTING CERTIFICATIONS WITH BCS PART OF YOUR PLAN
BCS CERTIFICATION PATHWAY
BCS MEMBERSHIP
SUPPORT CONTINUAL PROFESSIONAL DEVELOPMENT
RESOURCES
Make your career plan future-proof. As software needs get more complex so does the role of Software Tester. BCS, The Chartered Institute for IT is at the forefront of technological change. We can give you unrivalled support and prepare you for the future with our suite of ISTQB® software testing certifications and unique BCS certifications. To find out how BCS membership and professional certifications can keep you up to date visit bcs.org/futureproof ISTQB® is a Registered Trade Mark of the International Software Testing Qualifications Board.
INDUSTRY NEWS
EUROPEAN MARS LANDER FAILS DUE TO POTENTIAL SOFTWARE GLITCH Last month, the European Mars (ExoMars) lander was due to land on Mars in the first part of the ExoMars mission, but the scientists at the European Space Agency (ESA) believe a software glitch caused a catastrophic crash landing.
MOBILE WEB BROWSING BECOMES KING According to recent data from web analytics company StatCounter, mobile web browsing globally overtook desktop browsing for the first time last month, by 2.6%. Mobile phones and tablets now account for 51.3% of all web browsing, in comparison to desktops, which now account for only 48.7%.
The incident, based on data sent back, was due to the lander's parachute separating too early and the thrusters only firing for three seconds, as the computer within the lander believed it was closer to the ground than it actually was. The ESA's verdict is that the failed landing was due either to a software glitch that triggered the release of the robot's parachute too early, or to the computer misinterpreting the data sent from the sensors on the lander.
Desktop browsing does, however, still remain on top in the UK, accounting for 55.6%, and in the US, accounting for 58%, but it is in gradual decline. Google predicted the decrease in desktop users several years ago and has been avidly encouraging the switch, as PC sales have declined steadily over previous years, while smartphone sales keep on growing with new offerings annually.
MOZILLA AIMS FOR PERFORMANCE BOOST Through the development of a new project called Quantum, Mozilla wants to bring about a smoother browsing experience for Firefox users. Building on Servo and Rust, previous Mozilla software developments, Quantum should improve stability, security and overall quality on more intrusive websites, in order to boost the overall performance of a user's experience in Firefox. Mozilla is aiming to roll out Quantum at the end of 2017. The performance‑boosting software will initially be available only to Firefox users on Android, Windows, Mac and Linux, meaning that iOS users will have to wait for now.
LONDON TO SEE FREE WI-FI KIOSKS Traditional phoneboxes across London are being scrapped in favour of new Link Kiosks, which have been trialled successfully on the streets of New York City. BT, in partnership with Intersection and Primesight, is planning to launch LinkUK in 2017, offering free calls and mobile charging points throughout the London Borough of Camden and, eventually, all of central London.

BT is dubbing the new “sleek, ultramodern kiosk” as “the next evolution of our public payphone service”. LinkUK will offer users access to free Wi‑Fi at speeds of up to 1Gbps, as long as they're in range – the fastest free public Wi‑Fi service available in the UK. The services these kiosks offer are free of charge because they are funded by advertisers. As an added bonus, members of the public will be able to use these phoneboxes to access maps, directions and local services.

SAMSUNG STILL ON TOP IN INDIA After the billions of dollars lost by Samsung due to the recall of the Galaxy Note 7, the company still remains ahead in India. Receiving 22.6% of the smartphone market between July and September, according to Counterpoint Research, Samsung remains the country's leading manufacturer, although, based on last year's share of 23.3%, the share of the market the phone manufacturer holds is down by a minor percentage. India's smartphone market is huge, with the country's sales of feature phones exceeding those of the United States, making the country the world's second largest phone market behind China. Samsung's recall of the Galaxy Note 7 could potentially impact the company's market share, but it's unlikely, as most smartphones in India sell for under US$150, which would imply that the bulk of the Galaxy Note 7's shipments were not made in the country.
MICROSOFT ISSUES THREAT WARNING A new virus called Hicurdismos is posing as a Windows Security Essentials installer and tricking computer owners into believing their laptops/PCs are infected. The malware, using an icon similar to the Security Essentials castle, makes downloaders believe that their computers have stopped working by hiding the mouse pointer and displaying a fake blue error message across the screen. In order to tackle the malware, a bootable security tool that runs before Windows starts is required.
MODERNISING LONDON'S BLACK CABS Bringing London's black cabs into the 21st century, every one of the capital's 22,500 cabs will now offer customers the ability to pay for their journey via contactless payment. For now, the contactless‑compatible card readers can be either fixed or handheld, but TfL has specified that by January 2017 the card readers will need to be fixed in the passenger compartment of all black cabs. Another positive is that passengers will no longer have to pay a surcharge on their taxi fare, although the minimum fare will rise from £2.40 to £2.60 due to the convenience of offering contactless payments.
GOOGLE CRACKS THE WHIP ON SPAM APPS IN THE PLAY STORE Developers on the Play Store that use unethical methods to boost the popularity of their apps are being cracked down on, as Google takes extra steps in order to keep it from happening. When an app is released onto the Play Store, some developers attempt to manipulate the platform's content discovery systems in order to improve their app's placement. This is done through various ways, such as posting fake reviews or by artificially inflating install numbers. As these kinds of actions violate Google’s terms and conditions, the company has taken to introducing new and enhanced detection and filtering systems as a prevention method. It is stated that if developers continue to repeatedly violate the Play Store’s terms and conditions, they could have their apps removed from the store indefinitely.
LIVESTREAMING COMES TO KICKSTARTER For the first time, Kickstarter will bring live video streaming to its site with Kickstarter Live. After a short spell of beta testing, the global crowd-funding platform will allow the online supporters of its campaigns to interact in real time, taking community engagement to another level. Just like the live streaming services offered on Facebook and Twitter, supporters of campaign organisers will be able to ask questions, interact on a more personal level, and select rewards and back projects during the live conversations.

BLACKBERRY JOINS WITH FORD TO CREATE FUTURE CARS The expanded use of Blackberry's QNX software in Ford cars is now a done deal. Blackberry has signed a contract with Ford that will see the company provide its software for the automotive giant's connected car initiative. With its hardware sales drastically declining, Blackberry is making the move into software, and Ford, using this software, plans to ship 100,000 self‑driving taxis a year to ride‑sharing services by 2021. Blackberry will be hoping that this deal opens doors to offering its software to other manufacturers, making this move the one that drives the company into the future.
INDUSTRY EVENTS www.softwaretestingnews.co.uk
www.devopsonline.co.uk
EUROPEAN SOFTWARE TESTING SUMMIT
Date: 16 November 2016
Where: London, UK
www.softwaretestingawards.com
RECOMMENDED
POTTERMORE'S MAGIC IS ABOUT TO SPREAD EVEN FURTHER
Test Partners, digital agencies and a wide range of corporates have announced involvement in Pottermore's official Patronus experience, which allows visitors to J.K. Rowling's Pottermore.com to discover their Patronus. Test Partners was responsible for ensuring that the Patronus experience would be accessible to visually impaired users.
read more online

PUBLIC SECTOR FAILS TO PRIORITISE CLOUD INITIATIVES
Research reveals that the majority of public sector executives see digital transformation as the biggest challenge of the coming year, but fail to prioritise cloud initiatives. Despite total sales on the government's G-Cloud Digital Marketplace reaching £1.39bn, recent research has shown that over half (58%) of public sector IT executives have not used G-Cloud in the past year.
read more online
★★★
THE EUROPEAN SOFTWARE TESTING AWARDS
Date: 16 November 2016
Where: London, UK
www.softwaretestingawards.com
RECOMMENDED
★★★
TIGA MOBILE QA
Date: 2 February 2017
Where: London, UK
www.tiga.org/events/openlondon
★★★
U.S. CITIZENSHIP AND IMMIGRATION SERVICES INVESTS IN SOFTWARE TESTING
Capgemini has announced that it has been awarded a US$53 million, three-year task order, to provide independent testing and evaluation services for U.S. Citizenship and Immigration Services (USCIS), that is anticipated to help it address the demands of customers in the digital age.
read more online

WHAT IS DEVOPS REALLY?
People often want to know what DevOps is. Where can I buy one? How do I get started? What are the right tools? Which consulting company do I hire to get started? Joe Brown, Applications Development Lead at SunTrust Bank, delves into how he defines the culture of DevOps.
read more online

TEST FOCUS GROUPS
Date: 21 March 2017
Where: Park Inn by Radisson, London, UK
www.testfocusgroups.com
RECOMMENDED
★★★
MOBILE DEV + TEST
Date: 24–28 April 2017
Where: San Diego, CA, United States
www.mobiledevtest.techwell.com
OBSERVATIONS ON AN EVER‑CHANGING INDUSTRY
As 2016 draws to a close, it’s natural to reflect upon the year as a whole. TEST Magazine asked a few testing professionals to reflect upon their careers and hopes for the future.
Earlier this autumn, Cecilia Rehn, Editor of TEST Magazine, interviewed Dan Ashby, Head of Software Quality & Testing at AstraZeneca; Amy Munn, Lead Digital QA Manager at Three UK; and Paula Thomsen, Head of Quality Assurance at Aviva UK, about innovations in the software testing and QA space.

To start us off, how long have you been working in testing/QA?

Paula Thomsen: It's been 25 years, nearly 26. I have performed just about every test role before transitioning into the role of Head of Quality Assurance for one of the UK's largest insurance companies. I have also spent time at a diesel engine manufacturer, which gave me a unique insight into how strong testing skills are transferable across industries.

Dan Ashby: I've worked in the testing world for around 12 or 13 years, but I've really been testing things all my life – from testing my parents' patience to pulling apart toys to discover information about how they work (or don't work) when I was really young. Recently I moved into a Global Head of Software Quality & Testing role, and I'm really enjoying it. It involves a huge amount of management and I'm not really doing hands‑on testing anymore, but I still get to talk about and teach testing to my team and to others within the company. It gives me a senior management perspective too, working things from the top down, which is different from previous consultancy experiences where it was bottom‑up influencing and coaching.

Amy Munn: I've been doing software testing, in one form or another, for the past 10 years and, like most people in the field, it was my curiosity and desire to improve day‑to‑day systems that would eventually lead me to a career that I love. I entered my first dedicated testing role about six years ago, having previously been heavily involved in the UAT for a web architecture overhaul and accessibility testing as part of my role as a Content Manager. Having worked for several years doing functional, integration, mobile and web testing, I knew my passion was definitely focused on front‑end customer experience.

In your years in the industry, what has changed the most, for the better or worse?

PT: I see the reliance upon proprietary tools has decreased dramatically in favour of freeware.
This has led to increased choice and higher levels of automation across multiple application types. The flip side to this is that the demand for specific niche skill sets is putting even more pressure on the availability of skills.

AM: For me, the biggest shift I've seen has been in the desire of senior management to move away from a standard testing function into a rounded quality model that shows real end‑to‑end benefit. For several years, testers were fighting to get involved in the SDLC at the right point, always being perceived as the bottleneck or harbingers of doom when finding issues. Now, senior managers and directors appreciate how QAs can improve quality right from the point of requirements gathering and how we can help shape the delivery of products to fit the true needs of the customer in a more efficient way.

DA: I actually think the biggest change, certainly within the testing and non‑testing communities that I'm part of, has been the awareness that there are still so many people that don't understand testing and have misconceptions or misunderstandings about it. So many people still struggle with answering the questions: ‘what is software testing?’ or ‘what do software testers actually do?’ Now that we're talking about this a bit more as an industry, the realisation has still to come for a lot of people.
PT: I disagree. I think recognition for the profession and the expectation that those performing a testing role are part of IT has been transformational, especially in other parts of the globe, where the profession didn't really exist when I first started. But I believe that there is further work required, as software testing is not recognised within schools and universities. If testing isn't promoted as a valid career choice at that stage, where will we get our testers of tomorrow?

What are the three biggest innovations in software testing in your opinion?

DA: For me, I think the psychology side of testing is the biggest revelation in the testing industry at the moment. And the deeper understanding of the investigation side of testing that has been occurring over the past few years has been very innovative. We're starting to think of quality in a much deeper sense than it just being about ‘checking requirements’ or ‘asserting expectations.’ It's much more about the investigation that occurs of the things that we don't
necessarily have an expectation for. And along with that, the way we document that kind of testing has advanced, where it's not just about passing or failing – it's more about telling the story of your investigation. It's exciting times but, unfortunately, maybe only 10% of the entire testing world is at the point I'm talking about.

DAN ASHBY, HEAD OF SOFTWARE QUALITY & TESTING, ASTRAZENECA
Dan's been in the testing game for over 12 years, working on a wide variety of products. He is passionate about context‑driven testing and agile, and is currently focused on promoting software testing as a service internally within AstraZeneca, while coaching/training people in software testing and agile all over the world.
PT: For me, the three biggest innovations are test automation for execution, whether that is non‑functional or functional; the introduction of test management tools; and frameworks that support testing within agile. The test quadrant springs to mind as the most readily available and intuitive, alongside behaviour‑driven development.

What are you hopeful about for the future?

DA: I know a lot of people who are actively trying to teach others about modern testing mind‑sets, so I think that's something I'm excited about for the future. Our industry is still very young in the grand scheme of things. As we grow and expand as an industry, more and more people will start to properly study their craft, and begin to teach others too. I think people need to be open to learning, and people outside of testing might need to go through some realisation too, which may be a bumpy journey for some, but I fully believe we'll get there. We need to! There's such a reliance on software now in our world. And look at future advancements with holographics, virtualisation, IoT, cloud integration, even just how we live our normal lives with communication, shopping, travelling, working, socialising... almost everything we do involves software somewhere in there.
AMY MUNN LEAD DIGITAL QA MANAGER THREE UK
As a digitally focussed test manager, Amy is keen to improve customer experience and brand reputation by working to understand behaviours and customer drivers using a blend of insight and technical understanding. She is passionate about improving quality at every step in the SDLC and constantly improving cross‑functional processes and relationships to achieve positive business outcomes.
AM: I feel as though the industry is really starting to see the benefits of the ‘human’ side of testing. Automation most definitely has its place and that side of the fence seems to be constantly evolving and improving but, where that was previously the sole focus of how teams can improve, now companies are equally interested in the psychology behind behaviour and how we can work smarter. I remember being encouraged as a tester to learn to code because ‘the future of testing was automation’, but I knew that the skills I brought to the table were completely different, and of equal value, and I'm seeing this change in attitude more and more.

PT: I'm an advocate for the use of agile and fortunate that the company I work for is forward thinking. So I have had the chance to see how the power of greater collaboration across roles/teams can increase the pace of customer value being delivered. Agile has been fundamental in increasing the value that testers are seen as bringing; it is the springboard from which our profession will undertake its next evolution. To be a truly great tester we will need behavioural, business‑facing skills combined with deep technical skills. It remains to me the best job in the world. Where else would you get to be involved in so many different aspects of a business, get the chance to write automation scripts, and get involved in functional and non-functional aspects of a service? Every day brings a new challenge, a new technology or business proposition to get involved with. It is a very exciting place to be.
AM: I also believe the benefits of rounded teams with complementary skills are really shaping the function for the better and, by understanding the importance of insight, usability and accessibility alongside technical skill, the world of QA will continue to improve the quality of both our input and output.

What are the key testing/QA technologies that have made your job easier?

PT: There are so many; test automation has to stand out, especially the number of tools that are now available as freeware, which are very easy to use. I am hopeful about the range of test data management and test coverage optimisation tools; they have huge potential for the future.

AM: Being responsible for Three's digital testing, defining our browser and device coverage and ensuring the relevant handsets and browser versions were available to
the whole team was a real struggle. The overhead of setting up VMs and giving 12 scrum teams all access to the latest phones, exactly when they needed them, often led to us testing on older devices than we would've liked – and then keeping track of who has what and when was a scheduling job in itself. A few months ago, we successfully trialled BrowserStack and I've recently invested and rolled this out across the team. Now, we have immediate access to over 1000 devices, browsers and operating systems just through a URL and, with developer tools being available on mobile devices, our debugging is easier and everyone has access to what they need, when they need it. While I still invest in devices for our accessibility and app testing, the majority of web testing can now be completed using the tool across our test and production environments.

DA: I'd like to shuffle this question a little bit by talking about some of the key advancements with processes that have definitely enhanced my testing and my team's testing. Exploratory testing with session‑based test management has definitely made a massive difference to the quality and focus of my testing. And the test reporting that comes with that – note‑taking skills, communication skills, etc. They've not just enhanced my testing, but have enhanced other areas of my life too. We constantly communicate. Learning how to articulate things and talk about them effectively is a useful skill. It's an essential skill for a tester in today's world.
What is the dream tool/service that you would like to see invented?

PT: I'd like to see a robot that can take the analysis from live monitoring and automatically define optimum test cases. If it could also replicate the data required for the testing whilst desensitising it, I would be in heaven. I believe that this would ensure that regression test packs remain appropriate to real‑life usage, freeing up the skilled testers to focus on exploratory testing and defining the coverage required for new features and systems, whilst continuing to increase the pace of technology change.

DA: I find this a tough question. If I had a dream tool that would solve an important problem, then I'd probably have my team work on creating that. But actually, I'd say most of the problems right now seem to be mind‑set based – misconceptions about testing and automation, and other buzzwords surrounding software development, which seem to be producing bad testing, automation and development practices and mind‑sets. It's human skills that are needed to solve these problems. I think there will be technology challenges in the future (the near future too). For example, what Microsoft is doing with things like HoloLens and the Windows 10 UWP. I'd love to see how companies test their Windows 10 apps to use with the HoloLens headset. I'm sure there will be lots of stories coming out in the future regarding those challenges.
PAULA THOMSEN HEAD OF QUALITY ASSURANCE AVIVA UK
Paula's testing experience spans over 20 years, covering both functional and non‑functional testing. She has worked predominantly within the finance industry but spent a stint in the manufacturing industry. Paula is passionate about the responsibility we have, as a community of IT professionals, to promote IT and testing as a career.
QUALITY FROM ALL ANGLES
Do you have the required quality to be agile? asks Jun Zhuang, Senior Performance Test Engineer, HOBSONS.
Almost all the time when people talk about performance testing in agile software development, they talk about the process: for example, plan and prepare early, start testing as soon as a story has passed functional testing, test and communicate continuously, and so on. The quality associated with the performance testing practice itself, which is also critical to the success of the agile process, is rarely mentioned.

The quality associated with the performance testing practice can be viewed from several perspectives: the quality of the engineer, the quality of the testing practice, the quality of the testing tool, and the quality of the people and practice beyond the performance testing team.
THE QUALITY OF THE PERFORMANCE TEST ENGINEER
To start with, it is people who participate in and drive the process. The performance test engineer not only has to be familiar with the agile process but also with the lifecycle of performance testing, from planning to test development to execution to result analysis to reporting and diagnosis. There is probably no need to emphasise the criticality of the planning phase: it is in this phase where performance requirements are gathered, the test plan is drafted, and important questions such as test coverage, expected load, acceptance criteria and whether the existing tool is capable of doing the work should be asked and answered. These come naturally for a seasoned tester but probably won't for someone inexperienced.

Agile development requires tests to be developed and executed in a timely fashion. From this perspective, the better the test engineer knows the tool and the application being tested, the more productive he becomes. There is no denying that some tests are challenging to develop and may take a long time, but if the test engineer is always falling behind due to a lack of working knowledge of the tool or the application, it is a problem that must be fixed now. Every tool has its limitations; the quality of the engineer is also reflected in whether he has sufficient ammo in his toolkit to complement the tool. Generally speaking, knowledge of every aspect of software engineering and computer engineering helps. Furthermore, there are times when the cause of certain performance issues is challenging to pinpoint, even for an experienced tester. The knowledge and experience of the engineer often determine how long it takes to track down the cause.

It is also necessary to highlight the roles that performance test engineers should be playing. Our primary role, of course, is to take part in the software development process and provide the necessary testing coverage. Our other roles, which are rarely mentioned and not realised even by some testers, include, but are not limited to, promoting performance awareness among all parties involved in the software development life cycle (SDLC) and educating people about the broad range of tests we can perform. Believe it or not, it's not uncommon for people to think that the only thing we do is LOAD testing, i.e., measure an application's response time under a certain load. Without the support and participation of other teams, it is going to be very difficult for the performance testing practice to be successful.
THE QUALITY OF THE PERFORMANCE TESTING PRACTICE
The quality of the performance testing practice involves all phases of the performance testing lifecycle, from planning to test development to execution to reporting. I have already covered planning in the previous section; here I am going to focus on the artefacts that are produced, such as test scripts and documents, plus the test execution.

I once worked on an electronic health record (EHR) application used by more than 200,000 physicians in the US. This application has a page that allows a physician to write clinical notes (SOAP notes), along with more than a dozen other sections for patient‑related information such as vitals, surgical/medical histories, allergies and test results. The test for this page was initially built with just a few sections and pretty much all static data, and it worked, surprisingly, for many releases with acceptable response time. During one of the releases, I decided to rewrite it to accommodate the variations in test patients' data and incorporate all the sections; as a result, the response time for the same request went up several‑fold compared to the established baseline. What caused the elevated response time? Was it code related or something else? It only dawned on me a few days later that the acceptable
response time before was the result of using static data in the request, which would never happen in the real world, where the information for every patient is unique. But, as pointed out by a developer, including all the sections in the request is probably not realistic either. Further tests showed that the response time is quite sensitive to how many sections are included in that specific request, i.e., the payload of the request matters. The main takeaways from this story are that the outcome of a test could vary significantly depending on the quality of the test script and that, for the test to be valid, the behaviour of the test must closely mimic that of a real user.

JUN ZHUANG, SENIOR PERFORMANCE TEST ENGINEER, HOBSONS
Jun is a seasoned software testing professional and author of multiple publications on performance testing and test automation. He has led and been involved in the design and build‑out of various automation frameworks and the performance testing of large‑scale enterprise applications across multiple industries.

As with software development, the test scripts we develop could have flaws, so how can we be sure they do what we intend them to do? There are a few things to think about:
• Follow software development best practices. For example, establish coding standards and institute periodical code reviews.
• Make sure all dynamic data in the script are correlated correctly.
• Add validation to check the response of critical requests (a short sketch of this follows after the list).
• Test the script with common data variations.
• Run the script and manually verify it did what it is supposed to do via the UI or database.
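To make the correlation, validation and data‑variation points a little more concrete, here is a minimal, hypothetical sketch in Python. The endpoint paths, payload shape and token field name are invented for illustration only; in practice this logic would live inside whatever load‑testing tool you already use.

```python
import requests

BASE_URL = "https://test-env.example.com"  # assumption: a dedicated test environment

# A few representative data variations rather than one static patient.
PATIENTS = [
    {"id": "P001", "sections": ["vitals", "allergies"]},
    {"id": "P002", "sections": ["vitals", "allergies", "medical_history", "test_results"]},
    {"id": "P003", "sections": ["vitals"]},
]

def run_one_iteration(patient):
    session = requests.Session()

    # Correlate dynamic data: capture a token from the login response and
    # reuse it in the next request instead of hard-coding a recorded value.
    login = session.post(f"{BASE_URL}/login",
                         json={"user": "loadtest", "password": "changeme"},
                         timeout=30)
    login.raise_for_status()
    token = login.json().get("token")  # hypothetical field name

    resp = session.post(
        f"{BASE_URL}/notes",
        headers={"Authorization": f"Bearer {token}"},
        json={"patient_id": patient["id"], "sections": patient["sections"]},
        timeout=30,
    )

    # Validate the response of a critical request, not just the status code.
    assert resp.status_code == 200, f"unexpected status {resp.status_code}"
    assert patient["id"] in resp.text, "response does not reference the patient under test"
    return resp.elapsed.total_seconds()

if __name__ == "__main__":
    for p in PATIENTS:
        print(p["id"], f"{run_one_iteration(p):.2f}s")
```

The point is not the specific library; it is that every critical request carries a correlated, varying payload and an explicit check on the content that comes back.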
Did it ever happen to you that you intended to run the test against one environment but instead it was run against another one by mistake? Did it ever happen to you that the test was run against the wrong build? How about running tests with incorrect configurations and wrong test data? Admittedly, I made all those mistakes at some point in my performance testing career. Time wasted as a result of these mistakes can be characterised as ‘lost time’. If you keep making these mistakes, how can you not be stressed out while trying to keep up with the fast pace of agile? If that's the case, it's probably time to stop and think hard about how you can avoid them. Here are some tips:
• Try to automate the test setup and configuration as much as possible.
• For the first few minutes after starting a test, manually check the target environment and make sure you see the expected activities as a result of your test.
• Start with a small load and gradually increase it to the desired level (a rough ramp‑up sketch follows below). If you try to apply the maximum load off the bat, chances are you might bring down the system.
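As a rough illustration of the last tip, the sketch below ramps virtual users up in small steps rather than applying the maximum load immediately. The target URL, step size and intervals are placeholders; most commercial and open‑source load‑testing tools expose an equivalent ramp‑up setting, so treat this only as a picture of the idea.

```python
import threading
import time
import requests

TARGET = "https://test-env.example.com/health"  # placeholder endpoint
MAX_USERS = 50
STEP = 5            # add 5 virtual users at a time
STEP_INTERVAL = 60  # seconds to hold each level before stepping up
stop = threading.Event()

def virtual_user():
    """One very simple virtual user: request, pause, repeat until told to stop."""
    while not stop.is_set():
        try:
            requests.get(TARGET, timeout=10)
        except requests.RequestException:
            pass  # a real script would log and count the error
        time.sleep(1)  # crude think time

threads = []
for users in range(STEP, MAX_USERS + 1, STEP):
    while len(threads) < users:
        t = threading.Thread(target=virtual_user, daemon=True)
        t.start()
        threads.append(t)
    print(f"running with {users} virtual users")
    time.sleep(STEP_INTERVAL)  # check the target environment before the next step

stop.set()
print("ramp complete")
```

Stepping up slowly like this gives you a chance to confirm, at each level, that the system and the monitoring both behave as expected before the next increase.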
Performance testing tools, at least the commercial ones, typically come with some sort of report‑generating capability; some even allow testers to create templates from which reports can be generated. But as much as the automatic report generator can help, I cannot see how it is ever possible to avoid certain customisations if you want to create a high‑quality report. To begin with, the content of the report should reflect the purpose of the test. For example, it won't make sense to include in the report a CPU graph of the application servers for a test run directly against the database. Secondly, you may want to include only the minimum amount of information that is necessary to support the test findings and make sure it is easily understandable by your audience. During a test you could gather a huge amount of data, from throughput to transaction response times to server CPU usage to network bandwidth usage, which, if all included in a report, especially in raw format, could be overwhelming as well as confusing. Depending on the audience, a one‑page executive summary might be sufficient for upper management, but you might want to include some technical details for developers. As long as it can accurately communicate the results, the report does not have to be very formal; sometimes an email might just suffice. Whatever else you deem appropriate, I would always include the following in the report: my observations, recommendations (if applicable), and additional actions to be taken (if applicable).
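A small sketch of that idea: boil the raw timings down to the handful of figures an executive summary actually needs. The transaction name and numbers below are made up; in practice they would come from your tool's raw results export.

```python
import statistics

def summarise(name, response_times_ms):
    """Reduce raw response times to the few figures a summary report needs."""
    times = sorted(response_times_ms)
    p95 = times[int(0.95 * (len(times) - 1))]  # simple nearest-rank percentile
    return (f"{name}: n={len(times)}, "
            f"median={statistics.median(times):.0f} ms, "
            f"95th percentile={p95:.0f} ms, max={max(times):.0f} ms")

# Hypothetical raw data for one transaction, e.g. exported from the test tool.
print(summarise("Write clinical note", [420, 380, 510, 1450, 395, 405, 620, 433]))
```

Everything else – per-request graphs, server counters, raw logs – can live in an appendix for the developers who need it.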
THE QUALITY OF THE TESTING TOOL
Cost aside, it is probably safe to say that not every performance testing tool, open source or commercial, suits the testing needs of your application. If you have not decided on a tool yet, the following is a list of questions you might want to ask before choosing one:
• Does it allow easy development, relatively speaking, of test scripts?
• Can it be integrated into automated build and deployment processes to run tests automatically?
• Does it allow you to ramp up any number of virtual users anytime you need to?
• How good are its performance monitoring and result analysis capabilities?
• How good and responsive is the vendor's support?
• How big is the pool from which you can select additional tester(s) when needed?
As for the details of any of these, many people have written about them, so I don't feel it's necessary to repeat what they said here.1 If you have chosen a tool but are not happy with it, then you may or may not have an easy way of switching to a different one, depending on how many assets you have built and the terms of your contract with the tool vendor.
THE QUALITY OF EVERYTHING BEYOND THE PERFORMANCE TESTING TEAM
Everyone knows that the earlier a defect is found and fixed, the lower the cost, and everyone knows, in theory, how to do that. But, in reality, application performance is still an afterthought in many places and performance testing is often underutilised or not given the proper priority. A common problem is the lack of non‑functional requirements, which leads to insufficient performance coverage, which, in turn, leads to mediocre performance in production – the application might work most of the time, but when hit with high load, all kinds of problems will be exposed. Some of the problems can be easily addressed by adding more resources, i.e., if scalability was not forgotten when designing the system and additional resources are readily available, but some, such as memory leaks and bad queries, may require lengthy and extensive investigation. During one such incident, production at one of the companies I worked for was completely shut down for about two weeks. Try to imagine the stress people had to endure when that happened, not to mention the financial impact.

Have you ever wondered why it takes forever for some web pages to load? Among many possibilities, one could be that the developer who wrote that page did not employ the many performance optimisation techniques available to speed up page loading; as a result, the page might be making unnecessary trips to the server, or not compressing images, or not caching data that should be cached. Performance issues such as these could be reduced if developers kept performance in mind while coding and ran sufficient unit tests. It's doubtful that any company can deliver high quality software before its developers get over the idea that QA is the organisation that's responsible for the quality of the application and start paying attention to the quality of the code they write.

Certain performance issues can happen with just a few users, but some require heavy load, which can only be generated against a fairly powerful system. For example, database queries often perform differently depending on how many records are in the database. If it is too expensive to have a clone of the production environment for performance testing, you should at least test against a database with production‑sized data.

Finally, let's talk about production monitoring. When customers complain about production slowness, is there quality monitoring data to tell us what was going on in the system at that moment? What was the request rate? What was the resource utilisation? This information is critical for diagnosing and fixing potential performance issues in a timely manner.
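On that monitoring point, even a very small periodic snapshot of resource utilisation is better than nothing when a slowness complaint comes in. A minimal sketch is shown below; it assumes the third‑party psutil package and a plain CSV file, whereas a real deployment would of course use a proper APM or metrics stack.

```python
import csv
import time
import psutil  # third-party: pip install psutil

def capture(path="system_snapshots.csv", interval=60):
    """Append a timestamped CPU/memory snapshot every `interval` seconds (runs until stopped)."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        while True:
            writer.writerow([
                time.strftime("%Y-%m-%d %H:%M:%S"),
                psutil.cpu_percent(interval=1),   # % CPU averaged over 1 second
                psutil.virtual_memory().percent,  # % memory in use
            ])
            f.flush()
            time.sleep(interval)

if __name__ == "__main__":
    capture()
```

Even data this crude, kept alongside request-rate figures from the web server logs, makes the "what was happening at that moment?" conversation far shorter.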
SUMMARY
A smooth and productive agile process is like a well‑oiled machine: to get the maximum output, all parties involved – business analysts, development, testing and TechOps – have to operate efficiently. Software quality is the responsibility of everyone involved in the SDLC, and no company can deliver high quality software before that happens.
Reference 1.
If you are interested in learning further, you should not have trouble finding more on the internet. For example, ‘Load Testing: See a Bigger Picture’ by Alexander Podelko at www.alexanderpodelko.com/docs/Load_Testing_CMG12.pdf would be an excellent read.
CONTINUOUS COMMUNICATION PART 1 ‘Let’s talk!’ urges Kaspar van Dam, Consultant, Improve Quality Services, in this two‑part article on communication.
History was made on 11 February 2001, when 17 experts in a ski resort in Utah put their heads together. Two days later they emerged to the outside world carrying the stone tablets currently known as the Agile Manifesto. Within the IT business this turned out to be a revolution, changing the way we work and especially changing the way we communicate at work. Fifteen years have passed. Where are we standing now? What did the agile revolution bring? Are we actually working more efficiently? Are we communicating better? Or did we lose the principles of the Agile Manifesto somewhere along the way?

It seems a simple question: is agile actually as successful as we often think it is? But answering this question seems to be all but impossible. For instance, at GotoCon, industry veteran Linda Rising asked the audience the striking question: “Who of you uses agile because of the thoroughly executed double blind tests and scientific research?” Needless to say, no one raised a finger. There is no hard research showing that the agile way of work is actually more successful than the good old waterfall methodologies – simply because it's all about people working together, about communicating, about understanding what the team needs and wants. It's no exact science.

However, in recent years we do see more and more doubt rising around the question of whether agile is actually the Holy Grail of the digital world. Take well‑known expert Jez Humble, who told the same GotoCon audience about continuous delivery actually working in the real world, in contradiction to agile, which he hadn't actually witnessed working as intended in reality. With this he was pointing to the term ‘semantic diffusion’, as Martin Fowler calls it. Fowler is one of the original 17 people who came up with the Agile Manifesto, but he saw that the principles they came up with were interpreted differently everywhere. Now, the fact that people interpret the Manifesto in a way that suits them best is actually how a Manifesto should be used. It shouldn't tell people what they should and shouldn't do; it just states which things one should value more over certain other things. However, Fowler seems to have some doubts about whether agile hasn't watered down too much. Others, like Dave Thomas (who was also one of the original 17), take it a step further, claim that agile is actually dead in the water and state that it's about time to come up with a new term that suits the original values of the Manifesto better.
HOW TO MAKE THE MOST OF AGILE
While it's good to philosophise about things like this, the reality is that we are using agile every day. We are on a moving train and we simply can't put it to a halt. So, this article series won't be about new terminology for agile. It's not about creating a brand new way of work or creating a second revolution to change the way we think about software development. It's about what you should, or could, do to make the original values of the Agile Manifesto work, while keeping in mind that we're not living in some software development Utopia where we are practicing agile in its purest form (does that even exist?).

It's a simple fact that many organisations are in some sort of split between traditional line management and the relatively new agile way of work. There's a contradiction between the year planning, budget estimations and documentation on one side and the ‘agile’ development team on the other side. In the real world this happens so often that it has even earned itself a name: ‘WaterScrumFall’. This article won't offer you some sort of silver bullet to dissolve this contradiction, simply because that silver bullet doesn't exist. We'll have to live with the fact that we can only bring agile to practice in a suboptimal world.

So, what can I offer you to make agile work better in the real world? It's actually not something I came up with. It's something the Agile Manifesto already says by itself, and the main reason why Kent Beck, also one of the founders of the Manifesto, originally suggested giving the methodology the name ‘conversational’ instead of agile. Some points from the Manifesto itself:
• Individuals and interactions over processes and tools.
• Customer collaboration over contract negotiations.
• Business people and developers must work together daily throughout the project.
• The most efficient and effective method of conveying information to and within a development team is face‑to‑face conversation.

It's really that simple: just talk! Talking to each other is the way to successfully work together and to take software development to the next level, as was intended with the creation of the Agile Manifesto. However, while it may sound simple to just talk to each other, and it may sound like yet another open door, reality has proven that it's actually really hard within
a software development project. Business and IT don't speak the same language, there are politics going on, and people have different agendas. In other words: there is no safe environment where we feel free to start an open and honest conversation, and thus we tend not to talk. For example, we are often afraid to show people there are things we don't (yet) know, and we make assumptions by the dozen.

Communication, conversation, interaction, collaboration: that's what agile is all about. This shouldn't be limited to just a few people or to a select few pre‑planned meetings (e.g. the stand‑up). It's a continuing process in which all stakeholders should be involved. Not every once in a while, but always: continuous communication.

KASPAR VAN DAM, CONSULTANT, IMPROVE QUALITY SERVICES
With over 10 years of experience in IT, Kaspar advises colleagues and clients on matters concerning testing and/or collaboration and communication within (agile) teams. He has published a number of articles on test automation, agile ways of work and continuous communication and is a speaker on these matters at events.
APPLICATION IN REAL LIFE
Part two of Kaspar's article, which will outline guidelines on how to achieve continuous communication, will be published in the next issue of TEST Magazine.
Now, back to reality: an average scrum team within an average project. The team is fed with stories written by a business analyst. The business analyst has based the stories on his talks with a product owner. The product owner obviously has no time to be a full member of the team, since (s)he is working on different projects and/or is part of different teams within the project. The stories are part of an epic, and the epic has a certain predetermined deadline, since line management has promised this to business managers to get funding for the project in the first place. There is no room for improvement based on feedback, except for the incidental bugfix. It must be right the first time to have everything ready before the next release. Recognisable?

Now, what can we do to get the most value for money for the customer in an environment like this, where we claim to be agile but are really working more in a WaterScrumFall way? How can we be more agile whilst ignoring some of the basics described in the Agile Manifesto? How can we create an atmosphere in which continuous communication plays a central part? The answer might be found in the Three Amigos concept.
THREE AMIGOS
The term ‘Three Amigos’ is getting more and more popular. It's basically nothing more than putting three people in a room: the developer, the tester and the business (product owner/business analyst). However, it turns out the name was rather poorly chosen. The principle behind Three Amigos has nothing to do with these three people alone. It's about discussing what you want before you start building and testing it, and doing this with all relevant stakeholders at once: not just the product owner and the business analyst, but also developer(s), tester(s), operations, the DBA, etc. This could be three people, or it could be a lot more. However, don't overdo it. Too many people in a room tend to lead to endless debates about topics that don't really matter (much).

So what's the benefit of discussing things in advance with all stakeholders inside and outside the team? It's an effective way to decide where the low‑hanging fruit is and thus create a minimum viable product (MVP) with the least effort. For instance, the product owner can tell what the business really needs, developers can tell how complicated it will be to build it, and testers can give an indication of how much work is needed to guarantee a certain quality. Together you decide if the created business value outweighs the effort needed to create it. And this meeting doesn't need to be a huge thing planned ahead for weeks. You could even start using this principle by introducing recurring Three Amigos sessions of at most 15 minutes right after the team's stand‑up. It is very important, though, to ask the right questions. Things like: ‘Why should we build this?’, ‘How can we build and test this?’, ‘What will go wrong if we don't build it?’, and probably the most important one: ‘Give me an example of how this should or should not work.’

This principle works well by itself, but there are some preconditions to make it work and to make it into a continuing process, a way of work, maybe even a way of life!
Owning and operating networks in 26 countries, Vodafone Group lives up to its classification as a multinational telecommunications company. Among mobile operator groups globally, it is ranked fifth by revenue and second in the number of connections. For such a large organisation, achieving quality is a complex and commendable enterprise.
The key to how a consumer makes a decision is based on value or content – it is therefore critical that organisations remain focused on the quality and ease of use of their products and services
AJIT DHALIWAL DIRECTOR, HEAD OF DELIVERY, RELEASE AND TESTING VODAFONE UK
Ajit is an experienced Delivery and Testing Director within the telecommunications industry. He is known for his razor focus on delivery and technical leadership of major large-scale complex IT transformational change programmes.
Following his presentation at The National Software Testing Conference in May 2016, Ajit Dhaliwal, Director, Head of Delivery, Release and Testing at Vodafone UK, sat down with the Editor of TEST Magazine, Cecilia Rehn, to discuss career paths and how the importance of testing is being reinforced at the British multinational telecommunications company.

Tell us about yourself, and your path into testing?

Whenever I think to myself ‘What's my profession?’, or ‘What's my discipline?’, it always involves testing. It always has. I like to consider myself a career tester. I started my career in testing with Logica (now CGI) as a graduate software tester, having studied Mathematics and Computer Science at Brunel University. I started my first role in testing and worked on various secure sector projects, but quickly realised I did not have the patience for testing and was more suited to management, so I progressed in my career through the ranks of testing. I have held various positions in testing which have mainly focused on Customer Relationship Management enterprises for large-scale organisations, with a special focus on new greenfield transformational upgrades and implementations. I've been very privileged to work with some of the best FTSE organisations in the UK and help transform their businesses through new IT system transformations across digital, voice and assisted channels.

Before joining Vodafone, I was the Head of Testing at BT Retail, where I had the privilege to work on many leading programmes, including Digital Transformation and the launch of BT Sport! My previous experience within this industry has provided me with the ability to shape testing and delivery organisations to cater for the needs of the business and telecommunications industry, and to focus on delivering excellent customer experience and value through testing.
What do you oversee at Vodafone?
At Vodafone I am responsible for Testing and IT Delivery, serving Vodafone UK. This covers our digital platforms, CRM, billing and business support systems across mobile and fixed line products. Very simply, I am responsible for quality! It’s 50% of my role. I’m also responsible for IT delivery, which is the other 50% of my role. I am also part of the UK leadership, which helps me to shape strategy and influence commercial decisions. My role sees me responsible for IT strategy, execution, and most importantly quality, through delivering excellent customer experience. I’m the one-stop shop for quality. I joined Vodafone in November 2015, so I’ve been in this position now for a year.

Have you witnessed change within the industry during this past year?
Vodafone UK went through a significant IT transformation programme, moving away from a legacy CRM to an industry-first single Oracle stack. As with any major IT system and business transformation, there were a few issues along the way. Unfortunately, during the upgrade and subsequent customer migrations this led to a few customer experience issues arising, which led to customer dissatisfaction. However, I have been fortunate enough to help lead the recovery, drive up customer experience and NPS, and bring back stability and service reliability. Now that that’s all been done, effectively my remit is to cover the issues that have been caused by the transformation, drive up customer experience, drive up net promoter scores, and introduce a structured change process, so that the business can introduce and deliver change more frequently to respond to competition. Our aim is also to get changes delivered quicker, in order to improve customer experience and, just as importantly, employee experience, and then lastly, to launch products and services quicker to market. Introducing new practices such as agile and DevOps and increasing technical insourcing are all priorities prevalent in today’s climate.

How is the QA function set up at Vodafone? What changes have you made as UK Head of Delivery, Release and Testing? And why?
Our primary focus at Vodafone UK is delivering a first class customer experience and therefore testing is a key function within our IT department. When I joined Vodafone, my priority was to reinforce the importance of testing and the value it brings in informing decisions on quality, delivery and vendor performance. As a team, we have worked hard to drive up testing maturity and standardise operations and reporting, investing heavily in test automation across BSS and Mobile using cloud based solutions as an example. We also reformed and structured our testing engagements with our strategic partner
Accenture, in order to deliver improved value and focused outcomes aligned to business priorities. Another big shift was the introduction of monthly releases – smaller, but more frequent changes to respond to business demand, increase agility, reduce concept-to-market time and improve throughput. This enables us to respond to market conditions quicker, helps increase stability of the stack and improves the experience for our customers and internal employees, particularly those in our front line: working in stores or in call centre operations.

We have also pushed hard on the insourcing of key technical roles to drive up quality and leadership, and to help with cost control and direction setting. A key attribute we needed, in order to help IT steer its engagements and respond to business demands, was to hire a team who have the experience and depth to drive IT change and take accountability.

The insourcing of the technical roles – how has that recruitment drive been planned out, and how is that working?
We have brought in a range of people. It’s been quite a key point actually for what we’re trying to do at Vodafone: simply taking back the leadership decisions, bringing key roles back in-house, making people more accountable and giving that empowerment back to our staff. It’s been an important step towards our overall improvement as a department. It isn’t necessarily important that potential employees have telecommunications experience. The key things we look for are people who have testing experience, and particularly people who have testing experience working with on and offshore teams. We look for people who have worked in multi-application environments, come from structured organisations and people who have passion! I was fortunate enough to work for great IT leaders who helped coach and shape me into the person I am today. The qualities I role model are those I took from working with some of the UK's best CIOs.

How is software quality changing in the telecoms sector? Are you looking to any other sectors for ideas/inspiration?
The telecoms sector is experiencing its biggest shake-up and challenges in the UK since the privatisation of BT and the auction of 3G. The Strategic Digital Review, regulation pressure from Ofcom and market consolidation bring about a whole host of challenges. This is overlaid with economic and political instability – ultimately customers make decisions based on value, choice and customer experience. Telecoms serve as a utility, i.e., fixed line or entertainment such as TV or music, which are served through mobile networks using vast amounts of data. All streamed to 5.5 inch devices! The key to how a consumer makes a decision is based on value or content – it is therefore critical that organisations remain focused on the quality and ease of use of their products and services, and that the time to market is rapid in order to keep up with market trends and apps – which are released weekly!

In your presentation at The National Software Testing Conference in May, you talked about maintaining the importance of quality assurance and testing independence whilst driving for cost efficiencies – tell us more?
I am a big believer in testing departments and the fact that testing teams can provide informed decisions on whether you go live
with a capability, if it’s reached the right test criteria, and whether or not it’s got the right quality factors. And I feel that having a Head of Test in an organisation helps reaffirm that role and that profession – for teams, for the organisation and for testing as an industry. Where it does present a challenge, however, particularly in larger organisations driving for cost optimisation and increased profit, is that it can be very easy to get rid of the test department. But then you lose the individuals that are passionate about finding defects and, to be honest, the ones that do the boring work of running the test scripts. Any kind of handover in the IT lifecycle can introduce inefficiency and unforeseen overheads, which can sometimes inhibit and cause frustration for people. For example, that classic handover from development to test and the constant discussion of ‘Have you tested everything? Can I see your test results?’ can complicate the IT lifecycle and create inefficiencies. What I see as well is people driving for the holy grail of DevOps and bringing together delivery and testing teams, so it’s quite a balance I feel. I think that there’s no right or wrong answer; I think it comes down to the maturity of an organisation, where it is in its lifespan.

What are you looking forward to in the future at Vodafone? What’s next?
Well one thing that we’ve been talking about in the team is that, like all test organisations, we want to be able to collect the huge amount of data, test scripts, test script runs, defect data, and wider incident data. We were thinking, ‘how could we use all of that to help better inform decisions on regression test coverage, on likelihood of failure, on defect analysis?’ So one thing we’re looking at is how we can centralise all of that data from HP ALM and other sources into a big data framework, and how we can extract that data in a meaningful way, to start helping inform some of the test case designs and test operations. It’s at the early stages at the moment, but I’m excited about it. Secondly, I am looking forward to taking the challenge back to the competition in the UK and driving significant change within Vodafone UK to help position us for our future challenges against the competition. I want nothing more than to beat the competition in our home market!
MOVING INTO THE DIGITAL SPHERE Tamar Weinreb Haviv, Product Manager, Amdocs, explains why ‘digital testing’ is not just testing.
Everyone’s going digital, across all industries and all geographies. Everyone also has a different interpretation of ‘digital’ and consequently, of ‘digital testing’. Within the broad range of interpretations and scope, one central common denominator identified is the ability to deliver services to the end customer via digital channels – primarily via websites and mobile applications. These services may enable window shopping from websites or placing orders for a service or product via a mobile application. Although this capability has become mainstream, it is constantly evolving. And so is testing of these components.
Such testing – often named ‘digital testing’ – may appear similar to testing of any old software. Even experienced testers can easily fall into the false notion that they can approach both in the same way. They may have a wealth of knowledge and experience, and perhaps even a proven track record of ensuring high quality in complex IT projects. But that won’t cut it. I happened to witness a testing project for a leading telecommunication provider where testers like these dove into the project to test their digital applications in the same manner they would approach a non‑digital project, such as backend systems and even local online ones. Needless to say, these testers stumbled upon unexpected pitfalls that could have easily been avoided. This experience inspired me to share some observations of how digital testing is unique and requires its own approach. Perhaps it’ll help save someone else out there unnecessary delays and stress.
‘WORKING’ IS NOT GOOD ENOUGH

A digital system’s end user is primarily the end‑customer, the one who places an order, i.e., the source of revenue. Backend systems and even online ones are generally
activated by people being paid to do so, such as IT specialists or CSRs. The end customer may have minimal knowledge of how to operate the system, high expectations and minimal patience; the paid user has probably received training and is motivated to use the system accordingly to help them do their job. This difference requires testers who traditionally focus primarily on the functional aspect to broaden their scope and give special attention to the user experience. Functional testing remains imperative; however, if you don’t get into the shoes of the end user and look 360˚ around you – as a user – you may miss embarrassing spelling mistakes, illogical flows (what made sense when designing individual flows may suddenly reveal obvious shortcomings) and maybe just how much the system can get on one’s nerves.
THE BOTTOMLESS PIT OF PLATFORM TYPES

There is an increasing number of operating systems, browsers and device types out there in the market, and digital applications need to be tested for compatibility with the different platform types and combinations. However, testing applications on all types and their possible combinations is costly and unnecessary. Analytics gathered on the target audience’s choice of platform types can be used to prioritise compatibility testing accordingly. Analytics can also be used to identify and prioritise frequently used and important flows.
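As an illustration of that prioritisation step, the sketch below ranks device/OS/browser combinations by how often they appear in analytics data and keeps only the combinations that cover the bulk of real traffic. The records, field names and coverage threshold are assumptions made for the example, not figures or tooling from the article.

from collections import Counter

# Hypothetical analytics export: one record per visit.
visits = [
    {"os": "Android 6", "browser": "Chrome", "device": "Galaxy S7"},
    {"os": "iOS 10", "browser": "Safari", "device": "iPhone 6s"},
    {"os": "Android 6", "browser": "Chrome", "device": "Galaxy S7"},
    # ... many more records in a real export
]

def combinations_to_test(visits, coverage_target=0.8):
    """Return the smallest set of platform combinations (most common first)
    covering `coverage_target` of observed traffic."""
    counts = Counter((v["os"], v["browser"], v["device"]) for v in visits)
    total = sum(counts.values())
    selected, covered = [], 0
    for combo, n in counts.most_common():
        selected.append(combo)
        covered += n
        if covered / total >= coverage_target:
            break
    return selected

print(combinations_to_test(visits))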
YOU SNOOZE, YOU LOSE

Business critical scenarios such as placing an order still remain at the top of our priority list. Although this isn’t unique to digital testing, it should be accentuated because, unlike backend systems whose flaws may be fixed retroactively, an abandoned shopping cart due to an unintuitive flow or poor performance leads to irrevocable revenue loss.
EXPECT THE UNEXPECTED

Digital applications are open to users with many different profiles – including senior citizens, gen‑Xers, disabled people and even hackers. Testers need to run the system in unexpected ways due to the unpredictability of the various types of users. Applications that aren’t accessible to users with disabilities (such as compliance with screen readers) not
only exclude a large population – but may breach accessibility laws in certain countries. Specialised security testing with a strong emphasis on penetration testing is a must due to the openness of digital applications. Testers tend to follow test plans religiously, assuming there is no need to test beyond the test cases. Digital testers are encouraged to deviate from the plan, wander about and discover additional test scenarios – also known as exploratory testing. Even crowd sourced testing has become increasingly popular, where applications are open to testing by external communities.
KILLER DEADLINES

Digital applications are being released faster than ever before and, in order to remain competitive in this market, the need for continuous delivery is increasing, often as part of a broader transformation to DevOps. As a result, digital testing practices are shifting left. Automation plays a major role in supporting this shift as it’s an integral part of the process. Assessing what is effective to automate and selecting the most appropriate tools and methodologies requires skilled expertise. Technical expertise combined with a deep understanding of the business will contribute to the ideal implementation. This expertise requires continuous research (as the technologies and methodologies are rapidly evolving), hands‑on experience and a deep understanding of the business at hand.
ACKNOWLEDGING THE DIFFERENCE IS HALF THE BATTLE

If it ain’t broken, we don’t fix it. And if we have a great track record for testing, we may be averse to questioning our approach. This is why we must recognise that digital testing is different and requires not just unique skills, tools and methodologies but also a completely new state of mind. We must be sensitive to all of the variants our users will experience, with special consideration for the diversity of the audience as well. The highly competitive nature of this market leaves no room for mishaps or delays. By putting the user experience at the top of our priorities, together with the supporting infrastructure and methodology to meet these unique needs, we can avoid the common digital testing pitfalls. After all, digital testing is not just testing.
TAMAR WEINREB HAVIV PRODUCT MANAGER AMDOCS
Tamar leads research and development of testing services, bringing in new sources of revenue growth. She leverages her 15+ years of experience in the software telecommunication industry, spanning from software development to professional services to management of innovation and entrepreneurial programs to create comprehensive and profitable offerings.
GO FORTH AND BE BRAVE Baris Sarialioglu, Managing Partner, Keytorc, explains how test metrics can help improve mobile testing processes.
“If you’re afraid, don’t do it; if you’re doing it, don’t be afraid!” – Genghis Khan “Nothing can be loved or hated unless it is first understood.” – Leonardo da Vinci
If you are in the mobile application testing business, you need to accept that the less time you spend worrying about the test metrics, the more problems you will have when your app goes live. The less time you spend on testing, the more unsatisfied users you will have. In addition, you may end up using more test budget than expected, losing management’s trust, improving the wrong areas of your software delivery lifecycle, and may even end up explaining to people why a live defect was not found during system testing.

Without the knowledge you would obtain through proper test metrics, you would not know, firstly, how good/bad your testing was and, secondly, which part of the development lifecycle was underperforming. Never forget that if you want to reach and announce correct results and convince people that you are doing good testing, you need to take care of your test metrics. If you are in the mobile business and do not pay enough attention to test metrics, your users will announce the quality instead of you. Obviously, you, the professional tester, would not want to be in that situation.
BE COMMITTED AND BRAVE

If you are prepared to collect your test metrics and show them the utmost attention, then you are ready to convince people around you. Unless you have the full team’s commitment, you cannot establish a healthy working process. Your team (including project managers, business analysts, and software developers) should show commitment to the metrics that you will collect and publish.

While collecting your metrics, it is also important to know that you will be creating competition between several parties (business analysts, software developers, and testers) and this competition will create resistance. Your findings will probably restrict people’s comfort zones. Some people will get frustrated by the transparent environment you will be creating through monitoring the metrics. They will object to the metrics you are publishing and, even worse, they may try to intimidate you by withdrawing their support, and may show several other unprofessional attitudes.
If you are afraid already, I suggest that you not collect and publish any metrics. As a tester, you need to be brave, transparent, and informative; otherwise, anyone can run a test case from a test suite. No big deal. The list below includes some tips that you may want to consider while you are setting up your metrics‑based testing framework:
• Tell people why metrics are necessary.
• Select the metrics that are fitting (fitting to your test process, development methodology, organisation structure, and so on).
• Explain each metric that you will collect/publish, not only to testers but to any people who are related to the project.
• Try to compare a metrics‑based test project with a non‑metrics‑based one and show the consequences of having no metrics.
• Make people believe in metrics and get their commitment (you can spread the commitment by having ambassadors in different groups, such as some business analysts and some developers who believe in you and the metrics).
• Try to be informative, gentle, and simple in your first test reports (the first metrics are very important, so let people digest them one by one).
• Try to evaluate/monitor/measure processes and products rather than individuals (when people recognise that they are evaluated, they can act aggressively and unnaturally, and they can be more self‑enclosed; worst of all, they may falsify the data you collect and make you publish inaccurate metrics).
• Publish two different kinds of metrics: (1) point‑in‑time metrics (a snapshot showing the situation at a particular moment in time) and (2) trend metrics (displaying activity over a period of time).
• Do not send each test metric or report to everyone; upper managers will not be interested in very low‑level metrics (you need to include information at a suitable level of detail).
• Use tools and proper formats in your reports; imprecise metrics will damage your reputation as a tester (include metric definitions, labels, thresholds, team, project and date information in every metric).
• Metric reports should include comments and interpretation. They must tell people what to do. Publishing numbers doesn’t mean anything unless you interpret them and draw conclusions from them.
• Try to be 100% objective with your comments; people don’t like or trust subjective arguments.
• Do not show only bad things; there must be good things in any project (test metrics are published not only for showing missing, lacking, and problematic areas; they are also for showing your confidence).
• Try to correlate different metrics with each other (e.g., if invalid defects are numerous, it is logical to have high levels of defect turnaround or fixing time, because developers are struggling with understanding defect reports rather than fixing them).
• Metrics should be accessible and visible at any time. You need to make them available 24‑7.
• Do not wait to be 100% complete and perfect before publishing your results (whether you have waited for a long time or not, your metrics will not be perfect, so do not waste your time).

BARIS SARIALIOGLU MANAGING PARTNER KEYTORC
Baris has over 15 years of experience as an information systems professional. He is highly experienced in software development lifecycle, project management, agile development, QA, and software testing. He has written industry articles and papers, and he regularly attends international and national conferences as a speaker, panelist, moderator, and contributor.

Now we turn to which metrics to collect. Of course, you may find many more than the ones listed here, but this is a starting point. I will organise metrics into three categories: test resources; test processes; and defects.

TEST RESOURCES METRICS
• Testing time (schedule)
• Testing budget (man‑day effort)
• Testing resources (people involved)
• Percent effort per test phase (e.g., 10% unit, 20% integration, 50% system, 10% UAT, 10% other)
• Test efficiency (actual test effort/planned test effort)
• Test case generation efficiency (test cases generated/prepared within a period of time)
• Test case execution efficiency (test cases executed within a period of time)
• Total cost of quality ([detection effort + prevention effort + defect fixing]/total project effort)
• Test cost variance (earned value − actual cost)
• Test schedule variance (earned value − planned value)

TEST PROCESS METRICS
• Total number of test cases
• Number of passed, failed, blocked, not run, inconclusive test cases
• Execution ratio ([number of passed + failed]/total number of test cases)
• Quality ratio (number of passed/number of executed)
• Requirements coverage
• Requirements volatility (updates per requirement within a given period of time)
• Test effectiveness/quality of testing (defects found during testing/total number of defects)

DEFECT METRICS
• Number of defects found
• Number of open/closed defects
• Defects by platform (iOS, Android, BlackBerry, etc.)
• Defects by display density/size
• Defects by priority (business impact of a defect)
• Defects by severity (system impact/fix cost of a defect)
• Defects by root cause, defect taxonomy (missing requirement, invalid data, coding error, configuration issue, etc.)
• Defects by test phase (unit, integration, system, UAT, etc.)
• Defects by state (open, closed, in progress, etc.)
• Defects clustering (where do defects populate most?)
• Defect turnaround time (time spent fixing a defect)
• Defect rejection ratio (ratio of invalid defects)
• Defect fix rejection ratio (ratio of unfixed defects)
• Defects per requirement
• Defects per developer day/line of code
• Defect finding rate (how many are found within a given period)

Table 1. Test metrics organised into three categories
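A few of the ratios in Table 1 can be computed directly from raw counts. The snippet below is a minimal sketch using invented numbers purely to show the arithmetic; it is not tooling described in the article.

# Point-in-time test process and defect metrics from Table 1,
# computed from hypothetical counts for a single reporting date.
passed, failed, blocked, not_run = 420, 35, 10, 85
total_cases = passed + failed + blocked + not_run            # 550
defects_raised, defects_rejected = 120, 18

execution_ratio = (passed + failed) / total_cases            # executed / total
quality_ratio = passed / (passed + failed)                    # passed / executed
defect_rejection_ratio = defects_rejected / defects_raised    # invalid defects

print(f"Execution ratio:        {execution_ratio:.0%}")
print(f"Quality ratio:          {quality_ratio:.0%}")
print(f"Defect rejection ratio: {defect_rejection_ratio:.0%}")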
SUMMARY

As testers are evaluators, they need to believe in the value of measurement and metrics. Metrics bring numerous benefits to all project stakeholders. Metrics make the test process transparent, visible, countable, and controllable, and in a way allow you to repair your weak areas and manage your team more effectively. If you say that a tester never assumes, you need to back this idea with your belief in metrics. If you do not have metrics, you are obliged to act on assumptions rather than proven realities. So you need to make a choice: Do you want to assume, or do you want to be sure?
THE CRAFT AND TOOLS OF TEST AUTOMATION: A DISPARITY Michael Fritzius, Arch DevOps, LLC, reveals how to avoid test automation pitfalls and traps.
Ever since software became a thing, there's been a need for testing, and then test automation. It didn't take long to realise that testing software manually is not scalable, and so we've been using the same hardware to assist us in writing test automation as we do for writing the actual software. Today, there are multitudes of tools available, each setting out to solve a subset of the problems that we encounter in both spheres of development and test automation. With such a wide selection of tools in the automation space, there are some traps I want to talk about, regarding choosing the correct tool. And I enjoy using word pictures to illustrate points.
CHOOSING YOUR TOOLS WELL

My wife and I recently contracted our pastor to build a custom chair for us. He's very experienced in carpentry, and does it on the side as a hobby. He's made some impressive pieces over the years. The arrangement was: we'd give him a concept, pay him for time and materials, and he'd take care of the design and construction.

There were a few trips he and I made to the hardwood store. He would spend quite a bit of time deciding what kind of wood to get, and distinguishing between certain pieces. When I saw him overlook a plank for a seemingly identical one, he said, "The cuts I'd have to make in that piece would go right through a knot, and that's not good structurally." He had the chair designed in his head so well that he was making mental cuts in the wood, and that foresight led him to choose pieces of wood that wouldn't be a problem later. The end result was a beautiful piece which will probably outlive all of us.

Do I know how to use all the tools he used? Yep. Could I have gotten the same result? Not a chance! I don't have the full understanding of how the chair goes together. I don't know how to construct the best joints for what type of wood. I don't understand carpentry. The fundamental knowledge is lacking, so my attempt would've been catastrophic.

My wife and I also like making things from scratch. Things most people wouldn't think twice about buying, we make. We're funny that way. Baby wipes are a great example. We make our own at a fraction of the cost of
store‑bought ones. We take a roll of paper towels, and my job is to cut the roll exactly in half, with a straight edged kitchen knife. A couple of years ago, I'd bought a power miter saw for some house projects. It wasn't long before I realised that, hey, this would cut rolls much faster. It does cut quickly. It also shoots paper towel fuzz everywhere, and the cut itself, although straight, isn't very clean.

The problem I'd set out to solve was to decrease the time to cut these rolls. Some of the time was spent sharpening the knife – paper towels tend to dull blades quickly – but much of the time was spent on the actual cutting. A better tool ended up being a straight edge knife, but not one made of steel, since that dulls too quickly. A ceramic knife was a much better choice. And it does work well. Although it's slower, the cut is much cleaner, and as a bonus, our baby doesn't get fuzz all over her backside.

Two lessons come from these two word pictures, respectively:
• Knowing when and why to use a tool is more important than knowing how.
• A tool that can do the job quickly does not necessarily do the job well.
THE TEST AUTOMATION CRAFT

There are traps that both you and a company can fall into when choosing a test automation tool. We of course want to remove the pain that comes from having to test manually. But a danger is that we can latch onto tools that make us too reliant on them, and require us to use them as a solution, simply because it's too costly or tedious to retool when needed. Or, we can adopt tools that have a large learning curve, which leads us to keep using them when they're not the best solution anymore, due to how much time and effort we've committed.

And I think the reason we reach for the kind of tools we do – the ones that advertise quick pain relief – is because many of us don't know, or have forgotten, that there's a craft to test automation. There's a disparity between knowing how to use an automation tool and understanding the craft of test automation. What we end up doing is exchanging knowledge of the craft for an increase in speed. This ends up hurting us as testers and automators, because we're so insulated
from knowing how the problem is being solved. With our tools, we often don't know how something works, we just know that it works. Until it doesn't. Then we're stuck either coming up with clunky workarounds, or avoiding the automation of certain tests. This is why I'm a huge advocate of open source tooling, particularly ones that we have to write ourselves. As we build them to solve our company's problems, we learn about the craft, and have better understanding of what's going on, and why the tools work the way they do.

MICHAEL FRITZIUS PRESIDENT ARCH DEVOPS, LLC
Michael is based in St. Louis, Missouri, USA. He works with custom tooling and solutions for test automation, along with team leadership and mentoring in the QA space.
BENEFITS

We can also visualise solutions better, since we don't have to think of the automation pieces as ‘black boxes’. We know what's in the box, and have a pretty good idea how they fit in to accomplish the goal. There are other great benefits to this approach:
• Talent retention: If you send the message to your employees that you'll spend time and money on marketable skills for them, you will foster loyalty. Coupled with their domain knowledge, you'll have an incredibly powerful team with the strategy to automate just what's needed, for the most pressing problems. No tool on the market can provide that for you.
• Talent attraction: Many people know that test automation skills are highly sought after. If you offer a chance to those who are driven to learn, to come to your company to do so, you will be tapping a market which is unavailable to those companies who only have a tool to offer.
• Change immunity: Hey, things happen. Jobs get lost, companies get bought, decisions get made. Investing time and
brain space on certain tools limits your next job to those tools. Investing in learning the craft teaches you about the ‘what’ behind the ‘how’. If you master the craft, you have mastered every tool. Go forth and conquer.
• Independent solutions: What I said above – about a tool working until it doesn't – is a very common thing. If you have control of the tool's source code, you're in a powerful position. The whole tool is probably not broken, and the broken part can either be fixed or swapped out. Instead of scrapping and re‑platforming your automation, you can repair and keep going. Your competitors who rely on costly third party tools can't.
• Innovation increase: Test automation is by no means a solved problem. There are innovations just waiting to be made by smart people who share ideas from their variegated backgrounds, and apply them to your company's unique problems. Sometimes it can put your company on the map, simply by how cleverly the work is being done. That probably won't happen if they're constrained to using solutions someone else thinks will solve their problems.

So I encourage you – employer or employee – take the harder path. Don't just learn about a bunch of tools, learn about the craft. Go build some stuff! Try writing something to test a webpage. Send some data to a webservice and parse what comes back. Make a database query. See if you can interact with a Windows application. Then ask: How does my code look? Is it maintainable? Readable? Stringy? Heavy? Adapt it, share with others, solve problems with it, and constantly improve – both it, and yourself. I've already started the journey. Will you?
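If you want a concrete starting point for those exercises, the sketch below calls a web service, parses the response and runs a database query. The URL, the JSON shape and the table are placeholders assumed for illustration; they are not tools or endpoints mentioned in the article.

import sqlite3
import requests  # third-party; pip install requests

# Exercise 1: send data to a webservice and parse what comes back.
resp = requests.get(
    "https://api.example.com/status",      # placeholder endpoint
    params={"service": "checkout"},
    timeout=5,
)
resp.raise_for_status()
payload = resp.json()                       # assumes the service returns JSON
assert payload.get("status") == "ok", f"unexpected status: {payload}"

# Exercise 2: make a database query (in-memory SQLite keeps it self-contained).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("INSERT INTO orders (total) VALUES (19.99)")
(count,) = conn.execute("SELECT COUNT(*) FROM orders").fetchone()
assert count == 1
print("Both checks passed – now make it maintainable and readable.")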
A WHOLE NEW WORLD Phil Edwards, Managing Director, nFocus Testing, gives a guide to test automation in a DevOps world.
The modern world is moving and developing at an alarming pace. Change within our daily life is rapid, and just about everything we do has in some way been enabled by a computer, and each application has been developed and (hopefully) tested. The desire to do more, and do it more quickly for less, is forever in demand. Change is here to stay and organisations need to be ready to cope with the ever‑increasing mandate for speed of delivery and first class quality.

It's less than a decade since the iPhone was introduced to the world, and in the blink of an eye, the iPhone 7 is now being rolled out. Adoption of new working practices and methodologies has meant that organisations are constantly challenging themselves to get products and services to market faster than ever, and in doing so are now having to think very hard about how this can be achieved. We all know that the first to market gets the spoils, so
there is a lot at stake getting the delivery model right. Additionally, consumers and end users of products expect systems, devices and the internet of things to all run perfectly without technical failures.
GETTING AUTOMATION RIGHT

Organisations that push through their latest development too quickly, without performing due diligence, focus and attention to detail, will make avoidable mistakes and, as a consequence, will require even more time to complete the task satisfactorily. Automation is a great way to reduce time in the iteration or sprint, but so many organisations get it wrong, spending unnecessary time and money before getting it right.
In a world of continuous delivery, continuous integration and DevOps, it can take far too long to get to a satisfactory point whereby automation is paying you back sufficiently for the effort you have put in. The 2014 State of DevOps report1 found that organisations which successfully implement DevOps tools and practices are twice as likely to exceed profitability goals, gaining a far greater market share with nearly 50% higher market capitalisation and growth than similar businesses. Additionally, companies that implement DevOps improve productivity and have a more stable production environment.

The most important part of any DevOps process is the regular release of software. In Scrum, iterations tend to be short in duration, only one or two weeks long. When you use Kanban you can release whenever a code drop is ready; this can be multiple times per week or even daily. When releasing very
frequently, you can no longer maintain the manual effort to prove software quality; manual testing then becomes a bottleneck. The only way to support rapid releases of software is to automate the testing. Automated regression testing maximises return on investment in functional test automation and simplifies the testing and retesting of individual components of the system, increasing the reliability of a project while delivering significant cost benefits and value. Using a robust framework will significantly increase the productivity of the test team and improve time to market by reducing test execution time. Small, medium and large organisations in all market sectors have realised the benefits of an integrated approach to automated testing on medium to large‑scale projects.
STEPS IN THE RIGHT DIRECTION

There are a few steps you can take to get on the right track. Firstly, you should take advantage of virtualisation and the cloud to automate your application deployment in your test environments. Making your test environments easy to build and set up is a must, but you should also make sure you can run a single test automatically in that environment, with the application deployed and configured, before writing additional tests. Starting with a light touch is a great way to ensure everything is correctly set up and working fine.

Once the environment has been passed fit for purpose and is ready to test upon, the technical testers can set about creating an automation framework that provides reusable and expandable test cases for automation. These first tests should be built to cover the core application functionality to ensure you iron out debugging issues from the beginning. Start with fewer tests at first, so your initial automation is solid and gives you the payoff later.

Caution needs to be applied when building a framework for automated tests: all too often the techies who write the framework get too focused on solving the problems of automated testing with little regard for the wider testing community. Inclusion is the key here, which allows for
scalability of automated testing, and so it is important to design the framework in such a way that less technical members of the team can still extend the coverage of automated tests. This can also improve morale, with all members feeling they are part of the test automation effort.

Now that you have a robust framework in place, it is worth ensuring that every code change is committed with a proven test and that the test is then added to the automation framework. Within a short period of time, you'll have an expansive set of robust tests, which can be repeatedly used to validate the software being developed. This set or sub‑set of automated tests can also be used in the verification of environments and the creation of essential test data.

Before commencing a new project or programme of work, think about automating testing from day one. Shift testing as far left as you can, ideally to requirements stage, validating requirements so that they can be easily understood and are able to be automated. This saves costs in rework and ultimately improves time to market. Automate tests early, and use tool technology to speed up testing by working with development and writing tests as code is developed. Get the test team to think broader than just testing functionality: think about the operational impact and how poorly performing code will affect the production environment. Speak to Operations regularly, ideally daily, to ensure they understand what is happening and to solicit feedback on the testing you are doing. Finally, maintain your regression packs, build them up and give yourselves the best opportunity to test releases with the broadest and most robust set of tests available.
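As a minimal illustration of running a single test automatically in a freshly deployed test environment before writing more, the sketch below checks that the environment responds and that one core journey works. The base URL, endpoints and expected fields are assumptions, not details from the article; in practice something like this would run from the CI pipeline straight after deployment.

import os
import requests  # third-party; pip install requests

# Assumed convention: the pipeline exports the environment URL it just deployed.
BASE_URL = os.environ.get("TEST_ENV_URL", "https://test-env.example.com")

def test_environment_is_up():
    """Light-touch check: the deployed application answers at all."""
    resp = requests.get(f"{BASE_URL}/health", timeout=10)
    assert resp.status_code == 200

def test_core_journey_smoke():
    """One representative call through core functionality."""
    resp = requests.post(
        f"{BASE_URL}/api/orders",
        json={"item": "widget", "qty": 1},
        timeout=10,
    )
    assert resp.status_code in (200, 201)
    assert "orderId" in resp.json()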
SUMMARY

To conclude, automation will help any organisation take a step in the right direction towards speeding up time to market and improving software quality. Automated tests will help complement the DevOps process and enable the continuous deployment required to develop at the pace, and to the quality, the market demands. Automation is here to stay and should be an integral part of any team, supporting and complementing the delivery function.
PHIL EDWARDS MANAGING DIRECTOR NFOCUS TESTING
Phil has been in IT for over 20 years and for the last decade or so in pure play testing. He has held numerous positions including Business Director and COO of SQS UK, Testing Director at Cognizant UK and currently holds the position of Managing Director of nFocus Testing. To date Phil has advised and worked at over 70 companies throughout the UK, 20 of which are FTSE 100 companies.

Reference
1. ‘2014 State of DevOps Report’, https://puppet.com/resources/white-paper/2014-state-of-devops-report
WILL THE ‘CONNECTED CAR’ EVER BE SAFE ENOUGH? With consumer paranoia about security at an all‑time high, Niroshan C. Rajadurai, Director of EMEA, Vector Software, asks: does automotive software development need to up its game in the quality stakes?
The recent collaboration between automotive manufacturers and internet providers in the creation of the autonomous vehicle has sparked great consumer interest in the development of the car that will one day deliver functionality talked about in mid‑20th century science fiction. However, the initial enthusiasm has been tempered by the reality that connecting a car to the internet has inherent risks. On further consideration, is this paranoia about internet connectivity the real issue at hand, and what questions does this raise about automotive software quality in general?

We are all familiar with the regular headline stories detailing security breaches and hacking witnessed on our other connected devices such as phones, tablets and PCs. By connecting cars to the internet, hackers can and will exploit the same class of vulnerabilities, including lack of encryption and recording and cloning of key presses. Likewise, these vulnerabilities can easily be detected and prevented in a similar way with the appropriate software testing and scripting. These exploitations are carried out by perpetrators with varying motives and are therefore difficult to protect against in every instance. Malicious hackers will use internet connections to break into car subsystems, but this is a small risk compared to poor embedded system quality. Car hacking is an individual crime; in comparison, having 100,000 cars with the potential to unexpectedly accelerate is a much bigger issue. The recent Mitsubishi Outlander wi‑fi hotspot hack is a great example.1
SOFTWARE QUALITY ISSUES

Whilst these internet connectivity security risks are very visible and consumers are understandably concerned, there is a more serious software quality issue to deal with before we focus too much thought on hopping into our connected cars of the future. We have seen fundamental flaws with some of the latest software-heavy automotive platforms that might hold us back on the sci‑fi vision. Leading brands have publicly discovered quality issues in their operating software, creating huge recalls and in some cases accidents or near misses: Toyota (drive‑by‑wire throttle)2 and Jeep (dashboard hack)3. Software updates have also caused issues, either in the dealership or over‑the‑air to the user‑based command and control systems, with Lexus (sat navigation upgrade)4 and Tesla (self‑drive software)5 making mainstream news.

Integral to the autonomous car vision is the fundamental notion that everyone should feel secure in their car, and by feel secure we mean that there should be no unexpected behaviour from the command and control systems deployed – when we change the radio station or open the window we don't expect the car to accelerate or brake of its own accord, or to behave abnormally in any respect. Before the move to autonomous can be complete, there is still a responsibility to make cars safe and secure in the way they behave, and this responsibility falls mainly to the embedded software.
NIROSHAN C. RAJADURAI DIRECTOR OF EMEA VECTOR SOFTWARE
Niroshan brings more than 18 years of embedded systems experience in the design and development of safety critical and fault tolerant systems. In his current role at Vector Software, he works closely with customers, helping them to reduce the time, effort and cost of verifying and validating their systems. Niroshan holds a Bachelor of Electrical and Electronic Engineering (Honours) degree and a Bachelor of Computer Science degree from The University of Melbourne, Australia.

Figure 1. An overview of the complexities of a connected car.
Figure 2. The VectorCAST test automation platform.
ASSURING EMBEDDED SOFTWARE

We are familiar with software and security testing of internet‑based applications to ensure that their vulnerabilities are detected and patched, but conducting software testing on a safety critical system like a fly‑by‑wire throttle, adaptive cruise control or anti‑lock braking is a different matter. To put it into context, there are approximately 12 million lines of code in the Android operating system but over 100 million lines of code in an average 4‑door car. It's generally acceptable today to deploy your average minimum viable product (MVP) internet/mobile application with at least 70%6 testing thoroughness (code coverage), with many areas of the code not having been executed/exercised – its residual technical debt. By comparison, the software in a car with 95% code coverage will have tested 95 million of 100 million lines of code, or 8x more lines of code than the internet application.

Static analysis as a tool for checking automotive software code began life in the early 1970s7 and has become a pillar of software testing. It produces output by parsing the code and comparing it against a dictionary of best practice rules, without executing it. One of the main challenges with using static analysis is the number of false positives (noise) produced. As static analysis doesn't execute the code, it falls to developers to use unit testing. Unit testing ensures that a unit of software behaves exactly as specified in its formal design and requirements documentation, with test cases written to test every execution
path and to eliminate any false positives. Today, there are hundreds if not thousands of pieces of software from a huge global supply chain that must be integrated to create a complete automotive system.
CAN STANDARDS HELP?

To cope with the heightened pressure to create high quality, safe software, the automotive industry examined other industries for solutions and went on to create standards that mirrored the best practices of the aerospace and rail industries. This effort has resulted in several standards (AUTOSPICE, AUTOSAR, ISO 26262 and MISRA) emerging to satisfy the increase in safety legislation and ensure software meets the highest standard needed to keep drivers, other road users and pedestrians safe.

Whilst these standards are necessary, in practice this presents a huge challenge for software developers, as using the more common static analysis testing approach does not offer enough reliability. Using static analysis for such a large task will produce a mass of output, including false positives that will need to be interpreted by the developer, who will use their experience to determine the best course of action to resolve them and then build the necessary tests to ensure the error is corrected. In the well documented Toyota Unintended Acceleration case, when engineers checked the engine control application, they found more than 85,000 violations8 against MISRA's 2004 edition, which included buffer overflows, invalid pointers, stack overflows
Figure 3. Continuous testing process.
and others.9 While this demonstrates the effect of poor coding and quality standards, it also highlights the vastness of the task of testing automotive software components. By MISRA's own rules there is an industry recognised metric that equates the number of broken rules to the number of bugs – for every 30 rule violations you can expect to see, on average, three minor bugs and one major bug.10
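Applying that rule of thumb naively to the 85,000 reported violations gives a sense of scale. This is a back-of-the-envelope sketch only; the ratio is an industry average, not a prediction for any particular codebase.

# MISRA heuristic from the article: per 30 violations, ~3 minor and ~1 major bug.
violations = 85_000
expected_minor = violations / 30 * 3   # ~8,500 minor bugs
expected_major = violations / 30 * 1   # ~2,833 major bugs
print(f"~{expected_minor:,.0f} minor and ~{expected_major:,.0f} major bugs")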
SUMMARY

It's a time-consuming effort, and once a potential exploitation method or vulnerability has been investigated and documented, it requires the support of the tool vendor to include the process heuristic in its analysis
algorithms. Therefore, it's difficult for any single-purpose off‑the‑shelf package to catch all the different types of security and safety vulnerabilities that can be found in code, so teams need to deploy a suite of tools – including static analysis, unit testing and code coverage – to make sure they are achieving the highest degree of testing completeness.

Now that automotive software developers have a bigger legal obligation to carry out comprehensive and thorough testing of all software components, what's needed in the future is a platform that can analyse the effort needed to meet industry testing requirements/regulations and produce ‘actionable intelligence’. Such a system would use static analysis as an indicator of potential deviation from the standard requirements. It would then automatically build and run all the necessary unit test cases required to isolate potential coding errors.

With the ever increasing size of source code bases and a drive towards a DevOps approach, software testing needs to be continuous and quickly highlight areas of code that need attention. What's needed is a testing system that makes the obligation to produce fail-safe software a little more bearable and lets us feel secure in our cars. Developers and testers of safety critical software are unsung heroes who don't usually get as much recognition as those working on cool mobile or internet apps, but they have a significant role to play in our everyday safety as we go about our business, be it by train, plane or automobile.
References
1. ‘Mitsubishi Outlander hybrid car alarm "hacked"’, BBC, http://www.bbc.co.uk/news/technology-36444586 (6 June 2016).
2. ‘Toyota Case: Vehicle Testing Confirms Fatal Flaws’, EE Times, http://www.eetimes.com/document.asp?doc_id=1319966 (31 October 2013).
3. ‘After Jeep Hack, Chrysler Recalls 1.4M Vehicles for Bug Fix’, Wired, https://www.wired.com/2015/07/jeep-hack-chrysler-recalls-1-4m-vehicles-bug-fix/ (24 July 2015).
4. ‘Faulty update breaks Lexus cars' maps and radio systems’, BBC, http://www.bbc.co.uk/news/technology-36478641 (8 June 2016).
5. ‘Tesla blames Model S owner for autonomous driving crash’, TechnoBuffalo, http://www.technobuffalo.com/2016/05/15/tesla-blames-model-s-crash-summon-mode/ (15 May 2016).
6. ‘How much Code Coverage is "enough"?’, Programmers Stack Exchange, http://programmers.stackexchange.com/questions/1380/how-much-code-coverage-is-enough
7. Robert O. Lewis, Independent Verification and Validation: A Life Cycle Engineering Process for Quality Software, John Wiley & Sons (1992).
8. ‘Toyota Unintended Acceleration and the Big Bowl of "Spaghetti" Code’, Safety Research & Strategies, Inc., http://www.safetyresearch.net/blog/articles/toyota-unintended-acceleration-and-big-bowl-%E2%80%9Cspaghetti%E2%80%9D-code (7 November 2013).
9. Michael Barr, ‘BOOKOUT V. TOYOTA: 2005 Camry L4 Software Analysis’, http://www.safetyresearch.net/Library/BarrSlides_FINAL_SCRUBBED.pdf
10. ‘Toyota Unintended Acceleration and the Big Bowl of "Spaghetti" Code’, Safety Research & Strategies, Inc., http://www.safetyresearch.net/blog/articles/toyota-unintended-acceleration-and-big-bowl-%E2%80%9Cspaghetti%E2%80%9D-code (7 November 2013).
Cassy Calvert, Head of Testing, BJSS, presents five spectacular software glitches and how they could have been avoided.
It's finally happened. We've allowed software to become so ingrained in our everyday lives that whenever something goes wrong it hurts. Enterprise has been at the forefront of this, with large IT estates and a reliance on process automation meaning that even the smallest outage can quickly bring an entire operation grinding to a halt. Millennials use a great word to describe this: #EpicFail – a total failure where success should be reasonably easy to attain. Has big business taken its dependence on software too far? Or can it successfully mitigate risk through simple testing techniques? The biggest #EpicFails provide the answer.
#EPICFAIL 1: STARBUCKS

In 2015, an ‘internal failure’ brought down the entire Starbucks Point of Sale system. With their tills not working, baristas had little choice but to hand their drinks out free of charge. It wasn't long before head office intervened by closing over 8000 stores, angering a client base deprived of their caffeine fix. The problem was later identified: the Point of Sale table in the database had been erroneously deleted by a daily system refresh.

Modest changes often cause the greatest damage, and in this case a simple regression check would have saved Starbucks a bean or two. One of the primary uses for regression testing is to prove the correctness of an application. A basic smoke test, configured to prove the user journey for purchasing a Grande Vanilla Latte, for example, would have immediately flagged the issue.

Every application should have a smoke test suite. This small regression pack should contain tests that are critical to the application and should be executed in less than an hour. For high transaction environments like Starbucks, it can become easy to develop a very large manual regression pack, which can quickly become too big or exercise the same path through the code each time. Smoke tests should ideally be automated. The pack can be increased to include more critical tests to verify the build or application. There are several different approaches to regression testing. Automation and smoke test packs are key, but they shouldn't be solely relied upon to locate defects. Other techniques, such as exploratory testing, add richness to the regression approach.
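A smoke check of that kind does not need to be elaborate. The sketch below imagines two checks: that the critical Point of Sale table still exists after the nightly refresh, and that a single purchase journey completes. The table name, endpoint and schema are invented for illustration – the article does not describe Starbucks' actual systems.

import sqlite3
import requests  # third-party; pip install requests

def test_pos_table_survived_refresh(db_path="pos.db"):
    """Fails fast if the nightly refresh dropped the critical table."""
    conn = sqlite3.connect(db_path)
    row = conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' AND name='point_of_sale'"
    ).fetchone()
    assert row is not None, "point_of_sale table is missing after refresh"

def test_purchase_journey():
    """One critical user journey: ring up a single drink."""
    resp = requests.post(
        "https://pos-test.example.com/api/sale",   # placeholder test endpoint
        json={"item": "grande_vanilla_latte", "qty": 1},
        timeout=10,
    )
    assert resp.status_code == 200
    assert resp.json().get("total", 0) > 0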
#EPICFAIL 2: APPLE MAPS

Apple faced severe criticism from its users when its new mapping solution was introduced in 2012. The service was slated for its unreliability, with problems ranging from the Manhattan Bridge resembling a large roller coaster, to views being obscured by clouds, inaccurate location data and warped topology. As a service, Apple Maps contrasted strongly with the company's mantra of ‘it just works’. The software was so bad that Apple was forced to issue a rare and humiliating public apology, recommending that customers continued using competing solutions until the issues were resolved. Apple never disclosed the root cause of the issue. Perhaps, in a case of technological spatial disorientation, the company was blinded by technology and features at the expense of quality.

Data quality and richness should never be overlooked. They help identify issues which normally remain hidden until tests occur. But tests need data, and one of the largest challenges of any testing project is sourcing production data. Representative, production‑like rich data is crucial to flagging and locating the defects that are often found in edge cases. For heavily‑used applications such as Apple Maps, it is also vital to consider data volume. This is a challenge – not only for the richness of scenarios that a large dataset can provide, but also for the effects of volume, regardless of platform size.

In some cases, depersonalised data can be insecure. It often doesn't comply with data protection legislation either. Instead, by starting with canned data, a dataset can be built by analysing data journeys. When performed early enough, it becomes possible to build a rich set of transitioned data alongside test cases throughout the progression of the project.

In this instance, using a wide and varied group of testers with a range of devices and configurations would have been effective. The concept is commonly known as ‘crowd sourcing’. As part of the crowd source, ‘bug hunt’ type sessions could have been held. Exploratory testing via a wide range of mobile devices, including operating systems (either released, or in the beta channel), would have also exposed defects which would otherwise have slipped through the net. Public beta testing is also beneficial as it provides a valuable ‘sneak peek’ into the software, and there is always a willing public ready to take on the challenge.
Millennials use a great word to describe this: #EpicFail – a total failure where success should be reasonably easy to attain
CASSY CALVERT HEAD OF TESTING BJSS
Dedicating her career to the art of testing since 1999, Cassy joined BJSS in 2012 as Test Manager, responsible for the development and growth of the company’s test capabilities. Her role sees her managing enterprise‑scale projects, ensuring quality test delivery across multiple onshore and offshore work streams in a high‑pressure, agile, delivery‑focused environment.
#EPICFAIL 3: EUROPEAN SPACE AGENCY
It may have taken the European Space Agency ten years to develop and build its new rocket, but Ariane 5 was destroyed just 37 seconds into its maiden flight. In what was a US$7 billion software bug, Ariane’s guidance system tried to convert a 64‑bit sideways‑velocity value into a 16‑bit format. Inevitably, the number became too big and an overflow error ensued. The guidance system shut itself down, invoking a failover to the backup system, which had itself already failed because it ran the same software and encountered the same issue. Ariane was programmed to self‑destruct automatically if it veered off course, and it was the decision to cease processor operation which proved fatal.
A review recommended that future missions should prepare a test facility comprising as much real equipment as technically feasible, inject realistic input data, and perform complete, closed‑loop system testing.
Access to environments and production‑like systems for testing is crucial. While any tester would dream of having a fully scaled, production‑like system, the reality is almost always different. In greenfield development, testing can be performed on the production environment before a system goes live. This enables the architecture to be proven, facilitates non‑functional testing, and verifies the software being developed. Aim to perform this as early in the project as possible – the earlier it is done, the greater the benefits.
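The failure mode itself is easy to reproduce in miniature. The sketch below is purely illustrative (the flight software was written in Ada, not Python): it shows what happens when a 64‑bit value is forced into a signed 16‑bit field, and the kind of range guard a unit test could have asserted on.

```python
# overflow_demo.py - illustrative only: a large value forced into a signed 16-bit field.
import struct


def to_int16_unchecked(value: float) -> int:
    # Truncate to an integer and keep only the low 16 bits, as a careless cast would.
    return struct.unpack("<h", struct.pack("<H", int(value) & 0xFFFF))[0]


def to_int16_checked(value: float) -> int:
    # The guard the conversion needed: fail loudly instead of wrapping silently.
    if not -32768 <= int(value) <= 32767:
        raise OverflowError(f"{value} does not fit in a signed 16-bit field")
    return int(value)


if __name__ == "__main__":
    horizontal_velocity = 40_000.0                    # larger than anything Ariane 4 ever produced
    print(to_int16_unchecked(horizontal_velocity))    # prints a meaningless wrapped value (-25536)
    to_int16_checked(horizontal_velocity)             # raises OverflowError
```

A unit test feeding Ariane‑5‑scale values through the checked conversion would have failed on the ground, which is exactly what the review’s call for realistic input data was driving at.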
#EPICFAIL 4: KIDDICARE
Kiddicare wasn’t aware that it had fallen victim to a cyberattack until its customers started complaining about the highly personalised phishing messages they were receiving. The data had been stolen from Kiddicare’s online test environment. While security measures relied largely on simple password authentication, real customer data was stored on this test environment, enabling the hackers to obtain the names, phone numbers, mailing and email addresses of 800,000 Kiddicare customers.
Data has never been so valuable, yet enterprises tend to place a lower focus on it than on revenue assurance. While it is common practice to use depersonalised data in development and test environments, robust security measures should be applied across all environments.
Basic security and penetration testing would have helped Kiddicare secure its data. The basic objective of penetration testing is to determine security weaknesses by identifying entry points, breaking into the application, and reporting findings. It is common practice to employ penetration testing on production environments and, depending on needs, it can be automated or performed manually.
There are aids to penetration testing. For example, the Open Web Application Security Project (OWASP) Testing Guide provides a framework and checklist for testing the security of web applications, with technical information on how to look for specific vulnerabilities; the Open Source Security Testing Methodology Manual (OSSTMM), published by ISECOM, offers a broader security testing methodology. Penetration testing is useful for understanding the resiliency of an application, but if it is performed incorrectly it becomes of little value and creates a false sense of security.
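One very basic, hedged illustration of the ‘real data in the test environment’ problem (the fields, patterns and sample records below are invented) is an automated scan of test extracts for anything that looks like live personal data before it is loaded:

```python
# pii_scan.py - sketch: flag records in a test-environment extract that resemble real personal data.
# The fields, regular expressions and sample data are illustrative assumptions only.
import re

EMAIL = re.compile(r"[^@\s]+@[^@\s]+\.[a-zA-Z]{2,}")
UK_PHONE = re.compile(r"\b(?:\+44|0)\d{9,10}\b")


def looks_like_real_pii(record: dict) -> bool:
    """Return True if any field resembles a live email address or phone number."""
    for value in record.values():
        text = str(value)
        if EMAIL.search(text) and not text.endswith("@example.com"):
            return True
        if UK_PHONE.search(text):
            return True
    return False


if __name__ == "__main__":
    sample = [
        {"name": "Test User 1", "email": "user1@example.com", "phone": "n/a"},
        {"name": "Jane Smith", "email": "jane.smith@gmail.com", "phone": "07911123456"},
    ]
    flagged = [r for r in sample if looks_like_real_pii(r)]
    print(f"{len(flagged)} of {len(sample)} records look like real personal data")
```

A check like this is no substitute for structured penetration testing, but it is cheap to run on every data refresh and would have surfaced the presence of genuine customer records in the test environment.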
#EPICFAIL 5: HEATHROW TERMINAL 5
It was designed to make the ‘Heathrow hassle’ a thing of the past, but the ‘calmer, smoother and simpler airport experience’ promised by Heathrow’s flagship Terminal 5 (T5) quickly descended into a full‑scale national embarrassment.
Over £175 million was invested in T5’s IT estate. The project involved over 180 specialist suppliers deploying 163 systems, hundreds of interfaces and tens of thousands of user devices. But within hours of its opening, all of T5’s baggage, car parking and security systems failed, leaving 36,584 passengers frustrated and a mountain of 23,205 bags waiting to be reunited with their owners.
During integration testing, in a bid to stop test messages from being delivered to live systems, integration messages were stubbed out. However, on release to production the stubbing code was erroneously left in place, preventing the system from receiving information about luggage transferring to British Airways from other airlines. Bags were sent for manual sorting, but as the messages backed up, so did the bags, and they missed their flights.
Official reports put the failures down to ‘inadequate system testing’. With the opening date of the new terminal rapidly approaching, and with test engineers unable to access full end‑to‑end systems, the scope of trials was intentionally reduced and several were cancelled altogether.
There are many pitfalls in relying on testing as an ‘end of project’ activity. As demonstrated by T5, it can lead to system testing becoming squeezed, de‑scoped or, worse, cancelled! It is likely that T5 was delivered using waterfall techniques. By employing agile principles instead, the project would have been better placed to innovate and to deliver rapid change in working systems. While agile is a philosophy which requires players to adopt a particular mindset and way of working, small changes can be implemented easily and quickly. The waterfall approach often views testing as a largely time‑consuming and manual process. By implementing continuous delivery, however, the project benefits from combining development and testing.
Continuous delivery spreads the effort across the whole delivery. Through analysis, development and testing, quality is built in from the outset, creating a perception that testing time is reduced – even though the testing is more robust. One of the biggest misconceptions about agile is that it does not require documentation or planning. This is incorrect: necessary and sufficient documentation and planning are still needed. Test scripts should still be written, because they define what should be done and will highlight the exact issue should a test fail. Had T5 combined iterative development and testing with the pre‑production messages supplied by the integration vendors, the stubbed‑messages oversight would have been identified and remedied before go‑live.
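A hedged sketch of how the stubbing decision can be pushed into configuration, so it cannot silently ride into production, is shown below. The class and variable names are illustrative and not drawn from the T5 systems; the point is that production refuses to start with a stub enabled rather than relying on someone remembering to remove it.

```python
# baggage_feed.py - sketch: choose the real or stubbed integration feed from configuration.
# ENVIRONMENT, USE_STUBBED_FEED and the class names are illustrative assumptions.
import os


class RealBaggageFeed:
    def receive(self):
        raise NotImplementedError("would read transfer messages from the airline interfaces")


class StubbedBaggageFeed:
    def receive(self):
        return []  # integration messages suppressed for test runs


def build_feed():
    environment = os.environ.get("ENVIRONMENT", "test")
    use_stub = os.environ.get("USE_STUBBED_FEED", "true").lower() == "true"
    if environment == "production" and use_stub:
        # Fail fast at start-up instead of silently dropping transfer messages.
        raise RuntimeError("Stubbed integration feed must not be enabled in production")
    return StubbedBaggageFeed() if use_stub else RealBaggageFeed()
```

Making the stub an explicit, validated configuration choice turns ‘someone forgot’ into a start‑up failure that any deployment rehearsal would catch.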
CONCLUSION
Will proper testing ever end the #EpicFail? Sadly, it won’t. We’re living in an era characterised by large‑scale change and innovation. With new online services disrupting traditional players, there is a surge of investment in new IT systems, and existing IT investments are being optimised and tweaked too. Enterprise IT is set to play an even greater role in daily life.
There are likely to be more #EpicFails over the coming years. Some will be amusing, others inconvenient, and a few destructive. They will probably be bigger and more spectacular than those we’ve already seen, but by applying the most basic testing principles, their magnitude, impact and severity can be reduced.
NOT JUST A QUICK FIX
It is amazing how often you hear that a management team is disappointed in the test tool that they bought. Is it the tool’s fault? Is it the testers’ fault, or is it something else? asks James Milne, Technical Test Manager, Infuse Consulting.
Sometimes it seems that automated testing is viewed as a fix for all testing woes, yet it is not often given the resource or backing that it requires. This article documents five areas that should be taken into consideration when looking to implement any type of test tool, based on experience gained over a number of automated test engagements (summarised in Figure 1). Within many agile and DevOps teams it is clear that the drivers and organisational structures are different from traditional models; however, some of the approaches below are still of value.
Figure 1. Five areas that should be taken into consideration when looking to implement any type of test tool: expectation, process, resources, the test tool and the outcome.

EXPECTATION
Many management teams have an expectation of automated testing: often that it will be able to do testing quicker and cheaper than manual testing, and that it will be able to test everything. This expectation needs to be managed both as part of the project process and through the tools available to the test manager, which include:
• Test policy.
• Test strategy.
• Test plan.
By utilising the test policy, the test manager can set out to senior management and stakeholders the high‑level approach that is going to be taken for automation, as well as the level of initial and ongoing support that is required to make automated testing work in the organisation. It should be made clear to the senior stakeholders that, by signing off on this document, they are committing to the ongoing investment in the automation approach and tool set that will be defined.
Through the test strategy the test manager can clearly define the approach being taken for automation testing in an organisation, and can further define expectations of what automated testing can achieve, what areas will be covered and what investment will be required. It is also a document where areas of non‑coverage can be outlined, further setting expectations at this point.
The test plan is the final document where detailed expectations can be set: the test manager can outline what functions will be covered by automated testing, and what functions will be partly or not covered, as part of an individual project.

PROCESS
Automated testing needs to be taken into consideration throughout the lifecycle of a project, from inception to closure (and often beyond, in the case of automated regression packs).
A typical example is where an automated test fails and is reported as a defect. The defect is then rejected as expected behaviour, because a new change has been implemented without the automation team being told that one was due. The team is then in catch‑up mode, trying to bring the automation test pack up to date.
The earlier the test team (including the automation tester or team) is included, the more likely the automated tests are to succeed. For agile teams, a working, proven automated test should be part of the definition of done for the scrum. This means it can be introduced into an automated regression pack without further rework. For DevOps and continuous integration systems, this definition should be extended to include integration of the automated tests into the automated build and deployment scripts.
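To make that last point concrete, a minimal sketch (not from the article) of wiring an automated pack into a build or deployment script might look like the following; it assumes a pytest‑based smoke pack living under a hypothetical tests/smoke directory:

```python
# deploy_gate.py - illustrative only: run the automated pack before allowing deployment.
# Assumes a pytest-based smoke/regression pack in tests/smoke (hypothetical path).
import subprocess
import sys


def run_regression_pack() -> bool:
    """Run the automated pack; a non-zero exit code means at least one test failed."""
    result = subprocess.run([sys.executable, "-m", "pytest", "tests/smoke", "-q"])
    return result.returncode == 0


if __name__ == "__main__":
    if not run_regression_pack():
        print("Automated regression pack failed - blocking deployment.")
        sys.exit(1)
    print("Regression pack passed - continuing with build and deployment steps.")
```

Once the pack is a gate in the pipeline rather than an optional activity, a story cannot be ‘done’ while its automated tests are failing.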
JAMES MILNE TECHNICAL TEST MANAGER INFUSE CONSULTING
With over 20 years of testing experience, James has built and run test teams, mainly in financial services, on large, high‑value programmes. When not at work he is a keen walker and trekker, with a recent summiting of Stok Kangri in India, which stands at just over 20,000 ft.
RESOURCE
The test engineers who write the automated tests are not magicians; they cannot write tests without background information, working tools or applications under test. Too often, management assume that a team given little or no guidance, often based in a remote or other location, will produce all the automated tests required.
Similarly, the environment that the tool resides on, and the environment under test, need to be adequately sized and supported. It needs to be realised that test tools carry similar technical debt to the actual systems under test. Operating systems, web browsers and protocols, to mention but a few, change over time and will affect any tool’s ability to run. Like the systems under test, the test tools will also need their upgrades and maintenance planned into any release or roll‑out schedule, with adequate time to regression test and fix any issues identified.
Too often, test tools are initially housed on substandard hardware because management teams don’t want the initial and ongoing expense of setting up and maintaining the environments until a return on investment (ROI) has been proven. In many cases automated test runs execute far faster than manual tests, both through screens and against any API. This is often misunderstood by management, who will frequently only pay for test environments that are half or a quarter of the size (or less) of the final production version, and who are then surprised when the environment under test fails under the level of automated tests executed against it. So expectations need to be managed: for example, if you have a pack of 500 tests that are expected to execute overnight but the environment can only support the execution of 100, the test plan needs to be adjusted to reflect that the task will now take five days instead of one.
Similarly, both the tool and the test environments need ongoing support to provide an effective test environment. This can include operating system releases, Windows upgrades and other housekeeping tasks to ensure that the environments stay in sync with production code, and that testing is executed on a like‑for‑like environment.
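A trivial planning sketch of that capacity calculation (the figures are the 500/100 example above; the helper function itself is illustrative) shows how quickly an undersized environment stretches the schedule:

```python
# capacity_plan.py - sketch: how long an automated pack takes on an undersized environment.
import math


def nights_required(total_tests: int, tests_per_night: int) -> int:
    """Round up, because a partially executed night still occupies the environment."""
    return math.ceil(total_tests / tests_per_night)


print(nights_required(500, 500))  # 1 night on a full-sized environment
print(nights_required(500, 100))  # 5 nights on the quarter-sized (or smaller) environment
```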
THE TEST TOOL
Different expected outcomes can require different test tools to deliver what is required.
The range of tools available stretches from packaged applications to open source to individual bespoke applications, all with different levels of cost and support requirements. Seldom will one tool be able to deliver everything that is expected of automated testing, and time needs to be taken to select the tool set that will actually deliver the level of testing required, rather than remain ‘package‑ware’.
Building and running a proof of concept (POC) is a useful way to identify the suitability of a tool, and can prove whether a tool is really able to execute the testing and subsequent reporting that is required. A POC should focus on automating the system under test, and on trying to automate the complicated areas that form its core functionality. However, before embarking on a POC, the scope of automation needs to be carefully thought through. It is not always possible or financially viable to automate everything, so a clear definition of what will be automated is important.
THE OUTCOME
As previously mentioned, the perceived success of automated testing depends on what expectations have been agreed at all levels across an organisation and what is needed on a project‑by‑project basis. Once agreement has been reached about what is expected from automated testing, and the required level of investment and support is in place, automation testing in any organisation has a good chance of succeeding in delivering both actual results and value for money, as well as meeting perceived expectations.
SUMMARY
To ensure maximum benefit from automated testing, the process cannot be left to operate in a vacuum; it needs to be integrated into an organisation’s development approach and lifecycle. Not doing so can prove costly in both time and money. Setting and agreeing expectations up front, as well as having the right resource in terms of environments, initial and ongoing support, documentation, tools and team members, can greatly contribute to the success of automation and the value it delivers to an organisation.
BIG DATA APPLICATIONS Richard J Self, Research Fellow – Big Data Lab, University of Derby, examines the role of software testing in the achievement of effective information and corporate governance.
As a reminder, software testing is about both verifying that software meets the specification and validating that the software system meets the business requirements.1 Most of the activity of software testing teams attempts to verify that the code meets the specification. A small amount of validation occurs during user acceptance testing, at which point it is normal to discover many issues where the system does not do what the user needs or wants. It is only too clear that current approaches to software testing do not, so far, guarantee successful systems development and implementation.

IT PROJECT SUCCESS
The Standish Group has been reporting annually on the success and failure of IT‑related projects since the original CHAOS report of 1994, using major surveys of projects of all sizes. It uses three simple definitions of successful, challenged and failed projects, as follows:
Project successful: the project is completed on time and on budget, with all features and functions as initially specified.
Project challenged: the project is completed and operational, but over budget, over the time estimate, and offers fewer features and functions than originally specified.
Project failed: the project is cancelled at some point during the development cycle.
Due to significant disquiet amongst CIOs about the definition of success requiring delivery of the contracted functionality in a globalised and rapidly changing world, the Standish Group changed the definition in 2013 to: Project successful: the project is completed on time and on budget, with a satisfactory result – which is, in some ways, a lower bar.
As the graph in Figure 1 shows, the levels of project success, challenge and failure have remained remarkably stable over time. It is clear that, as an industry, IT is remarkably unsuccessful in delivering satisfactory products. Estimates of the resultant cost of challenged and failed projects range from approximately US$500 billion to US$6 trillion, which compares to an annual ICT spend of US$3 trillion in a world GDP of approximately US$65 trillion. Clearly something needs to be done.
The list of types of systems and software failures is too long to include here, but a few examples include the recent announcements by Yahoo of the loss of between 500 and 700 million sets of personal data, the loss of some 70 million sets of personal and financial data by Target in 2013, and regular failures of operating system updates for iOS, Windows and others.
[Figure 1. Standish Group IT project success and failure rates, 1994–2017: the percentage of projects classed as Success, Challenged and Failed, under both the original and the revised (‘New’) definitions.]

RICHARD J SELF RESEARCH FELLOW – BIG DATA LAB UNIVERSITY OF DERBY
Richard has 30 years’ experience at a large aerospace company in designing and implementing a wide range of systems. He is now one of the leaders of the development of Data Science and Analytics programmes at the University of Derby. He is an invited speaker at national and international conferences across a wide range of business sectors.
COMMON THEMES, VERIFICATION AND VALIDATION
Evaluating some of the primary causes of the long list of failures suggests common themes: incomplete requirements capture, unit testing failures, volume test failures caused by using too small an environment and too small a set of data, inappropriate HCI factors, and the inability to understand effectively what machine learning is doing.
Using the waterfall process as a way of understanding the fundamentals of what is happening, even in agile and DevOps approaches, we can see that software verification happens close to the end of the process, just before implementation. As professionals, we recognise that there is little effective verification and validation activity happening earlier in the process.
The fundamental question for systems developers is, therefore, whether there is any way that the skills and processes of software testing can be brought forward to earlier stages of the systems development cycle, in order to more effectively ensure fully verified and validated requirements specifications, architectures and designs, software, data structures, interfaces, APIs and so on.
IMPACT OF BIG DATA
As we move into the world of big data and the internet of things, the problems become ever more complex and important. The three traditional Vs of big data – velocity, volume and variety – stress the infrastructure, make it difficult to keep data dictionaries consistent between the various silos of databases, and undermine the ability to guarantee valid and correct connections between corporate master data and data found in other databases and social media.
This article is based on the presentation delivered on the 28th September 2016 at The Software Testing Conference North 2016. Video can be found at this link: http://north.softwaretestingconference.com/richard-self/
IMPROVED PROJECT GOVERNANCE
If the IT industry is to become more successful, stronger information and project governance is required: governance that is based on a holistic approach to the overall project, that ensures a more effectively validated requirements specification, and that delivers far more effectively verified and validated non‑functional requirements, especially in the areas of security by design and the human‑to‑computer interfaces.
It is also vital to ensure that adequate contingencies are added to the project estimates. The 2001 Extreme Chaos report observed that, for many of the successful projects, the IT executives took the best estimates, multiplied them by two and added another 50%. This is in direct contrast to many modern projects, where the best and most informed estimates are reduced by some large percentage and a ‘challenging target’ is presented to the project team. Inevitably, the result is a challenged or failed project.
If we can achieve more effective project governance, with effective verification and validation of all aspects from the beginning of the project, the rewards are very large: much more successful software that truly meets the needs of all the involved stakeholders.
12 VS OF PROJECT GOVERNANCE AND BIG DATA
One effective approach is to develop a set of questions that can be asked of the various stakeholders, the requirements, the designs, the data, the technologies and the processing logic. In the field of information security, ISO 27002 provides a very wide range of questions that can help an organisation of any size to identify the most important aspects that need to be addressed. By analogy, a set of 12 Vs has been developed at the University of Derby which poses 12 critical questions that can be used both for big data and IoT projects and for more traditional projects, as the ‘12 Vs of IT Project Governance’. The 12 Vs are:
• Volume (size).
• Velocity (speed).
• Variety (sources/format/type).
• Variability (temporal).
• Value (what/whom/when?).
• Veracity (truth).
• Validity (applicable).
• Volatility (temporal).
• Verbosity (text).
• Vulnerability (security/reputation).
• Verification (trust/accuracy).
• Visualisation (presentation).
As an example, the Value question leads towards topics such as:
• Is the project really business focused? What are the questions that can be answered by the project, will they really add value to the organisation, who will get the benefit, and what is that benefit? Is it monetary? Is it usability? Is it tangible or intangible?
• What is the value that can be found in the data? Is the data of good enough quality?
The Vulnerability question leads towards:
• Is security designed into the system, or added as an afterthought? Major vulnerabilities could result in significant reputation damage.
• Incorrect processing also leads to reputation damage.
The Veracity question is developed from the observation by J Easton2 that 80% of all data is of uncertain veracity: we cannot be certain which data are correct or incorrect, nor by how much the incorrect data are incorrect (the magnitude of the errors). Data sourced from social media is of highly uncertain veracity: it is difficult to detect irony, and humans lie and change their likes and dislikes. Data from sensor networks suffer from sensor calibration drift of random levels over time, and smart‑device location services using assisted GPS have very variable levels of accuracy. A fundamental question that needs to be asked of all these data is: how can our ETL processes detect the anomalies? A second question is: to what extent do undetected errors affect the Value of the analyses and decisions being made?
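One hedged illustration of building that question into an ETL step (the thresholds, field names and record structure below are invented) is a simple plausibility filter that quarantines suspect readings instead of loading them silently:

```python
# etl_veracity_check.py - sketch: flag sensor readings that fail basic plausibility checks.
# Thresholds and field names are illustrative assumptions; adapt to the real sensor fleet.
from dataclasses import dataclass


@dataclass
class Reading:
    sensor_id: str
    temperature_c: float
    reported_accuracy_m: float


def is_plausible(reading: Reading) -> bool:
    # Invented operating ranges: outside them, the calibration or location data is suspect.
    return -60.0 <= reading.temperature_c <= 60.0 and reading.reported_accuracy_m <= 100.0


def split_for_load(readings: list) -> tuple:
    """Return (loadable, quarantined) so suspect data is inspected rather than silently ingested."""
    loadable = [r for r in readings if is_plausible(r)]
    quarantined = [r for r in readings if not is_plausible(r)]
    return loadable, quarantined
```

The interesting governance output is not the filter itself but the size of the quarantine pile: it quantifies how much of the incoming data is of uncertain veracity before any analysis is run on it.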
FORMAL TESTING OF BI AND ANALYTICS
One further fundamental issue (identified by the attendees at The Software Testing Conference North 2016)3 was that the formal software testing teams are very infrequently involved in any of the big data analytics projects. The data scientists, apparently, ‘do their own thing’, and the business makes many business‑critical decisions based on their ‘untested’ work. In one comment, the models developed by the data scientists produced different results depending on the order in which the data were presented, when the result should have been independent of the sequence.
In conclusion, the fundamental challenge to the testing profession is to determine how its skills, knowledge, experience, processes and procedures can be applied earlier in the development lifecycle, in order to deliver better validated and verified projects which can be delivered as ‘successful projects’ in Standish Group terms. Are there opportunities to ensure more comprehensive and correct requirements specifications?
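The order‑dependence complaint, in particular, is straightforward to turn into an automated check. A hedged sketch follows; train_model is a stand‑in for whatever the data scientists actually run, and the tolerance is an assumption:

```python
# test_order_independence.py - sketch: the same data in a different order should give the same result.
import random


def train_model(rows):
    # Stand-in for the real training routine; here, simply the mean of one feature.
    return sum(r["x"] for r in rows) / len(rows)


def test_result_does_not_depend_on_row_order():
    rows = [{"x": float(i)} for i in range(1000)]
    shuffled = rows.copy()
    random.Random(7).shuffle(shuffled)  # fixed seed so the test is deterministic
    assert abs(train_model(rows) - train_model(shuffled)) < 1e-9
```

A small suite of such properties (order independence, stability under duplicate removal, sensible behaviour on empty or extreme inputs) is one concrete way for testing teams to add value to analytics work without needing to re‑derive the models themselves.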
References
1. PMBOK Guide, 4th Edition.
2. J Easton, IBM, 2012.
3. http://north.softwaretestingconference.com/richard-self
FINALISTS
Following an extensive, fair judging process, we are pleased to announce the finalists in The European Software Testing Awards 2016.
BEST AGILE PROJECT
Awarded for the best use of an agile approach in a software testing project.
Judges were looking for:
• Demonstration of an agile approach taken to the project
• Excellent communication between the entrant and the development team
• Outstanding communication demonstrated within the software testing team
• Effective utilisation of a best practice method/technique
• Successful outcome that meets the aims and targets set
• Evidence of a commitment to high quality and standards
FINALISTS
★★ Direct Line Insurance Group
★★ Home Office – Immigration IT Portfolio
★★ Cognizant
★★ QualiTest
★★ Infosys Limited in partnership with UBS
★★ Cognizant Technology Solutions
★★ Sopra Steria
★★ TalkTalk Telecom Group
★★ John Lewis Partnership
★★ Itera
BEST MOBILE TESTING PROJECT
Awarded for the best use of technology and testing in a mobile application project.
Judges were looking for:
• Demonstration of the best use of technology in a mobile application project
• Demonstration of a clear and concise “preparatory phase”
• Clear and defined functional requirements
• Effective utilisation of a best practice method/technique
• A successful outcome that meets the aims and targets set
• Evidence of a commitment to high quality and standards
FINALISTS
★★ Hughes Systique Private Limited
★★ Box UK
★★ Testfort QA Lab (QArea Inc.)
★★ Cognizant
★★ Mobile Labs
★★ Credit Suisse
★★ nFocus Testing
★★ Avis Budget Group
★★ Applause
★★ Tech Mahindra

BEST TEST AUTOMATION PROJECT – FUNCTIONAL
The award for the best use of functional automation in a software testing project.
Judges were looking for:
• Demonstration of the best use of automation in a software testing project
• Utilisation of a well-developed test suite of testing scripts
• Any technical problems that were encountered were successfully resolved
• Effective utilisation of a best practice method/technique
• A successful outcome that meets the aims and targets set
• Evidence of a commitment to high quality and standards
FINALISTS
★★ Keytorc in partnership with Bilyoner
★★ UST Global Pvt. Ltd in partnership with Schroders Plc
★★ Moody’s Analytics
★★ Dow Jones
★★ Modality Systems
★★ VirtusaPolaris
★★ HARMAN Connected Services
★★ Accenture UK
★★ Maveric Systems in partnership with Metro Bank
★★ Worksoft Inc.
Amdocs Innovation. Expertise. Results. Amdocs Testing Services enable world‑class quality products in an ever‑changing business environment. Our holistic testing approach includes innovative technology, communications‑specific skills and industry knowledge, to ensure our customers gain testing solutions at an optimised cost, superior speed, and top quality. Our communications testing portfolio highlights our strength in the IT communications testing domain, and introduces new services such as digital testing and core network testing. Amdocs BEAT™, our award‑winning testing framework, standardises and optimises the testing process based on our accumulated communications testing experience and best practices including a repository of over 1,000,000 communications‑specific test
cases. Using a sophisticated analytical model to make recommendations, every testing project is significantly more productive and cost effective. In this way, we help our customers achieve their business goals to deliver a superior customer experience to their customers. Our years of testing experience in the communications domain, includes expertise in multi‑vendor environments as well as DevOps, where testing and operations teams work hand‑in‑hand. With Amdocs testing services, you can take your business applications to go‑live with industry‑low defect levels while reducing cost and minimising time to market. Amdocs Testing is part of Amdocs, a global company (NASDAQ:DOX) with revenue of US$3.6 billion in fiscal year 2015. Amdocs employs a workforce of more than 24,000 professionals serving customers in over 90 countries.
+1 314 212 7000 testing@amdocs.com Missouri 1390 Timberlake Manor Parkway Chesterfield, MO 63017 USA
www.amdocs.com
BEST TEST AUTOMATION PROJECT – NON-FUNCTIONAL
The award for the best use of non-functional automation in a software testing project.
Judges were looking for:
• Demonstration of the best use of automation in a software testing project
• Utilisation of appropriate test technique to identify underlying root cause
• Any technical problems that were encountered were successfully resolved
• Effective utilisation of a best practice method/technique
• A successful outcome that meets the aims and targets set
• Evidence of a commitment to high quality and standards
FINALISTS
★★ Infosys Limited in partnership with ABN AMRO
★★ Cognizant
★★ Tata Consultancy Services Ltd. (TCS) in partnership with Aviva Insurance Plc
★★ Ciklum
★★ Tech Mahindra
★★ Cognizant Technology Solutions

GRADUATE TESTER OF THE YEAR
A recent graduate (who has graduated in the last 2 years) who has shown outstanding commitment and development in the testing field.
Judges were looking for:
• Evidence of outstanding commitment and development in the testing field
• Excellent communication with colleagues and clients
• Efforts to keep up-to-date with guidelines and trends / Dedication to self training
• Effective utilisation of a best practice method/technique
• Consistent successful outcomes that meet the aims and targets set
• Evidence of a commitment to high quality and standards
FINALISTS
★★ Stephen Quinn, Cognizant Technology Solutions
★★ Aaron Gibbon, BJSS
★★ Samantha McKee, KPMG
★★ Simranjit Kaur, Infrasoft Technologies Limited
★★ Romeo Ledesma, Chelsea Apps Factory
★★ Sai Bendi, Sopra Steria

TESTING MANAGER OF THE YEAR
Awarded to the most outstanding individual test manager or team leader over the last 12 months.
Judges were looking for:
• Consistent and outstanding leadership skills
• Excellent people management skills
• Outstanding communication with the software testing team
• Examples of procedures put in place to ensure high quality results
• Commitment to keep up-to-date with guidelines and trends
• Evidence of a commitment to high quality and standards
FINALISTS
★★ Sragabart Mahakul, Cognizant Technology Solutions
★★ Santhosh Reddy Gujja, Amdocs
★★ Isabelle Magnusson, Ticketmaster
★★ Srikanth Mohan, Wipro Technologies Limited in partnership with Asda
★★ Nirmala Tarur Veeranna, HARMAN Connected Services
★★ Ryan Sandilands, KPMG
★★ Ruslan Desyatnikov, QA Mentor
★★ Ravi Kodali, Cognizant
★★ Raksha Chheda, Mastek

Banco Santander España
Our purpose: Santander has a customer‑focused business model that enables it to fulfil its purpose of helping people and businesses prosper.
Aim: Our aim is to be the best retail and commercial bank that earns the lasting loyalty of our people, customers, shareholders and communities. A Bank Simple, Personal and Fair.
The Santander model:
• Geographic diversification.
• Focus on retail and commercial banking.
• Subsidiaries model.
• International talent, culture and brand.
• A strong balance sheet, prudent risk management and global control frameworks.
Santander Group City, Av. de Cantabria s/n, 28660 Boadilla del Monte, Madrid, Spain
www.santander.com
BEST OVERALL TESTING PROJECT – RETAIL
Awarded to the most outstanding testing project in the retail sector.
Judges were looking for:
• Evidence of working closely with the organisation to deliver projects on time and within budget
• Evidence that the entry is responsive to different and/or complex needs of the client
• Effective utilisation of a best practice method/technique
• A successful outcome that meets the aims and targets set
• Evidence of a commitment to high quality and standards
FINALISTS
★★ Cognizant Technology Solutions in partnership with Arcadia
★★ John Lewis Partnership
★★ Marks and Spencer (M&S)
★★ VirtusaPolaris
★★ Accenture Services Pvt Ltd
★★ Wipro Technologies in partnership with Asda
★★ Tata Consultancy Services Ltd. (TCS)
★★ Boots

BEST OVERALL TESTING PROJECT – FINANCE
Awarded to the most outstanding testing project in the finance sector.
Judges were looking for:
• Evidence of working closely with the organisation to deliver projects on time and within budget
• Evidence that the entry is responsive to different and/or complex needs of the client
• Effective utilisation of a best practice method/technique
• A successful outcome that meets the aims and targets set
• Evidence of a commitment to high quality and standards
FINALISTS
★★ Sopra Steria
★★ Santander España
★★ Infrasoft Technologies Ltd.
★★ Cognizant Technology Solutions
★★ StarDust
★★ Cognizant
★★ Elevate Credit International
★★ Itera
★★ Brickendon Consulting
★★ VirtusaPolaris

BEST OVERALL TESTING PROJECT – GAMING
Awarded to the most outstanding testing project in the gaming sector.
Judges were looking for:
• Evidence of working closely with the organisation to deliver projects on time and within budget
• Evidence that the entry is responsive to different and/or complex needs of the client
• Effective utilisation of a best practice method/technique
• A successful outcome that meets the aims and targets set
• Evidence of a commitment to high quality and standards
FINALISTS
★★ HARMAN Connected Services in partnership with Inspired Gaming Group
★★ Sony Interactive Entertainment Europe
★★ Ciklum
★★ HARMAN Connected Services

AMDOCS BEST OVERALL TESTING PROJECT – COMMUNICATION
Awarded to the most outstanding testing project in the communications sector.
Judges were looking for:
• Evidence of working closely with the organisation to deliver projects on time and within budget
• Evidence that the entry is responsive to different and/or complex needs of the client
• Effective utilisation of a best practice method/technique
• A successful outcome that meets the aims and targets set
• Evidence of a commitment to high quality and standards
FINALISTS
★★ Tech Mahindra in partnership with BT
★★ BT Plc
★★ Vodafone NL
★★ Cognizant Technology Solutions
★★ EE
★★ Liberty Global
★★ Tech Mahindra in partnership with Sunrise
★★ Virgin Media
★★ Tata Consultancy Services Ltd. (TCS) in partnership with Vodafone Enterprise IT, UK

BEST USE OF TECHNOLOGY IN A PROJECT
Awarded for outstanding application of technology in a testing project.
Judges were looking for:
• Outstanding application of technology in a testing project
• The latest, most up-to-date technology was correctly and effectively used
• Demonstration of an outstanding approach and method taken to the project
• Effective utilisation of a best practice method/technique
• A successful outcome that meets the aims and targets set
• Evidence of a commitment to high quality and standards
FINALISTS
★★ Accenture in partnership with BT
★★ Infosys Limited
★★ Cognizant Technology Solutions
★★ CA Technologies in partnership with London Metal Exchange
★★ KPMG
★★ Markerstudy Group
★★ Accenture Services Pvt Ltd
★★ Cognizant
★★ BugFinders
★★ Tata Consultancy Services Ltd. (TCS) in partnership with British Gas
JUDGES 2016
AASHISH BENJWAL ASSOCIATE DIRECTOR UBS LONDON
DELIA BROWN HEAD OF TEST HOME OFFICE TECHNOLOGY
RIEL CAROL HEAD OF TEST YOUVIEW TV LIMITED
SUDEEP CHATTERJEE GLOBAL HEAD OF QA AND TESTING LOMBARD RISK
PAULA COPE GLOBAL HEAD OF QA TULLETT PREBON
JASON EMERY QA MANAGER - MOBILITY HUB TUI GROUP
PETER FRANCOME DIRECTOR OF TEST LIBERTY GLOBAL
SALLY GOBLE HEAD OF QUALITY GUARDIAN NEWS & MEDIA LIMITED
MYRON KIRK HEAD OF TEST COE BOOTS
AMPARO MARIN CERTIFICATION GOVERNANCE DIRECTOR BANCO SANTANDER
NADINE MARTIN SENIOR MANAGER, TEST SERVICES SONY COMPUTER ENTERTAINMENT EUROPE
GREGOR RECHBERGER PRODUCT MANAGER MICRO FOCUS
KASHIF SALEEM DIRECTOR QA HOTELS.COM
OLIVER SMITH QA DIRECTOR KGB – 118 118
SANDOKAN STERQUE HEAD OF GLOBAL IT TESTING & ASSURANCE BRITISH AMERICAN TOBACCO
PAULA THOMSEN HEAD OF QUALITY ASSURANCE AVIVA
WORKSOFT TESTING TEAM OF THE YEAR
Awarded to the most outstanding overall testing team of the year.
Judges were looking for:
• Outstanding communication demonstrated within the software testing team
• Evidence of achieving project aims and targets through effective team work
• Evidence of successful team work / team building
• Evidence of a strong team ethos
• Evidence of a team commitment to high quality and standards
FINALISTS
★★ Lloyds Banking Group – Aries Project
★★ Accenture Services Pvt Ltd in partnership with British Telecom Plc
★★ Amdocs
★★ eg Solutions
★★ QArea Inc. (Testfort QA Lab)
★★ BJSS in partnership with Royal Bank of Scotland
★★ Cognizant in partnership with Centrica
★★ Lloyds Banking Group – Flood RE project
★★ Tech Mahindra
★★ AIB Bank
TESTING MANAGEMENT TEAM OF THE YEAR
Awarded to the testing management team that has shown consistently outstanding leadership.
Judges were looking for:
• Consistent and outstanding leadership skills
• Excellent people management skills
• Outstanding communication with the software testing team
• Examples of procedures put in place to ensure high quality results
• Keeping up-to-date with guidelines and trends
• Evidence of a commitment to high quality and standards
FINALISTS
★★ Virgin Media
★★ Lloyds Banking Group
★★ QA Mentor
★★ L&T Infotech in partnership with RSA
★★ Ciklum
★★ Vodafone Group Services GmbH
MOST INNOVATIVE PROJECT
Awarded for the project that has significantly advanced the methods and practices of software testing.
Judges were looking for:
• The project has significantly advanced the methods and practices of software testing
• Created new methods and tools in the testing field
• An attempt to push boundaries within the industry
• Successful delivery and execution of project
• Consistent successful outcomes that meet the aims and targets set
• Evidence of a commitment to high quality and standards
QA Mentor +1 800 622 2602 support@qamentor.com 1441 Broadway, 3rd Floor, New York, NY 10018, USA
www.qamentor.com
QA Mentor is an award‑winning leading global QA services provider headquartered in New York and with eight different offices around the world. Established in 2010, with an aim to help organisations from various sectors improve their QA functions, QA Mentor proudly boasts of having a unique combination of 150+ offshore and onshore resources who work around the clock, supporting all time zones. The company supports 250+ clients from startups to Fortune 500 organisations within nine different industries. QA Mentor has uniquely positioned itself in the market by providing customisable QA testing services for all businesses by following a hybrid approach with flexible on‑demand testing services and solutions at low prices. By leveraging its in‑house automation solutions and tools, including a proprietary automation framework, QA Mentor is able to speed up execution time and save money for clients. This process also creates a tailor‑made solution for each client based on their specific budget and technology needs. The proprietary automation framework methodology alone includes the choice of 50 different automation testing tools and solutions to ensure that the right one is selected for a client’s
specific needs. With the acquisition of a French test automation tool development company, QA Mentor now has their own test management platform as well, QACoverage. So why do customers choose QA Mentor? • Most economical and cost‑effective QA testing services provider. • Has 30 different QA services, some unique to the company. • Covers all of the world’s time zones. • Expert knowledge of 50+ different automation tools, in offices worldwide. • Contractual obligations for defect leakages and productivity targets. Awards and recognitions Recent awards and recognitions demonstrates QA Mentor’s deep commitment to clients, employees, fans and supporters around the world: • 10 Pure Play Testing Services Providers by Gartner. • 20 Promising QA Providers by CIO Review Magazine. • 25 Most Recommended Quality Assurance Providers by Enterprise Outlook Magazine. • 20 Leading Testing Providers by TEST Magazine. • Brand of the Year 2015 by Silicon India Magazine. • 25 Fastest Growing QA/Testing Companies by CEO Magazine.
FINALISTS
★★ KPMG
★★ QA Mentor
★★ Accenture Services Pvt Ltd
★★ Cognizant
★★ Tata Consultancy Services Ltd. (TCS)
★★ Geeks Ltd
★★ QualiTest
★★ Tech Mahindra
★★ Accenture in partnership with BT
BEST USER EXPERIENCE (UX) TESTING IN A PROJECT
The award for the best use of user experience testing in a project.
Judges were looking for:
• Evidence of carrying out high quality end user research suitable for the project
• Evidence of user engagement throughout the delivery
• Effective utilisation of a best practice method/technique
• Evidence of promoting innovation, research and best practices in the wider field of UX testing
• A successful outcome that meets the aims and targets set
• Evidence of a commitment to high quality and standards
FINALISTS
★★ HARMAN Connected Services
★★ Box UK
★★ Tech Mahindra
★★ Mastek in partnership with Specsavers

LEADING VENDOR
Awarded to the vendor who receives top marks for their product/service and customer service.
Judges were looking for:
• Evidence of a commitment to high quality and standards to customers
• Commitment to customer satisfaction
• Evidence of value for money
• Evidence of reliability, flexibility and speed of installation
FINALISTS
★★ Applitools
★★ Mastek
★★ Applause
★★ Ciklum
★★ Tata Consultancy Services Ltd. (TCS)
★★ Actifio
★★ Perfecto
★★ QArea Inc.
★★ Infosys Limited
★★ nFocus Testing
★★ QA Mentor
★★ Tech Mahindra
★★ QASymphony
StarDust +33 4 91 68 66 28 contact@stardust-testing.com HQ: 37, Rue Guibal Pôle Media Belle de Mai 13003 Marseille France
www.stardust-testing.com
In today’s disruptive and fully connected world, the role of QA & testing is crucial to ensure high-performing websites & apps and guarantee the best digital experience possible to all users. StarDust is a leading testing agency supporting businesses in securing their digital products by conducting manual tests on real devices. With quality at the heart of its operations, StarDust is committed to providing the best-fitted testing strategy by delivering scalable QA services:
• Within StarDust QA Labs in France & Canada, where over 50 QA professionals conduct tests on over 2000 device configurations.
• With a powerful crowdtesting platform consisting of a global community of fully vetted QA professionals.
• With on-site testing, where dedicated StarDust professionals carry out QA services within client offices.
Conducting thorough functional & acceptance tests on a wide variety of devices & operating systems is key to generating great results. Given today’s fragmented digital environment, multi-device testing meets user demands and reduces overall project risk. And who better than a trusted third party with an impartial perspective to perform these tests? Testing is not just about detecting bugs; it also makes it possible to measure the quality of a product at a given moment. Brands with which StarDust has developed sustainable partnerships include: L'Oréal, BNP Paribas, Française des Jeux, Sanofi, Danone, Thalys, Eurotunnel, Radio Canada, Accor Hotels, and many more.
Thank you TEST Magazine for supporting Great Ormond Street Hospital Children’s Charity
Great Ormond Street Hospital is one of the top five paediatric research hospitals in the world. We treat children from all over the UK and abroad who are diagnosed with the most complex, life-threatening
conditions. But it’s only thanks to people like you that we can provide our patients with the specialist care they need.
Your donations will make a huge difference. Great Ormond Street Hospital Children’s Charity. Registered charity no. 1160024.
For more information and to make a donation, please visit
www.gosh.org
TESTING AT THE SPEED OF YOUR DIGITAL REVOLUTION
AMDOCS TESTING
With DevOps accelerators* and the most experienced team in the communications industry, Amdocs Testing takes your DevOps journey from a dream to reality.
*A unique technology to turn “Agile to QA” into “Agile to production”.
For more information, visit:
www.amdocs.com/testing