TEST – July 2017


TEST LIGHT AND TEST RIGHT IS THE NORM; ASSESSING AND IMPLEMENTING STANDARDS/BEST PRACTICES ARE NOT DELIVERED BY A TEST ARCHITECT BECAUSE THEY ARE PART OF THE VERSION TWO TEST MANAGER ROLE

THE TEST MANAGER 2.0

THE FUTURE FOR TEST MANAGERS


[Advertisement: Ranorex All-in-One Test Automation. Now with seamless Selenium integration: access Selenium with the Ranorex automation framework, address Selenium pain points, and test web applications across all major platforms and browsers, for testers and developers. www.ranorex.com/try-now]

CONTENTS

COVER STORY: THE TEST MANAGER 2.0

NEWS
Software industry news ................................... 5

THOUGHT LEADERSHIP
Using AI to re-imagine software quality ......... 10
AML and technology ..................................... 12
Testing is broken ........................................... 14

DEVICE TESTING
Developing Dependable Devices ................... 16

MANUFACTURING SECTOR
Manufacturing a better approach to application security .......................................................... 18

MOBILE TESTING
Yielding better mobile .................................... 22
From Europe to Korea ................................... 26

IOT
Who's responsible when IoT gets hacked? ... 30

AUTOMATION
Digital transformation: driving testing trends .. 32

FUTURE VISION
The Test Manager 2.0 .................................... 36

SMART CITIES
Smart Cities: Lessons Learned ...................... 42

PERFORMANCE TESTING
Which tool to use in testing .......................... 46

VENDORS TO END USERS
Selling Software Testing ................................. 48

TDD & BDD
Efficient practices of development ................. 52

TEST Magazine | July 2017


EDITOR'S COMMENT

ARTIFICIAL INTELLIGENCE AND A NEW TECHNOLOGICAL WORLD

ANDREW HOLT EDITOR OF TEST MAGAZINE

Artificial Intelligence (AI) is everywhere: from technological working, business and financial analytics to fictional narratives. Questions about its use can be simplified into: does it provide a gateway to a new age of technological possibilities, or a negative outlook for workers and society in general? In this issue we have a number of thought-provoking pieces that present AI in different guises.

In one scenario, AI is designed to 'replace' the human factor entirely, a point touched upon by Mario Matthee (page 32), in which the 'end game' of digital transformation is a homogeneously digital workplace. Jayashree Natarajan explores the development of AI in software application testing (page 10). She poses this scenario: how about an artificial intelligence (AI)-powered engine, for instance, helping identify glitches in the enterprise's IT systems and enabling recovery in a matter of hours?

Rather provocatively, Antony Edwards puts forward the idea that the testing process is broken (page 14). He presents a vision in which AI and analytics can deliver true test automation, allowing more of the testing process to be automated using 'automation intelligence'.

Taking on many issues at the same time, Paul Mowat, in our cover feature (page 36), presents a fascinating picture of a 2.0 version of the Test Manager. This individual is changing and improving and there is no going back: in short, the Test Manager is transforming beyond recognition, and the many layers of change in society mean a wider breadth of knowledge is required. The evolution of these changes, Mowat notes, will mean that in future the title Test Manager will no longer be applicable.

In a similar way, looking at future possibilities and trends, Pervez Siddiqui (page 42) argues that technology has a promising role to play in enabling collaboration and breaking down silos within cities.

And for all the future predictions, Johan Steyn (page 48) makes a good observation: if you are working for a testing provider, your future is in your hands. Software quality and testing services are becoming more expert skills every day. The fast-moving world of digitalisation and automation is pushing the quality of software platforms to the front of the thinking, and worry, of business owners. A positive perspective for the Test Manager to exploit.

On a final note, I am the new Editor and I would like to hear from you: what you like about the magazine, what you want to see more of, what topics we cover well, what we should be covering, and any other comments you would like to share. Please do send me an email with your thoughts.

JULY 2017 | VOLUME 9 | ISSUE 3

© 2017 31 Media Limited. All rights reserved. TEST Magazine is edited, designed, and published by 31 Media Limited. No part of TEST Magazine may be reproduced, transmitted, stored electronically, distributed, or copied, in whole or part without the prior written consent of the publisher. A reprint service is available. Opinions expressed in this journal do not necessarily reflect those of the editor of TEST Magazine or its publisher, 31 Media Limited. ISSN 2040-01-60

EDITOR: Andrew Holt, andrew.holt@31media.co.uk, +44 (0)203 056 4599
REPORTER: Leah Alger, leah.alger@31media.co.uk, +44 (0)203 668 6948
ADVERTISING ENQUIRIES: Anna Chubb, anna.chubb@31media.co.uk, +44 (0)203 668 6945
PRODUCTION & DESIGN: Ivan Boyanov, ivan.boyanov@31media.co.uk

31 Media Ltd, 41-42 Daisy Business Park, 19-35 Sylvan Grove, London, SE15 1PD
+44 (0)870 863 6930 | info@31media.co.uk | www.testingmagazine.com

PRINTED BY: Pensord, Tram Road, Pontllanfraith, Blackwood, NP12 2YA

softwaretestingnews | @testmagazine | TEST Magazine Group

andrew.holt@31media.co.uk


NEWS

FEARS THAT BRITAIN’S LARGEST WARSHIP COULD BE VULNERABLE TO CYBERATTACKS

Leah Alger and Andrew Holt analyse a surprising case of a new £3.5 billion state-of-the-art warship with outdated software

The £3.5 billion aircraft carrier HMS Queen Elizabeth, which left its dockyard for the first time at the end of June to begin sea trials, is apparently using the same software that left the NHS exposed.

Screens inside a control room on the ship, which is the largest vessel ever built for the Royal Navy, reportedly displayed Microsoft Windows XP (copyright 1985 to 2001). It was the operating system that left the NHS and other organisations around the world vulnerable to the WannaCry ransomware attack last month, which affected 300,000 computers in 150 countries.

A defence source told The Telegraph that some of the on-board hardware and software "would have been good in 2004", when the carrier was designed, "but now seems rather antiquated". Windows XP is no longer supported by Microsoft, meaning it does not receive updates to protect users from new types of cyber hacks.

"Just weeks after the NHS attack, discovering that HMS Queen Elizabeth 'appears to be running outdated Windows XP' is a scary prospect. There is no such thing as 100% security. Software, written by humans, will invariably contain vulnerabilities – code that can be manipulated in ways that the writers didn't realise," said Kaspersky Lab's Principal Security Researcher, David Emm.

A computer expert warned that Windows XP could leave HMS Queen Elizabeth vulnerable to cyberattack. "If XP is for operational use, it is extremely risky," Alan Woodward, professor of computing at the University of Surrey, told The Times. "Why would you put an obsolete system in a new vessel that has a lifetime of decades?"

Mark Deller, Commander Air on the Queen Elizabeth, took a different view, telling The Guardian: "We are a very sanitised procurement train. I would say, compared to the NHS buying computers off the shelf, we are probably better than that. If you think more Nasa and less NHS you are probably in the right place."

"It is important to remember that cybercriminals target systems and applications that are widely used. That is why the volume of malware targeting Windows far exceeds that of Mac OS – not because the latter is immune to attack. We see the same sorts of malware for Mac OS as for Windows, but nothing like the same number," added Emm.

After cost and construction delays, it has been reported that the navy is now prepared for any attacks, with help from a team of cyber specialists who will be on board when the nation's future flagship heads to the North Sea for maiden sea trials throughout the summer.

Assessing the situation, Emm said: "One of the problems facing any organisation working on a project that takes years to bring to fruition is that technology will have moved on by the time it is released. So it's important that design specifications are flexible enough to scope the requirements of a particular IT system without committing to specific versions of software.

"Support for Windows XP ended in April 2014 and was well signposted before this. The danger is that there is no guarantee that security updates will be available for an unsupported operating system – or any other software. Therefore, any vulnerabilities found in obsolete systems are automatically zero-day vulnerabilities – that is, they are likely to remain unpatched."

A Ministry of Defence (MoD) spokesman noted that the UK takes cyber security extremely seriously, and has doubled its investment to £1.9 billion.


JP MORGAN’S ‘RISE OF THE MACHINES’ IN YORK

Leah Alger gets an understanding of a thought-provoking topic to be covered by a leading speaker at the Software Testing Conference NORTH in September

The Software Testing Conference NORTH, to be held on 26-27 September in York, will provide the software testing community with a range of topics and categories, including performance testing, continuous delivery and developing trends, through networking, executive workshops, practical presentations and market-leading exhibitions.

Exploring the current and future state of mobile delivery/testing technologies at the conference, JP Morgan Vice President and QA Manager Lee Crossley will give an intriguingly entitled presentation, 'Rise of the Machines', which will focus on the following topics:
• Context Driven Testing
• Mind Map Generation
• Defect Management across multiple Jira instances
• Dynamic Reporting across multiple Jira instances
• UI Automation/Performance Testing using distributed Mac infrastructure
• Service Virtualisation
• Continuous Integration
• Staying close to source (Apple/Google)
• Mobile Device Farms
• The future of user interface (UI) Automation (The Automation of Automation)

Crossley, who has carved out a successful testing career over the years, has plenty of experience under his belt. He specialises in building, mentoring and leading test teams, and in creating and shaping test strategies to best fit organisations across the finance and energy sectors, including Vebnet, Standard Life, Sopra, National Australia Group and Barclays.

GAINING GREATER TRACTION FROM APPLE

Despite his experience, he strongly believes that it's important to stay close to sources such as Apple and Google. His alignment with what Apple is passionate about has not only been the right move for the short term, but is gaining even greater traction as new Xcode releases and features are rolled out, giving him greater control and flexibility when creating and executing tests.

Crossley said: "Some third-party UI automation vendors were using UI Automation instruments to run their automated tests in iOS (which came bundled with Apple's Xcode), as were we.

"However, we felt 'UI Automation instruments' was not fit for purpose, so we changed direction: we started to use the same framework (XCTest/XCUITest) that the developers used for unit testing, as well as Apple's new open-source language Swift to write our tests."

Mobile testing is still relatively in its infancy; however, the technology has been maturing at pace for the last 15 months, which is how he came up with his presentation title, 'Rise of the Machines'.

"I still feel many of the key testing principles have yet to be adopted from other, more mature technology spaces. We need to not only think about the functional testing of the applications on devices, but also about automated testing and performance testing of the back-end services, and the simulation of global server locations, to give us a greater understanding of our global user base and the networking challenges they face," revealed Crossley.

INCREASING TEST COVERAGE BY OWNING DATA

According to Crossley, in terms of coverage, owning your data through either 'service virtualisation' or 'mocking' is the only real way not only to increase test coverage, but also to eradicate downtime by decoupling from shared testing back-end systems.

"Currently working on 'automation of automation' (Autobot, across iOS/Android and web, aimed at the manual activity involved in the automation of any app), I believe that regardless of platform it is very labour intensive and brings with it a significant cost/resource overhead," he added.

Crossley is looking forward to attending this year's Software Testing Conference NORTH, particularly networking with like-minded individuals and discussing real-world challenges that others are facing within the mobile space.

Want to register and hear Lee Crossley's speech? Visit http://north.softwaretestingconference.com and spread the word to technology-savvy individuals.
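The decoupling Crossley describes can be illustrated with a small, hypothetical sketch (Python here for brevity; the service names and canned data are illustrative, not from any real system). A test that would normally hit a shared back-end instead talks to a local stub that the test team owns, so the test data is fully controlled and shared-environment downtime no longer blocks a run:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical canned response owned by the test team, instead of
# data living in a shared back-end environment.
CANNED_ACCOUNT = {"id": "42", "balance": 100.0}

class StubBackend(BaseHTTPRequestHandler):
    """A service-virtualisation stub: answers like the real service would."""
    def do_GET(self):
        body = json.dumps(CANNED_ACCOUNT).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

def fetch_account(base_url):
    """The code under test: normally pointed at the shared back-end."""
    with urlopen(f"{base_url}/accounts/42") as resp:
        return json.loads(resp.read())

# Point the app at the local stub rather than the shared environment.
server = HTTPServer(("127.0.0.1", 0), StubBackend)
threading.Thread(target=server.serve_forever, daemon=True).start()
account = fetch_account(f"http://127.0.0.1:{server.server_port}")
server.shutdown()
print(account)
```

Because the stub is local and deterministic, edge cases (empty accounts, error codes, slow responses) can be scripted at will, which is what makes the coverage gains possible.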



TESTING FOR £2,500 PER MONTH. REALLY? Really.

You can have your cake AND eat it. DVT's Global Testing Centre offers heavyweight testing solutions, backed up by 17 years' experience and high-level expertise. The DVT GTC Lite® Package is designed for small to medium size enterprises, providing an affordable, quality outsourced testing solution.

The package covers: Functional Testing, Automation Regression Testing, Device Testing, Performance Testing*, and Total Delivery**.

An exceptionally sweet deal:
• FREE two-week Proof of Concept on Non-Functional Testing
• FREE access to a range of REAL mobile devices
• FREE Automation Tool
• FREE Performance Testing Tool (limited to 50 users)
• Activation in FIVE working days

* Performance Testing limited to 20 hours per month
** Total Delivery limited to 160 hours per month

INTERESTED? Contact DVT to find out more. Tel: 0203 696 2440 | Email: gtclite@dvtsoftware.co.uk

Terms and conditions apply.



DEVOPS IS A ‘MEANINGLESS BUZZWORD’ SAYS REPORT

Leah Alger gets views on a contentious study that says DevOps has ‘zero impact’ on work processes

In order to find out where the software testing community is placed inside the product development paradigm, software testing and automation provider LogiGear carried out its second survey, assessing the state of the software testing landscape with a focus on DevOps.

LogiGear's goal was to grasp attitudes regarding DevOps, to understand testing professionals' strategies, and to look at the financial tool-chain commitments that require culture change, planning and training.

"As software technology companies continue the journey into becoming leaner and faster delivery organisations, new product features and updates will need to be delivered to customers rapidly without compromising quality. This will continue to drive the adoption of DevOps/continuous delivery practices," said Hung Nguyen, LogiGear's CEO.

"The right mix of QA capabilities, aligned with technology and practice revolutions, will help transform companies' competitive advantages, as well as their readiness to respond to customer demands in the new, intelligent world order," he added.

The survey's key findings show that 60% of respondents using DevOps are under a lot of pressure to automate, while 46% of those who don't use DevOps say there is much more pressure to automate.

CULTURE, MOVEMENT AND PRACTICES

"Agile and scrum were concrete processes that led to tangible and positive results in the workplace. On the other hand, I have yet to understand the term 'DevOps' except as a meaningless buzzword that has had zero effect on our work processes," said a survey participant.

Director, CIO Technology and Regulatory Affairs at Banco Santander, Ampora Marin, disagreed: "I am not sure DevOps has reached that point at which it has become a 'meaningless buzzword'. Continuous delivery could be considered a preliminary stage to DevOps, which is really the change of paradigm.

"It's a culture, movement or practice that emphasises the collaboration and communication of both software developers and other IT professionals, while automating the process of software delivery and infrastructure changes. It creates a culture and environment where building, testing and releasing software can happen rapidly, frequently and more reliably."

TOOLS, TECHNIQUES AND PRACTICES HELP ACHIEVE OUTCOMES

The argument here is that DevOps practitioners have tools, techniques and practices to help achieve outcomes, and continuous delivery of customer value is the outcome businesses need.

Keith Watson, Director of DevOps at ADP UK, said: "Defining these outcomes in terms of an inspiring vision, as well as tool-agnostic policies, enables organisations to focus on re-engineering the release process to improve deployment frequency, while extending collaboration to include all stakeholders in the delivery of customer value.

"Companies that focus just on automation forget that building a software production line is more than automation and the introduction of tools; it requires a significant culture change to break down the silos between departments created by task specialisation."

DATA AND ENVIRONMENT PROBLEMS

Other DevOps challenges are due to the pressure of delivering shippable products continuously. Sudeep Chaterjee, Global Head of Testing and QA at Lombard Risk, said: "These deliverables are not actually shippable, but demoable. This means there is a lot of technical debt that accumulates, which then requires a lot of effort to merge these multiple demo builds into a final delivery.

"This is done at the end with final non-functional and regression cycles but sometimes, due to delivery pressure, a release goes out without taking into consideration the non-functional side, as long as features are working for operations teams; this results in performance, security and operability issues.

"It is important to know that Ops is not the only stakeholder for Dev in DevOps; there are varied stakeholders, particularly when it comes to non-functional requirements."

LogiGear's survey results show that since adopting DevOps, 60% of participants are finishing more tests, 32% of developers are conducting more tests, and nearly 50% of respondents using DevOps revealed that they have a lot of test data or environment problems.

"DevOps can be a big disruptor, bringing with it a new manner of working and a new set of tools. What most teams want is a smooth-running software development pipeline, and with DevOps that can take time," said Michael Hackett, Senior Vice-President at LogiGear.

Continuous delivery can be seen as an essential ingredient for teams doing iterative and incremental software delivery. It entails that every change to the system can be released, and that new versions can be promoted to production at the push of a button.

CONTINUOUS DELIVERY IS WHAT SOFTWARE TEAMS REALLY NEED

"At LogiGear we have stopped using the phrase DevOps, and now use continuous delivery instead. There are many reasons for this. First, continuous delivery eliminates the big issues that come to mind when thinking of the term DevOps. Second, it seems continuous delivery is what software teams really need. Our survey brings insight into the big issues and roadblocks surrounding continuous delivery adoption," added Hackett.

Surprisingly, the survey results showed that one of the groups that doesn't use DevOps has fewer issues, with 33% of respondents saying that their agile and scrum practices are going well despite not using the software delivery and development process.

According to LogiGear, automation seems to be the leading way of sharing information and data, although some participants said that their automated regression suites often run "ok" but have some "rough spots" and "false negatives" which can cause tests to fail.

"The aim of continuous delivery is to make releases dull and reliable, so that organisations can deliver frequently, at less risk, and get feedback faster from the end-users. I do believe in the power and efficiency gains that can be derived from development and operations working together. And thus, to make this happen, continuous delivery necessarily should be in place," added Marin.

AUTOMATION SEEMS TO BE THE LEADING WAY FOR SHARING DATA

Marin concluded that people prefer to talk about continuous delivery, rather than DevOps, because DevOps is the pure change of a paradigm, while continuous delivery is the milestone they have reasonably managed to achieve.

"Some organisations are capable of working in 'continuous delivery', but most of them are still far away from working purely on DevOps, with development and operations merged together," she added.

LogiGear's key survey findings conclude that teams that practise DevOps will continue to improve through communication, collaboration, information and training, and that automation appears to be the way to share information and data, although it can be troublesome.


USING AI TO RE-IMAGINE SOFTWARE QUALITY

Within a short decade, test automation has gone from record-and-playback to shaking hands with AI. In fact, software application testing has already harnessed advanced AI techniques for commercial use, says Jayashree Natarajan, Global Head, Quality Engineering and Transformation at Tata Consultancy Services

We have witnessed numerous instances when a glitch in an enterprise's IT system has not only brought its entire operation to a halt but impacted its top line and bottom line as well. In today's digital era, where customers unable to access promised services can voice their dissatisfaction over social media, the consequences of an IT application failure for an enterprise's brand can be detrimental, to say the least.

One wonders if, like countless system outages that have struck large enterprises over the decades, such failures were a case of some enterprise applications not having been rigorously tested. And one also longs to see what could be done, beyond the tools and techniques classically used, to prevent such episodes in the future. How about an artificial intelligence (AI)-powered engine, for instance, helping identify glitches in the enterprise's IT systems and enabling recovery in a matter of hours?

FROM TEST AUTOMATION TO AI-POWERED QA

Today, in enterprise Agile/DevOps journeys, the power of test automation to deliver business value is well recognised. Metrics in software QA typically focus on competitive advantage – shorter time-to-market, better brand protection, improved customer experience, risk mitigation and more – rather than on dollars, pounds or euros saved.

Within the relatively short period of a decade, test automation has gone from software engineer-driven record-and-playback, to tool-based automation, to enterprise-level tool-agnostic frameworks and platforms. It has taken mobility and Big Data in its stride, marching on to one-touch automation. Inroads are now being made into AI which, just a few years ago, some believed to be beyond reach.

The good news is two-pronged. First, in discussions with clients on their views regarding trying out AI-based testing on their IT applications, what we have been seeing is excitement – the sparkle in their eyes as we share with them the new possibilities that show promise. The other encouraging fact is that, viewed as a functional application of pattern recognition, AI techniques dovetail efficiently with test automation.

ADVANCED AI METHODS TESTED

During 2015-16, TCS began investing in AI for software application testing and subsequently built a solution, TCS 360 Degree Assurance. Applying advanced AI methods such as machine learning, natural language processing, artificial neural networks and linear regression, TCS 360 Degree Assurance offers problem analysis and root-cause identification, test-suite optimisation and defect-prediction capabilities.

In fact, we recently tested the solution in multiple real-life customer engagements. In one such engagement, the client wished to pinpoint the major reasons for their mobile customers' dissatisfaction. So, without studying our client's applications, we let TCS 360 Degree Assurance analyse just the mobile app production data. It solved the problem – and amazed our client.

HUMAN AND/OR AI?

In the real-life scenario we've just shared, had we used an expert test engineer's help, she would need deep knowledge of the application or the process domain. You would then need to give her all the data points for the analysis. She might look at ticket descriptions and, based on them, open the requisite application logs, searching for anomalies. In some cases, the engineer might also study some program code to see what it contained. In sum, a human engineer would analyse matters to the level visible to her; she would conduct a manual analysis or, at most, leverage automation to an extent.

Now, if iterations were needed to cover, say, tens of thousands of defects or ticket logs, and an AI engine were available, a human SME would need only to seed and support assisted learning. The engine would do the rest – parsing, graphical modelling, doing a mix-and-match with known problem-sets and, once a pattern was found, using it to determine the root cause. If no matching patterns existed, the engine would update its knowledge repository and use the augmented one for future analyses. The engine's speed and accuracy would be many orders of magnitude higher than that of an expert engineer.
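The loop described above — parse a ticket, match it against known problem-sets, and grow the repository when nothing matches — can be caricatured in a few lines. This is an illustrative stand-in only: the patterns and root causes are invented, and a crude token-overlap score stands in for the machine learning and NLP a real engine such as the one described would use:

```python
from collections import Counter

# Hypothetical knowledge repository of known problem patterns.
known_patterns = {
    "connection timeout contacting payment gateway": "network/firewall config",
    "null reference in checkout basket total": "regression in basket module",
}

def similarity(a, b):
    """Crude token-overlap score standing in for real NLP similarity."""
    ta, tb = Counter(a.lower().split()), Counter(b.lower().split())
    shared = sum((ta & tb).values())
    return shared / max(len(a.split()), len(b.split()))

def triage(ticket, threshold=0.5):
    """Return a root cause if a known pattern matches; otherwise learn."""
    best = max(known_patterns, key=lambda p: similarity(ticket, p))
    if similarity(ticket, best) >= threshold:
        return known_patterns[best]
    # No matching pattern: update the repository for future analyses.
    known_patterns[ticket] = "unclassified - needs SME seeding"
    return "unclassified - needs SME seeding"

print(triage("timeout contacting payment gateway on checkout"))
```

The point of the sketch is the shape of the loop, not the matching: iterate this over tens of thousands of tickets and the repository grows with every unmatched case, which is where the SME's seeding effort goes.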

WHERE SHOULD ENTERPRISES START?

Enterprises should look at streamlining their QA process and how they assimilate their quality footprint into their systems of QA records. This would typically include their ALM, production operations, and project planning and management systems. Since machine learning and other commonly used AI techniques like natural language processing depend heavily on data quality, this streamlining should be the starting point for deploying AI techniques. Subsequent investment could be made in employing data scientists, who would identify algorithmic use cases, transform data into information and arrive at meaningful and actionable insights.

Metrics in software QA typically focus on competitive advantage – shorter time-to-market, better brand protection, improved customer experience, risk mitigation and more – rather than on dollars, pounds or euros saved

AI-POWERED QA FOR TOMORROW

Numerous software QA scenarios today still demand considerable human intervention, so the role of AI has untapped potential. As more use cases are developed, AI will simplify problems in software testing and make decision-making more transparent, especially in large-scale computation. Humans will thus be able to improve their governance of testing, delivering better quality, offering superior experiences to end-users and helping them achieve their business objectives.

JAYASHREE NATARAJAN (JAY) GLOBAL HEAD OF QUALITY ENGINEERING AND TRANSFORMATION TATA CONSULTANCY SERVICES


ANTI-MONEY LAUNDERING AND TECHNOLOGY CHALLENGES

Krishnakumar Ranganathan, Vice President of Maveric Systems, warns that a robust IT approach needs to be embraced to deal with banking Anti-Money Laundering processes



Anti-Money Laundering (AML) and Know Your Customer (KYC) are becoming increasingly critical for leading banks, which need to align with regulatory requirements, both global and local, given the geo-political climate prevalent the world over. Because parameters change constantly, regulators enforce very stringent AML policies, procedures and operations, and always want to 'watch' banks' progress on these critical processes through each bank's own compliance function. These compliance functions become the bank's representatives to the regulators and carry the massive responsibility of protecting the bank's reputation should a potential fraud or laundering be detected in a country of its operations.

Banks have no option but to invest (data shows AML spend is only going up) in strong AML technology advancement, primarily in stronger screening methods and in transaction monitoring processes. Implementing stronger systems with advanced real-time market information/risk is not easy, given that AML covers the entire banking landscape and end-to-end process control, and every line function of the bank is accountable for aligning to the process laid out by compliance.

Given this complex scenario, some of the biggest questions the banking industry is trying to answer include:
1. What level of monitoring is required? Should it be excessive or optimal?
2. Where are the major gaps in implementing the process, and where is the leakage?
3. Why are regulatory consent order programmes never finished on time (90 per cent of the time)?
4. Why doesn't banks' spend on robust technology always translate into smoother AML commitments?

Analysing these challenges in banks' eco-systems (which include the regulators), some Root Cause Analyses (RCA) are listed below. These learnings are based on project experiences and interactions with various AML stakeholders, and have been validated through extensive project assessments using heat-map techniques across various AML programmes:
• Doing more is not doing smart – simply put, generating more alerts can lead to more case generation; it does not mean more SAR filing or an increase in fraud detection.
• Requirements for AML are poorly captured and not end-to-end in scope, with improper/inadequate definitions leading to massive downstream costs.
• Within the requirements (AML scenario definition) there is massive ambiguity at the DFD (data definition) level.
• Data mismatches can cause incorrect monitoring, leading to improper filtering and thereby missing major laundering transactions. 70 per cent of issues in AML programmes come from data-mapping issues, incorrect migrations, and deviations causing insufficiency in monitoring, leading to ineffective alert generation and a high volume of case generation.

Given that AML is mainly about data and less about functionality, a unique form of requirements validation, focusing primarily on data mapping and Critical Data Elements (CDE) with business-rules filtering, will save significant costs. A robust approach needs to be created to ensure that requirements, mainly data integrity, are met across the bank's IT eco-systems, from operations systems to AML monitoring systems and SAR systems, ensuring the right reporting to regulators.

Data-Requirement Validation (DRV) can be done in the following manner:
• Master referencing of the bank's product/transaction portfolio using a framework
• CDE checklist – preparing master data reference parameters for each business entity/transaction type, channel, etc.
• Data Mapping validation to ensure data mismatches across systems are compared and analysed for early gap identification
• Performing a detailed field-level mapping to find inconsistencies in data definitions, comparing Source Product Processors (PP) -> Data Warehouse (DWH) -> Alert Management Systems (AMS) -> Case Management System (CMS) across the entire life cycle
• Reviewing/validating the requirements/functional flow across the entire landscape, using a workbench to validate scenarios end-to-end until case generation

With a concrete up-front investment in DRV, banks can save up to 30-40 per cent of program costs and reduce operations costs and tech spend by around 20 per cent. This, complemented by a business rule-driven AML approach, scenarios covering all customer segments and LOBs, and a planned BAU change management system, should enable banks and their compliance functions to meet regulator commitments on time.

Implementing stronger systems with advanced real-time market information/ risk is not easy, given that AML covers the entire banking landscape

KRISHNAKUMAR RANGANATHAN
VICE PRESIDENT AND CLIENT PARTNER, MAVERIC SYSTEMS

Krishnakumar Ranganathan is Vice President and Client Partner at Maveric Systems. He is accountable for technology assurance engagement delivery, establishing client partnership road map strategy, and identifying and implementing "Step Change Initiatives" for key programs. He has been a key stakeholder in expanding Maveric's presence in the UK and North America.

T E S T M a g a z i n e | J ul y 2 01 7



TESTING IS BROKEN: HOW AI AND ANALYTICS CAN DELIVER TRUE TEST AUTOMATION
Antony Edwards argues that testing is broken, addressing why organisations struggle with the process

The testing process is broken. Google Play and the App Store report that 80 per cent of downloaded apps are only used once, and 96 per cent are not used after the first month. Recent research from PAC on Digital Testing reports that only 18 per cent of testing teams claim to have a test strategy that they believe will meet their quality


aspirations (though 86 per cent are meeting their quality objectives which is interesting). Clearly teams are not set-up to deliver apps that delight users. Many teams blame the fickleness of users for these stats. Low adoption and satisfaction are said to be the result of users constantly downloading apps they never really wanted



just to have a look. But this is a comfortable myth. The truth is the top three reasons for users deleting apps are crashes/freezes, slow performance, and poor usability: all things we should be able to catch in testing. But why are organisations struggling with testing, and what can be done to improve the process? Fundamentally it comes down to coverage and productivity.

MORE WITH LESS
In organisations across all industries, business leaders are expected to achieve more with less. However, many organisations (50-80 per cent) are still doing their testing manually. Even those who have 'automated' their testing have only automated test execution; very few companies have any automation when it comes to creating test cases, defining test scripts, setting up test environments, setting up test runs, or reviewing test results. Productivity is a challenge. And in testing, any productivity challenge quickly becomes a coverage challenge.

Artificial Intelligence (AI) and analytics technology, however, present an opportunity to revolutionise testing and bring about a step change in productivity and coverage; this will ultimately lead to increased user satisfaction, conversion, and retention. AI and analytics allow us to automate more of the testing process using 'automation intelligence'. Yet many businesses are still unaware of the benefits that digital testing can bring to the success of their products. Businesses note security and user experience as their top 'quality' issues, but less than 4 per cent of total testing effort is being spent in these areas.

Now, businesses are beginning to consider AI and analytics for more than just testing and reporting. AI and analytics strategies need to address the entire user experience and develop workflows whereby businesses can predict what is likely to happen before it does. IoT and digital outages are high profile when they occur, and brands cannot afford to fail. By automating testing processes, organisations are recognising the following benefits:

1. Reducing time-to-market
Alongside delivering an improved user and digital experience, automation intelligence is also a great time saver. Instead of manually completing testing processes, automation intelligence and analytics can accelerate the process, reducing the need for employees to complete testing by hand.

2. People re-allocation
Automation isn't set to replace testers: it is set to make their jobs more interesting. Automation is most successful when it augments humans and takes away tedious and repetitive tasks so that people can focus on higher value opportunities. Even when companies achieve full test automation, testers will always be needed to set the objectives and parameters surrounding automation, to guide the learning associated with artificial intelligence, and to add the intent and the human interpretation to testing.

3. Improving visibility
Embedding automation and analytics into the testing experience will not only allow business leaders to fully understand the complete user experience, but will also make the testing process much more accessible to other members of the business, even if they are not experts in performance testing.

4. Identifying issues faster
When issues arise in the testing process, the problem needs to be identified and rectified as quickly as possible. To ensure that issues are noticed quickly and not missed, automation intelligence and analytics can identify patterns and show when a problem has occurred. In addition, digital testing can recommend how to fix the problem by drawing on information from past issues.

5. New predictive analytics and recommendations
Automation is helping companies take their knowledge of the relationship between changes to the product and its quality, as well as the relationship between quality and user satisfaction, and use this both to predict the impact of a change and to propose changes that the algorithms believe will increase user satisfaction.

Over time, the transformative impact of AI and analytics is likely to go even further and fundamentally change what testing is all about. Testing will move closer to the user. Through predictive analytics, testing will move closer to product design, and by learning the relationship between technical behaviours and user satisfaction and retention, testing will move closer to revenue. With all this in mind, we're perhaps only a few years off testing becoming a revenue-generating profit centre, moving from a murky overhead to a driving force of customer conversion and retention.

In organisations across all industries, business leaders are expected to achieve more with less. However, lots of organisations are still doing their testing processes manually

ANTONY EDWARDS
CTO, TESTPLANT

Antony Edwards is the CTO at TestPlant, a leader in intelligent test automation. Antony studied computer engineering at the University of New South Wales, Australia. He worked as a developer in Sydney before joining IBM Research in New York. Moving to London, Antony joined mobile operating system builder Symbian, moving from system architecture to eventually become a VP and member of the executive team. More recently he held the position of CTO with a major US online entertainment company.




DEVELOPING DEPENDABLE DEVICES
Frederik Van Slycken, Device Security Expert at Intelligent Systems/Altran and contributor to the prpl Foundation, focuses on how to develop a quality, secure IoT device

In developing any IoT device, security is paramount for the simple reason that once deployed there is no control over its environment. If it's a consumer device, it can't be put behind a firewall or an intrusion detection system or fenced off from harmful devices, and usually it can't be monitored. Therefore, the device itself needs to be built to withstand attacks, all on its own.

SECURITY BY DESIGN
With every connected device, the basis of security is laid at design time. Depending on the requirements and the threat model, communications should be encrypted and authenticated. Though it seems a popular practice, development backdoors or default passwords must be avoided in order to keep the integrity of security in the device. In addition, all debugging techniques (JTAG, serial interfaces, etc.) should be disabled, or they will eventually get discovered and abused.
Another rule of thumb is that devices need to be upgradeable, because some security issues will pop up, regardless of efforts to avoid them. These upgrades need to be secure, to prevent attackers from installing their own firmware containing a backdoor that enables them to take over the device. If possible, both the upgrade mechanism and the whole boot procedure need to be secure, in order to verify that only trusted code is run, so that a temporary vulnerability of the upgrade mechanism cannot be made persistent. And, above all, these upgrades need to happen automatically, because users can't be relied on to keep track of updates for all their devices.

DEVELOPERS MAKE HACKERS' LIVES HARDER
There are a number of techniques that can be applied to help build a stronger device that's even harder to abuse. While nothing can make it entirely secure if an attacker with the budget and technical capability of a nation state wants to get in, developers can make it harder for the script kiddie next door, or the for-profit hacker, to abuse your devices.
More important than any technique or tool is developer mindset. It's pointless to inflict quality processes, rules and tools onto a developer when he doesn't care about a secure device or high quality code. Only when the developers really care about building a high-quality product can secure connected devices be achieved. With that mindset, developers themselves will propose new checks to implement, instead of being forced to serve the tools.

SECURING THE CODE
To help developers improve security, a set of coding guidelines could be applied. These guidelines prevent anyone from (ab)using the more dangerous parts of the programming language. One example of a rule for C and C++ is that the right-hand side of the && or || operator shall not contain side effects (MISRA rule 33). Due to short-circuit rules, this right-hand side may or may not be executed, and it's very easy to misunderstand what's going to happen. It's



often easier to simply say "don't do this". However, people are spectacularly bad at consistently applying or remembering such rules. Eventually, a new developer will be unaware of the rule, or someone who has been on the project for years will forget about it. Seeing this particular idiom pop up in the code, other developers may also start applying it. But certain other developers won't notice and will misinterpret the code. To keep this from becoming problematic, every rule adopted should get checked, and it should get checked automatically. A rule that isn't checked automatically is worse than useless.

TOOLS TO HELP A powerful technique that is rarely highlighted is symbolic execution. When working in C, a tool like KLEE could be used. With KLEE, symbolic execution can be viewed as automatic test case generation: it is told which variables to consider symbolic, and then generates values for these variables to exercise all code paths. Of course, the tool can't know what the expected behaviour is under a certain set of inputs, but that's not what it's looking for. Essentially, the tool is trying "all" possible inputs for the symbolic variables, looking for inputs that cause the program to crash, for example: segmentation fault, or out-of-bounds access (with address sanitizer). Naturally, there are limitations to what can be achieved with this: it’s not feasible to just let this loose on the entire codebase and expect results. At every statement involving a symbolic variable, the program can fork, so there is potential for huge scaling issues. This is similar to "regular" testing; correct behaviour of the complete program can’t be tested on all possible inputs, so unit testing (at a level where it is feasible to test for all relevant inputs) as well as functional testing is essential. Symbolic execution should be approached in a similar fashion: testing at a lower level, where the number of parameters is limited, so it is still possible to test all possibilities. This is the only way to avoid path explosion. Indeed, there are many other tools and techniques that can help, some of which deserve a whole article of their own, like valgrind/address sanitizer, undefined behavior sanitizer, and any of a list of static analysis tools like clang static analyzer, cppcheck, Coverity Scan, Flawfinder, or RATS.

IMPORTANT CONSIDERATIONS It is clear there are a lot of tools and techniques that can help improve the quality and security of IoT devices; however, there are a few important considerations in using tools like this. The first is that developers shouldn't blindly trust the tools to work properly, and to keep working properly. Tools themselves are updated and changed so some script might get modified, have a wrapper written around it, or the Jenkins configuration could get changed, and maybe the tool doesn’t detect the errors it once did. Or the tool still detects them, but the result doesn't ripple through the whole system of scripts and wrappers to cause an alarm to go off. Therefore, from time to time developers should check whether all tools are still detecting what they should be, and whether it gets brought to their attention the right way. Another crucial aspect is to realise that tools aren't Holy Scripture. Developer rules should not be set in stone. The guidelines and tooling should help achieve good quality, safe, secure code, but developers should not be slaves to their rules. An exception to a rule should be possible with proper justification, and the steps to take for this should not be so cumbersome as to prevent it from happening. When a developer says he needs to rewrite the code in an unreadable, unmaintainable way in order to comply with a certain rule, but the procedure for getting an exception to that rule is so long and difficult he'd rather not do it, something has gone horribly wrong.

It's pointless to inflict quality processes, rules and tools onto a developer when he doesn't care about a secure device or high quality code. Only when the developers really care about building a high-quality product can secure connected devices be achieved

TAKEAWAY
There are a variety of tools to help build a more secure IoT device. Developers should set up a continuous integration system like Jenkins, add some analysis tools, add some unit tests and functional tests, and work from there. If these are already in use, select a coding guideline, add more stringent analysis tools and consider advanced techniques like symbolic execution or fuzzing. However, first and foremost, a project depends on having developers who really care about quality and security. Without that, there is no way to achieve a secure IoT device.

FREDERIK VAN SLYCKEN
DEVICE SECURITY EXPERT, INTELLIGENT SYSTEMS/ALTRAN

Frederik sees the impact of insecure connected devices on our lives and wants to change it. To him, paranoia is a virtue, and the only sane response to the world around us. He is the Device Security Expert at Intelligent Systems/Altran Belgium, and provides security expertise for all embedded projects, both internally and for customers.




MANUFACTURING A BETTER APPROACH TO APPLICATION SECURITY
Colin Domoney, Senior Product Innovation Manager at Veracode, looks at what the manufacturing industry is doing well in terms of appsec and what other industries can learn from it

Education around the threat that poor application security poses is often cited as one of the greater barriers to introducing comprehensive security processes. However, even those companies well informed on the risk don't find themselves on easy street, facing the further challenge of defining what great application security even looks like. With no industry standard nor independent benchmark around what an acceptable security flaw density is, which criticality of defects is acceptable, or what remediation timeframe is adequate, even the best security and development teams can find themselves falling short.


Yet application security has never been more important, and the threat is continuing to grow in scale and sophistication. The Verizon Enterprise 2016 Data Breach Investigations Report found that web application attacks accounted for more than 40 per cent of incidents that resulted in a data breach, and application vulnerabilities were the single biggest source of data loss over the previous year. And it seems that no industry is spared.



A CUT ABOVE THE REST
While no industry is spared from the threat, some are taking substantially greater steps towards remediating the risk in their organisation. Recent research from Veracode, The State of Software Security 2016, volume 7, drawn from code-level analysis of billions of lines of code, revealed that the manufacturing industry has successfully positioned itself at the head of the pack. Matching the financial services industry in the first-time pass rate of its applications against the OWASP top 10 (the widely accepted standard for application security), the manufacturing industry also tops the table on vulnerability fix rate, by a two-to-one ratio over the worst performer, healthcare.
The result of this approach on the quality of their applications is clear. Across a number of high-profile vulnerabilities that Veracode scrutinised as part of its study, manufacturing was found to have a significantly lower prevalence than the next best performing industry. For cryptographic issues, for instance, government trailed manufacturing by a 33.2 per cent difference (56.1 per cent compared to 22.9 per cent); while for cross-site scripting, manufacturing led all other industries by more than 20 per cent.

DOING IT RIGHT?
With no benchmark per se, learning from the best practice of successful cases must play an important role in improving industry-wide application security. And as a clear leader, businesses should be looking to the manufacturing industry to find out what it is doing right and how these techniques could be replicated across other industries. So, what works for the manufacturing industry?

MAKE THE PROCESS REPEATABLE
With the industry's Henry Ford heritage, it is perhaps little surprise that many manufacturing companies have built exemplary application security programmes that incorporate the key characteristics of an efficient assembly line. Many of the principles that helped manufacturing become more efficient in the twentieth century, such as Total Quality Management and Systems Thinking, are absolutely key to how DevOps is being adopted in 2017. Engineers such as W Edwards Deming, born at the start of the last century, are still influencing how we think about producing software, more quickly and

With the Internet of Things, 3D printing – let alone geopolitical issues like Brexit affecting international procurement – introducing new disruptive challenges, the manufacturing industry must currently innovate or face the consequences

Prevalence of selected high-profile vulnerabilities by industry vertical

VERTICAL              CROSS-SITE   SQL        CRYPTOGRAPHIC  CREDENTIALS
                      SCRIPTING    INJECTION  ISSUES         MANAGEMENT
Manufacturing         20.4%        13.9%      22.9%          14.6%
Healthcare            45.4%        28.4%      72.9%          47.7%
Government            68.5%        40.0%      56.1%          30.8%
Financial Services    51.0%        31.3%      62.0%          41.2%
Retail & Hospitality  45.9%        33.5%      68.7%          38.6%
Technology            51.1%        32.4%      67.5%          46.6%
Other                 43.4%        27.6%      64.0%          44.2%

COLIN DOMONEY
SENIOR PRODUCT INNOVATION MANAGER, VERACODE

Colin has over 20 years of development and security expertise. His most recent experience before Veracode was as the technical expert leading a large-scale application security programme in a large multinational investment bank. Colin is now a Consultant Solution Architect at Veracode, helping to evangelise Application Security and the securing of DevOps and works closely with Veracode’s largest customers to help them secure their software estate.

As we can see in the table above, government organizations are still most likely to have both SQLi and XSS vulnerabilities in their code, a repeat from 2015. One thing we'd like to draw readers' attention to is that healthcare organizations are most likely to be struck by cryptographic and credentials management vulnerabilities.




In 2015, when comparing OWASP top 10 policy compliance in The State of Software Security, Veracode found that internally developed applications had a better pass rate than those developed by a commercial third party


more securely today. With the Internet of Things, 3D printing – let alone geopolitical issues like Brexit affecting international procurement – introducing new disruptive challenges, the manufacturing industry must currently innovate or face the consequences. As a result, it is crucial that any processes that are built into the company are able to not only withstand the changes that many organisations are going through now, but also whatever the industry throws at them next. Creating a comprehensive, repeatable programme that can be applied across different and evolving business processes, has enabled many manufacturing organisations to make application security a constant, no matter what other changes are happening in the business.

MAKE IT CREDIBLE
The manufacturing industry is no stranger to standards and regulations dictating best practice, and ensuring it is rolled out across the entire company and its associated partners. This has enabled many organisations to create a mandate that drives regular, thorough testing. Thanks to the governance and controls that this industry must put in place, manufacturers have also been able to enforce the mandate anywhere they have an application – both in their own company and with their suppliers.
While it is often assumed that commercially developed applications are built as securely as – if not more securely than – those developed internally, there is a surprising disparity in initial pass rates. In 2015, when comparing OWASP top 10 policy compliance in The State of Software Security, Veracode found that internally developed applications had a better pass rate than those developed by a commercial third party (37 per cent versus 28 per cent). It is, therefore, essential that organisations that incorporate commercially developed applications are able to enforce security standards on external organisations through third-party application testing.
However, enforcing a mandate need not aggravate or put greater pressure on suppliers. A number of manufacturing companies that Veracode works with have used this mandate as a positive force in these organisations' application security processes – even paying the third-party licence fee to enable them to become compliant with their company.

MAKE IT BEST PRACTICE Building best practice strategies into a company with multiple development teams can in itself be difficult, let alone tying that to the exemplary activity of an entire industry. One manufacturing company that Veracode works with looked at how it could build out best practice in its own company through creating a 'golden example' that the other teams could benchmark against. In this one case, the company employed Veracode to work closely with the development team that was building a particularly high-profile application for a major sporting event. The activity was then turned into an internal case study that demonstrated to the rest of the company how the application could be built securely and still launched on time. Ultimately, this helped this company create its own benchmark for producing secure applications on schedule.

TIME TO MAKE IT HAPPEN The manufacturing industry is reaping the benefits of its investment in best practice secure application development – the numbers speak for themselves. But there is no magic, nor mystery, that has helped the manufacturing industry top the table for secure applications. Many organisations across the manufacturing industry are successfully creating and enforcing processes that enable the company, and its suppliers, to not only produce secure applications today, but are also future proofed against the evolution of the industry. There’s a long way to go before we will achieve our mission of securing the world’s software. But as we continue to drive accountability and standards for secure application development, organisations across all industries can look to the best practice of the manufacturing industry to ensure they’re heading down the right path.





YIELDING BETTER MOBILE
When reducing and preventing defects becomes the responsibility of everyone, high quality software follows, according to Head of Mobile at Future Platforms, Dougie Hoskins. He shares his insight on integrating testing into the development process, arguing it can be an effective way to deliver complicated mobile apps



Mobile adoption today is widespread, and brands need to make sure they are accessible to consumers on all devices and platforms. The marketplace is now dominated by Apple and Google, with 99.6 per cent of new smartphones running on Android or iOS, so building for these two operating systems is the bare minimum. This is a particularly important point when considering different audience groups. While Android's worldwide market share currently exceeds 81 per cent (iOS 17.9 per cent), the split is far closer to 50-50 in several key markets including the UK, USA, Australia and Japan.
For many clients this means we are tasked with delivering a suite of apps that have equivalent functionality across a wide range of mobile devices, which differ in screen size and capability. Here, I will explain how our development and testing teams work in tandem to tackle this challenge and maintain a consistent standard of quality, using a combination of automated and manual testing techniques that ensure quality assurance is the whole team's responsibility.

CROSS-PLATFORM Central to our development approach is our cross-platform app framework, Kirin. Mobile development teams often look to such toolkits to streamline their process and save duplicating effort, particularly across iOS and Android. However, many cross-platform approaches have traditionally suffered from a poor developer experience, with a smooth and polished user experience being difficult to achieve. Kirin is different. In our approach the app’s business logic is the only shared code. The UI is left to the standard, platform-specific native tools, which provides a multitude of benefits. As much app business logic as possible resides within the shared layer. This reduces development effort, guarantees equivalent app functionality across platforms, and minimises the maintenance burden, with any changes and bug-fixes only needing to be done once.

SEPARATION OF CONCERNS The most beneficial side-effect of this approach when it comes to testing is the enforced separation of UI from the rest of the app. Separation of concerns is a wellestablished best practice for development, but as projects and teams increase in size over time, clean separation of UI and business logic can take an increasing amount of discipline to enforce. With Kirin it simply happens, and there is no way to call shared network code from within your UI, even if you wanted to. Shared code can therefore be run quickly and easily on development machines and continuous integration, beyond the context of a device. A comprehensive suite of unit and integration tests for your core codebase can be written using standard testing frameworks. In my opinion, this separation, and the ability to run core code natively on a development machine, is the single most important thing a mobile development team can do to ensure testability of their codebase. Our automated test suites span across all tiers of the testing pyramid. At the bottom, practicing test-driven development helps ensure the app is composed of modular building blocks, testable in isolation. And as we have just seen, when the core app code is written using the Kirin framework, it is much easier to write tests for the middle layer. So what of the top of the pyramid? There is a broad landscape of tools to choose from when creating automated UI tests for mobile. We have found that the modern native tooling, XCUITest on iOS and Espresso on Android, provide the best developer experience for UI testing. As well as being fast and robust, direct integration to Xcode and Android Studio means there is no complex setup and configuration, and developers can run them easily and quickly.

As projects and teams increase in size over time, clean separation of UI and business logic can take an increasing amount of discipline to enforce

DOUGIE HOSKINS
HEAD OF MOBILE, FUTURE PLATFORMS

Head of Mobile at Future Platforms, Douglas Hoskins has spent his professional career at the very heart of the industry's growth, having spent several years at Nokia before joining Future Platforms in 2007. Douglas works with the technical team to drive the direction of the company's mobile strategy, while helping its team of developers make the most of their potential across all mobile platforms.

TEST HARNESS
When building a suite of automated tests, it's really important to ensure that they are both fast and repeatable. If tests take too long to run, developers may not bother to run them as often as they should. And if tests depend on factors such as time of day, or server state, they are likely to fail when nothing has changed. Flaky tests such as these will undermine your team's confidence in the test suite.

T E S T M a g a z i n e | J ul y 2 01 7



It also gives us the ability to verify that the correct network communication is always happening for each user journey, and no stray API calls make their way in.
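That kind of "no stray API calls" check can be sketched with a recording test double. This is plain Python with invented names, not the team's actual tooling:

```python
# Illustrative sketch: a recording fake that captures every API call a user
# journey makes, so a test can assert nothing unexpected slipped in.
class RecordingApi:
    def __init__(self):
        self.calls = []

    def get(self, path):
        self.calls.append(path)  # remember the request
        return {}                # canned empty response


def login_journey(api):
    """A hypothetical user journey driven against the fake API."""
    api.get("/session")
    api.get("/profile")


api = RecordingApi()
login_journey(api)

# The journey must make exactly these calls, in order, and nothing else.
assert api.calls == ["/session", "/profile"]
```

If a refactor quietly adds an extra request to the journey, the equality check fails immediately.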

SMARTER QA



These issues are especially prevalent in products that rely heavily on server communication, such as apps that must download frequently changing information: product prices, item and stock availability, traffic status, or weather conditions. To address this, each of our automated test suites has the ability to mock all network communication, effectively bundling each test with a tiny server. This can then simulate real-life situations and conditions at any moment, such as a currently closed retail location, flash sales where certain items receive far higher demand, exclusive offers, deals, and a range of other rarely seen and otherwise hard-to-test circumstances.
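The "tiny server bundled with each test" idea can be sketched with nothing but the standard library. The endpoint and payload below are invented for illustration; real suites would use platform-specific mocking:

```python
# Illustrative per-test mock server using only the standard library.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Canned responses simulating a hard-to-reproduce condition: a closed store.
CANNED = {"/store/status": {"open": False, "reason": "flash sale sold out"}}


class MockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(CANNED.get(self.path, {})).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet


# Port 0 lets the OS pick a free port, so parallel tests never collide.
server = HTTPServer(("127.0.0.1", 0), MockHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The app under test would be pointed at this URL instead of the live backend.
port = server.server_address[1]
status = json.loads(urlopen(f"http://127.0.0.1:{port}/store/status").read())
assert status["open"] is False

server.shutdown()
```

Each test can swap in its own `CANNED` table, which is how conditions like flash sales or outages become trivially repeatable.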

The upshot of all this is that our testing team get to spend their time more productively. They are free to put more effort into exploratory testing, devising fiendish new ways to put apps through their paces rather than repeatedly running through manual regression plans. Developers often consult testers for advice on edge cases to consider in their automated tests, while testers liaise closely with developers to ensure effort is concentrated on the riskiest areas of the app. When defects are discovered, they are reviewed to identify how development practices can be improved as a result. Manual testing effort is still required on some parts of apps where automation is difficult, such as push notification handling.

In addition to all of this, we like to keep a close eye on state-of-the-art testing tools and techniques. Facebook's open source snapshot test framework for iOS has provided great benefits; it offers a way to do test-driven development for your app's UI, something which has traditionally been difficult. When building a new UI component, we can easily and quickly test each permutation of its appearance. Fastlane is an essential tool for modern iOS and Android development. It greatly simplifies processes such as app build and submission, and running tests on a continuous integration server. Additionally, it provides the ability to take automated screenshots of your app. This is extremely convenient for visualising user journeys through your app at a glance, useful for developers, testers and designers.

Integrating testing into the development process is, in our experience, the most effective way to deliver complicated mobile apps. Put simply, when reducing and preventing defects becomes everybody's responsibility, high-quality software follows.



FROM EUROPE TO KOREA:

A CASE STUDY OF GLOBALIZED CROWDTESTING

Emanuel Karlen gives an inside account of globe-trotting Crowdtesting


We gather in a meeting room with the QA team, which is thrilled to discuss a new project with a Korean client. The company has a mobile app in need of user acceptance testing, and to ensure that it works faultlessly in its target markets the client is eager to flush out tricky bugs. Crowdtesting is the trending QA solution for this kind of customer need.

Crowdtesting is often based on a large volume of users and can even be automated across thousands of devices. However, experienced IT companies know these kinds of services may not deliver the kind of results needed to find and fix those unexpected, but ever so irritating, bugs. This is why it is common practice to offer crowdtesting as a complement to lab testing.

User acceptance testing is worthwhile. After months of development, a bug can have a devastating effect on early adopters and totally ruin the reputation of an application. Therefore, the Korean company is interested today in finding a collaborator that specialises in letting highly qualified workers explore every dark corner of the code, where the deadliest bugs dwell.

TARGET DEVICES

Back in the office, the leader of the conference call is a dedicated QA project manager, who has been the SPOC (single point of contact) ever since the client decided to open the door to a trusted, but nevertheless foreign, third party. He has done his research and gives the client an assertive presentation on the suggested steps forward. The first and most important piece of research is finding out what kind of devices the client's target group is using, so the spectrum of test devices can be matched to it. Due to the high fragmentation of the mobile market, target group user statistics are a crucial factor when selecting the test devices.

During the presentation, questions arise regarding who the actual people testing the application will be. The client's requirements are matched with the localised testing capabilities of the crowd, and the project is scheduled within a matter of weeks to stay on track for the anticipated release. During these weeks, the project manager passes the ball to the Crowd manager: the spider in the web, in charge of sending out missions to a community of qualified testers. The concept of missions is a smart approach, since it attracts young and enthusiastic people from across the world.

EMANUEL KARLEN

EAGER TESTER

What do they have in common? A passion for challenges and, as in any game, for accomplishing missions. Highly competitive tests are carried out to make sure only the brightest students and the most experienced workers are considered for localisation testing. Competition is fierce, and each mission is quickly snatched by the best prepared, most capable and most eager tester. The target group for the Korean client is quickly summoned by the Crowd manager who, similarly to a coach of a national sports team, communicates directly with the testers in Korea. The testing stars come from different backgrounds and industries, resulting in a diversity of QA experience. Crowdtesting has become a must for

Emanuel Karlen is a Swedish native who is currently working in the international marketing team at StarDust, which has been aiming to innovate the QA business by creating a network of professional crowd testers (We Are Testers) to complement lab expertise. Emanuel is also part of the European Institute of Innovation and Technology. Tech has been a part of his life since he was a teenager and finding new ways to apply tech is his main interest. He is particularly interested in how tech can help maintain a healthy lifestyle.


all international companies that are looking to keep their reputation for delivering high-quality services intact. Only local, real users have the ability to find particular language errors, cultural issues and bugs that appear when an app is used in combination with other local services, for example making transactions using a specific bank account incorporated into an app or website.

Adapting an app to different localisations can be a tricky process. Sentences need to be rewritten, and this has an impact on the layout: a Korean sign may correspond to a very long word in German. Since screen size is extremely limited on mobile devices, translations need to be thoroughly tested to avoid embarrassing layout issues. When translating, there are also cultural aspects to consider, for example how customers should be addressed.

In Sweden, there is no distinction when addressing a customer or a friend: everyone is on the same level. In German culture, one would be frowned upon if no such distinction is made. Another common bug found when doing localisation testing is currency errors. Users may lose all trust in a company when different currency signs are displayed during the purchasing process. And statistics say the Crowd does find bugs in localised adaptations: StarDust reports on average between 80 and 140 bugs for each campaign. So why should the master website be of higher quality than the localised versions, when the company is likely to attract customers globally?
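Checks like these are cheap to automate. The sketch below is an invented illustration, not StarDust tooling: a character-budget check that catches German text expansion, and a locale-to-currency check for the purchasing flow.

```python
# Hedged sketch: two simple localisation checks a campaign might automate.
# Translations, budgets and locale tables below are invented for illustration.

TRANSLATIONS = {
    "ko": {"checkout": "결제"},             # short Korean sign
    "de": {"checkout": "Zur Kasse gehen"},  # much longer German phrase
}
MAX_BUTTON_CHARS = 12  # assumed layout budget for a narrow mobile button


def overflowing_labels(lang):
    """Return labels whose translation blows the layout budget."""
    return [key for key, text in TRANSLATIONS[lang].items()
            if len(text) > MAX_BUTTON_CHARS]


# The Korean sign fits; the German equivalent overflows the button.
assert overflowing_labels("ko") == []
assert overflowing_labels("de") == ["checkout"]

# Currency check: the symbol shown must match the storefront's locale.
EXPECTED_CURRENCY = {"ko": "₩", "de": "€", "sv": "kr"}


def price_label(lang, amount):
    return f"{amount} {EXPECTED_CURRENCY[lang]}"


assert "€" in price_label("de", 10)
```

A real campaign would measure rendered pixel widths rather than characters, but the principle of asserting against a per-locale budget is the same.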

APPS SPEAK LANGUAGES

The client is convinced: the quality assurance of localisations deserves as much attention as that of the master version. When the master is tested by professionals, it makes sense to apply the same approach to the QA of local versions.

Looking at empirical research regarding the organisation of crowdtesting, we find a set of known issues: offering fair incentives and payment to testers, keeping reliability high among diverse testers, and knowing exactly what kind of hardware, OS and preferences are used on testing devices. These issues are common for larger organisations that outsource test campaigns to an unfamiliar crowd. That is why it is important to take the time to headhunt new testers and make sure all necessary data is collected, stored and frequently updated. This way, clients can rely on the quality of test results and access all the information they need to throw themselves into fixing the critical points.

This case study is truly a phenomenon of a globalised world, where apps speak your native language even when east meets west. It is also a case of agile development at its best. While the development team is resting, their application is being tested in the lab on the other side of the globe. When they get back to work in the morning, there will surely be a list of bugs to fix before it is time to put their efforts to the test once again.



BEARING THE BLAME: WHO'S RESPONSIBLE WHEN IOT GETS HACKED? An invasion of privacy and an insecure channel for exploiting the individual are among the accusations levelled at the Internet of Things (IoT), and with some justification, argues Ken Munro, Partner, Pen Test Partners

An invasion of privacy, an insecure channel for exploiting the individual, an unnecessary form of automation, a super-gateway for powering botnets... All of these accusations have been levelled at the Internet of Things (IoT), and with some justification. The ludicrously poor security of these devices has laid them wide open to attack.

We have seen data sent in the clear, rather than via SSL, allowing an attacker to intercept communications sent from the device to the cloud-based service. We've seen easily hackable online user accounts, allowing the attacker to enumerate passwords using the forgotten-password feature. We've even seen websites that allow user account deletion without the need for authentication. Yet adoption continues apace and no one, it seems, has stopped to think


who is ultimately responsible in the event of an attack orchestrated over the IoT.

The massive Distributed Denial of Service (DDoS) attacks carried out against Brian Krebs, OVH and Dyn last autumn served as a wake-up call to an industry that continues to ignore warnings. Security researchers had long been predicting the potential for the computing resource of the IoT to be hijacked and used for harm. In the case of the DDoS attacks, Mirai malware (originally developed to attack DVR devices) was used to enslave other devices via the dated and seldom-used Telnet protocol. This resulted in a botnet capable of launching attacks that peaked at over 1Tbps. The publication of the Mirai code has now reduced the threat of attacks using this vector (the proliferation of botnets all seeking to use a finite resource has a self-limiting effect), but the incident has served to illuminate the difficulty of attribution and retribution.

SLOPPY SECURITY

While Mirai and the attackers behind it (who, interestingly, seem to have been DDoS mitigation service providers) were clearly to blame, the security mechanisms used by these devices also drew criticism. In fact, if you look at Mirai itself, it carried out perfectly legitimate actions to access each device. It was the use of the redundant Telnet protocol, and of default passwords, that was seen as sloppy security practice. Even in those cases where device passwords are routinely changed, it's often possible to find these online. For instance,



CCTV log-in credentials are often shared by installers, and the long supply chains associated with the IoT are in themselves a weak spot enabling data leakage or device tampering.

So should manufacturers be held to account? IoT devices differ from their dumb counterparts in that they can be updated with Over-The-Air (OTA) updates from the manufacturer. This can effectively extend the lifespan of the product, but it also means that responsibility for the product no longer extends simply to a twelve-month warranty period. Instead, manufacturers face a responsibility for securing each device for years, or at least until they decide to no longer support the device and declare its end-of-life.

The landscape changes. New vulnerabilities are exposed in protocols, frameworks and hardware all the time, so the manufacturer will have to patch the device by issuing an OTA update, or, if the integrity of the device itself is in danger, a product recall may be in order. That's a costly undertaking. But a product recall also has other implications. By announcing a recall, the manufacturer could be said to have legally accepted that their devices are part of the problem, and this could leave them open to litigation.

ANGRY MOB

The potential for a manufacturer to be sued is increasingly likely, due not least to consumer angst. Avoidance tactics, such as those used by VTech, which altered its Ts and Cs to try to avoid being held to account after a data breach a couple of years back, have seen consumers begin to question the trust they place in manufacturers. In the EU and in the US, consumer lobbyists are now taking on the toy industry, filing complaints to the relevant national authorities on what seem to be obvious breaches of several consumer laws, including abuse of privacy. Vivid Imaginations Toy Group's My Friend Cayla interactive doll has a litany of issues, including the ability for an attacker to easily intercept and join in communications between a child and the toy. That means any random stranger within Bluetooth range of a child playing with the toy can interact with the child.

If such actions are successful they could well pave the way for lawsuits, and this in turn will see manufacturers seeking recompense from their partners, such as software providers. If the developer has failed to observe tried and tested security best practice, there's little doubt they too will be held to account.

But what about the communications layer? Could even network carriers be implicated? In the case of the Mirai attacks, is the manufacturer to blame for the malware taking out some Domain Name Servers? What if such a botnet were to take out major social networks? Where does the device end and the network begin, particularly if the issue is with the protocol and not the IoT device as such?

UNCHARTED WATERS

Legally, we're in uncharted waters and the authorities can't draft legislation fast enough. In the UK, we've seen the Investigatory Powers Bill and the Digital Economy Bill. The former will compel all ISPs to keep a record of online activities and which services devices are connected to, while the latter is paving the way for web blocking. In addition, the General Data Protection Regulation, due to be adopted in May 2018, will increase the powers of the individual to access data and file complaints. Across the pond, amendments to Rule 41, a statute that regulates search and seizure for the US Department of Justice, grant the authorities the power to seize computing equipment and user data. This is expected to see a clampdown on the use of anonymising software and will also empower the authorities to seize IoT equipment.

In light of these events, manufacturers may want to take another look at their cyber liability insurance policies. Do these cover an IoT compromise? How far does the policy extend? For now, I suspect claims relating to IoT and DDoS will be far beyond the capacity of today's insurance markets, and that leaves manufacturers facing a stark choice: address security issues now, or be prepared for some potentially ruinous lawsuits further down the line.

The pressure from consumers and the authorities alike means that time is running out for IoT vendors. For too long there's been a tendency to grab market share at any cost, but there's now a real impetus for the industry to self-regulate. Unless manufacturers step up and embrace standardisation we will see more of the types of attack launched back in September. Except next time they might not harness IoT resource to attack a web provider; they might take down critical national infrastructure or even wipe out the internet of an entire country.


KEN MUNRO PARTNER PEN TEST PARTNERS

Ken is passionate about empowering the user and blowing away the fear, uncertainty and doubt (FUD) peddled by security vendors. A successful entrepreneur and a founder and partner in Pen Test Partners, he's also on the executive steering board for the IoT Security Foundation, which aims to promote security and improve standards in the market. Ken has been in the infosecurity business for over 15 years.


DIGITAL TRANSFORMATION:

DRIVING TESTING TRENDS

Mario Matthee, Head of Research and Development at DVT Global Testing Solutions, says digital transformation is real, but critical mass is still some way off



I've recently been involved in an ongoing debate with some of my learned colleagues about the very definition of probably the biggest technological shift in modern times: digital transformation.

Where we disagree is the scope. It's my opinion that digital transformation is the natural shift from mainly paper-based environments to a wholly 'digital' working environment. My colleagues, on the other hand, consider digital transformation to be significantly more far-reaching, to the point where any non-digital factors, including people, are taken out of play. In other words, for them the 'end game' of digital transformation in the enterprise is a homogeneously digital workplace, with technologies like AI designed to 'replace' the human factor entirely.

Regardless of how far, or not, we want to stretch our definitions of digital transformation, from a software testing perspective two things are clear: digital transformation is happening, and what we know about testing is changing because of it.

SETTING THE STRATEGY

If we can agree that digital transformation is going to play a critical role in the workplace over the next few years, we can also agree that the software development lifecycle will necessarily be transforming as well.

Transformation 1: Modern organisations increasingly have to deal with a multitude of new digital interfaces. Other than websites, we now have smartphones, tablets and rooms full of other 'smart' devices connected to the so-called Internet of Things (IoT). Connecting all these devices to one another is a layer of 'social' software, most commonly social media portals and applications, that is transforming not only the way we work, but the way we interact, play and think.

Transformation 2: Another platform where we are seeing massive transformation is online commerce. We all know about buying something online, only to find our Facebook feed suddenly swamped with ads for similar items. Today, an online purchase does not happen in isolation: it registers a user's buying preferences through analytical and geospatial engines that interface with social media accounts, which open doors for other marketing opportunities from dozens of other vendors, and it all happens in an instant. It is no longer enough that our software works on one or more platforms. It needs to traverse a universe of new devices and connect to dozens of interconnected portals.

Transformation 3: A third factor to consider is the user experience. As digital transformation gathers pace, interacting with technology in almost every aspect of daily life will become mainstream. Users will no longer be limited to a choice between a few devices that perform one or another task; they will literally have dozens of choices thrown at them from every direction. Because of these choices, users will be far less discerning about specific features; they'll just want their technology to work. This puts the user experience front and centre of the testing mandate. Regardless of how simple, complex, detached or connected our devices and their software might be, if the user experience falls short, they'll be tossed aside for something else.

Digital transformations such as those described above have to be taken into consideration when structuring a test plan. They need to be quantified and qualified, and, as with any other factor affecting our businesses, it's important to set an upfront strategy to manage them, to stay ahead of transformation that is already happening at the speed of light.

TEST INTEGRATION All these integrated and interconnected systems are forcing us to make new choices when it comes to our testing strategies. Online security: Whenever I think of my data travelling across the Interweb, there’s always a little voice at the back of my mind wondering how secure it is. So when I’m buying something online, even though I know my personal information is going to be encrypted from my computer to the vendor to the payment merchant, I also know it will be bouncing between private and public cloud systems, possibly stored on servers in countries I’ve never heard of.


MARIO MATTHEE HEAD: RESEARCH AND DEVELOPMENT DVT GLOBAL TESTING SOLUTIONS (GTS)

Mario Matthee is Head: Research and Development at DVT Global Testing Solutions (GTS) and is motivated by his belief that the youth are the future. Before joining DVT in 2007, Mario applied his passion for software testing and test automation at Allan Gray, JP Morgan and Vodacom. Mario’s goal is to help international companies overcome the challenges of software test automation and mobile app testing. Mentoring the next generation of test professionals and providing opportunities for skilled testers are also high on Mario’s priority list.


Compliance: Then there's the not-so-small matter of compliance. All organisations are governed by strict compliance laws that detail how and where their user data can be utilised. Often, moving data offshore, even inadvertently, could cause a compliance breach. Suddenly all the news stories of data breaches make sense.

User experience: With all that data flying around between different systems, performance testing is quickly becoming a much larger and more significant part of any new software launch. Many applications, for example, tap into users' social media feeds, which necessarily makes them slave to the performance of those systems. If your app features a live stream of your Twitter feed, and Twitter decides to have a worldwide meltdown, what does that do to the user experience of your app? It may be pretty, but is it fast, and can it do exactly what I need it to do? These are the questions your users are already asking. You can substitute social media with just about any other system in that example: a point-of-sale back-end system, a business intelligence system, ERP, SAP... you get the idea. Your software may be optimised for performance under load, but what about other systems not in your control?

CRITICAL MASS

Here's another thought: we now have all these different integration points, but most of the systems we integrate with won't open up their testing environments for us. That means our own testing strategies are necessarily going to become more complex. More complexity means more cost, which means we are going to have to find smart ways of keeping costs down.

The explosion in digital devices and platforms will likewise affect the time it takes to test. The other week, one of my clients boldly declared he wanted his new app tested on 'every' device. When I asked him what he meant by 'every', he literally meant every device currently in use, anywhere in the world. I admired his courage, but of course the problem is that by the time we had finished testing his app, he would have been way over budget and half the devices we tested on would be redundant!

FACING THE CHALLENGE

Rather than trivial, this example is typical of some of the challenges we can expect as digital transformation rapidly approaches critical mass. When that happens, almost every business system will in some way be exposed to the public, as companies clamour to empower their users, and outflank their competitors, with digital services.

Digital transformation is real, but fortunately for us, critical mass is still some way off. How far off is hard to say. However, I do believe the time is now to make serious changes in anticipation. Testing needs to take its rightful place alongside business analysis, and senior testers need to become ingrained at the business level, so they can establish upfront what the journey will be like. Otherwise, by the time you actually start testing, it's going to be far too late.



THE TEST MANAGER 2.0 Paul Mowat, Technology Delivery Lead Senior Manager at Accenture, gazes into his crystal ball and envisages the future for test managers and the challenges and opportunities ahead



PAUL MOWAT TECHNOLOGY DELIVERY LEAD SENIOR MANAGER AT ACCENTURE

Paul Mowat is Technology Delivery Lead Senior Manager at Accenture, specialising in test consultancy and advisory as well as complex delivery. Paul has built high-performing managed test services across FS, Products and CMT, and has delivered test improvement and innovation programmes. He gained the Accenture Master Test Architect certification through his delivery experience. Since 2014 Paul has led test assessments across the globe as a TMMi Lead Assessor, supporting C-suite change programmes and training and coaching employees globally to build capability and increase the maturity of clients' testing. In 2016, drawing on his agile university faculty experience, he combined agile testing and Scrum. Paul holds the leading SAFe Agilist (SA) certification and is putting some of the coaching into practice with his agile teams at a large financial services client. Email: paul.mowat@accenture.com



Most of us in the UK have reason to use the services of the NHS. And in doing so, it can't have escaped our attention that the way services are offered and accessed has changed little since its foundation, and is now very much out of step with the way we lead the rest of our lives.

You have been transported to 2025. It's not quite Gattaca (1997) or Minority Report (2002). However, there are parallels to be drawn from these films that are no longer just science fiction, not just in the test world but in technology and society in general: we are already experiencing pioneering medical treatments based on the ability to analyse huge amounts of genetic data and, more importantly, they are saving lives. Take Minority Report: holographic interfaces are being used today by manufacturers and designers across the globe.

What has this got to do with version 2 of the Test Manager? This role is changing, and there is no going back. It is transforming beyond recognition, and the many layers of change in society are resulting in a wider breadth of knowledge being needed. For example, simply describing an agile methodology on its own will not mean anything or reap any benefits; you need to apply both the business and technology context to appreciate how agile is adopted, and that adoption could be applicable to just one industry or organisation. I will also touch upon many topics which require further reading to fully understand, because the industry is changing at such a fast pace that it can be difficult to know which topic to study, to master or to learn.

THE DISAPPEARING TEST MANAGER

Now back to the futuristic Test Manager: the title "Test Manager" is no longer applicable. Terms used throughout 2014-2019 are Quality Engineer, Quality Assurance and SDET, a term popularised by the Google book How Google Tests Software, but the role has evolved to one of coaching and supporting the team with building automated frameworks and libraries as part of the continuous integration pipeline. This individual is also armed with rapid software testing practices and knowledge of the context-driven and session-based schools of testing, chairs testing guild working groups, and has an in-depth understanding of Scrum


working and agile/lean practices. Support is provided to this individual by a range of AI cognitive agents, like IPsoft's Amelia, which emulate human intelligence and communicate test risks, status and defect trends using natural language.

The lines between business, development and testing are blurred: the skills of a data scientist have become necessary to analyse the A/B tests and review the analytics to continuously suggest improvements to the product, along with a good understanding of the algorithms that support the development of machine learning. In addition, the coach needs working experience of at least the key tools I cover below and, to support the development of team members, should be proficient in an ecosystem of tools and programming languages, because knowledge of C and Java, strong logical reasoning skills and the use of techniques like TDD, ATDD, BDD and A/B testing are part of the everyday job.
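As a taste of the data-science side of the role, analysing an A/B test can be as simple as a two-proportion z-test. A standard-library Python sketch, with invented conversion numbers:

```python
# Sketch of the kind of A/B-test analysis described above: a two-proportion
# z-test using only the standard library.
import math


def ab_z_score(conv_a, n_a, conv_b, n_b):
    """z-score for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se


# Variant B converts 120/1000 visitors against A's 100/1000.
z = ab_z_score(100, 1000, 120, 1000)

# |z| > 1.96 would indicate significance at the 5% level (two-sided);
# here the uplift is suggestive but not yet significant.
assert round(z, 2) == 1.43
```

The point is less the statistics than the shift in skill set: reading results like this is now everyday work for the role, not something handed off to a separate analyst.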

FLEX THE TEAM Test light and test right is the norm; assessing and implementing standards and best practices are not delivered by a Test Architect, because they are part of the version 2 Test Manager role, along with the industry and product knowledge required to optimise across the lifecycle: ensuring lean practices are followed, duplication is eliminated and automation is adopted. I don't just mean functional or regression automation, I mean automation across security and performance too. The ability to flex the team size to meet demand is second nature, achieved through a network of crowd-sourced resources with the particular skills needed to get the job done. Data masking, IoT, test bots, AI, augmented reality applications and machine learning are the next big challenges to overcome, and the version 2 Test Manager will have to acquire knowledge of these. If you are not aware of or involved in DevOps and lean practices, you need to be: these are simply the standard way of working in 2025, and if truth be told you have most likely been contributing to your projects in the form of service improvements and innovation for a long time; just think of these as a type of lean practice. Continuous integration has been mainstream for several years, and testing is no longer governed within a TCoE or separate test teams: cost pressure to reduce IT budget spend has resulted in smaller, knowledgeable, cross-functional teams with a truly empowered product owner. It's simply an efficient way to work, and it's this team who are the keepers of quality. Contrary to this, another trend has emerged: the federated model, enabling information sharing between semi-autonomous, decentrally organised lines of business, because some organisations are not quite ready to totally let go of, or rely upon, the development team being the master of quality, even if everyone within the organisation is ultimately responsible for quality. The colossal number of open-source tools available and being used by the test team is still increasing: Git has become your automated test suite's configuration management repository; Jenkins your continuous integration pipeline of changes; Selenium supports different web browsers and multiple operating systems and enables parallel execution of tests, alongside more specific tools, for example PyAuto to drive Chrome validation, or Sauce Labs, Perfecto and StormTest for performance testing to support the path to live. API and microservice testing is no longer an afterthought; it's a must to be able to keep up with

T E S T M a g a z i n e | J ul y 2 01 7


the pace of DevOps/CI. API testing is faster: failures are picked up quickly and resolved, and security vulnerabilities, incorrect handling of code and duplicated code are faster to detect. The Test Manager (coach) supports the team in gaining certification in tools through on-the-job training, for example Tricentis, Worksoft, Omnichannel and SmartBear (SoapUI). QASymphony is another mainstream tool being used in large numbers, as it integrates with DevOps/CI via Bamboo or Jenkins, provides real-time test management with JIRA, and also works with Selenium, Rally and VersionOne.
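The speed of API checks comes from the fact that they need no browser or UI. As a minimal, self-contained illustration of the kind of check a CI pipeline can run on every commit (Python standard library only, with a throwaway in-process HTTP server standing in for a real microservice; none of the tools named above are used):

```python
import http.server
import json
import threading
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    """Tiny stand-in for a microservice health endpoint."""
    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep CI output quiet

def check_health(url):
    """The kind of assertion a pipeline runs on every commit."""
    with urllib.request.urlopen(url) as resp:
        return resp.status == 200 and json.load(resp)["status"] == "ok"

# Start the stand-in service on a free port, probe it, shut it down.
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
ok = check_health("http://127.0.0.1:%d/" % server.server_port)
server.shutdown()
```

Because such a check takes milliseconds, it can gate every merge, which is how failures get "picked up quickly and resolved".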

FUTURE GOALS Microservice architecture has allowed business goals and value to be understood more easily, thanks to the containment of unique modular services. Sogeti's 2017 annual World Quality Report, which surveyed 1,600 senior IT decision makers worldwide, found that only 29 per cent of test activities were automated. In 2025 this has increased to 85 per cent, covering the endless list of API, SoapUI, microservice and Robotic Process Automation (Blue Prism, Automation Anywhere) testing: it's the standard way of working. Manual testing is still conducted, but in smaller numbers; think exploratory testing. To survive, this type of tester has developed a deep understanding of the industry and applied it to the organisation, acquiring end-to-end process knowledge to identify and test the most critical and valuable edge cases for both the organisation and the customer. In addition, design and test optimisation is the lifeline that allows testers to add value to the business. The Test Manager's (coach's) knowledge needs to at least match the testers' knowledge, to understand the risk of not testing a piece of functionality. Analytics within IT as well as production support (Splunk, ServiceNow, etc) are commonly used to increase levels of quality and drive the path to live. Because of the culture of finding issues as soon as possible, ongoing analysis of potential further failure points is second nature; the goal is to prevent issues on the path to live. Another trend that has emerged is bringing groups of customers into the organisation to provide feedback and contribute to the assessment of production incidents. Back in 2014 a crowd-sourcing approach started to take shape, and it has continued to mature to the extent where organisations have built their own ecosystems to protect their brand and ensure customer feedback is delivered fast.

AGILE THINKING The cross-functional teams within organisations that have made agile their methodology of choice are mature: collaboration is second nature (tools like Confluence, Niko Niko, Slack, Yammer, Team Room and JIRA are used), hosted development environments are standard, and the specialist areas of security and performance testing have become embedded within the everyday team, enabling requirements and solutions to be assessed much earlier. The teams have undergone on-the-job training. You may have noticed how the way we learn has changed: the old-fashioned sitting in a classroom with an exam at the end has become continuous on-the-job learning, gaining knowledge from the test community through meet-ups, YouTube videos and podcasts, and building learning boards and collections to share with like-minded peers. Let's come back to today, 2017. Collaboration is vital for success, not just on your current project but across the organisation. Work with your peers, partners and vendors to innovate: identify trends, understanding not just the technology but the business too. What is the business value and the outcome technology is expected to provide? What is the value of what you deliver each day, and how does that add value to your clients? Don't stop learning: gain certification in at least one new tool each year, not because you want a pay rise but because it's part of your job; focus on areas of interest that will help you pick up new knowledge and skills; reinvent yourself and start to master one of the newer topics which have not been covered in detail in this article. Read, and read lots; have an appetite for learning. Some of the books I have personally read to gain an understanding of the above are Thinking, Fast and Slow by Daniel Kahneman; Tacit and Explicit Knowledge by Harry Collins; Secrets of a Buccaneer-Scholar by James Bach; The Phoenix Project; The Goal; The Lean Startup; More Fearless Change; Lean Enterprise; More Agile Testing: Learning Journeys for the Whole Team; Work Rules! Insights from Google; and How Google Tests Software. I have also listened to many podcasts on agile, lean and ThoughtWorks.


SMART CITIES: LESSONS LEARNED Pervez Siddiqui, Vice President of Business Development at Genetec, argues that technology has a big role to play in enabling collaboration within cities. But we need to ensure we draft the correct policies

Over the past four years, a growing number of mayors have joined the chorus of voices proclaiming their cities to be "Smart Cities". The "Smart City narrative" is often part of a broader, more deliberate initiative to attract and retain residents, talent and businesses. After all, most cities are in competition with other metropolitan areas and need to differentiate themselves in just the same way that private organisations do. While there is a near-weekly drumbeat of announcements indicating yet another Smart City pilot, there is little consensus as to what makes a city 'smart'. The Smart City calls on city administrators to leverage the growing network of connected sensors (cameras, in-ground systems, street lights and so on) to help reshape the urban experience into one which is decidedly less frustrating, more vibrant, more efficient and/or safer for residents, visitors and businesses. In steady state, sensors embedded in virtually every aspect of our urban infrastructure will collect data, communicate with each other and make sense of the information. The most advanced implementations will also detect unnatural patterns in the data, represent the outputs in a meaningful manner, extract useful insights and even make smart decisions in response to this data. While many such applications are very ordinary, such as weather monitoring, others are more ambitious in scope and vision. One such application, for example, seeks to reduce motorists' time spent idling in traffic by improving the flow of vehicles: flow is adjusted by managing traffic signals and adjusting lane use in real time, based on traffic patterns at that moment.

SMART CITY DEVELOPER COMMUNITY For many, the Smart City concept remains abstract at best. Despite this, there is now a burgeoning and blossoming Smart City developer community, comprised of both startups and established behemoths. Encouraged by the favourable political disposition towards such programmes, the growing proliferation of sensors, the increasing ability to connect to sensors on an as-needed basis through technologies such as 4G wireless, and the democratisation of access to data held by cities through Open Data Initiatives, these developers have been seeking to convert their pilot applications into larger rollouts. Fast-forward four years, and the adoption of paid "Smart City" systems has thus far been tepid. While some attribute this to the glacial buying process and funding constraints within city governments, others point to the need for a 'top-down' political mandate to overcome the inertia associated with byzantine buying processes. Reasoned scepticism has emerged from some quarters as to whether there is anything new here, especially given the number of technology vendors which have co-opted the term to describe technology they were already selling into cities in any case. As a provider of software systems used primarily within a public safety and security context, the Genetec approach has been remarkably different. The opportunity we see with the Smart Cities movement is not just to digitise "how work gets done today", but to offer new capabilities, such as the capability to collaborate across departmental lines, jurisdictions and the public-private divide. This type of cross-enterprise collaboration, or partnering, has historically been difficult, as it has traditionally involved expensive custom integration work. While custom integration might work for very large stakeholders, it is cost prohibitive for small businesses. In addition to the economic costs, concerns around time, trust, turf and privacy hold stakeholders back from partnering despite the political imperative to do so.

BEING HERE NOW Our view is that selling a security solution into a city is not usually a single sales motion; it is more likely to involve small incremental purchases by different stakeholders, whether public or private. These stakeholders may buy and operate their systems independently, but can benefit from the ability to 'connect' and start collaborating with each other in real time, should the need arise. Nothing better exemplifies here-and-now 'collaboration-enabling' Smart City applications than the community-led, public-private 'virtual patrol' partnerships which are gaining momentum around the world.


For example, the city of Detroit in Michigan, USA launched a programme on 1 January 2016 called Project Green Light to help reclaim the city at night. Several studies had shown that violent crimes tended to cluster around venues which are open late; in the case of Detroit, a quarter of violent crimes reported between 10pm and 8am occurred within a 500-foot radius of a gas station. Participating Detroit businesses funded the purchase of a video surveillance camera system, subscribed to the cloud-based Genetec Stratocast surveillance software-as-a-service (SaaS), which enabled them to manage their own security system, and opted in to 'federate' their video camera streams directly to the police department's Real Time Crime Center. The connection to the police department helped reduce the time between incident and response. For example, following a recent gang-related shooting at a local gas station, the Detroit police department was able to quickly identify the suspects, thanks to the high resolution of the video images and immediate access to the captured event. The suspect was arrested within two hours with the help of the community and social media distribution. If the systems had not been connected with real-time access, standard procedures mean it would have taken multiple hours for officers to even begin reviewing the footage, let alone identify and arrest the suspects.

SECURITY OF SECURITY The key benefit for these small businesses is the ability to mount a green strobe light outside the premises, indicating participation in the Project Green Light programme. On seeing the green light, potential criminals know that high-resolution cameras are in place and that real-time police intervention will follow; this deters crime and increases business. A growing number of cities around the United States and Central and South America, such as Campinas, Brazil, have since borrowed the model and adapted it for their local environs. As the rate of adoption of connected sensors, and particularly connected cameras, continues unabated, the larger issue of the 'security of security' in the age of the Internet of Things (IoT) comes into the spotlight.

Increasingly, security system owners are being asked to consider what devices are on their network and whether they know where those devices and their associated software originated. A quick look at the Shodan search engine provides a glimpse into the scale of the problem: an extraordinarily large number of internet-connected endpoints have not been configured or secured properly, and yet many executives are not even aware of the gravity of these cyber threats. Mirai, the malware that wreaked havoc on Dyn's telecom infrastructure in 2016, seeks out factory-default passwords on internet-connected devices and uses them to take over a chain of other unprotected devices to launch an attack on targeted entities. This reminds us that an unsecured camera, or an unprotected communication channel between a server and a client application, can be all a cybercriminal needs to stage an attack. The prospect of collaboration, of course, prompts multiple questions for each individual stakeholder within the larger Smart City initiative. As cities embark on becoming smart, questions to pose include:
• How do we balance the need for trust versus the need for agility?
• How do we exercise control of our data when we start sharing across enterprise boundaries?
• How do we ensure the veracity of the data and the chain of custody?
• How much do we share?
• When do we share, and do we only share certain data in very specific public safety circumstances, such as in the immediate aftermath of a significant critical incident?
• How do we ensure that the identity of those surveilled stays private, via image-masking technology, until there is a legitimate need to see full-resolution images for investigations?
Technology has a promising role to play in enabling collaboration and breaking down silos within cities.
However, we need to ensure that we are investing at least as much time forging the right partnerships and drafting policies which address the questions above. This investment in 'soft infrastructure' is as important as the underlying technology itself, in realising the vision of the Smart City.

PERVEZ SIDDIQUI DIRECTOR OF STRATEGIC MARKETS GENETEC

As Director of Strategic Markets at Genetec, Pervez leads the company’s business development groups that focus on stakeholders in metropolitan environments. He is a key proponent of a citizen-centric and community-led approach to issues relating to urban security, place-making, and open data.

WHICH TOOL TO USE IN PERFORMANCE TESTING Dusanka Lecic, Test Developer at Levi9 IT Services, looks at the tools that can fulfill the job of performance testing

Performance testing is a type of non-functional testing performed to determine the behaviour of a system: it measures parameters of the responsiveness and stability of an application. Today it is very important for a web application to load quickly and perform well, especially in an agile environment. Because of that, as a test developer I am very often in the situation of performing load and performance testing. The available tools are very different, and choosing between them is always difficult. The tools I use are JMeter and Locust. You may wonder why exactly these two: first, I searched for a good performance testing tool that is open source.


JMETER AS THE CHOSEN ONE As a first tool I tried JMeter, a free tool that supports a lot of protocols out of the box or via plugins (HTTP, FTP, LDAP, JDBC, JMS, SMTP, POP, TCP). I can also add plugins to get suitable reports; visualization plugins allow great extensibility as well as personalization, and it can be extended via BeanShell scripting, JavaScript and Java. When I started to use JMeter I saw that it is very easy to use with simple test plans, but when I tried to make a more advanced test plan it became difficult. Still, I learned step by step about the Thread Group, which is a placeholder for all other elements such as Samplers, Listeners and the Logic Controllers that determine the order in which Samplers are processed. Samplers perform the actual work of JMeter: each sampler generates one or more sample results, and those results have various attributes (success/fail, elapsed time, data size, etc) that can be viewed in the various listeners. I learned a lot from a large community, from forums and mailing lists. I made advanced plans, playing with the simulation of numerous IP addresses from only one machine, which helped me on a project testing load balancers. I also have the possibility of capture and playback via proxy recording. Besides, I can integrate JMeter with other tools such as Eclipse and Jira, which allows us to automate performance testing.
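Sample results do not have to stay inside the listeners. As a small illustrative sketch (plain Python, standard library only), the snippet below summarises results saved in JMeter's CSV `.jtl` format; the embedded sample rows are invented, and real files usually carry more columns, depending on configuration.

```python
import csv
import io

def summarise_jtl(jtl_text):
    """Per-label success rate and mean elapsed time from CSV JTL data."""
    stats = {}
    for row in csv.DictReader(io.StringIO(jtl_text)):
        label = stats.setdefault(row["label"], {"n": 0, "ok": 0, "elapsed": 0})
        label["n"] += 1
        label["ok"] += row["success"] == "true"   # JTL stores booleans as text
        label["elapsed"] += int(row["elapsed"])   # elapsed time in milliseconds
    return {
        name: {"success_rate": s["ok"] / s["n"], "avg_ms": s["elapsed"] / s["n"]}
        for name, s in stats.items()
    }

# Invented sample using a subset of the default JTL column layout.
SAMPLE = """timeStamp,elapsed,label,responseCode,success
1499250000000,120,Home,200,true
1499250001000,180,Home,200,true
1499250002000,950,Login,500,false
"""
summary = summarise_jtl(SAMPLE)
```

Post-processing like this is one way the article's "integration with other tools" plays out in practice: the same `.jtl` file feeds both the listeners and any external reporting.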

LOCUST AS A SOLUTION When I first encountered the framework called Locust I was curious to know all about it and how to use it. Why? Simply because it is in Python. Locust is an easy-to-use performance testing tool written in Python code, with a web-based UI: just plain code. It supports running load tests distributed over multiple machines, and because of that it can be used to simulate a lot of simultaneous users. Locust is also web oriented, but I can use it to test any system I want, and I can add event listeners very easily; it mainly supports the HTTP protocol. First I learned to define several Locust tasks, which are gathered under a TaskSet class. Then there is the HttpLocust class, which represents a user and is where we define how long a simulated user should wait between executing tasks. The Locust class also allows us to specify minimum and maximum wait times per simulated user between the execution of tasks, as well as other user behaviours. Locust hasn't got any visualization; results are in one table. That was strange to me, so I started to research because I wanted to see the results on a graph. I found several suggestions for displaying a graph of Locust data and decided to use Bokeh, a Python interactive visualization library that targets modern web browsers for presentation. From Locust I can get data in JSON format, and must create a plotter file. Locust is a tool that I can integrate with other tools like Jenkins. It isn't used very much because of the language it is written in: most people don't know Python, and it is easier for them to use JMeter.
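Since Locust exposes its results as data rather than graphs, only a little glue code is needed to get them into shape for a plotting library such as Bokeh. The sketch below uses only the Python standard library; the JSON shape shown is a simplified assumption for illustration, not a guaranteed Locust output format.

```python
import json

def to_plot_series(stats_json):
    """Turn request statistics into (labels, medians) for a bar chart."""
    stats = json.loads(stats_json)
    rows = [(r["name"], r["median_response_time"]) for r in stats["stats"]]
    labels = [name for name, _ in rows]
    medians = [m for _, m in rows]
    return labels, medians

# Simplified, assumed shape of Locust's JSON statistics.
SAMPLE = json.dumps({"stats": [
    {"name": "/", "median_response_time": 45},
    {"name": "/login", "median_response_time": 130},
]})
labels, medians = to_plot_series(SAMPLE)
# `labels` and `medians` can now be handed to a Bokeh figure for plotting.
```

This is the essence of the "plotter file" mentioned above: parse the JSON once, then let the visualization library do the rest.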

COMPARING THE TOOLS

In the next few lines I will show a comparison of the two tools, with advantages for both. JMeter is a pure Java tool, which allows us to execute it on any platform. It is a thread-based tool: it requires a separate thread to simulate each user. The GUI is very user friendly, which helps in executing and recording application sessions, and JMeter allows me to apply an automation framework. It supports many protocols. On the downside, it is awkward to write my own scripts by hand or to change a recorded script. The advantages of Locust are that it is an open source tool and that Locust scripts are written in simple Python code. It can also run multi-platform, like JMeter. Locust is based on coroutines and uses an async approach, which means it is possible to simulate a large number of users on one machine. Locust scripts are plain code and more reusable, and it is very easy to use version-control tools to track the differences between histories. In Locust I can also support specific functionality: it is highly customizable with the help of Python. It is very easy to combine several requests into one scenario by grouping requests with specific rules. But I keep in mind that Locust is a new tool, so there are not many forums or blogs about it; the community is still small. Locust doesn't support recording scripts. Also, it is very convenient for us to start and stop tests via a rich GUI, but Locust has no equivalent to JMeter's GUI for this.

FINAL THOUGHTS In using JMeter and Locust I found advantages, benefits and drawbacks for both tools. Beyond the fact that both tools are open source, performance testing will rely more on JMeter. There are a lot of performance testing tools besides these two, but I am still recommending JMeter for performance testing: many problems already have solutions on forums and blogs, while Locust must evolve further. I think JMeter has demonstrated staying power and a great community that has changed over the years, and the upcoming changes will continue to improve it; future developments include a new contribution that provides a web-based reporting UI. I think the current weakness of pure open source JMeter is the complexity of running it in the cloud.

DUSANKA LECIC TEST DEVELOPER LEVI9 IT SERVICES

Dusanka works as a test developer at the Dutch company Levi9 IT Services in the digital media sector. She has worked on several projects, primarily as a technical expert involved in software development activities. She is also a member of the first testing community in Serbia, called Test'RS Club. Last year she finished her PhD thesis. During her academic career she has participated in research projects, technical seminars, conferences and workshops.


SELLING SOFTWARE TESTING: A VIEW FROM THE OTHER SIDE

So you sell software testing services? Ever wondered what your customers are thinking? The answers may surprise you, suggests Johan Steyn


Over the last two decades I have worked in roles where I sold technology products and services to corporate customers. For the last ten years my focus has been software services, and software testing and quality management in particular. I worked in senior roles for some of the large global vendors, and I competed for business with many of the best-rated global software service providers. Last year something interesting happened: I had the unique opportunity to fall down the rabbit hole and discover a new world. For the first time in my life, I worked for a customer of these large providers; I had the opportunity to look over the fence. My experience over the last few months made me rethink how I would deal with customers if I am ever again in a sales role. The question that continually popped up in my mind was: "Would I sell to me?" Or more to the point: "Would I buy from me?" Many who read this article work for software quality and testing vendors. You deal with customers on a daily basis, but do you not sometimes wonder what your world looks like from the other side? How do your customers perceive you? What does your client wish to get from you in order to see you as a value-adding partner? Allow me to share what I discovered. It applies to all who sell service offerings to customers, irrespective of industry, but I want to apply it to software testing and quality management in particular.

YOUR OFFERINGS ARE NOT AS UNIQUE AS YOU THINK The downfall of many vendor representatives is that they are so infused into the world of their employer that they have blinkers on their eyes to the competitive world out there. Many will refer to the latest industry reports, such as the World Quality Report or Gartner's Magic Quadrant series, to prove they are the best vendor. Over the last few months I reviewed proposals and RFx responses from most of the leading software quality vendors in the world. My conclusion is the following: if you consider the recommended solution proposals from the top five global vendors, there is very little that differentiates them. At a basic level, they can all offer the same solutions, they have the same capabilities, they do the same "cutting-edge" R&D, and they all have the same abilities to scale (such as offshore testing centres). So what differentiates a vendor is not global reach, capability, client references or shiny presentations; it all comes down to a relationship. I need to know that you are genuinely interested in fixing my pain points. I want to see that you are willing to invest time with my team and me even when there is currently no clear opportunity for you on the table. Hold my hand in the long run, and your trust and guidance could be commercially rewarded.

JOHAN STEYN BUSINESS DEVELOPMENT SPECIALIST

Johan Steyn is a business development specialist who has been selling technology products and services in South Africa and in Europe over the last twenty years. Over the last eight years his focus turned to the software testing market. He worked in senior roles for consultancies like SQS and Accenture, and is currently a senior manager in Enterprise Testing at Nedbank.

PLEASE EMPLOY PROFESSIONAL SALES PEOPLE Most vendor technology divisions are led by techies who "came through the ranks" over the years. These leaders are often technically the best in their fields, but they lack business and commercial understanding. Many providers will use their best techies as customer representatives. This is good when the client's engagement is of a technical/solution

scoping nature, but the initial customer engagement will most often be with business people, so please deploy individuals who are commercially savvy and who think in terms of a problem to be solved. A vendor's proposed solution will always be a means to an end: a tool, technology, framework or approach must deliver a result, and in the case of software quality management the net effect is risk mitigation, reduced time to market and cost savings. With that in mind, why should your customer worry primarily about your proposed framework or toolset? I urge vendors to employ and invest in professional sales people. Yes, they

need to understand the offering, but more importantly they need to deal with your customers in business terms. They need to understand sales pipeline management and the often drawn-out process of landing significant business with your clients. They need to be trusted advisors, not primarily technical specialists. And please, for the sake of all that is good and holy: they need to speak English well.

YOUR CUSTOMER IS NOT AS STUPID AS YOU THINK Vendors should never underestimate how much their customers know about testing. Some years back you could get away with this, as software testing as a speciality was in its infancy, but nowadays your clients are often testing professionals themselves who attend conferences, read books and are "in the know". Your customers receive


advice from many other vendors, and if you propose a solution that is not suitable, or if you reference facts that do not align, you will quickly lose credibility with your customer.

YOUR PRESENTATIONS ARE BORING We copy what we see, and in time we think it is the norm. Let's walk through a typical vendor presentation: the first few slides are all about how big you are, how many customers you serve, how many countries you have offices in, your humongous annual revenue and so forth. All the presentations start the same. Most presentations are clearly a cut-and-paste job: they show little relevance to the customer and the challenges we face. It is not that we do not want to tell you about the problems we need to fix; it's just that you have not earned my trust or spent time with me. I am keen for you to help me with my challenges. Oh, and please can you send your people on a presentation skills course? Content is not king: it's about delivery.

COLD CALLING LEAVES ME COLD Most people hate calling prospective customers, but skilled and experienced sales people know how to approach a cold call to good effect. Often when someone calls me for the first time, requesting a meeting or wanting my e-mail address to send me their company profile, it sounds like they are reading from a script. Maybe they are, or maybe they have done so many calls that they stick with the phrases they are comfortable with. I welcome cold calls: you never know when you will discover a needle in a haystack, a potential vendor (big or small) who can offer value. I also respect it when people call me, as I know how daunting this can be. I receive many intro requests via LinkedIn messages. I recently received a request from a representative of a large global provider. Her message read that they had done extensive research into the bank I work for, and that they believed they could help us. I replied asking her for more information on the findings of the "research", and then I

realised this was simply a ploy. My name was one of many on her list, and she was merely flinging mud at the wall to see what would stick. A week later I received the same message, word for word, from another person in the same company. So please invest time and do some homework before you call a prospective customer.

TESTING DEMAND IS RISING AND THE FUTURE IS BRIGHT

If you are working for a testing provider, your future is in your hands. Software quality and testing services are becoming a more critical expert skill every day. The fast-moving world of digitalisation and automation is pushing the quality of software platforms to the front of the thinking (and worry) of business owners. There will always be a place for a reputable, capable testing service provider among your customers. But please take into account what your customer is thinking. Foremost in their minds, no matter what you are pitching, is "what will I get out of this?" Answer that question honestly, showing that you understand their pain points, and you will almost always be guaranteed their business.

The opinions expressed here are those of the author and do not necessarily reflect the views of any of the organisations he is currently, or has previously been, involved with.

There will always be a place for a reputable, capable testing provider among your customers. But please take into account what your customer is thinking

TEST Magazine | July 2017



TDD AND BDD AS EFFICIENT PRACTICES OF SOFTWARE DEVELOPMENT




TDD & BDD

Elena Moldavskaya, Business Analyst at Intetics, highlights the benefits of the Test-Driven Development and Behavior-Driven Development approaches

It is quite common in the software development industry to generate "new ideas" intended to make software development projects more successful. Many approaches exist: Agile, Scrum, Kanban, Behavior-Driven Development (BDD), Extreme Programming, Test-Driven Development (TDD) and Lean Development. What is the reason for this variety? Why are qualified and experienced people always ready to implement changes? One reason is that software development is a very complex task in which faults often occur. Sometimes those faults are not obvious until the product is tested. Waiting for the testing results can be quite unsettling, and the results themselves can be discouraging, leaving the development team to begin a long bug-fixing process. The testing results may also prove that the product works correctly but show that not all the features have been implemented. These and many similar cases explain why project managers continually seek to optimize the development process. In doing so, they implement the "new ideas" mentioned above, and when done properly, software development teams benefit from them. In this article, we would like to tell you about two of them that shift the focus to testing, Test-Driven Development and Behavior-Driven Development, and compare them with the traditional (waterfall) approach.

TRADITIONAL APPROACH VS TEST-FIRST APPROACH

The traditional software development lifecycle looks like this:

Design and detailed requirements gathering → Development → Testing → Deployment

Testing is one of the last stages here, so it is not surprising that the testing results become available only at the end of the cycle. Unfortunately, this triggers a dilemma: deploy low-quality features quickly, or push out the deadlines and increase the project budget for QA. Teams that follow the Test-First approach, however, avoid this dilemma from the start.

ELENA MOLDAVSKAYA, BUSINESS ANALYST, INTETICS

As a senior business analyst, Elena has a great deal of experience in business process modeling (AS-IS and TO-BE), requirements gathering, writing functional and non-functional specifications, analytical support of software development and quality control processes, and wireframing.





TEST-FIRST APPROACH

In the Test-First approach, the software development lifecycle changes: the testing stage comes before the actual development.

Design and initial requirements gathering → Test → Development and refactoring → Deployment

In this approach:
• Tests are written before coding: the code is developed to pass the tests.
• Tests serve as the primary requirements: one layer of project documentation is removed, along with the risk of unclear requirements.
• Developers write the code only to pass the tests, and refactor later (the tests should still pass).

Both the Test-Driven Development and Behavior-Driven Development approaches follow the principles of the Test-First approach, though in different ways.

TDD: TEST-DRIVEN DEVELOPMENT


TDD is a software development process based on a sequence of short development cycles. During each cycle, developers gather the requirements, then develop and run automated tests that define the requirements for the new functionality. First, the process goes through the Red stage, when the first tests fail: the code hasn't been produced yet. Then the developers write the minimum amount of code ("just enough") to pass each test. When the tests pass, the Green stage, the developers start refactoring the code to make it compliant with the standard. After that comes the Refactor stage, when the tests are run again. The approach is therefore sometimes called the Red-Green-Refactor cycle. When the cycle is repeated throughout the whole project, it serves as a perfect complement to Agile and automated testing. Applied in this way, TDD helps efficiently combine testing and development.

The TDD cycle: Red tests → Green tests → Refactor

1. Write the failing tests
2. Write the code; make the tests pass
3. Rewrite the code better; the tests should still pass
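The three steps can be sketched in a single Python file. The `slugify` helper and its tests below are illustrative inventions, not taken from the article; they exist only to show the Red-Green-Refactor rhythm:

```python
# Red-Green-Refactor in miniature, for a hypothetical slugify() helper.

# Step 1 (Red): the tests are written first. Running them before any
# implementation exists fails -- the "Red" stage.
def test_lowercases_and_joins_words():
    assert slugify("Hello World") == "hello-world"

def test_strips_surrounding_spaces():
    assert slugify("  TDD rocks  ") == "tdd-rocks"

# Step 2 (Green): the minimum amount of code that makes both tests pass, e.g.
#   def slugify(text):
#       return "-".join(text.lower().split())

# Step 3 (Refactor): restructure for clarity; the tests must still pass.
def slugify(text):
    words = text.strip().lower().split()
    return "-".join(words)

if __name__ == "__main__":
    test_lowercases_and_joins_words()
    test_strips_surrounding_spaces()
    print("Green: all tests pass")
```

Run it directly, or let a test runner such as pytest collect the `test_` functions; the point is only that the tests exist, and fail, before the code does.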





The benefits of the approach are quite impressive.

EXPECTED BENEFITS

• The approach encourages developers to focus first on the results and the actual feature realization; code writing comes afterwards. TDD helps to design the solution or product in a simpler, more businesslike way. Many teams that use TDD report a significant reduction in bugs, and therefore an increase in final product quality.
• It helps decrease the overall development time compared to waterfall. Unlike the traditional approach, TDD does not require long testing, bug fixing and maintenance after development; in TDD these activities run alongside development, so it is less time-consuming.
• The number of lines of code is usually lower when the team uses TDD than in the traditional approach. This can reduce the overall project time and budget.

TDD is a good way to organize the development process effectively, though it is not easy to implement and has some limitations.

SOME DIFFICULTIES

Typical mistakes can reduce the expected benefits of using TDD:
• It is not easy to implement: developers may produce too many tests, or not enough.
• The test suite may end up seldom used.
• The development team may make adoption mistakes, as the learning curve of this technique is quite long.

SOME LIMITATIONS

• TDD is focused on technical specifications rather than on system behavior and business cases.
• It is difficult to deploy on projects where the task is to revise or modify an existing product rather than deliver a new one.

The fact that TDD is not focused on business cases is particularly important. Because of that, leading software development and testing experts created a new approach, BDD, which focuses on the behavior of the system rather than on the technical requirements. Let's figure out how it is used.

The BDD cycle, with the TDD cycle nested inside: 1. Scenario writing → 2. Scenario failed → 3. Code → 4. Scenario passed → 5. Refactor

BDD: BEHAVIOR-DRIVEN DEVELOPMENT

BDD is a technique of agile development that is often considered a refinement of TDD, designed to fill some of TDD's gaps. It also makes the requirements for test writing and actual development clearer. Moreover, the TDD and BDD approaches can be combined very efficiently: when they are used together, the whole process can be presented as two related cycles. At the first stage, all the requirements scenarios (or user stories) should be discussed and created. A clear definition of the requirements is a key point for BDD. To apply the approach effectively, the team should follow these principles:
• Focus on those behaviors of the system that contribute directly to the business outcomes.
• Write the requirements in a way that is clear to all team members and stakeholders.
• Use the specific 'Given-When-Then' style for specifying requirements and developing scenarios/user stories.

BDD encourages teams to use a conversational style and concrete examples to build up an understanding of how an application should work and which features matter. The way requirements and user stories are written in BDD is sometimes called Specification by Example.

BDD is a technique of agile development that is often considered a refinement of TDD. It also makes requirements for test writing and actual development clearer






The essential idea is to break the described user story scenario down into three sections:
• GIVEN. This part describes the state of the system before the behavior specified by the scenario is triggered. It can also be called the preconditions of the test.
• WHEN. In this section, the specified behavior is described.
• THEN. This section shows the changes that are expected as a result of the specified scenario.

Of course, all the scenarios initially fail because the code hasn't been produced yet; it is written at the third stage. The TDD approach, with automated unit tests, can also be used here to minimize the number of technical bugs. At the next stage, all the scenarios should pass. After that, the developers can start refactoring to improve the code. This approach can be extremely useful.
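The three sections can be made visible even in a plain Python unit test. The shopping-cart scenario below is a hypothetical example, invented only to show how GIVEN, WHEN and THEN map onto test code:

```python
# A hypothetical scenario: "adding an item to an empty cart updates the total",
# written so the GIVEN / WHEN / THEN sections stay visible in the test body.

class Cart:
    """Minimal cart class, defined only so the scenario is runnable."""
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    @property
    def total(self):
        return sum(price for _, price in self.items)


def test_adding_item_updates_total():
    # GIVEN an empty cart (the preconditions of the test)
    cart = Cart()
    assert cart.total == 0

    # WHEN the user adds a book costing 10
    cart.add("book", 10)

    # THEN the cart total reflects the new item
    assert cart.total == 10


if __name__ == "__main__":
    test_adding_item_updates_total()
    print("Scenario passed")
```

BDD tools such as Cucumber or behave go further by parsing Given-When-Then text directly, but the underlying structure of each scenario is the same as in this sketch.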

EXPECTED BENEFITS

• BDD improves the test-writing stage of the development cycle, thanks to the Given-When-Then style of requirements and user story writing.
• A clear definition of the requirements improves the conversation between technical and non-technical people on the project (developers, testers, analysts, business stakeholders and business domain experts). It makes the development process simpler and shorter.
• The approach allows the use of tools that automatically generate technical and end-user documentation from BDD specifications.
• It can be a good way to make the software development process cost-effective and efficient, and it helps to assure the high quality of the future product.

The team should also bear in mind the obstacles that can make BDD implementation longer and more difficult than expected.

SOME DIFFICULTIES

• This approach can be difficult to implement. Sometimes it is not comfortable for technical and non-technical people to start working together closely and discussing all the scenarios. All the participants have to learn how to communicate with each other and define the requirements according to the proposed BDD 'Given-When-Then' templates.
• The approach requires a high level of professional competence among the team members.
• BDD works well within an agile methodology environment; an attempt to use it with the traditional (waterfall) approach could therefore be unsuccessful.

The bottom line is that it is incredibly difficult to create good software when the team does not know, from the very beginning of the project, what it is supposed to do. It becomes even more difficult without a quality control system. The TDD and BDD approaches described here were created to close many of those gaps. By implementing them, the team initiates a new and more efficient software development process. These techniques deliver numerous benefits, though project managers should analyze all the obstacles before adopting them.





