TEST – July 2018



froglogic – Squish GUI Test Automation & Code Coverage Analysis
cross platform. multi language. cross design. Learn more and get in touch: www.froglogic.com


CONTENTS

RETAIL & E-COMMERCE

E-COMMERCE – PREVENTING SECURITY VULNERABILITIES ... 4
IoT – CONSUMER SECURITY IS PARAMOUNT ... 8
Website and app performance is key to revenue & retention ... 10
Who watches the Test Managers? ... 12
TEST DATA MANAGEMENT STRATEGIES ... 16
ROI – TEST AUTOMATION ... 20
Developing services with Legacy & DevOps components ... 22
Is poor communication the no.1 reason for projects failing? ... 26
Non-functional testing can drive robust approaches ... 30
Uncover defects sooner with exploratory testing ... 32
Exploratory testing is vital to teams facing varied challenges ... 34
Software Testing Conference NORTH ... 40


UPCOMING INDUSTRY EVENTS


The Software Testing Conference NORTH will be held at The Principal York Hotel, York, on the 18-19 September 2018. This northern conference will provide the software testing community with practical presentations, where the winners and finalists of The European Software Testing Awards will touch upon pressing industry topics; executive workshops led by industry experts; and the chance to check out the latest products and services within the industry via an array of exhibition stands.

The National DevOps Conference will be held at the Millennium Gloucester Hotel, Kensington, London, on the 16-17 October 2018. This is a two-day conference designed to connect to a wide range of stakeholders and engage not only existing DevOps pros, but also other senior professionals keen to learn about implementing this useful practice. At The National DevOps Conference, you will have the chance to listen to peers who have successfully begun their DevOps journey; receive advice and knowledge from industry practitioners; as well as join in and debate at executive workshops.

The DevOps Industry Awards will be held at the Millennium Gloucester Hotel, Kensington, London, on the 16 October 2018, to celebrate companies and individuals who have accomplished significant achievements when incorporating and adopting DevOps practices. This glittering awards gala has been launched to recognise the tremendous efforts of individuals and teams when undergoing digital transformation projects – whether they are small and bespoke, or large complex initiatives.

north.softwaretestingconference.com

devopsevent.com

devopsindustryawards.com


DevTEST Summit Scotland is a one-day event, which will be held on the 30 October 2018 at the iconic Principal Grand Hotel, Glasgow, Scotland. The full day’s programme will see a panel of key professionals speak, before hosting interactive sessions on recent issues related to software testing and DevOps. DevTEST Summit is open to all organisations and individuals within the software testing and DevOps community who are keen to increase their knowledge and harvest workable solutions to the various issues faced in complex, burgeoning sectors.

For the sixth year running, The European Software Testing Awards will celebrate companies and individuals who have accomplished significant achievements in the software testing and quality assurance market on the 21 November 2018 at Old Billingsgate, London. Enter The European Software Testing Awards and start on a journey of anticipation and excitement leading up to the awards night – who knows, it could be you and your team collecting one of the highly coveted awards.

The European Software Testing Summit is a one-day event, which will be held on the 21 November 2018 at The Hatton, Farringdon, London. The European Software Testing Summit will consist of up to 100 senior software testing and QA professionals, who are eager to network and participate in targeted workshops. All delegates will receive printed research literature, have the chance to interact with The European Software Testing Awards’ experienced Judging Panel, as well as receive practical advice and actionable intelligence from dedicated workshops.

devtestsummit.com

softwaretestingawards.com

softwaretestingsummit.com



EDITOR'S COMMENT

A GROWING CONCERN
BARNABY DRACUP, EDITOR

According to the trade association UK Finance, there were 13.2 billion debit card payments in the UK last year, compared to 13.1 billion in cash, making 2017 the first year where cash was no longer king. Of course, all this is only made possible by the software which supports it. Since Payleven released the first device onto the UK market in 2012 it’s been the new and smaller businesses which have really benefitted from this explosion of tech, with businesses and individuals alike using these fast, efficient systems to power their trade activity. Compatible with iOS and Android smartphones, tablets and other devices, these facilitators of commerce are now common, and consumer trust is now firmly in place regarding these types of machines and the contactless payments they make on them.

The UK Finance report stated that the rise of contactless payments is a key factor in increased debit card use, with almost two-thirds (63%) of adults now using contactless cards and the average shopper making nine contactless payments per month, up from five in 2016. By 2027, UK Finance expects adults to be making 22 contactless payments per month.

All this, of course, raises a challenge for financial institutions – as well as any new kids on the block – to protect consumers’ data and do their bit to eliminate card fraud during this time of rapid evolution. As these systems grow, change and develop, they require continuous testing to ensure security is never compromised while offering customers the ability to buy, sell and transfer money in any way they want.

But what about the big contenders? In early June, when VISA’s payment network went down across Europe, the outage was a stark reminder of the fragility of these centralised payment networks and reinforced how complex these global economic systems are – and also how difficult it is to recover from problems after they begin to spiral out of control.

So, with consumer protection, data and security concerns paramount – and not forgetting user experience and performance – what considerations must testers now face up to? With retail e-commerce sales predicted to double over the next three years, we take a look at a possible revolution in static analysis with the implementation of a unique code-as-data approach (p.4) to scale approaches for preventing security vulnerabilities in the software enabling these types of systems. Consumer security is obviously paramount, so is it time retailers and manufacturers of IoT devices began self-regulating and setting their own cyber-security standards? (p.8)

What is the one most important aspect of website and app design in relation to customer revenue and retention? Performance, of course (p.10). It is vital that, during peak business periods, the customer experience is as smooth and as bug-free as possible if enterprising retailers are to maximise their earnings.

All this takes investment, of course, and return on investment (ROI) is now at the centre of every business function. Robust test data management strategies are essential to better, more informed testing (p.16) and to realising a better return on investment from available tools and resources. We take a look at the growing need for ROI from test automation (p.20) and the challenge modern businesses now face in demonstrating the value that can be derived from a testing budget.

Whatever solutions can be derived, there are certainly some interesting times ahead in the world of retail and e-commerce.

JULY 2018 | VOLUME 10 | ISSUE 3

© 2018 31 Media Limited. All rights reserved. TEST Magazine is edited, designed, and published by 31 Media Limited. No part of TEST Magazine may be reproduced, transmitted, stored electronically, distributed, or copied, in whole or part without the prior written consent of the publisher. A reprint service is available. Opinions expressed in this journal do not necessarily reflect those of the editor of TEST Magazine or its publisher, 31 Media Limited. ISSN 2040-01-60

EDITOR: Barnaby Dracup, editor@31media.co.uk, +44 (0)203 056 4599
STAFF WRITER: Leah Alger, leah.alger@31media.co.uk, +44 (0)203 668 6948
ADVERTISING ENQUIRIES: Shivanni Sohal, shivanni.sohal@31media.co.uk, +44 (0)203 668 6945
PRODUCTION & DESIGN: Ivan Boyanov, ivan.boyanov@31media.co.uk; Roselyne Sechel, roselyne.sechel@31media.co.uk

31 Media Ltd, 41-42 Daisy Business Park, 19-35 Sylvan Grove, London, SE15 1PD
+44 (0)870 863 6930, info@31media.co.uk, testingmagazine.com
PRINTED BY: Pensord, Tram Road, Pontllanfraith, Blackwood, NP12 2YA
softwaretestingnews | @testmagazine | TEST Magazine Group


Viva la revolution! With retail e-commerce sales predicted to double over the next three years, it’s time for a revolution in static analysis and the implementation of a code-as-data approach to security threats

As the saying goes today, 'software is eating the world' [1], and the rapidly increasing threat of a cyber security attack is one of the things slowing companies as they race to feed the growing appetite for digitisation. Retail e-commerce sales are set to more than double by 2021 [2]. At the same time, security incidents are rising 38% year-on-year [3]. The pressure to handle cyber security threats efficiently is key for any company that wants to achieve these growth targets. This article will present a unique new code-as-data approach to scale the prevention of security vulnerabilities in the software enabling these systems.

Software is the backbone of these e-commerce systems, providing the building bricks for them to develop. One of the major challenges today is that no two systems are identical. Two different retailers with an e-commerce front end may provide the same experience to an end user, but achieve it using completely different software technologies and stacks. This means that


commercial off-the-shelf (COTS) tools will struggle as the subtle nuances of each system need to be addressed. Furthermore, the rapidly evolving field of cyber security means there are constantly new patterns or styles of threats being identified, and these need a quick response by retail organisations’ teams to ensure these types of security vulnerabilities don’t exist in their systems. This is no trivial effort, when you consider a company like Google is estimated to have over 2 billion lines of code for their internet services software alone [4]. If a single version of Google’s code were printed out, it would take over 100 years to read completely! With cyber security related issues continuing to grow, despite the significant US$55 billion [5] annual investment in preventative technologies by companies, perhaps it’s time for a paradigm change; a revolution in how we confront the cyber security challenges that companies face today, as they struggle to manage the rapidly growing demands of the retail and e-commerce industries.

NIROSHAN RAJADURAI
CODE QUALITY & CYBER SECURITY EXPERT

Niroshan has helped companies across the world transform how they build their software. With the growing requirement for internet connectivity in these systems, he is currently introducing unique approaches and technologies to assist in combating the increasing number of cyber security threats.



In hunting for cyber security vulnerabilities today, most companies use one or a mixture of the following approaches:
• Running a barrage of code analysis tools, which go through a fixed set of static analysis rules to look for known patterns in the software
• Using penetration testers who use various tools to analyse the binary code of the application
• Using crowdsourcing techniques, like bug bounty programmes.

While these approaches can be effective and have done a reasonable job in reducing exposure to vulnerabilities so far, they are struggling to scale and keep up with the growth in software most organisations are seeing today. The main challenge is that these approaches still require significant human effort in triaging results to pick out the real issues from the noise (also known as false positives). Often the developer who wrote the defective code has moved on to other activities and may not even remember the context or specific reason the code block was written in that way. This slows down efforts to fix the vulnerability, while the developer re-acquaints themselves with the code. Then there may be a full software development iteration before that change is merged into a release candidate of the code.

Typically, static analysis tools that provide automated code inspection are based upon industry standard code implementations. But most organisations have some coding patterns or implementation details that are unique, perhaps due to a specific architecture or design pattern choice. This means a large set of standard rules is likely to return results that have a high false-positive rate because the rule, as designed, does not fit with the code as implemented. There’s usually no easy way for users to refine or extend the rule, so rules are often turned off just to prevent fatigue from reviewing these false positives. We can call this sort of static analysis approach ‘static-static analysis’, because users have limited options for extending or customising them.

A NEW UNDERSTANDING

To address these challenges, a new approach is needed. One which provides the ability to deeply understand the software, and quickly and easily analyse it for any perceived issues in an extensible way. There’s a parallel with how easy it is to retrieve information flexibly from relational databases: they are good at storing data and then using a query language like SQL, a

user or an organisation can extract elements for the analysis they want to complete, or a report they want to produce. In a similar way, if we apply that same concept to code, the problems of static-static analysis will be eliminated. A compiler or parser takes the source code that a developer writes and transforms it into executable code; something a computer can execute. However, instead of converting this source code into executable code we can, as an alternative, map out the relationships of all the function calls, variables and types into something known as an abstract syntax tree (AST). The AST, in turn, can be mapped into a relational database, a knowledge base that encapsulates all the information in the code. In the same way that we can query a relational database to explore data, now we can query this knowledge base to explore the relationships between these functions, variables and types. This is what we call code-as-data. Once we start to inspect code in this way, there are limitless analyses we can conduct. Let us consider the three following scenarios and see how our approach can be more effective than today’s static-static analysers.
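To make the idea concrete, here is a minimal sketch of what code-as-data can look like, written in Python rather than a dedicated query language such as QL; the sample source, the table layout and the choice of eval() as the call of interest are purely illustrative:

import ast
import sqlite3

SAMPLE = """
user_id = input("id: ")
print(len(user_id))
eval(user_id)
"""

# 1. Parse the source into an abstract syntax tree (AST).
tree = ast.parse(SAMPLE)

# 2. Flatten the AST into relational facts: one row per argument of a call.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE call_args (callee TEXT, arg TEXT, lineno INTEGER)")
for node in ast.walk(tree):
    if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
        for arg in node.args:
            arg_name = arg.id if isinstance(arg, ast.Name) else type(arg).__name__
            conn.execute("INSERT INTO call_args VALUES (?, ?, ?)",
                         (node.func.id, arg_name, node.lineno))

# 3. Query the knowledge base, exactly as we would query any relational data.
query = "SELECT callee, arg, lineno FROM call_args WHERE callee = 'eval'"
for callee, arg, lineno in conn.execute(query):
    print(f"line {lineno}: {callee}({arg}) - review for unsanitised input")

A real engine builds a far richer model (data flow, types, call graphs), but the shape of the workflow, extracting the facts once and then asking many questions of them, is the same.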


"Perhaps it’s time for a paradigm change; a revolution in how we confront the cyber security challenges that companies face today"

STANDARDS COMPLIANCE

As discussed previously, today companies achieve compliance with industry coding standards using a static analysis tool partially through fixing the code, and partially through disregarding or turning off certain rules in certain areas of the code. The latter approach is forced because the code needs to be constructed in a certain way, and the coding errors raised by the static analyser are considered false positives – the analyser is giving warnings that developers feel are not justified. This situation is very costly in that developers waste massive amounts of time to review and dismiss what turn out to be false positives. Additionally, once developers stop trusting the validity of the results, they may start dismissing them without deep investigation, potentially missing real vulnerabilities, which can have severe consequences. A better alternative would be to refine the static analysis rule, improving the rule such that it permits the current coding pattern, but still checks it for any potential vulnerabilities or issues. If we use our code-as-data approach,

[1] https://a16z.com/2016/08/20/why-software-is-eating-the-world/
[2] US$4.88 trillion - https://www.statista.com/statistics/379046/worldwide-retail-e-commerce-sales/
[3] PwC, The Global State of Information Security Survey 2015
[4] http://www.visualcapitalist.com/millions-lines-of-code/
[5] https://www.morganstanley.com/ideas/finding-the-next-100-billion-dollar-software-giant.html


"A similar approach was applied by NASA JPL when the Mars Rover, Curiosity, was en route to Mars. When a critical issue was identified in the source code of Curiosity’s landing module, the engineers at JPL decided that other instances of the issue might exist and decided to apply the variant approach"


this is simple to do. We take our existing query, refine it for our application (by adding, for example, an extra condition in our search criteria), and then continue to use it without disabling any of the safeguards on our code. As a result of this approach, developers will trust the results and avoid wasting their time on reviewing false positives.
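Continuing the illustrative Python/SQLite sketch from earlier (the sanitise_input() helper name is invented, and a real rule would reason about data flow rather than matching argument names), refining a rule can be as small as adding one condition to the query instead of switching the check off:

import sqlite3

# Facts extracted from the code base, as in the earlier sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE call_args (callee TEXT, arg TEXT, lineno INTEGER)")
conn.executemany("INSERT INTO call_args VALUES (?, ?, ?)", [
    ("sanitise_input", "user_id", 2),   # cleaned by our project's own helper
    ("eval",           "user_id", 3),   # would previously be a false positive
    ("eval",           "raw_body", 7),  # still a genuine finding
])

# The refined rule: flag eval() calls, except where the argument has already
# been passed through sanitise_input(), rather than disabling the rule.
refined_rule = """
    SELECT callee, arg, lineno FROM call_args
    WHERE callee = 'eval'
      AND arg NOT IN (SELECT arg FROM call_args
                      WHERE callee = 'sanitise_input')
"""
for callee, arg, lineno in conn.execute(refined_rule):
    print(f"line {lineno}: {callee}({arg}) still needs review")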

VARIANT ANALYSIS

In variant analysis we look at an issue, be it a software bug, or a security vulnerability, that has been identified in our application. It means, especially if this is after the application has been deployed, that our current method of verifying and validating our application has a hole in it. So, while it is good the issue has been identified, it is highly probable that a similar issue exists elsewhere in our application or in another application within our software portfolio. Without using a code-as-data approach, finding such variants requires manually inspecting all the software, exploring similar blocks of code, potentially using simple tools such as Grep. This is all the more complicated if the issue is due to a complex data flow that crosses many procedure boundaries. This sort of method might be manageable for an application that is a few thousand lines of code but would be infeasible and incredibly time consuming for an application hundreds of thousands, or millions, of lines of code in size.

Returning to our code-as-data approach, we could construct a simple query that could look for the same attributes as the issue or vulnerability we identified. If we consider what such a pseudo query might look like for a simple example, like checking that all user input is sanitised and checked for validity prior to being used, we can consider a simple syntax as follows:

from:   /* The sub-datasets of interest */
        functionCallsOfInterest, parametersOfInterest, userInputs
where:  /* The conditions */
        functionCallsOfInterest has parametersOfInterest, and
        data flows from userInputs to parametersOfInterest, and
        userInputs is not sanitised on the path to parametersOfInterest
select: /* The results: unsanitised usage of user data */
        functionCallsOfInterest

Figure 1 - Pseudo code for a query


We could refine such a query on the code base with the known issue, and then quickly take it and run it on our entire software portfolio to check if other variants exist. A query like this can be quick to construct, maintain, and manage. Furthermore, it is also easy to extend or inherit from in the future. One good example of a query language and engine is QL, a query language and tool chain developed by Semmle. This type of variant analysis is powerful for security response teams or bug bounty teams. As soon as an issue is identified, they can quickly construct the query, and then identify other variants in their code. Security teams can then integrate the analyses in their organisation’s continuously-run set of analyses, to share security knowledge internally and prevent regression. A similar approach was applied by NASA JPL when the Mars Rover, Curiosity, was en route to Mars. When a critical issue was identified in the source code of Curiosity’s landing module, the engineers at JPL decided that other instances of the issue might exist and decided to apply the variant approach. Within 20 minutes a query was constructed. The JPL team ran the query across the full Curiosity control software, where it identified the original problem along with more than 30 other variants, of which three were in the critical ‘entry, descent and landing’ module where the original problem had been found. The team addressed all issues and patched the firmware remotely. Not long after, the Curiosity Rover landed safely on Mars [6].

A NEW UNDERSTANDING

The other area where code-as-data is a powerful tool is when we want to explore, navigate and analyse our code for new types of vulnerabilities. It’s common at ethical hacker forums for new types of vulnerabilities or exploit mechanisms to be identified. As a security researcher responsible for safeguarding your company and ensuring its software portfolio does not contain these issues, the follow up activities from such an event can be daunting. Once a concept is disclosed, immediately both ethical and unethical hackers will be looking for ways to exploit the concept. The ability to quickly refine and improve queries, as well as inherit from prior work, means that in real time a new query to detect the new problem can be constructed, with immediate feedback from each iteration of the query. Let’s consider a simple example of an issue (CVE-2017-8636 [7]), identified recently



Figure 3 - Example deployment workflow for queries

in Microsoft’s ChakraCore [8] code base. ChakraCore is the core part of the Chakra JavaScript engine that powers the Microsoft Edge browser and it was recently open sourced [9] by Microsoft. A subtle nuance of the C/C++ compiler is its behaviour with integers that are not a standard size. Consider the following scenario: in computer programming, an integer overflow occurs when an arithmetic operation attempts to create a numeric value outside of the range that can be represented with a given number of bits – either larger than the maximum or lower than the minimum representable value. In the example below, a developer performing a sum operation on the variables ‘x’ and ‘v’ might naively construct the following logical expression to detect if an overflow has occurred, so that corrective action can be taken:

x + v           // sum
if (x + v < x)  // logical expression to test for overflow

The issue occurs when ‘x’ and ‘v’ are not of an integer type (memory size is 32 bits), for example a short type (memory size is 16 bits). In the logical expression above, when ‘x’ and ‘v’ are short types, the compiler will generate code that temporarily expands the types of ‘x’ and ‘v’ to that of an integer while it does the sum and performs the logical test. As a result, the logical expression above will never fail, even for the overflow scenario discussed above. This issue can commonly occur when the types of ‘x’ and ‘v’ are user-defined sub-types and it is unclear to the developer that both variables are not of an integer type. If the result is then used to access an array, this could result in a memory corruption and hence a possible vulnerability that can be exploited. To look for an issue like this we could express it as a simple query:

predicate isSmall(Expr e) {
  e.getType().getSize() < 4
}

from AddExpr a, Variable v, RelationalOperation cmp
where
  a.getAnOperand() = v.getAnAccess() and
  cmp.getAnOperand() = a and
  cmp.getAnOperand() = v.getAnAccess() and
  forall(Expr op | op = a.getAnOperand() | isSmall(op)) and
  not isSmall(a.getExplicitlyConverted())
select cmp, "Bad overflow check"

Figure 2 - Sample query from the Semmle QL code-as-data tool
You can see how quick and easy it is to construct a query to look for this issue, and once we have it, we can deploy it to analyse the whole portfolio, not just for ChakraCore, but actually for any other code that might have the same issue. In this example, the query is run on LGTM.com on over 76,000 open source projects, looking for variants of this issue in every commit or pull request to those projects. Once a query has been created for a new issue, it can be included in the continuous integration process to provide an automated code review for any changes to the code. This ensures the same issue, or a new variant of the issue, cannot creep back into the code base in the future (see figure 3 above).
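As a hedged sketch of how such a check might be wired into a build pipeline (a real deployment would use the query engine's own integration, as LGTM does for pull requests; here a crude regular expression stands in for the semantic query, purely to show the gating step):

import os
import re
import subprocess
import sys

# Crude stand-in for the "bad overflow check" query: matches "if (x + v < x)".
BAD_OVERFLOW_CHECK = re.compile(r"if\s*\(\s*(\w+)\s*\+\s*\w+\s*<\s*\1\s*\)")

# Files touched by the change under review.
changed = subprocess.run(
    ["git", "diff", "--name-only", "origin/main...HEAD"],
    capture_output=True, text=True, check=True,
).stdout.split()

findings = []
for path in changed:
    if not os.path.exists(path) or not path.endswith((".c", ".cc", ".cpp", ".h")):
        continue
    with open(path, errors="ignore") as handle:
        for lineno, line in enumerate(handle, start=1):
            if BAD_OVERFLOW_CHECK.search(line):
                findings.append(f"{path}:{lineno}: possible bad overflow check")

print("\n".join(findings) or "No new matches for this query.")
sys.exit(1 if findings else 0)   # a non-zero exit fails the build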

THE NEXT STEPS

With the rapid growth of software in the retail and finance industry, and the criticality of protecting e-commerce transactions and user information, revolutionary steps need to be taken to address the challenges of cyber security. Fortunately, many companies are already demonstrating these approaches are possible and helpful in identifying these security vulnerabilities in software. The concept of treating code-as-data is demonstrably effective and could introduce significant cost savings in issue and vulnerability detection as code bases continue to grow. It is common that the same kinds of logical coding mistakes are made over and over again, sometimes repeatedly within a single project, and sometimes across the whole software ecosystem. These mistakes are the source of many of today’s critical software vulnerabilities. Using the code-as-data approach, and a query language like Semmle’s QL [10], you can codify such mistakes as queries, find logical variants of the same mistake elsewhere in the code, and prevent similar mistakes from being introduced in the future by automatically catching them before code gets merged. The scalability and flexibility of this approach means that, over time, queries can be inherited from and tweaked and improved as new insights are developed on the code and new coding paradigms are employed.

[7] https://www.exploit-db.com/exploits/42478/
[8] https://github.com/Microsoft/ChakraCore
[9] https://blogs.windows.com/msedgedev/2015/12/05/open-source-chakra-core/
[10] https://semmle.com


Should we regulate the IoT?

Consumer security is paramount, so is it time retailers and manufacturers of IoT devices began self-regulating and setting their own cyber-security standards?

MARCO HOGEWONING
SENIOR EXTERNAL RELATIONS OFFICER, RIPE NCC

Marco Hogewoning is the Senior External Relations Officer and Technical Advisor for RIPE NCC. As part of the external relations team, he helps lead RIPE NCC’s engagement with membership, the RIPE community, government, law enforcement and other internet stakeholders.

By 2020 there will be a staggering 200 billion Internet of Things (IoT) devices in operation, according to Intel. This high number of end-points will change the way we communicate with each other and interact with our surroundings, and will have a significant impact on the UK marketplace. However, with so many connected devices out there sending data over the internet, there is the growing prospect of serious security issues. If the IoT is not handled with care, the number of cyberattacks and data breaches will increase markedly. The steps that are taken by the authorities and other stakeholders over the next period will be crucial to ensuring a safe, efficient and successful IoT network that drives development and growth. Bizarrely, it’s the accelerated growth in the IoT space that is raising the spectre of greater security problems. The commercial pressures that manufacturers are under


have sparked a race to be first-to-market with new IoT products. Of deep concern is that manufacturers often overlook security when developing new devices. This is typically because they may lack institutional experience around working with connected devices or might not be able to afford the extra time or budget to build-in adequate security. From a security perspective the implications of this can be dire.

TAKING RESPONSIBILITY

This somewhat ad-hoc approach to security, along with a lack of any defined IoT security standards, has resulted in damaging cyber security events, such as the Mirai Botnet incident in October 2016. This crippling attack saw enormous blocks of IoT devices infected with malware, which were then used to attack core internet infrastructure. Mirai was a stark reminder of how serious




"If a product met standards and the terms of its guarantee, it ceased to be the responsibility of the manufacturer. But IoT devices are different, as they are linked to the internet, meaning the vendor must continue to provide security updates. It’s also not yet clear who is ultimately responsible"

cyberattacks on vulnerable IoT devices can be. Alongside a lack of widely-adopted IoT security standards, there is the huge question of who is responsible for the security of these connected devices. Most IoT devices are designed to remain active for years, perhaps even decades. Can we really expect consumers to ensure their devices are kept patched and up to date? Unlike a home PC, connected devices generally lack a user interface, so even the question of how to notify customers about updates remains a challenge. In the past, if a product met standards and the terms of its guarantee, it ceased to be the responsibility of the manufacturer. But IoT devices are different, as they are linked to the internet, meaning the vendor must continue to provide security updates. It’s also not yet clear who is ultimately responsible for making sure an individual device is updated, or what happens when an IoT manufacturer goes out of business and is unable to support their product. This is not a clear-cut situation.

REGULATION CAUTION

It is heartening to see the UK government take steps towards making the IoT a safer space for everyone concerned. In March, the Department for Digital, Culture, Media and Sport (DCMS) announced a new IoT code of practice, focused on driving up the overall security of the IoT ecosystem. These measures will help to ensure that all stakeholders, including manufacturers, take security seriously. Laying out clearer roles and responsibilities for manufacturers and others operating in this space will help businesses to better understand their own role in protecting the end user. Moreover, it will help along the realisation that security needs to be built into devices from the beginning. While this is positive, caution should still be exercised towards any approach that introduces formal regulation of the IoT. Initiatives like the IoT code of practice will play a key role in education around IoT security – but should not negatively impact

innovation and dynamism in the IoT space. On the other hand, if the UK government were to establish a centralised regulatory body for the IoT, it would likely face some tough challenges. Firstly, such a regulator would need to bring together a huge array of different competencies from a range of different fields, which is not an easy thing to do. It’s therefore unlikely such a body would be able to do its job without threatening competition or vibrancy in the market. On the other hand, an approach based on sectors, where existing industry regulators work with IoT stakeholders to discuss shared values, could be worthwhile. Collaboration across sector boundaries and between different stakeholders is what the internet was developed on and could see firm IoT security standards come into operation. Self-regulation could see that the IoT market remains secure, flexible, dynamic and successful. A great example of this is the DCMS establishing its code of practice in direct collaboration with a range of manufacturers, retailers and the National Cyber Security Centre. This approach, not built on control or authority, but around genuine cooperation between regulators and other businesses and organisations, has worked for the internet. This type of effort promotes trust, openness, and collaboration to establish a series of shared values and standards, which could be of huge benefit to the IoT.

LOOKING TO THE FUTURE

The security of consumers matters, but there is a long way to go until this is seriously addressed. It is encouraging to see collaborative initiatives around IoT security being launched into the market. A healthy debate is the first step in establishing the voluntary IoT security standards that could see the network thrive. However, a change in mindset is also required, and while commercial pressures and objectives are important, all organisations involved in the IoT ecosystem must consider them even-handedly with the need for security. Everyone must work together towards a unique framework for a truly unique network.


Performance under pressure

What is the one most important aspect of website and app design in relation to customer revenue and retention?

The answer, perhaps obviously to some, is performance – and yet it still seems to be a niche and poorly comprehended area within the testing industry. Load and performance tests play a crucial role within our business and are one of the few areas where the benefits of testing can be tangibly and very closely related to user experience and its impact against sales. We are the UK market leader in holiday rentals with over 18,000 individual properties listed through our company, and our websites provide 67% of our transactional revenue, therefore the online experience we offer our customers is crucial to our business and we want to make sure that it’s stable, responsive and performant at all times. Our strategy for load testing is geared very much around patterns of business activity (PBAs), or substantial project change, such as the introduction of a new website or significant architectural restructure. We have two


major PBAs each year, around the New Year period and school summer holidays, when most of our customers are looking to take a break away or grab a great deal on their next holiday. Load tests are run against our production environment around eight weeks prior to the expected peak period and are invaluable for identifying areas for improvement, before hitting that crucial trading period.

IMPORTANCE OF ANALYSIS

Data analytics and business analysis play a critical part in the planning of a load test. Total sessions and user volumes from previous peak periods are used to derive load volumes, alongside business expectations around sales growth. Typically, we will run load at peak volumes, and at peak plus a 10%-20% tolerance for growth.
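As a simple illustration of that arithmetic (the session figures below are invented for the sketch, not real volumes), deriving load and stress targets from a previous peak might look like this:

# Derive load-test targets from a previous peak, plus growth and contingency.
# All input figures are illustrative.
peak_sessions_last_year = 42_000      # concurrent sessions at the last peak
expected_sales_growth = 0.12          # growth expected by the business
growth_tolerance = 0.20               # run load at peak plus up to 20%
stress_contingency = 0.20             # ramped stress test adds a further 20%

load_target = peak_sessions_last_year * (1 + expected_sales_growth)
load_ceiling = load_target * (1 + growth_tolerance)
stress_target = load_ceiling * (1 + stress_contingency)

print(f"Load test target:   {load_target:,.0f} sessions")
print(f"Load test ceiling:  {load_ceiling:,.0f} sessions")
print(f"Stress test target: {stress_target:,.0f} sessions")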

FELICITY LORD QA MANAGER VACATION RENTALS UK

Felicity is an experienced Quality Assurance Manager with a demonstrated history in test management, system integration testing, systems thinking and test planning in the leisure, travel and tourism industry.



Once the load tests are performed, a ramped stress test is conducted with a further 10-20% contingency. The tests themselves only form part of the load test activity, as the analysis and interpretation of the results, using the information gathered from logs and our load test tools, then drives a substantial body of effort to determine what changes are required to fine tune our production environments and to put these changes into effect before peak.

Performance testing, for us, is a subtly different style of test; we are driving traffic at smaller volumes of virtual users, compared to the load test, as we look to identify issues with speed of response, page rendering/loading, resilience, and performance errors. Performance testing is an entirely in-house activity, whereas we utilise consultants to assist with our load testing, and we are much more likely to run our performance tests in a non-production environment.

With DevOps, my strategy for performance testing is now to include performance tests as part of our CI/CD models for development so that performance forms part of our continuous automation pattern for build and release. Performance tests should check performance metrics on each deploy to ensure no performance degradation has been encountered. It is important that this should be a completely automated process with very little maintenance required in order to get the most benefit. As well as performance testing, ongoing performance monitoring is an important aspect of keeping track of your company’s performance requirements. My team use a range of tools to provide 24/7 monitoring of performance across our infrastructure, both within production and our non-production environments.
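By way of a hedged sketch of that kind of automated check (the environment URL, pages and response-time budgets below are hypothetical), a deploy-time performance gate could be as simple as:

# Minimal deploy-time performance gate: fail the pipeline if key pages
# respond slower than an agreed budget. URL and budgets are illustrative.
import statistics
import time
import urllib.request

BASE_URL = "https://staging.example.com"        # hypothetical environment
PAGES = {"/": 0.8, "/search?loc=lakes": 1.5}    # path -> budget in seconds
SAMPLES = 5

def measure(url: str) -> float:
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

failures = []
for path, budget in PAGES.items():
    timings = [measure(BASE_URL + path) for _ in range(SAMPLES)]
    median = statistics.median(timings)
    print(f"{path}: median {median:.2f}s (budget {budget:.2f}s)")
    if median > budget:
        failures.append(path)

if failures:
    raise SystemExit(f"Performance budget exceeded for: {', '.join(failures)}")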

CLOUD FLEXIBILITY BENEFITS

The transition to cloud technologies has opened up some interesting new avenues for performance and load testing, with a cloud-hosted production environment versus physical hardware giving a range of different options for how we support our websites and tailor our infrastructure to meet our expected needs. Cloud technologies also allow for a greater tool choice and flexibility to support our business that can provide the required volume of concurrent users to meet our test demands. There are also environment considerations, as how you choose to host your production and non-production

environments, and any variations between them, can have consequences for how you conduct your tests and the validity of your test results. I have had to undertake a significant investment programme at Vacation Rentals UK in order to provide for a software development lifecycle in our non-production infrastructure that is closer to mimicking production and therefore provides a more accurate reflection of how our future developments may impact performance when live.

NEW CHALLENGES

One of the biggest challenges I have found is the shortage of individuals with the required skillsets in the testing market. As there only seems to be a small pool of talented specialists in this area, they have influence over the hiring market and it has proven difficult to attract talent, particularly in more regional locations. For my purposes, I have chosen a model that is a mix of in-house self-generated capability and consultants or third-party vendors. Developing our own in-house capability gives us the ability to make performance testing an embedded part of the build and release activity. Performance testing can be scheduled and tailored to meet the needs of each squad or set of project deliverables. It gives our teams independence and accountability over their performance and load testing needs.

The introduction of new functionality into our infrastructure can be a difficult thing to then interpret into non-functional needs and requirements – but having some in-house knowledge and understanding of performance testing can help derive more meaningful performance goals. This has led to an interesting development for my team as they need the technical know-how to structure, run and interpret load and performance tests, but they also need the skills to assist non-technical members of our organisation with teasing out their business needs, so these can be translated into suitable non-functional requirements.

As more organisations strive for a ‘digital first’ mentality, I believe the importance of load and performance testing will become ever more critical to a company’s success. As in Agile methodology, where quality is now being widely recognised as everyone’s responsibility, so too should everyone be seen as accountable for performance and how this directly relates to the success of their organisation.


"Our websites provide 67% of our transactional revenue, therefore the online experience we offer our customers is crucial to our business and we want to make sure that it’s stable, responsive and performant at all times"


WHO WATCHES THE WATCHMEN?

Today’s fiercely commercial world places increasing pressure on the professional test manager to provide a complete, truthful and independent picture of the state of testing progress

In the world of independent software verification and validation, nothing ever happens in isolation and, at times, it is all too easy to forget the critical ‘independent’ viewpoint, with the potential result that corporate reputations can be seriously tarnished. Test effort is cumulative. But it is not until the arrival of a period of sustained business interaction that the diligence of accumulated verification ‘fact’ has the chance to dovetail with the creativity of commercial validation ‘opinion’. Such a window affords the opportunity to combine the discipline of crafted test case execution with exploratory testing imagination. In a waterfall-world, that occasion is usually referred to as the UAT test phase.


In an agile world, where the desire is to obtain broadest business approval across a related number of sprints, a label of UAT hardening is most often used. In such a way, the prospect of paying more than cursory lip service to individual archipelago land masses is present – an occasion to really join up the proverbial dots, so to speak. Howsoever named, this sustained business user interaction is the first time these users will have had sight of their new functionality and also time to give it a good kicking. Here is where the most convincing evidence accrues that illustrates whether the right product is being built and if it is being built the right way. In the eyes of many, this is the most critical period of testing in an entire project.

IVOR KELLY HEAD OF TESTING & UAT CONSULTANT

Ivor Kelly has spent more than a quarter of a century in the software testing profession. These days, he derives most pleasure in helping others mature their test function and building ‘top gun’ style training solutions.



Left to their own separate devices, neither test verification effort nor test validation activity is likely to be as effective as when a practical, well-resourced and agreed project plan bonds them together. To realise such an objective, of course, requires early and on-going stakeholder communication to bring about that goal. It also requires training that is timely, well organised and enjoyable. Excellence in this realm is seldom achieved by uninspired individuals who merely regurgitate slides they themselves saw for the first time a few weeks previously.

By way of illustration, one of the most effective and enjoyable training experiences that I have had the pleasure of witnessing was the education associated with a worldwide series of training workshops for a new database offering. The fellow who designed and authored that courseware did a remarkable job in creating original material, including cardboard cut-outs which, when folded origami-style (his wife is a Japanese lady), greatly assisted understanding of challenging technical topics. He had allegedly spent much time sitting on the desks of his development and sales colleagues, not going away until he obtained the information he required. Rumours also circulated that he occasionally brought his cat into work for some weekend company and inspiration and, as further testament to his character, it turned out those tales were true and the cat was called Garfield! But it was this same character who was a natural teacher and one that knew how to create magic in the classroom, so much so that he was earmarked for a seven-week trip to Toronto, San Francisco, Sydney, Wellington, Singapore, Amsterdam and London to deliver the material he shaped to global technical teams. Furthermore, so well was that training received that he was persuaded to visit those locations again to deliver the same innovative material to the sales teams there. With business-class travel, naturally.

The message here is to seek out the best-of-the-best in this arena, since training is such a key enabler of every test verification and validation activity.

COLLABORATION IS KEY

Turning now to the wider world of testing and to the subject of collaborative teamwork in particular, where there is a very real and present danger.

These days the majority of senior testing vacancies emphasise the need for collaboration above pretty-much everything else. Collaboration with the development teams, collaboration with the service management function, collaboration with business users and their line management and collaboration with the technical BAU functions. Plus, collaboration with other project teams to ensure there are no nasty surprises hiding in anyone else’s woodwork, either. Most of the time, a collaborative mentality is good testing practice. Questions are asked and answered. Problems are discussed and resolved, practical test artifacts get created and approved. Ergo, confidence grows in the ability of the entire team to create verification and validation excellence. Every now and then, however, something rather important called ‘test independence’ gets forgotten. In the same way that the overriding responsibility of every auditor is to form an independent opinion of the accuracy of an organisation’s accounts for its shareholders’ guidance, and every physician who is charged with upholding the Hippocratic Oath; the professional test manager is mandated with providing an independent assessment of the status of testing on behalf of project stakeholders. One might well ask, ‘if not him or her, then who?’ Consider the hypothetical situation where a major financial institution desires to schedule significant migration activity over a particular weekend. Relevant communications go out, together with notification that certain commonly-used customer routes to affected data would be limited during the migration window. All well and good, so far. But, let us further assume that when the anointed weekend arrives, pre-migration test activity is substantially incomplete and that this status is known to senior management. However, the go-ahead for migration is given with the result that a multitude of serious data errors occur when subsequent access is attempted by customers using their usual access routes. Questions would therefore have to be asked, at levels up to and including high places, to establish whether the in-house test function and their out-house advisors had been collaborating with other project team members to such a degree that they could no longer see the proverbial wood for the trees.


“Most of the time, a collaborative mentality is good testing practice. Questions are asked and answered. Problems are discussed and resolved, practical test artifacts get created and approved... Every now and then, however, something rather important called ‘test independence’ gets forgotten”


This is where professional ethics and an ability to sleep nights have to play their role in forming and delivering a complete, truthful and independent picture. In such a situation, one would also trust that those who not only defined but also agreed the test approach would face some fairly blunt cross-examination. There are times when a risk-based test approach is not the correct one. When a certain amount of time is absolutely required for a test team to collect an appropriate amount of functional and non-functional test evidence, then that time must be allowed if a respected commercial reputation is not to be seriously tarnished. Which is why an independent software verification and validation voice is so critical.

BEWARE THE IDES OF OVER-COLLABORATION!

Earlier, we touched on good testing practice. In cases such as the hypothetical situation described, perhaps the best advice is for every test manager to be aware of the Ides of Over-Collaboration. There comes a time when collaboration must be trumped by the principle of independence so that any subsequent charge of ‘conflict of interest’ can be avoided. Such a view may have its detractors, but there is mounting evidence the attitude of a sensible Chief Information Officer (CIO) is aligned with that of an independent test function. In a recent CIO article, directional wisdom was expressed as not only the ability to get colleagues rowing in the same direction in times


"Every test manager worth their salt needs to listen to the voice in their head that reminds him or her that their primary obligation is to deliver a timely and independent test verdict for the benefit of all stakeholders – and stand by it"

of disagreement but also to be fearless in explaining the rationale, obtaining commitment and making the tough calls in finding a path through. To talk straight, be honest and get comfortable dealing with conflict was the other valuable advice given. Well said, and thank you, Jo Abernathy.

Before closing, there is one other aspect to collaboration that is so fundamental that it must not be subject to the principle of independence outlined above. A while back, in rural Hampshire, a situation occurred during a large programme when an onshore developer fell and sprained her ankle. Painful, but it happens. Here, it was the test manager (the leader of the ‘traditional enemy’) who rushed the patient to hospital. Not because he had to. But because the human aspect of any collaborative situation must trump even an independent point of view. For the record, the developer recovered well but had to put both feet up for a day or two.

That one exception aside, every test manager worth their salt needs to listen to the voice in their head that reminds him or her that their primary obligation is to deliver a timely and independent test verdict for the benefit of all stakeholders – and stand by it. In a number of his books, Sir Terry Pratchett poses the question quis custodiet ipsos custodes? (who watches the watchmen?) At some point, the test profession is going to have to give that question some serious thought and come up with a robust answer.


Be a winner - REGISTER TODAY


Enter The DevOps Industry Awards and start on a journey of anticipation and excitement leading up to the awards night – it could be you and your team collecting one of the highly coveted awards.

KEY DATES
24 August - Booking & Submission Deadline
12 September - Finalists announced
16 October - Awards Gala Dinner

www.devopsindustryawards.com


DELIVERING REAL VALUE

Robust test data management strategies are essential to better, more informed testing and realising a better return on investment from your tools and resources

When it comes to test data management it is true that no single approach, method, diagram or vision will give you everything that you want. What is particularly true is that it doesn’t matter what you do: if your test data management (TDM) is not high on your list of priorities then one of the following is likely to happen:
• Verifying user needs will be incredibly challenging
• Precious time allocated to testing will be wasted creating data that doesn’t fully meet the needs of anyone
• Tests will give false positives or false negatives
• Test automation will be ineffective, frustrating and counterproductive
• You might run the risk of contravening GDPR or other laws and regulations.

Testing can be used for many things, but


one of the greatest elements is to provide a measure of confidence. Run as many tests as you like; create the world’s greatest automation factory churning out millions of tests across browsers, apps and platforms, but it really doesn’t matter unless your data is right. Cambridge computing pioneer and father of modern computing, Charles Babbage, conceived and designed the ‘Difference Engine’. While the machine was never built in his lifetime, it was clear from the earliest computers that the outputs obtained were in direct relation to the inputs. In those days computers were like steampunk machines, made from gears and cogs, requiring oil, fine tuning and dedicated expert maintenance. And yet today, almost 150 years later, TDM is no different. The same principles apply to the ‘Difference Engine’ as they do to the Sunway TaihuLight Supercomputer and the

JANE KARGER TEST DELIVERY MANAGER (FUNCTIONAL TESTING) AT UNIVERSITY INFORMATION SERVICES

UNIVERSITY OF CAMBRIDGE

Jane has spent more than 10 years in software testing, working in waterfall and agile environments. She believes in continuous improvement and an inquisitive approach to testing.



latest advances in AI. It’s all about the quality of the data available. So, here at University Information Services (UIS), Cambridge, we pay special attention to TDM. We system and regression test a number of central, single record, browser-based information systems that provide an integrated information service for our stakeholders, including colleges, administrative departments, faculties, schools, students, staff, academics and alumni.

Our team was created in 2008 to help reduce the number of defects in our production systems. Initially consisting of functional testers, the requirement was to verify and validate the work produced by developers and functional analysts before it was released to our production systems. Over time, the team grew into what it is today: a professional group of independent test analysts running thousands of functional, automated and performance tests to support business critical applications across the University. As each of the test specialisms grew (functional, automated and performance testing), we found they all needed their own TDM processes – and this is our journey to TDM maturity.

FUNCTIONAL TESTING & THE TDM JOURNEY

In the early days, one of the main challenges with functional testing was how to manage the test data. We realised that finding quality test data, and being able to store and maintain it, was just as important as writing and executing test scripts. Keeping ad hoc data in test scripts or in secure lists was not going to be sufficient for our needs. If we were to test business features and scenarios properly, we needed a data management strategy to manage the many variants of static and transitional test data. It was important that we had quality data providing consistent and correct results and it had to include the provision for single data sets for some scenarios and high volumes of data for automation and performance testing. Ideally, as reusable as possible. We also realised that TDM takes up time and we had to accept that finding the right data was going to be a time-consuming process, sometimes taking as long as it takes to test the business process or write scripts. We had to take on the administrative effort in maintaining a test data management repository as our data became invalid


quickly, which meant that our TDM process had to include frequent review and maintenance stages.

APPLYING THE SOFTWARE DEVELOPMENT CYCLE

To make sure our TDM strategy provided the data that we needed, we applied the stages used in the software development cycle – planning, analysis, design, build and maintenance. We added an extra step by communicating our test data strategy to the business for sign-off. This was a strategy that would carry us through the present and would continue to work in future years. To meet our requirements our TDM process has gradually matured into a collaborative and iterative process. It was critical for us to involve a wider audience of functional analysts, developers and users to confirm requirements and make sure all major business scenarios were fully addressed. All test data management decisions were based around discussion, process planning and demonstrations. This helped us focus on prioritising tests and improving efficiency. After much hard work we had created a TDM process with a good understanding of what was reflective of the true business process and met testing requirements.

The next most important factor was for our test environments to replicate the end user environment as much as possible, and another important consideration was to make sure that our testing environments were well defined, up-to-date and consistent. So, our test environments were refreshed from the production environment on a regular basis after each major code release, giving us data for all our testing needs. Test environments also reflected the production environment by including all the relevant interfaces, which were also refreshed. All data in the test environments was masked (scrambled) to anonymise any personal data.

Now that we had ring-fenced our data and made sure it was of sufficient quality, we then had to gather and store the data. The majority of functional test data sets were retrieved with database queries, which were then stored in test scripts or in a shared database query library. In the test scripts it was made clear if the test data was for reuse or had to be created.
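As an illustrative sketch of that kind of masking step (the database, table and column names here are invented for the example, not the actual UIS implementation), anonymising personal data after an environment refresh might look like this:

# Minimal data-masking pass run after refreshing a test database from
# production. Table, column and connection details are illustrative only.
import hashlib
import sqlite3

def scramble(value: str) -> str:
    """Replace a value with a stable, non-reversible token."""
    return hashlib.sha256(value.encode()).hexdigest()[:12]

conn = sqlite3.connect("test_environment.db")   # hypothetical refreshed copy
rows = conn.execute("SELECT id, email, surname FROM students").fetchall()
for row_id, email, surname in rows:
    conn.execute(
        "UPDATE students SET email = ?, surname = ? WHERE id = ?",
        (f"{scramble(email)}@example.invalid", scramble(surname), row_id),
    )
conn.commit()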



AGUSTIN FERNANDEZ TRUJILLO
TEST DELIVERY MANAGER (NON-FUNCTIONAL TESTING) AT UNIVERSITY INFORMATION SERVICES

UNIVERSITY OF CAMBRIDGE

Agustin is an energetic advocate of QA with more than 12 years of experience in performance testing and test automation.

FURTHER TDM CHALLENGES

Working with this test data management process provides quality data efficiently and on demand, but we still have challenges. We have to take into consideration that the same data may be used by multiple teams, which means data can be used up or become invalid. Some data may not cover every test case in a test suite, and some data may be very scarce. The costs of storage can be high, and our environments sometimes have to be scaled down.

It is also important to learn from each code release. By obtaining feedback from project teams, DBAs and users, we are able to build better test strategies for the next code release. We are not afraid to learn from experience.

TDM FOR AUTOMATION & PERFORMANCE TESTING

Automation is a key part of the UIS test strategy, and TDM has played a major role in successfully creating, maintaining and running our non-functional and automated testing services. Our automation and performance TDM strategy developed organically as our frameworks matured. We went from having one test engineer running 50 automated tests sequentially, relying on hard-coded test data with minimal horizontal scalability, to running more than 800 cross-browser distributed tests across 28 runners, covering six business-critical systems across the University. TDM therefore became indispensable in supporting operability and securing our long-term investment in automation.

At the start of our automation journey our tests were run ad hoc, and test data selection and design were not conducive to long-term operability or to the maintenance of the test assets. However, as our expertise grew we acknowledged that test automation required a high degree of test data orchestration, and TDM was crucial in supporting the four 'Rs' of test automation: reusability, reliability, resilience and realism. A well-designed automated test should deliver on the four Rs of automation good practice and, consequently, should run today, tomorrow and in a year's time. Establishing this level of rigour took the automation team numerous iterations, during which we tried various approaches, leveraging different technologies and methodologies. We also changed our automation test design approach and adopted a method based on the following principles: collaborate, build and maintain (CBM).
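As an illustration of the reusability and reliability 'Rs', the sketch below replaces hard-coded test data with a shared, versioned data set. The `student_search_cases.json` fixture and the `app` page-object fixture are hypothetical stand-ins for illustration, not the actual UIS suites.

```python
"""Sketch of a data-driven automated test fed from a shared data repository."""
import json
from pathlib import Path

import pytest

DATA_FILE = Path("testdata/student_search_cases.json")  # hypothetical shared data set

def load_cases():
    # Each case carries its own expectations, so the same suite can run
    # today, tomorrow and after the next environment refresh.
    return json.loads(DATA_FILE.read_text())

@pytest.mark.parametrize("case", load_cases(), ids=lambda c: c["name"])
def test_student_search(app, case):
    # 'app' is a hypothetical page-object fixture for the system under test.
    results = app.search_students(surname=case["surname"])
    assert len(results) >= case["min_expected_results"]
```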

THE CBM APPROACH

The CBM approach became the most essential part of our test automation strategy. It empowered the automation testers to take ownership of their automation suites and fully manage the quality of the test data design processes. As a result, each automation suite runs as an agile automation project, using valuable information gathered through consultation with stakeholders and through sprint planning meetings held in collaboration with functional analysts and technical developers. This approach has been so successful that our automation suites are now used to support the testing of infrastructure changes, the application of security patches and the functional testing of system upgrades. The value and return on investment (ROI) provided by test automation services can, in our experience, be directly correlated to the quality of the test data, the maturity of the test automation frameworks and the corresponding TDM strategies.




During the last ten years, many organisations have integrated their in-house automation and performance testing solutions with cloud services in an attempt to increase test coverage, streamline testing feedback loops and accelerate QA processes. Although these hybrid cloud strategies may offer certain advantages in terms of infrastructure scalability, they also present compliance and security risks which can be painful and complex to manage.

At UIS, our performance testing services went from engaging external contractors, to utilising 20 Windows XP computers in parallel, to orchestrating load and volume exercises with thousands of users completing millions of end-to-end transactions per hour. Each of these approaches had different degrees of complexity, data orchestration requirements and cost. Storing performance test data, sending it via the network and having sufficient CPU and memory to simulate thousands of users does come at a cost.

As with all maturing services we had some critical early lessons to learn. Cloud can be costly and it requires careful planning, management and monitoring. By failing fast and failing early, we were able to learn, adapt, refine and evolve. This meant we were able to improve our performance testing services by creating a hybrid model built on three pillars: cost-effectiveness; TDM and having full control of our test data (at rest and in transit); and the capacity to leverage the cloud for scalability.

CONCLUSION

One of the main objectives when testing a system is to ensure the delivered product meets the agreed operational, performance and usability requirements. TDM has a major role to play in facilitating all QA efforts across the delivery of IT systems, as well as in creating customer satisfaction. The more comprehensive our test design becomes and the more similar our test data is to production data – having complied with GDPR and security requirements – the higher the level of realism our testing activities can generate by way of creating high-value test cases. Equally, the more mature our TDM strategies become, the more informed testing we can do and the more ROI we can realise from our testing tools and resources.

THE ACHILLES' HEEL OF TESTING SERVICES

Conversely, poor TDM can be the Achilles' heel of testing services (especially test automation projects), as these fail to deliver on productivity and cost-effectiveness and, eventually, management runs out of patience, confidence and money. However, there will always be data challenges. No team can be complacent with their TDM strategy; it has to be an iterative process and be seen as an important part of any test strategy.

The strategic direction of travel for many organisations across different industries is, inevitably, on a direct collision course with automation, big data and the adoption of artificial intelligence and robotics. TDM is likely to be at the epicentre of the so-called 'test data revolution' and will require engagement and collaboration from all stakeholders in order to create quality test data that supports QA activities and drives demonstrable value across the software development lifecycle.




Testing your returns

With budgets being squeezed ever tighter, the need for businesses to demonstrate their ROI from test automation has become increasingly pressing

Testing and quality assurance is rarely seen as an investment, which makes a conversation about return on investment (ROI) seem somewhat inappropriate. At best, testing is managed as an expense; at worst, it is seen as a grudge purchase, much like insurance. However, it should be noted that this 'expense', according to the World Quality Report, accounted for 26% of the total expenditure on IT budgets in 2017. Testing is finding its rightful place in the sun. The business challenge now is how to demonstrate the value that can be derived from a testing budget. One important question to be considered is: "What can we, as a testing community, do to not only show the ROI of QA but also demonstrate improvements in the effectiveness of the QA process?"



I believe a key driver of value creation in QA is test automation. Correctly applied, test automation has the range and reach to demonstrate ROI in the QA space. Despite its value benefits, test automation is currently under-exploited in QA and testing, with the World Quality Report citing the average level of automation for test activities at around 16%.

One possible reason for the lack of automation take-up is that there is no clear and articulate way of demonstrating the ROI against the spend for automation. How do we define return on investment if we cannot quantify any revenue directly related to this activity? To put it slightly differently, in a cost centre where there is no profit realised, how can there be a clear return? Or are we simply referring to savings that can be made in the testing budget to show 'return'?

In my opinion, there are three primary ways that clients perceive value in the context of testing. The first is to pay less for the same volume of output. The second is to get more work done for the same budget. And the third is to pay more and get more.

The first step in defining ROI is to look at the elements that make up the base cost of testing. These are the costs incurred during environment set-up and licensing for test technology. However, the bulk of the testing cost still lies in the resources: the test analysts and the test leads. At this point it should be noted that the true cost of a tester does not only include their direct salary or contracted rate, but also indirect items: desk cost, leadership cost, capacity management and downtime, upskilling and continued training. This is where we can make a saving through carefully planned and executed automation. Enter automated testing.

BRUCE ZAAYMAN
DIRECTOR, CLIENT ENGAGEMENT, DVT

Bruce's primary focus is the business development of DVT's managed outsourced software testing solutions. With more than 12 years' experience in performance testing and software QA consulting, Bruce helps UK businesses optimise software testing automation. When he's not talking testing, Bruce flies off mountains in a paraglider, using ridge lifts and thermals to glide 3000 metres above the ground.

SO IS THIS WHERE WE LOSE OUR JOBS?

To be clear, automation does not replace testers. It needs to be seen as a tool that helps make testers more effective, much as the tractor did not replace the farmer: it allowed the farmer to become more effective and farm a much wider area. Automation aims to reduce the manual, repetitive work associated with testing. This repetitiveness is found mainly in the regression component of the test process. In the current test lifecycle, the tester in a sprint needs not only to test new functionality, but also to work backwards through the entire system each time (regression testing) to ensure no bugs have been inadvertently introduced. Through automation, however, we can mechanise the regression testing component, freeing up capacity to focus on new functionality in each sprint.

This aligns perfectly with agile concepts such as DevOps and Behaviour Driven Development, where the tester is now seen as a fully functioning part of the three amigos – business analyst, quality assurance and developer. Consider the ROI at this point:
• Through automation, the tester can now focus more on new functionality
• The reduced focus by functional resources on regression enables the overall development cycle to move forward, bringing fresh energy and maximising the strengths of your team
• Automation of regression testing improves the quality of testing, as computers have no issue with monotony. Testers repeating the same test cases are bound to make a mistake or two through boredom, or through rushing to complete the regression testing to focus on new functionality
• Now that testers have been freed from most of the regression testing through automation, they have an opportunity to extend their coverage and examine a broader range of functionality through practices such as exploratory testing and static testing.
Replacing manual testing with automated testing reduces the pain of regression testing. When we automate the process, we provide consistency. Plus, it becomes feasible to run the regression more often, for a lower cost. Automation improves overall delivery by reducing test cycle times, leading to shorter sprints in the DevOps world, which ultimately leads to improvements in the time-to-market for software. We are able to overcome the repetitiveness of the regression cycle and the testing that is generally harder to execute manually. By automating our testing, it is entirely possible to increase test coverage without increasing test cycle times, scaling testing across large numbers of browsers, devices and platforms without increasing associated costs.

If the benefits of automation – improved delivery, improved quality and improved coverage – are to be realised, this will come at an initial cost. The cost of automation is the writing of the test scenarios in a scripting language, usually within a development framework that compiles and runs these scripts on command for regression purposes. The cost of the automation scripting needs to be offset against the benefits quoted above, and can be seen as the investment component of an ROI calculation. The overall ROI = the benefits of the automated regression testing minus the cost of the investment to script the tests in the automated language.

Let us also pause to remember that not all ROI can be quantified. Qualitative ROI describes the hard-to-quantify benefits of technology for an organisation – and for that organisation's customers – and should always be taken into consideration. In the case of test automation, aspects such as customer satisfaction, improved efficiency, time saving and internal perceptions of the automated testing – 'will this be good for our business?' – are as important as the quantifiable aspects.

There is a clear recognition that not all testing can be automated. Its value driver is the reduction of regression testing. In addition, and critically, the tester's performance and throughput will improve substantially when they utilise automation as one of their primary tools. With automation correctly implemented, its quantitative and qualitative ROI cannot be ignored. With regression testing automated, processes and systems become more testable over time, delivering a better quality, faster-to-market product. If we offset delivery improvement, quality of testing and extension of coverage against the time and cost to implement the automation, there is a strong case for this component of quality assurance.
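To make the ROI arithmetic above tangible, here is a small worked example. All of the figures are invented for illustration and are not drawn from this article or the World Quality Report.

```python
"""Illustrative ROI arithmetic for automated regression testing (invented figures)."""

# Assumed one-off investment: scripting the regression pack
scripting_days = 40
blended_day_rate = 450            # GBP per day, hypothetical
investment = scripting_days * blended_day_rate          # 18,000

# Assumed recurring benefit: manual regression effort avoided per release
manual_regression_days_per_release = 12
releases_per_year = 10
annual_benefit = (manual_regression_days_per_release
                  * releases_per_year
                  * blended_day_rate)                    # 54,000

roi_year_one = (annual_benefit - investment) / investment
print(f"Investment: £{investment:,}")
print(f"Annual benefit: £{annual_benefit:,}")
print(f"Year-one ROI: {roi_year_one:.0%}")   # 200% with these assumptions
```

The qualitative benefits discussed above sit outside this calculation, which is precisely why they need to be argued for separately.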




WHEN WORLDS COLLIDE

How to survive developing services with both DevOps and legacy components

DevOps methodology, tools and techniques are increasingly being used to provide services which will need to integrate with core legacy applications and, for those providing a robust end-to-end service using DevOps and legacy, it is sure to be a challenge. While we focus on DevOps and agile developments and explore the opportunities afforded to us by these techniques and the tools that support them, most organisations – unless you are working in a start-up – will have legacy systems that represent significant investments and provide continued support to core business processes.



There is often little appetite in an organisation to transform a system that is supplying this level of support. The risks and the costs of this level of transformation often need to be thought through carefully, and usually a policy of caution is chosen. However, modern applications developed with cloud-based containerised microservices are not a natural fit with legacy systems developed with old, inflexible technologies and maintained in on-premises environments. But, more often than not, that is our challenge as developers: to get the best out of the new technologies and DevOps processes, while exploiting the value in the core legacy components.

WAYNE TOFTEROO
QUALITY LEADER

As a provider of test leadership for agile and DevOps, an innovator and a believer in change, Wayne believes there are always ways we can improve.



For Google, eBay and Amazon it is necessary to provide fast, highly scalable and easily maintained software and infrastructure. These organisations commit vast sums of money to ensure they can constantly meet customer expectations. They continuously re-engineer the processing pipeline to exploit the latest technologies, and recruit and nurture the best software engineers they can find. They focus on technology as the core enabler of their business. However, very few companies operate at the scale of these giants, nor do they have the budget. While it is good to learn from the pioneering engineering and results achieved by these companies, the reality is they are solving a different problem to most organisations. This is not a one-size-fits-all equation: while we would all like a Ferrari, not all of us need one.

According to the World Quality Report 2017-18, most organisations are making use of DevOps at some level: 12% of companies surveyed had not used DevOps, 47% used DevOps in less than 20% of projects and 30% had used DevOps in between 20% and 50% of their projects. Few organisations have made it their primary development methodology. Most have exploited DevOps to support customer-facing operations and have left back-office operations to continue as is, until they have enough confidence with DevOps to do a full transformation. However, in most scenarios, the customer-facing DevOps solutions will need to interact with legacy systems. By using DevOps to support customer-facing operations, organisations are able to exploit DevOps' ability to make quick changes reflecting the latest customer propositions – for example, new offers and promotions can be swiftly reflected in the customer-facing services – and to respond to a competitor in order to maintain their position in the market.

GREEN AND BROWN FIELDS

The phrase Green Field is used to describe a development where nothing already exists and we are free to do what we like to get our solution in place. If we were in a Green Field, we would just focus on DevOps, pure and simple. However, as I've stated above, most organisations are transforming slowly – they are not in a Green Field scenario, they are in a Brown Field scenario. Brown Field is the term used for constructing on land that is being reclaimed from previous industrial use.



I think the name is appropriate. When we are in a Brown Field development we have to accept there are constraints on our development freedoms. The legacy components will impose data definitions and business rules on any new application. By definition, if these systems are too complex or important to risk transforming, then they will be the systems that define the way the business holds and processes data. If we fail to understand this, we may have a very long and painful integration phase. To get this right we really need to take a hybrid approach.

SUGGESTIONS FOR A HYBRID APPROACH

So, imagine a situation where we are a DevOps development about to deliver a service: one or more DevOps squads will produce a set of components to enable that service, but a sizeable chunk of it will be delivered by existing legacy components. We first need to understand the constraints being imposed by the legacy components. A lot of people shy away from this, as it really isn't as easy as it sounds; if a legacy system has been retained due to the risk and cost of transforming it, chances are the way it manages data and enforces business rules is known only to a small, dedicated support team with years of specific knowledge. Don't even begin to hope the documentation is comprehensive or up to date – it won't be.




PRODUCE AN END-TO-END ARCHITECTURE

For the service under construction, you will need to show the integration points between the new development you have complete control over and the legacy components you do not. It is necessary to understand these points and what data and processing they are supporting. The legacy integration points become third-party APIs into your development. However, they are usually more complex in terms of data and rules processing than a simple third-party API. Remember, these are the systems they were afraid to touch. As such, they will require much more attention and understanding to successfully integrate. For example, a rule enforcing a date for a registration process may seem clear, but will it have been enforced by the legacy system in the same way that you need to implement it? If it isn't, you may have thousands of existing customers breaking your service on day one.





I once sat in a contact centre to see how a registration process was being completed. There was a date that was required, but the script didn't explain what was required or what the date meant. The staff were proud to inform me they had overcome this problem by setting it to a default date they could all use, which allowed them to proceed and didn't seem to cause any problems. To them it was a problem solved… until someone like me came along to apply shiny new business rule processing to it. The lesson was learned.

BUSINESS RULE PROCESSING & THE END-TO-END JOURNEY

As much as agile is about leaving documentation behind and focusing on collaboration, the complexity of legacy systems means that you need to get a clear understanding of what you can and can't do, and in some manner document it. If it's a constraint, it's not up for discussion. The way the legacy system manages the data and the process you are dealing with is of critical concern. You need to build that understanding by working with the legacy support teams and, in some instances, by performing exploratory testing on the legacy systems to confirm their behaviour. Exploratory testing of the legacy components allows you to build up knowledge of their behaviour and document it as part of the knowledge necessary to develop an end-to-end service. A short architectural sprint may be required. These are all potential solutions, and it is first necessary to take a realistic view of the problems that your team might encounter. If these issues are not managed, they will appear in test (if you are lucky) and in live service (if you are not). Once understood and documented, the behaviours and constraints of the legacy components can be shared.
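One way to capture that documented understanding is as executable characterisation tests that pin down what the legacy system actually does. The sketch below assumes a hypothetical `/registrations` endpoint in an on-premises test environment; the URL, payload and rules are illustrative only and should be confirmed with the legacy support team.

```python
"""Characterisation-test sketch for pinning down legacy behaviour (illustrative only)."""
import requests

LEGACY_BASE_URL = "http://legacy-test.internal:8080"   # hypothetical test environment

def test_legacy_accepts_default_registration_date():
    # Documents what the legacy system actually does with the 'default' date
    # the contact-centre staff were using, before new rules are layered on top.
    payload = {"customer_id": "TEST-0001", "registration_date": "1900-01-01"}
    response = requests.post(f"{LEGACY_BASE_URL}/registrations", json=payload, timeout=10)
    assert response.status_code == 201   # observed behaviour: legacy accepts it

def test_legacy_rejects_future_registration_date():
    payload = {"customer_id": "TEST-0002", "registration_date": "2099-01-01"}
    response = requests.post(f"{LEGACY_BASE_URL}/registrations", json=payload, timeout=10)
    assert response.status_code == 400   # assumed rule; confirm with the support team
```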

DATA LATENCIES ACROSS THE SERVICE

Legacy systems are likely to make use of performance optimisation strategies based around the batching of updates, making use of evening 'downtime' and overnight batch windows to process data. So, while an update may be sent to the legacy system as part of a customer interaction with a mobile application – and the confirmation is provided to the customer immediately – the downstream impact, such as the provisioning of a service on the legacy system following a successful customer application, may depend on an overnight batch update.


This causes issues in service design and implementation, and it places challenges on the end-to-end testing of the solution. It will certainly place restrictions on test automation and on the prospects of continuously testing the end-to-end service.
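Where the latency cannot be designed away, end-to-end checks can at least be written to tolerate it. A minimal sketch follows, assuming a hypothetical `legacy` test fixture that can query provisioning status; in a real overnight-batch scenario the deadline would match the batch schedule, or the batch job would be triggered on demand in the test environment.

```python
"""Sketch of tolerating data latency in an end-to-end check (hypothetical fixture)."""
import time

def wait_for(condition, timeout_s: float, poll_s: float = 30.0) -> bool:
    # Poll until the downstream system catches up or the deadline passes.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_s)
    return False

def test_service_is_provisioned_after_batch(legacy):
    # 'legacy' is a hypothetical fixture wrapping a read-only query
    # against the legacy system's provisioning status.
    customer_id = "TEST-0001"
    assert wait_for(
        lambda: legacy.provisioning_status(customer_id) == "ACTIVE",
        timeout_s=2 * 60 * 60,  # generous window; align with the real batch schedule
    )
```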

TESTING, AUTOMATION AND CONTINUOUS TESTING

The challenge of testing an end-to-end service built from such heterogeneous components is quite large. Generally, the modern products of the DevOps development can be tested via automation and continuously tested with little trouble. However, fragile legacy components will have constraints, chiefly around environments and data. If a legacy component is hosted on premises, generally there will be a set of test environments that a new development can integrate with for testing. These are usually shared with other development and support teams, and the test data in the legacy environment is often cumbersome to set up and difficult to refresh. This will limit what can and can't be done to test the end-to-end service. End-to-end automated service testing can be done with much work and a bespoke set-up, but it will be fragile, difficult to maintain and may generate some false results. A risk-based approach will be needed to manage end-to-end service testing, and the dynamics of testing will be different between the modern and the legacy components. The legacy component is often treated as a third-party API that we have little control over. The option to stub out the legacy component should be considered – but, however it is done, it won't be perfect. It will be risk-based and it will require understanding and agreement as an approach.
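For the cases where stubbing out the legacy component is the pragmatic choice, even a very small stand-in can keep continuous testing moving. The sketch below uses only the Python standard library; the endpoint and canned response are hypothetical, and a real set-up would more likely use a dedicated service-virtualisation tool.

```python
"""Minimal stub of a legacy endpoint for continuous testing (illustrative only)."""
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class LegacyStub(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/registrations/"):
            # Canned response standing in for the real legacy behaviour.
            body = json.dumps({"status": "ACTIVE", "source": "stub"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Point the new service's legacy endpoint at http://localhost:9090 during CI runs.
    HTTPServer(("localhost", 9090), LegacyStub).serve_forever()
```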

SUMMARY

In creating an end-to-end service in a hybrid environment, we are often dealing with the meeting point between two different worlds. We need a clear understanding of how both of those worlds operate and of the risks involved in our own unique set-up, as each development incorporating legacy components will be individual. Doing as much of this work upfront as possible will reduce difficulties and will only increase your chances of success.


16 - 17 October 2018


Millennium Gloucester Hotel, London

www.DevOpsEvent.com

The National DevOps Conference has its finger on the pulse: it is owned and supported by industry-leading news portal DevOps Online and industry-leading journal TEST Magazine, and is produced by the organisers of The DevOps Industry Awards. This two-day programme is designed to connect a wide range of stakeholders and engage not only existing DevOps pros, but also other senior professionals keen to learn about implementing this practice. At The National DevOps Conference, you can hear from peers who have successfully begun their DevOps journey and from industry practitioners sharing advice and knowledge; join in executive workshops; network and much more.

• Learn from leading industry experts at cutting-edge seminars
• Interact at collaborative Q&A sessions
• Join in executive discussion forums
• Source the latest products and services via the industry-leading exhibition
• Network with your peers and key industry figures
• Swap and share ideas with like-minded professionals
• Network and make new and influential contacts
• Take away 'real-world' scenarios and apply learned solutions to accelerate your own IT transformation
• Network with your peers and new acquaintances at the Evening Drinks Reception

REGISTER TODAY – 2-for-1 End User Registration

2 Days**: £735
Day 1 Only*: £525
Day 2 Only: £525

Tel: 020 3931 5827
Email: info@devopsevent.com
Web: http://devopsevent.com

*Includes entry to the Evening Drinks Reception on the first day of the Conference. All prices are subject to VAT. **2-for-1 offer applies only to 2-Day tickets; includes 2 x 2-Day tickets to the Conference and 1 entry to the DevOps Industry Awards.




ARE WE STILL TALKING?

Has the agile world seen any improvements in its application of 'continuous communication', or is poor communication still the number one reason for projects failing?

Back in 2015, I introduced the term 'continuous communication'. It was a way to create awareness that, despite the introduction of methodologies like agile (2001), specification by example (2004) and behaviour driven development (2006), communication (or the lack thereof) is still the one thing that causes many IT projects to fail. Not because we don't communicate at all, but because we tend to forget to keep communicating. A stand-up or retrospective meeting isn't the only moment at which we are allowed to talk. We should be communicating all the time – hence 'continuous communication'. Over three years have passed, so it is time for a retrospective on where we stand now with continuous communication. Have we learned from our mistakes? Have we learned to communicate effectively all the time, or is poor communication still the number one reason for failing projects?



In this article I'll be talking about my own experiences with continuous communication, as well as the experiences and anecdotes of others working on IT-related projects. Their thoughts are included in this article because communication definitely isn't just a one-man thing!

ENDLESS DIGITAL POSSIBILITIES

We live in a time of 'communication'. There are more than enough means to communicate with each other: email, chat, social media, video conferencing, telephones and so on. Digital communication is everywhere and it certainly makes life a whole lot easier. We can work from home and still stay in touch with our co-workers. With the push of a button we can find anyone within our global company and ask them for the information we require.

KASPAR VAN DAM
CONSULTANT, IMPROVE QUALITY SERVICES BV, THE NETHERLANDS

With over 12 years of experience in IT, Kaspar advises colleagues and clients on matters concerning testing and/or collaboration and communication within agile teams, projects and organisations.



However, as my colleague Benjamin Timmermans pointed out, all these means of digital communication seem to focus more on sending information and less on receiving it. Thus, it's all about talking – but not much actual conversation. Because of this, the use of communication tools poses a huge risk, especially since people working in IT tend to quite easily put all their trust in digital 'communication'. This was clearly demonstrated in this anecdote from another colleague, Gerbert Gloudemans: “I once got this totally serious question about what I would do if a Business Analyst, seated 10 metres away, provided specs that weren't entirely clear. My quite shocking response was that I would walk over to them and ask them to clarify the specs!”

When it comes to communicating, nothing works better than an old-fashioned face-to-face conversation but, more often than not, digital communication is used instead of real-life conversations, when it should really be supplementary to them. That's bad news for effective continuous communication. Does this mean we should abandon digital communication? Absolutely not! However, as discussed by agile coach Ralph van Roosmalen, it requires the right mindset: non-face-to-face communication should be more explicit. Sharing information and talking about things should be a conscious activity. People should be aware that communication and collaboration are part of their work and will not happen by themselves – especially when you're not in the same room and you need to make a phone call or start up a video conference. It's very easy to forget all about communication when working at a distance. There are, of course, tools to keep communication lines open and help teams when people are not always able to meet face-to-face.

Communication is all about conversation, not about talking. And, as said before, social media and most other digital communication channels tend to be more about talking: sending information instead of having a conversation about something. The importance of (face-to-face) communication through conversation is even part of the twelve agile principles: 'The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.'


We can certainly adapt tools that help us communicate better, faster and/or more easily, but they should never replace face-to-face conversation – which, all too often, is exactly what happens nowadays.

TRUST AND CONTINUOUS FEEDBACK

Something else that almost everyone I spoke to about continuous communication for this article came up with was one very important thing: the importance of trust. This is also demonstrated in Lencioni's pyramid of team dysfunctions.



Effective communication within an agile team isn't possible without trust. As senior agile consultant at Xebia Group, Bart Bouwers, put it: “Within a team you need to be able to speak out about what's on your mind. You should also be allowed to make mistakes and learn from them!” If you don't trust team members enough to criticise them, for fear of conflict, or if team members are afraid to criticise you, then the team will most probably develop a culture of keeping quiet and just focusing on their own work. Frustrations will grow, collaboration will be low and conversations will be kept to a strict minimum.

One of the keystones of agile working is feedback: 'At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behaviour accordingly.' However, there's one issue with this agile principle: what is meant by 'regular intervals'? People tend to translate this as that one meeting (retrospective, review, stand-up) where we are allowed to give feedback – and when that meeting has finished, you'd better mind your own business.





However, if a team wants to communicate effectively and efficiently, there should be a culture in which there's room for continuous feedback: speak out about what's on your mind and don't hold it back until that one specific meeting (all the while growing increasingly frustrated). This should concern not only personal matters, but also the software being built and the way in which it is being built.

EAT YOUR OWN DOG FOOD

Another theme that gets a lot of attention these days is accountability. A good agile team should feel responsible for the software they develop. They should be proud when they build something that really helps the business, and they should be ashamed when it turns out they actually developed rubbish. Therefore, fast feedback on the product is essential. This is where, as stated by agile consultant at Avisi, Berry Kersten, such things as information radiators can really help: visual displays on the working floor that show the current status of the software in development. They can show results from the nightly run, but also data on how the software is being used in production: how many customers used the software without issues, but also how many errors were thrown. The team should be aware that their product is actually being used and whether it is functioning correctly – or not! This is one of the reasons why DevOps can help in creating a sense of responsibility and accountability.



If you build the right software, and build it well, you won't end up with much work on your plate to keep it running. However, if there are issues with the software, the team itself will end up with a ton of work trying to fix every single issue encountered in production. It's even better if teams actually use their own software – better known as the principle of 'eat your own dog food'. It's a great motivator for any team to build the best software they can when they will also benefit from it themselves. This almost certainly leads to open conversations about the software when team members discover that the software they so proudly built doesn't work as intended, or that they can't use it in real-life situations the way they thought.

The availability of information gives people something to talk about. Information radiators and DevOps can foster a mindset in which people continuously share conversations on how the software being built is functioning and how it can be improved, both within the agile team itself and with stakeholders, end-users and others.
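As a rough illustration of feeding such a radiator, the sketch below pulls a nightly-run summary and a production error count into one line of wallboard text. The results file and metrics endpoint are hypothetical, and a real team would more likely push these numbers into an existing dashboarding tool.

```python
"""Sketch of feeding an information radiator (hypothetical inputs)."""
import json
from pathlib import Path
from urllib.request import urlopen

NIGHTLY_RESULTS = Path("reports/nightly_results.json")             # hypothetical CI artefact
PROD_ERRORS_URL = "http://metrics.internal/api/errors?window=24h"  # hypothetical endpoint

def radiator_summary() -> str:
    results = json.loads(NIGHTLY_RESULTS.read_text())
    with urlopen(PROD_ERRORS_URL, timeout=10) as resp:
        prod_errors = json.load(resp)["count"]
    return (
        f"Nightly run: {results['passed']}/{results['total']} passed | "
        f"Production errors (24h): {prod_errors}"
    )

if __name__ == "__main__":
    print(radiator_summary())
```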

COMMUNICATION IS ALWAYS KEY

It’s 17 years after the introduction of agile, which was all about communication within teams, fast feedback and (face-to-face) conversation. In 2009 the book Bridging the Communication Gap by Gojko Adzic

It's 2018 now, and I think this is still the case today. I have asked many people in the industry if they actually know what the end-user really wants. Almost everyone thinks they do. However, when asking the single question 'why does the end-user want that?', it turns out most people have no clue what drives the end-user and what they therefore need from the software being built. Have you ever walked over to a real end-user and asked them why they're doing things the way they are doing them? And why they would want things (the software being built) to change? It turns out that 'why?' is a really powerful question to start a conversation about what a partner really wants or needs.

In IT we tend to immediately jump to conclusions and solutions. Quite often the business even asks for a specific IT solution. A good example I encountered myself was when the business asked the IT team I was working in whether we could build two buttons on a screen for generating a report. One button should say 'Excel' and result in a report being generated in Excel; the second button should say 'PDF' and generate an error message stating that a PDF report isn't available. My obvious response was: why on earth would you want a PDF button just to show an error message?



an error message?’ Of course, they didn’t really want that button, they just wanted the report in Excel. However, there had always been these two buttons for any reporting functionality so they had assumed that was the only possible way to build that screen. In the end we didn’t even build a button – it turned out users much preferred to just press the ‘Enter’ key on the keyboard to download the Excel report they needed

THE THREE AMIGOS

When we talk about communication through conversation, it's also relevant to consider who you should talk to. This is often overlooked, and people tend to stick to their usual circle: the people they know best and have learned to trust. This is also the focus of the 'three amigos' principle, which describes the power of conversation between three 'friends': business, tester and developer. Within methodologies like behaviour driven development these three amigos meetings are really important and a really efficient way to quickly come to a shared understanding about what the business really wants and how to get this realised as quickly as possible (think minimum viable product). However, three amigos doesn't mean you're not allowed to talk to other people besides your 'friends'. Get out of your comfort zone and start conversations with other stakeholders as well: possibly the real end-user, or other agile teams working on software that provides a service to your software (or vice versa).

As pointed out by Gerbert Gloudemans, agile and Scrum have a strong focus on the team itself, but they don't say much about collaboration and communication between different teams and the outside world. Quite often this is where communication fails and things go sideways. This leads to frustration and, instead of speaking out about what's on our mind, we often keep things within our own team and start complaining about 'that other team that always gets things wrong'. When this happens, there's still no continuous communication mindset. Communication, conversation and collaboration should be part of the mindset and culture of the entire organisation, not just of one single team. And, as mentioned by my colleague Jochem Gross, this mindset is all about people. For people, communication should be part of everyday work – not just within the comfort of their own teams, but with everyone involved. And not through just one single email or during one single review session, but all the time: continuous communication.

CONTINUOUS COMMUNICATION

It’s safe to say communication is still a huge bottleneck within any IT related project and, even though digital means have added many different ways of communication that can help us improve collaboration, it’s still not easy to keep talking about the things

that matter. It’s all about fostering a mindset of communication and realising that this is part of our daily work. Even though pitfalls like making assumptions, jumping to conclusions/solutions still exist, in the end it’s all about people, face-to-face conversation and trust. It’s about giving continuous feedback, asking the right questions and about leaving the comfort of your own circle. It is also about starting a conversation with those stakeholders that you don’t yet know. So, where are we standing now when it comes to continuous communication? When listening to what many people in the IT field have to say, combined with my own experiences, I think things have certainly improved during the last few years. However, there is a tendency to still keep communication within our own circle of trust and we also have a tendency to limit communication through conversation to just those specific review meetings – or even worse, that one single email! However, speaking out about what’s on your mind, whenever it’s on your mind, can be a hard discipline to develop and also requires a lot of trust. No matter if communication is face-to-face, or over a distance through communication tools, it requires an effort to keep communicating all the time. So, while I feel we haven’t mastered continuous communication yet, increasingly, people are becoming aware that we’re not quite there yet and are, in themselves, looking to work on it. Who knows where we’ll be in another few years from now.




A CLOSE RELATIONSHIP

Continuous testing has a close relationship with non-functional testing, together driving an informed, agile and robust approach to enterprise software development and deployment


irstly, let’s understand what is meant by continuous testing. The following definition from Wikipedia is good enough for our purposes: ‘continuous testing is the process of executing automated tests as part of the software delivery pipeline in order to obtain feedback on the business risks associated with a software release candidate as rapidly as possible’. There are some key points that come out of this definition: • Automated tests: Firstly, whatever type of testing you may be looking to execute, you generally can’t deliver continuous


• Feedback: Secondly, the point of continuous testing is to provide regular feedback to the business on the impact of change as a result of continuous integration and continuous delivery
• Business risk: Finally, increased velocity in release deployment increases the risk of introducing functional error, performance and capacity regression into a new deployment. Introducing CT at each stage of the software development lifecycle (SDLC) helps mitigate the risk of this occurring.

IAN MOLYNEAUX
INDEPENDENT CONSULTANT

Ian works as an independent consultant offering advice and support to clients and tooling vendors alike on DevOps implementation, performance testing, application performance management (APM) and performance trouble-shooting.



It would be true to say that continuous testing, continuous integration and continuous deployment are all part of DevOps, enabling the feedback loop that drives an informed, agile and robust approach to enterprise software development and deployment.

WHAT IS NON-FUNCTIONAL TESTING?

Non-functional testing refers to testing the way a system operates rather than specific system behaviour. Whilst non-functional testing can also encompass security and acceptance testing, the most common use case is testing software performance: reliability and scalability under load. Let’s look in more detail at the key points from our definition of continuous testing and how they align with non-functional testing requirements.

AUTOMATED TESTING

Delivering non-functional testing requires automation, and there is a wide range of tooling available, both licensed and open source. Historically this has applied mainly to system testing carried out late in the SDLC, leaving little time to correct any problems uncovered by testing. Non-functional test automation now increasingly extends into Dev, where the move to modular or microservice-based architectures simplifies testing at the component level. This is often accomplished using open source tooling such as JMeter (jmeter.apache.org), which has a powerful scripting engine and is easy to deploy.

Continuous non-functional testing is relatively simple to integrate into the Dev process, either ad hoc or as part of the overnight build/test/deploy. This allows trending of component performance across builds and early detection and correction of coding changes that inadvertently cause performance regression. The increasingly common use of highly scalable cloud-based test environments, leveraging endpoint mocks, stubs or service virtualisation, can allow developers to test component performance in sprint against production SLAs – something unheard of even five years ago.
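As a sketch of what that overnight step might look like, the following assumes a hypothetical JMeter test plan run non-interactively from CI, with the build failing when an assumed 95th-percentile budget is exceeded. The plan name, the CSV result format and the threshold are all assumptions to adjust for a real pipeline.

```python
"""Sketch of an overnight CI step around a JMeter run (illustrative assumptions)."""
import csv
import statistics
import subprocess
import sys

PLAN = "component_search.jmx"   # hypothetical test plan checked into the repo
RESULTS = "results.jtl"
P95_BUDGET_MS = 800             # assumed performance goal for this component

# Run the plan non-interactively using JMeter's -n/-t/-l command-line options.
subprocess.run(["jmeter", "-n", "-t", PLAN, "-l", RESULTS], check=True)

# Assumes the CSV result format with a header row and an 'elapsed' column.
with open(RESULTS, newline="") as fh:
    elapsed = [int(row["elapsed"]) for row in csv.DictReader(fh)]

p95 = statistics.quantiles(elapsed, n=20)[18]   # 95th percentile (Python 3.8+)
print(f"samples={len(elapsed)} p95={p95:.0f}ms budget={P95_BUDGET_MS}ms")
sys.exit(0 if p95 <= P95_BUDGET_MS else 1)
```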

FEEDBACK

Non-functional testing that is limited to system testing late in the SDLC is poorly placed to provide timely feedback to the business on application performance. The feedback loop that is a key goal of implementing DevOps needs to function right across the SDLC to be truly effective.

Ideally, continuous non-functional testing should be implemented in the following areas:
• Development
• Component and integrated component testing
• QA
• System testing
• Production
• Scheduled BAU testing.
Feedback data can take several forms, but the most important consideration is visibility of application performance, good and bad, to relevant stakeholders. This means rapid feedback to Dev of performance-related bugs so they can be corrected and, if necessary, a new release prepared – including the dashboarding of test results demonstrating that an application is meeting or failing to meet its assigned performance goals. Many organisations leverage full-stack monitoring solutions like ELK and Graphite/Grafana to visualise application behaviour. It is vitally important that performance data is included to inform stakeholder groups tasked with key responsibilities, such as approving Dev team work items and application release candidates.
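One lightweight way to get test results onto those same dashboards is Graphite's plaintext protocol. A minimal sketch follows, assuming a hypothetical Graphite host and metric path.

```python
"""Sketch of publishing a test metric via Graphite's plaintext protocol."""
import socket
import time

GRAPHITE_HOST = "graphite.internal"   # hypothetical host
GRAPHITE_PORT = 2003                  # Graphite's plaintext listener default

def publish(metric_path: str, value: float) -> None:
    # Plaintext protocol: "<path> <value> <unix timestamp>\n"
    line = f"{metric_path} {value} {int(time.time())}\n"
    with socket.create_connection((GRAPHITE_HOST, GRAPHITE_PORT), timeout=5) as sock:
        sock.sendall(line.encode())

# e.g. after a nightly run, push the measured 95th-percentile response time
publish("ci.search_service.nightly.p95_ms", 742.0)
```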




BUSINESS RISK

Mitigating business risk is all about ensuring that core applications remain available, functionally stable and performant to end-users regardless of load. Implementing continuous non-functional testing in the areas previously described helps to make this happen by maintaining and improving the quality of each release. The business impact of problems uncovered at each stage of testing can be quickly assessed, prioritised and corrected. This promotes early discovery of problems, so the number and severity of non-functional software bugs naturally decreases and shifts left, benefiting the speed and quality of delivery.

It is equally important to understand the ongoing performance of core software applications post-deployment. Regularly scheduled non-functional testing of production systems can help detect creeping performance regressions, often the cumulative result of many small releases. Scheduled BAU testing also provides confidence that production deployments are performance-ready in advance of peak events like Black Friday and Cyber Monday.




EXPLORING THE LAST BASTION OF MANUAL TESTING

What differentiates exploratory testing from a traditional scripted approach and how can traditional test strategies blend with exploratory testing to uncover defects in a shorter time?

Quality has always been the driving factor behind the delivery of any software product. Ensuring quality delivery with minimal or no defects usually leads to a satisfied customer and drives business growth. Since quality is such an intrinsic factor in the development of a software product – one that can make or break it in the market – it is essential for the product team to perform rigorous quality checks before releasing it to the customer base. Software testing methodologies have proposed various approaches to verify the product as per its specifications. These, however, can be executed in either of two styles – scripted or exploratory – and can be implemented at different phases of the software development lifecycle. Testers also need to be included in the software development process, beginning from the requirements-gathering phase, which will result in a better analysis of requirements and help to develop better test plans.



The benefits of creating a test plan include covering all scenarios to improve test coverage, early identification of risks, listing out-of-scope scenarios, defining the testing strategies and so on. The traditional approach requires the tester to plan, write scenarios, review them and then move on to the test execution phase. By contrast, a freestyle testing approach can help save this time and effort, which can then be devoted to other parts of testing such as test execution, defect tracking and retesting.

INTRODUCTION TO EXPLORATORY TESTING

Exploratory testing mainly targets the functionality of the application with the intent of finding hidden bugs and ensuring the application works as per the requirement specifications. It is sometimes also referred to as 'ad hoc testing', since it doesn't require a standard structure for test execution, generates less documentation and doesn't always provide an easy way to reproduce defects.

AFSANA ATAR
SCRUM MASTER, SUSQUEHANNA INTERNATIONAL GROUP

Afsana is an accomplished test engineer with more than 10 years' experience in software testing. She extends her thought leadership to teams in a variety of domains, from digital advertising, education and healthcare to the financial sector in banking, insurance and trading.


A tester not only needs to know how the software works but also needs to apply their cognitive skills to generate ideas to break the application. Exploratory testing, as an approach, encourages the tester to apply their creativity, analytical skills and past experience in exploring all facets of the application with the intent of finding defects.

SCRIPTED VS EXPLORATORY TESTING

With the scripted approach we invest time early on: planning all scenarios, creating relevant test data, documenting test cases, and reviewing and finalising all steps that will help us during the actual test execution. This brings structure and order into the process of testing, which results in a detailed understanding of the workflows, expected results and a clear path for reproducing defects. With the availability of tracking and retracing mechanisms, the scripted approach enables the creation of a requirements traceability matrix. Documentation is also essential in training new testers in the team, and helps in auditing functions.

Exploratory testing is a completely different beast, one that requires the tester to be disciplined in their approach rather than being bound by standards and rules dictated by the approach itself. It focuses on validating the behaviour of the application under test in a limited time, to ensure critical functions work as expected and no unusual behaviour or unseen defects are observed. The tester uses the test charter as a guide to decide on the key areas to test and to develop ideas for exploring edge cases. With time limited and the focus on discovering defects, it is challenging to reproduce defects once they are found, as the tester doesn't spend time recording the steps followed for each test execution.

Product and project managers require artifacts to manage expectations around the probability of introducing defects during each development activity, which directly supports product release decisions. This is easily satisfied if the testers take the scripted approach.

Traditional testing tools are all capable of generating such reports and usually have such functionality built in, since the workflows are structured. Exploratory testing, however, is limiting in this respect, since it reports only descriptive metrics – such as the number of defects found, fixed and retested – with just a checklist of scenarios to support tracking and retracing. Also, since it relies on the tester to maintain order, the onus is on the tester to analyse the application under test, based on their domain knowledge and level of expertise, to generate ideas and prioritise them.
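Those descriptive metrics and the checklist of scenarios can still be captured as a lightweight artifact. A minimal sketch follows, assuming an invented session-note structure rather than any standard session-based test management format.

```python
"""Sketch of recording an exploratory session charter and its descriptive metrics."""
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExploratorySession:
    charter: str                          # the mission guiding the session
    areas_touched: List[str] = field(default_factory=list)
    defects_found: int = 0
    defects_fixed_and_retested: int = 0
    notes: List[str] = field(default_factory=list)

    def summary(self) -> str:
        return (
            f"Charter: {self.charter} | areas: {', '.join(self.areas_touched)} | "
            f"defects found: {self.defects_found}, "
            f"fixed & retested: {self.defects_fixed_and_retested}"
        )

session = ExploratorySession(
    charter="Explore bulk upload with malformed student records",
    areas_touched=["bulk upload", "validation messages"],
    defects_found=3,
    notes=["CSV with a BOM header silently dropped the first row"],
)
print(session.summary())
```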

IS THERE A HYBRID APPROACH?

Project delivery has to adhere to four major constraints: scope, time, budget and quality. Testers have to optimise their activities for every product release based on these constraints, since there is a limit on the number of scenarios that can be verified in the time available while ensuring the maximum number of defects are identified and fixed – and the cost of fixing bugs is higher after release. Such constraints make it ideal to introduce exploratory testing to augment the scripted approach. This will help testers create and prioritise all required scenarios and perform exploratory testing as they learn more about the application. It gives them more flexibility to verify all parts of the application while diving deep into newly introduced changes. This hybrid approach reveals the exact steps at which the application breaks, identifies hidden defects and produces proper documentation in the form of formal test execution reports at the end.

Thus, with the scripted approach, testers perform a gap analysis between the specifications and the actual code in the early stages of the test preparation phase, while introducing exploratory testing in the test execution phase to concentrate more on complex areas for which fewer scenarios have been identified. A tester's adaptive ability will help them generate more complex scenarios that have not been identified previously, while performing rigorous testing on selected functionality for which test cases have already been identified. The focus thereby changes from the quantity of documentation produced to the quality of documentation necessary to track testing activities.

WHAT THE FUTURE HOLDS

Most activities in software testing are repetitive and have already been targets for successful automation. With AI slowly, but surely, coming of age, industries in all sectors are trying to find applications that automate and optimise operations. We have seen the death of manual testing discussed quite a few times in the past decade. Manual testers, however, remain an essential part of the software team since, until very recently, cognitive automation has been a challenge. We might have heard how an AI from DeepMind (AlphaGo) can now beat human experts at the game of Go, or of the feats of an AI from OpenAI that has defeated human players in a multiplayer online strategy game. In each of those examples, the AI explored innovative ideas that had not been thought of previously by human players.

Exploratory testing has been difficult to automate because it requires creativity, understanding, analysis and the application of that knowledge for it to be applied effectively as a testing strategy. In essence, a lack of cognition and intuition has been the barrier for AI in software testing. This, however, is changing with each advancement in machine learning, especially in the application of reinforcement learning. Testers might need to start rethinking how they add value to the software team; one example would be human testers managing multiple software projects while an AI performs the functions of the present-day tester. The role of the software tester might thus evolve to become more managerial, while also demanding deeper technical skills to find the root causes of defects and support software developers in fixing bugs. As with automation in any industry, a trainable employee is bound to survive. Hence it is imperative for all testers to improve the domain knowledge in their field and learn new skills in software debugging and maintenance to add value to their organisation.
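To ground the reinforcement learning point, the sketch below is a deliberately simplified, hypothetical illustration in plain Python (no real testing framework or machine learning library) of how an exploring agent might learn which abstract UI actions are most likely to expose failures, using an epsilon-greedy value update over simulated feedback. It is a toy model of the idea, not a description of any existing tool.

```python
import random

# Hypothetical abstract actions an exploring agent could try against a UI.
ACTIONS = ["submit_empty_form", "paste_long_string",
           "rapid_double_click", "navigate_back_mid_flow"]

def simulated_app_response(action: str) -> float:
    """Stand-in for the application under test: returns 1.0 when the action
    surfaces a failure, 0.0 otherwise. In reality this signal would come from
    crash logs, assertion failures or other oracles."""
    failure_probability = {"submit_empty_form": 0.05, "paste_long_string": 0.30,
                           "rapid_double_click": 0.10, "navigate_back_mid_flow": 0.20}
    return 1.0 if random.random() < failure_probability[action] else 0.0

def explore(episodes: int = 500, epsilon: float = 0.2, alpha: float = 0.1) -> dict:
    """Epsilon-greedy value estimation: usually pick the action currently believed
    to find the most defects, occasionally explore a random one."""
    value = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        if random.random() < epsilon:
            action = random.choice(ACTIONS)                # explore
        else:
            action = max(value, key=value.get)             # exploit
        reward = simulated_app_response(action)
        value[action] += alpha * (reward - value[action])  # incremental update
    return value

if __name__ == "__main__":
    for action, score in sorted(explore().items(), key=lambda kv: -kv[1]):
        print(f"{action:>25}: estimated defect-finding value {score:.2f}")
```

The interesting property is the same one the article points to: the agent is not told where the defects are; it converges on the riskier actions by trying them.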


WHERE CAN EXPLORATION TAKE YOU?
Exploratory testing can be of vital importance to teams facing the varied challenges of modern software testing, adding value to organisations with improved testing efficiency and reduced time to deployment

In current times, the success or failure of any organisation is both driven and governed by three fundamental factors: first, the ease with which organisations are helping business transformation by empowering digitalisation; second, how quickly organisations are able to adapt themselves to changing market needs and technology dynamics; and third,

organisational readiness in offering the necessary and requisite IT-enabled solutions across business domains, which not only demands the implementation of contemporary, top-notch technology stacks to handle bundles of critical functionalities, but also needs the right mix of expertise to ascertain quality. This is especially so when organisations are inclined towards an agile mode of delivery, where faster test iterations, with improved testing efficiency and reduced time to deployment, take the front seat. In this scenario it is always good to have exploratory testers alongside other testing and QA experts who together can deliver results.

NARAYANA MARUVADA, PROJECT MANAGER & QA TESTER
Narayana has 12-plus years of experience and expertise in validating applications and products built on open-source technology stacks, with a particular interest in assessing systems’ functional and non-functional attributes and in investigating and implementing new testing approaches.

EXPLORATORY TESTING IS NOT A METHODOLOGY

Rather than a methodology, and according to Cem Kaner & James Marcus Bach in A Tutorial in Exploratory Testing, ‘exploratory testing is more a mindset or a way of thinking about testing’. Typically, exploratory testing is carried out with thoughtful consideration, wherein testers manoeuvre through a given application or product with a definite intention of looking out for subtle bugs. The following points provide a holistic outline of what exploratory testing entails:
• There is no need to explicitly formulate any test cases prior to testing the application and, likewise, no conventional test plans are needed to begin testing
• Testers gain the requisite understanding or know-how of the application by exploring it
• A fundamental advantage is that testers get to learn more about the application, even in very short delivery/test cycles
• It promotes test design and test execution while simultaneously learning the application
• It provides a proven, practical hands-on approach in which testers can achieve maximum test execution.

Exploratory testing is about three core aspects: detection, examination and learning. The exercise involves simple planning, wherein testers form a preliminary understanding of the scope and effort (usually short and time-boxed). Secondly, there is no need for formal documentation outlining test scripts or test cases, since test design and test execution activities run in parallel. Third, test logging and/or reporting is simple and effective, since only a few important details – such as the functionality tested, the corresponding defects identified and the need for another test iteration – are captured.
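To make the ‘simple planning, parallel design and execution, lightweight logging’ idea tangible, below is one hypothetical way a time-boxed charter and its session notes might be represented in Python. The field names and the checkout example are invented for illustration; session-based test management tools capture broadly similar information, just not with this exact shape.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Charter:
    """A test charter: the mission that guides an exploratory session."""
    mission: str               # what to explore and why
    areas: List[str]           # key areas / edge cases to probe
    timebox_minutes: int = 60  # exploratory sessions are time-boxed

@dataclass
class SessionLog:
    """Notes captured while exploring, so findings stay traceable."""
    charter: Charter
    notes: List[str] = field(default_factory=list)
    defects: List[str] = field(default_factory=list)

    def record(self, note: str, defect: bool = False) -> None:
        self.notes.append(note)
        if defect:
            self.defects.append(note)

    def summary(self) -> str:
        return (f"Mission: {self.charter.mission}\n"
                f"Notes taken: {len(self.notes)}, defects found: {len(self.defects)}, "
                f"another iteration needed: {'yes' if self.defects else 'no'}")

# Hypothetical 45-minute session against a checkout workflow
charter = Charter(mission="Explore discount-code handling at checkout",
                  areas=["expired codes", "stacked codes", "unicode in code field"],
                  timebox_minutes=45)
log = SessionLog(charter)
log.record("Applied expired code; correct error message shown")
log.record("Two codes stacked; order total went negative", defect=True)
print(log.summary())
```

The summary line is deliberately the only report produced: functionality tested, defects found and whether another iteration is needed, mirroring the minimal logging described above.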

ESSENTIAL TRAITS FOR EXPLORATORY TESTERS

Besides carrying the right testing mindset, in my experience exploratory testing can be more effective when you have testers armed with the following traits and/or skillset:
• Domain expertise: this is one of the prerequisites for conducting exploratory testing effectively. As mentioned above, exploratory testing generally carries less documentation, hence teams can fall short of the requisite references and other supporting documents that would help them understand the requirements, functionalities and business processes. So, unless there are testers with extensive domain expertise, it is not really feasible to conduct exploratory testing on certain ‘domain driven’ (see below) applications or products
• Ability to ascertain non-functional test requirements: yet another prerequisite attribute. The tester should have the inclination and/or initiative to ascertain the non-functional test requirements, such as security, performance etc., that are likely to be associated with an application. If testers fail to identify these requirements, it can prove costly from a quality standpoint
• Ability to explore end-to-end functionality: usually, applications or products are bundled with numerous complex business functionalities. It is important for the tester to gain holistic insight over the end-to-end application flow, and then determine which functionality in the application needs more attention and whether it would fit within the actual scope defined for testing
• Optimised test delivery: since exploratory testing is a very time-critical activity, it is important for testers to plan, conduct and deliver optimised test results without compromising on guidelines or quality standards


• Test strategy: one of the fundamental expectations of an experienced tester and/or testing team is to have a proper test strategy in place, outlining predefined goals for testing a given application and taking certain key factors (related to scope, time and effort) into consideration. A sound, concrete test strategy should be in place regardless of the test process, methodology, testing types and expertise deployed.

KNOWN CHALLENGES WITH EXPLORATORY TESTING

Although exploratory testing appears to be straightforward and results orientated, it can go badly wrong if the following key challenges are not contemplated upfront and mitigated:
• Frequently changing requirements: if you have a situation involving frequently changing requirements and/or a non-finalised end-to-end workflow within an application for a business that you support, it is important to freeze them and have a stable application deployed prior to conducting exploratory testing
• Need for test data: looking at the constraints within test cycles, if there are complex applications or products whose validation demands large test-data sets, then it may be better for testing teams to opt out of conducting exploratory testing. Conventionally, testing teams can invest considerable time in preparing the test data needed to validate the applications, which is far from ideal if the project teams have adopted an agile mode of delivery. However, this can be addressed if teams are enabled with proper test-data management tools (a minimal data-generation sketch follows after this list)
• Test documentation: regardless of the specific test process or testing type adopted, test documentation is essential for testers to make note of key observations, such as the information needed to reproduce and fix any bugs reported. With exploratory testing there is no provision for careful documentation, hence exploratory testers can fall short of information for tracking the purpose and findings of their tests, which would otherwise help identify business risks
• Test coverage: there is an assumption that exploratory testing will provide 100% test coverage, but this is not true when looking at the intended purpose behind such testing. It is highly unrealistic for any exploratory testing team to achieve full coverage without actually evaluating test progress against proper test documentation or a plan.
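As a sketch of what lightweight test-data tooling can look like, the snippet below generates a small synthetic data set using only the Python standard library. The field names and value ranges are invented for illustration; dedicated test-data management tools offer far richer generation, masking and subsetting than this.

```python
import csv
import random
import string
import uuid

def random_email() -> str:
    """Build a plausible but fake email address for test purposes."""
    user = "".join(random.choices(string.ascii_lowercase, k=8))
    return f"{user}@example.test"

def generate_customers(count: int) -> list:
    """Produce synthetic customer records; no real personal data involved."""
    return [
        {
            "customer_id": str(uuid.uuid4()),
            "email": random_email(),
            "age": random.randint(18, 90),
            "balance": round(random.uniform(0, 10_000), 2),
        }
        for _ in range(count)
    ]

if __name__ == "__main__":
    rows = generate_customers(100)
    with open("customers_testdata.csv", "w", newline="") as handle:
        writer = csv.DictWriter(handle, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
    print(f"Wrote {len(rows)} synthetic customer rows")
```

Even a throwaway generator like this keeps an exploratory session moving when realistic volumes of data are needed and real data cannot be used.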

WHERE EXPLORATORY TESTING CAN ADD VALUE

Among the gamut of IT applications and products available, the ones designed and developed exclusively for business domains such as banking, finance, utilities, aerospace and healthcare are considered the most complex; not because of known implementation challenges, but because these domains are driven by bundles of critical business processes and underlying functionalities, whose proper evaluation undoubtedly demands extensive domain knowledge. Hence, such applications and products are generally referred to as ‘domain driven’. To evaluate or test applications built for these kinds of business domains, encompassing intricate functionalities, it is always good to have skilled testers equipped with the requisite domain knowledge.

For example, consider healthcare (for US demographics), wherein EDI (Electronic Data Interchange) forms a major part of the business, since it deals with transactions that are processed through the exchange of data in a standardised format between heterogeneous systems. There are several file formats available, and each format has a unique business purpose and usage. To test an EDI application, I would therefore suggest adopting exploratory testing for the following reasons:
• Testers can focus solely on test execution, as there is no specific need to learn or explore the application, given that the testers are domain or functional experts
• Test coverage is guaranteed, as the testers already have the requisite insight into the test scope
• Testing can be conducted independently and seamlessly, and expedited, since test requirements are specific and there is no waiting for clarifications
• Above all, promising test results can be delivered on time, regardless of time-to-market pressures.
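To illustrate the kind of structural check a domain-aware tester might automate during an EDI session, the sketch below validates the envelope of a simplified X12-style interchange: segments separated by '~', elements by '*', with an ISA header and IEA trailer whose interchange control numbers should match. It is a hypothetical, heavily simplified example written for this article, not a conformant X12 parser.

```python
def validate_interchange(raw: str) -> list:
    """Return structural findings for a simplified X12-style interchange."""
    findings = []
    segments = [s.strip() for s in raw.strip().split("~") if s.strip()]
    if not segments:
        return ["empty interchange"]

    header, trailer = segments[0].split("*"), segments[-1].split("*")
    if header[0] != "ISA":
        findings.append("first segment is not ISA")
    if trailer[0] != "IEA":
        findings.append("last segment is not IEA")

    # ISA element 13 and IEA element 2 carry the interchange control number.
    if header[0] == "ISA" and trailer[0] == "IEA":
        if len(header) > 13 and len(trailer) > 2 and header[13] != trailer[2]:
            findings.append("control numbers in ISA and IEA do not match")

    return findings or ["envelope looks structurally sound"]

sample = (
    "ISA*00*          *00*          *ZZ*SENDER  *ZZ*RECEIVER*"
    "240101*1200*^*00501*000000905*0*T*:~"
    "ST*837*0001~SE*2*0001~"
    "IEA*1*000000905~"
)
for finding in validate_interchange(sample):
    print(finding)
```

A domain expert would layer far more meaningful checks on top (transaction-set rules, code lists, balancing), but even this envelope check catches the kind of structural slip that is tedious to spot by eye.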


new for 2018

30th OCTOBER 2018
See the future of DevOps and Software Testing

A one-day event held in Central Glasgow that attracts up to 100 senior software testing and DevOps professionals, eager to network and participate in targeted workshops, interactive Q&A sessions, and lively presentations – all housed in the impressive Glasgow Marriott Hotel. DevTEST Summit is open to all individuals or organisations within the Software Testing and DevOps community who are keen to increase their knowledge and harvest workable solutions to the various issues faced in these burgeoning and complex sectors.

REGISTER NOW www.devtestsummit.com



DEVOPS / AGILE INNOVATIONS

Kuba, Service Delivery Manager at Sii

WHAT’S THE OUTCOME?
Outcome Based Payment is an efficient, robust and now fully matured, modern service delivery method

Over the years, customers looking for software testing services suppliers have opted for body leasing or team leasing as their primary choice of service delivery type. More and more companies, however, have started looking for other, more cost effective and customised ways to order testing services that would fulfill their project needs. At Sii Poland we have experienced this change in trend and decided to satisfy these needs by offering flexible services on the one hand and bringing more standardisation to both testing delivery and testing processes on the other. The solution to this growing demand for new, robust and efficient software testing services is an Outcome Based Payment service, sometimes called the Testing Factory.

Outcome Based Payment (the Testing Factory) is a service in which the settlement between the supplier and the customer is carried out based on a catalog of products, where each product has a pre-defined unit and price. Customers can order as many products from the catalog as required to fulfill their testing needs. The price of the service is then calculated as the sum of all the delivered products and is not dependent on the team size or project schedule.

At Sii Poland we believe the Testing Factory service is the perfect compromise between standard managed services and body/team leasing. While bringing all the advantages of managed services, with the reduction of time spent on team management as a prime example, it allows the customer to scale the service based on their current needs without any additional effort needed for change request processing.
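The pricing rule described above (cost equals the sum of ordered products at their catalogue unit prices, independent of team size or schedule) is simple enough to sketch. The example below is a hypothetical illustration in Python; the product names and prices are invented for this article, not Sii's actual catalogue.

```python
# Hypothetical product catalogue: product name -> price per unit
CATALOG = {
    "test_case_design": 15.0,       # per test case designed
    "test_case_execution": 8.0,     # per executed test case
    "defect_retest": 5.0,           # per retested defect
    "automation_script": 60.0,      # per automated script delivered
}

def order_cost(order: dict, catalog: dict = CATALOG) -> float:
    """Total cost of an order: sum of quantity x unit price over ordered products."""
    unknown = set(order) - set(catalog)
    if unknown:
        raise ValueError(f"products not in catalogue: {sorted(unknown)}")
    return sum(quantity * catalog[product] for product, quantity in order.items())

# A sprint's worth of testing, priced regardless of how many testers deliver it
sprint_order = {"test_case_design": 40, "test_case_execution": 120, "defect_retest": 15}
print(f"Sprint testing cost: {order_cost(sprint_order):.2f}")
```

The same calculation works for a sprint, an iteration or an entire waterfall phase: only the quantities change, never the basis of the price.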

Sii Sp. z o.o.
Top IT and Engineering services provider in Poland
sii.pl/en
Al. Niepodleglosci 69, 02-626 Warsaw

TOP THREE BENEFITS

The top three benefits for customers who decide to choose this type of service include:
• Cost clarity: with a catalog of available products pre-defined before the service is initiated, and the products’ prices independent of the number of involved testers or the order timelines, customers have clear information regarding the testing costs.
• No team management effort: the Testing Factory service relieves the customer of test team management responsibilities; testing team management activities are fully transferred to the supplier, making them responsible for delivering top quality products within agreed timelines.



• No downtime/overtime costs: a key feature of the Outcome Based Payment service is settlement based on a catalog of products, with no additional costs for downtime, overtime or the onboarding of new test team members. The only cost incurred by the customer is the value of the products ordered from the catalog.

We believe this type of service is aimed mainly, but not only, at companies dealing with a large amount of simultaneous testing activities and continuous development. Our recent implementations show that this type of service brings the most added value to:
• Projects delivered in agile/iterative development methodologies: projects delivered in agile methodologies, with sprint backlog grooming preceding the development start, give a solid base for a precise estimation of the number of products required. The same principle applies in the iterative approach, where the scope of a single iteration gives a solid foundation for project leads to understand the expected number of products required to provide a satisfactory level of test coverage. Each iteration or sprint can then be easily priced using a simple calculation of the required products and their unit prices.
• Waterfall projects: the waterfall methodology, with well-defined requirements and a system specification gathered prior to the development start, gives a solid base for accurate test effort and test coverage estimation. It allows customers to determine the number of products, and the cost of testing, required for a successful delivery of their systems.

BENEFICIAL FOR SERVICES SUPPLIERS

The movement of customers towards Testing Factory type services is also a positive trend for testing services suppliers. Outcome Based Payment services allow vendors to manage their teams more effectively, enabling more movement between projects and customers depending on the current demand. Testers, who are no longer locked into a single area, can switch projects and gain experience more easily. It allows suppliers to build knowledge of the customer’s systems among a wider group of engineers, making it easier to scale the service without compromising the quality of testing. At the same time, it makes suppliers less prone to staff attrition.

With so many advantages, Outcome Based Payment services do come at a price: the time needed to prepare for operational use. The product catalog has to be prepared and, being the only source of available activities, requires thorough analysis, multiple discussions and mutual approval. Product prices are defined based on the average cost of testing services across the customer’s project portfolio, which can make some individual projects less cost effective. A service based on a product catalog is also much less flexible from the perspective of task allocation. We believe, however, that the benefits gained by customers who decide to take advantage of these types of testing services outweigh the limitations of implementation. At Sii Poland we are glad to deliver testing services using this new, robust and efficient delivery method.


SOFTWARE TESTING CONFERENCE NORTH
The Software Testing Conference NORTH is taking place at The Principal York on the 18th-19th September 2018

We at 31 Media are aware that the scope of software testing is forever changing, so it’s important that the software testing community share, collaborate, debate and discuss pressing industry topics to keep one step ahead of market trends. This is where we come in to lend a helping hand, hosting the Software Testing Conference NORTH to provide the software testing industry with invaluable content from revered industry speakers; practical presentations from the likes of BookingGo, Infinity Works, Rolls-Royce, Sky Betting and Gaming, EE Digital, Infuse, BBC, MoneySuperMarket, Camelot Group (UK Lotteries), Moodys and the University of Derby; executive workshops, facilitated and led by key figures; as well as a market-leading exhibition, which will enable delegates to view the latest products and services from SQS, Scott Logic, LEAPWORK, Infuse, iSQI, Parasoft and our gold sponsor, Eggplant.

Last year, the Software Testing Conference NORTH saw more than 150 attendees from a variety of different businesses, including BBC, Lloyds Banking Group, NHS Digital, Aviva and Volkswagen Group, all of whom were keen to interact with their peers and broaden their knowledge base, making it apparent that it was more than essential to host the event for a third year running.

Software Testing Conference NORTH has its finger on the pulse: it is owned and supported by the industry-leading journal TEST Magazine and is produced by the organisers of The European Software Testing Awards. No other software testing conference in the UK or Europe can boast these credentials. It’s here that you get access to the authoritative analyses and ground-breaking research that will enable you to stay ahead of testing trends. So, what better way to learn than to listen to a selection of speakers who have delivered or implemented projects, strategies, methodologies, management styles, innovations, ground-breaking uses of technologies, or best practice approaches in the last 12 months, and who have then gone on to win a prestigious and independent award for their feats?

To book your place today please contact us on:
T: +44 (0) 203 931 5827
E: registrations@softwaretestingconference.com
W: north.softwaretestingconference.com



Enter The European Software Testing Awards and start on a journey of anticipation and excitement leading up to the awards night – it could be you and your team collecting one of the highly coveted awards.

KEY DATES
14 September: Submission Deadline
4 October: Finalists announced
21 November: Awards Gala Dinner

Be a winner - REGISTER TODAY
www.softwaretestingawards.com



18-19 September 2018
The Principal York, York
www.north.softwaretestingconference.com

Software Testing Conference NORTH
Practical Presentations | Workshops | Networking | Exhibition

The Software Testing Conference North is a UK-based conference that provides the software testing community with invaluable content from revered industry speakers; practical presentations from the winners and finalists of The European Software Testing Awards; Executive Workshops, facilitated and led by key figures; as well as a market leading exhibition, which will enable delegates to view the latest products and services. In 2017, the Software Testing Conference North saw over 150 attendees from a variety of different businesses that included BBC, Lloyds Banking Group, NHS Digital, Aviva and Volkswagen Group; all of whom were keen to interact with their peers and broaden their knowledge base.

• A selection of speakers who have delivered or implemented projects, strategies, methodologies, management styles, innovations, ground-breaking uses of technologies or best practice approaches in the last 12 months and have then gone on to win a prestigious and independent award for their feats.
• Get access to the authoritative analyses and ground-breaking research that will enable you to keep one step ahead of market trends.
• Delegates at the event receive pragmatic advice on current issues, which in turn allows them to head back to the office and implement change with immediate effect.
• Over 93% of the delegates at the recent Software Testing Conference NORTH said the content was good or fantastic, with 94% stating the conference was good value for money.

REGISTER TODAY
2-for-1 End User Registration

2 Days**: £525
Day 1 Only*: £325
Day 2 Only: £325

T: 0203 931 5827
W: north.softwaretestingconference.com
E: registrations@softwaretestingconference.com

*Includes entry to the Evening Drinks Reception on the first day of the Conference. All prices are subject to VAT. **2-for-1 offer applies only to 2-Day tickets.

