TEST Magazine March 2016


INDEPENDENCE IN TESTING | QA IN THE GAMING SECTOR



CONTENTS

TEST MAGAZINE | MARCH 2016
COVER STORY: RETAIL

5 NEWS – Software industry news
10 THOUGHT LEADERSHIP – The future of testing in a DevOps world
12 A path to high velocity
14 RETAIL SECTOR – Keeping pace with the changing face of retail
20 QA IN GAMING – The changing nature of the game
24 PREVIEW
28 PROFESSIONAL VIEWPOINT – TEST Magazine asks the experts
30 TEST STRATEGY – Driving down risk
32 VIRTUAL REALITY TESTING – Testing's new reality
36 EXPLORATORY TESTING – Exploring the options with exploratory testing
40 TEST CASE MANAGEMENT – I am a tester, not a document writer
42 The National Software Testing Conference 2016
46 LOCALISATION – Where do we go from here?
INDEPENDENT VERIFICATION AND VALIDATION – Independence in testing safety critical applications

TEST Magazine | March 2016


TESTING RECONSIDERED

31.1% of projects will be cancelled before they ever get completed [1]. Only 16.2% are completed on-time and on-budget [1]. $312BN is spent on debugging per year [2]. The average cost overrun is 189% [1].

WHY IS TESTING TOO SLOW, MANUAL AND LETS AN UNACCEPTABLE NUMBER OF DEFECTS THROUGH?

• POOR QUALITY REQUIREMENTS: 56% of defects stem from poor quality requirements [3], and 64% of total defect costs originate in the requirements analysis and design phase [4].
• MANUAL TEST CASE DESIGN: 6 hours to create 11 test cases with 16% coverage (Grid-Tools audit at a large financial services company: HTTP://HUBS.LY/H01L2BH0).
• UNAVAILABLE OR MISSING DATA: up to 50% of the average tester's time is spent waiting for data, looking for it, or creating it by hand (Grid-Tools experience with customers).
• TESTING CANNOT REACT TO CHANGE: two testers spent two days updating test cases after a change was made to the requirements (Grid-Tools audit at a large financial services company: HTTP://HUBS.LY/H01L2C_0).

A NEW APPROACH TO TESTING

CA Test Data Manager lets you find, create and provision the data needed for testing.

1. BUILD BETTER REQUIREMENTS: 4 hours to model all business requirements as an active flowchart and make them "clear to everyone" [5].
2. AUTOMATICALLY GENERATE AND EXECUTE OPTIMIZED TESTS: 2 business days to go from scratch to executing 137 test scripts with 100% coverage [5].
3. THE RIGHT DATA, TO THE RIGHT PLACE, AT THE RIGHT TIME: 60% improvement in test data quality and efficiency within 3 months using synthetic data generation (Grid-Tools case study at a multinational bank: HTTP://HUBS.LY/H01L2G50).
4. AUTO-UPDATE TEST CASES AND DATA WHEN THE REQUIREMENTS CHANGE: 5 minutes to update test cases after a change was made to the requirements (audit at a large financial services company: HTTP://HUBS.LY/H01L2HP0).

DON'T DELAY, START YOUR FREE TRIAL TODAY: GRID-TOOLS.COM/DATAMAKER-FREE-TRIAL

[1] Standish Group's CHAOS Manifesto, 2014 – HTTP://HUBS.LY/H01L2JK0
[2] Cambridge University Judge Business School, 2013 – HTTP://HUBS.LY/H01L2KY0
[3] Bender RBT, 2009 – HTTP://HUBS.LY/H01L2L80
[4] Hyderabad Business School, GITAM University, 2012 – HTTP://HUBS.LY/H01L2MC0
[5] CA a.s.r. case study, 2015 – HTTP://HUBS.LY/H01L2NJ0

Copyright © 2015 CA, Inc. All rights reserved. All marks used herein may belong to their respective companies. This document does not contain any warranties and is provided for informational purposes only. Any functionality descriptions may be unique to the customers depicted herein and actual product performance may vary. CS200-160313


EDITOR'S COMMENT

HOW WILL IT DEPARTMENTS RESPOND TO DEVOPS?
Cecilia Rehn, Editor of TEST Magazine

In a relatively short time, DevOps has gone from a little-known term to a buzzword circulating in boardrooms, IT centres and start-up offices around the globe. It has been growing in popularity ever since its early adoption by tech pioneers such as Amazon and Etsy, who have evangelised its benefits. DevOps is now something everyone wants to implement – or at least talk about implementing. Although a standardised definition is still up for debate, DevOps aims to establish a culture and environment where building, testing and releasing software can happen rapidly, frequently and more reliably. Implementing this culture and strategy has historically been easier in the Amazons of the world, or in small, lean organisations where engineers, developers and IT operations all sit in the same office around the same desks. Established organisations with complex legacy systems and conventional hierarchies face a harder time becoming leaner, more agile and competitive. However, the question today is no longer 'if' IT departments will respond to this cultural movement, but rather how and when. Gartner, Inc. reports that the market for DevOps toolsets grew from US$1.9 billion in 2014 to US$2.3 billion last year.[1] The research firm also predicts that in 2016 DevOps will emerge as a mainstream strategy employed by 25% of Global 2000 organisations. HP Enterprise forecasts that "within five years, DevOps will be the norm when it comes to software development."[2] And the motivation is clear. It's unavoidable. Speed to market is critical in today's competitive landscape – regardless of what sector you're in. The ability to ensure

continuous integration and continuous deployment of high quality products will make or break enterprises. IT departments will have to respond to the DevOps challenge, either to be amongst the first actors or to compete with new challengers. Here at TEST Magazine, we've seen the topic of DevOps creep into conversation more and more often, and we get a lot of requests for additional coverage on the theme. So what have we done? We've listened, sourced high quality speakers for new DevOps streams at The National Software Testing Conference to help educate us all, and launched a new sister site – www.devopsonline.co.uk – where we'll provide insight, news and opinions on this cultural movement and what it might mean for you. So if you've got any opinions or contributions on DevOps, or any other topics, feel free to drop me a line at the usual address – all suggestions receive consideration and help ensure that we produce the best quality content for everyone. Also, by the time this issue is out, our first Benchmark Report survey of 2016 will be live! We're polling the industry on automation and it would be great if you could take part – you can find more information at www.softwaretestingnews.co.uk/survey.

cecilia.rehn@31media.co.uk

MARCH 2016 | VOLUME 8 | ISSUE 1
© 2016 31 Media Limited. All rights reserved. TEST Magazine is edited, designed, and published by 31 Media Limited. No part of TEST Magazine may be reproduced, transmitted, stored electronically, distributed, or copied, in whole or part without the prior written consent of the publisher. A reprint service is available. Opinions expressed in this journal do not necessarily reflect those of the editor of TEST Magazine or its publisher, 31 Media Limited.
ISSN 2040-01-60
EDITOR: Cecilia Rehn, cecilia.rehn@31media.co.uk, +44 (0)203 056 4599
ADVERTISING ENQUIRIES: Anna Chubb, anna.chubb@31media.co.uk, +44 (0)203 668 6945
PRODUCTION & DESIGN: JJ Jordan, jj@31media.co.uk
31 Media Ltd, 41-42 Daisy Business Park, 19-35 Sylvan Grove, London, SE15 1PD, +44 (0)870 863 6930, info@31media.co.uk, www.testingmagazine.com
PRINTED BY: Pensord, Tram Road, Pontllanfraith, Blackwood, NP12 2YA
@testmagazine

References
1. Gartner, Inc. press release (5 March 2016).
2. 'Predictions for DevOps in 2016', Hewlett Packard Enterprise (2015).




INDUSTRY NEWS

UK TECH SECTOR OPPOSED TO BREXIT

A significant portion of London's technology sector is opposed to Britain exiting the EU, according to a survey of members of Tech London Advocates (TLA), an industry group representing almost 3000 senior members of the capital's tech scene. The questionnaire found that 86.5% of almost 300 respondents believe the UK should remain inside the EU, while 9.49% were still undecided. TLA believes there are five key areas of the tech sector that might be damaged if the EU referendum sees the UK public vote to leave the world's largest single market. These areas are: access to talent, influence in regulatory decisions, access to trade agreements, ability to attract global HQs to the UK and collaboration with other European digital hubs. "The Legatum Institute labelled the UK 'extremely entrepreneur-friendly' and it certainly has been for the last few years. We have made incredible advances in starting up tech businesses," said Russ Shaw, Founder at Tech London Advocates. "If the government moves to limit skilled migration, however, this growth will not continue for long." The UK referendum on whether Britain should remain in the European Union is to be held on Thursday 23 June 2016.

NEW PARTNERSHIP TO BRING UX TESTING TO DIGITAL BUSINESS

TestPlant and GameBench have entered into a technology partnership to provide an integrated solution that brings automated user experience (UX) testing to mobile applications. The joint solution combines GameBench's UX testing capabilities with TestPlant's eggPlant Functional, which uses a patented, image-based approach to automate functional testing. For businesses, UX impacts directly on customer acquisition and retention. In addition to ensuring correct functionality, apps must also deliver seamless fluidity and responsiveness or customers will go elsewhere. As apps become more advanced – using higher-res images, media streaming, and even VR and AR – the need to keep an eye on performance becomes ever more critical. In a study commissioned by Google, 52% of users said that a bad mobile experience made them less likely to engage with a company. And 48% said that if a site didn't work well on their smartphones, it made them feel like the company didn't care about their business. "A frictionless UX gives users an impression of quality and keeps them engaged with your app and your digital business," said Antony Edwards, CTO, TestPlant. "That's why GameBench helps eggPlant users achieve greater quality."

HUGE RISE IN CONTACTLESS PAYMENTS

A new study from Juniper Research has found that the number of consumers making contactless payments via their mobile handsets will reach 148 million this year, with Apple and Samsung together accounting for nearly 70% of new customers. According to the new research, the industry has already received a strong stimulus from the launch of Apple Pay and Samsung Pay in selected key markets. It cited the case of the recent arrival of Apple Pay in China, where nearly 40 million payment cards were registered to the service in 24 hours in mid-February. Furthermore, the research argued that with nearly 1 in 5 POS (Point of Sale) terminals in the US now contactless-capable, the infrastructure was now in place for that market to experience traction. It anticipated that NFC smartphones would be the primary initial driver of contactless payments in the US, given the limited number of cards that currently offer the facility. The research also anticipated that models based on HCE (Host Card Emulation) would be widely deployed by banks and a number of leading OTT (Over The Top) players. It noted that HCE – where credentials and other sensitive data are stored in the cloud – had already been deployed by over 50 financial institutions, including Barclays Bank in the UK. According to research co-author Dr Windsor Holden: "The combination of HCE and tokenisation is extremely attractive to banks. HCE means that they are not dependent on a mobile operator to enable the service; tokenisation reduces the burden on the issuer and allows them to use their existing infrastructure."




HEADPHONES TO BE USED AS BIOMETRIC ID

NEC Corporation has developed a new biometric personal identification technology that uses the resonation of sound, determined by the shape of human ear cavities, to distinguish individuals. The news comes as banks in the UK are looking to implement more biometric security, such as fingerprints and voice 'prints'. The new technology instantaneously measures (within approximately one second) acoustic characteristics determined by the shape of the ear, which is unique for each person, using an earphone with a built-in microphone to collect earphone-generated sounds as they resonate within ear cavities. NEC says the technology has a greater than 99% accuracy. "Since the new technology does not require particular actions such as scanning a part of the body over an authentication device, it enables a natural way of conducting continuous authentication, even during movement and while performing work, simply by wearing an earphone with a built-in microphone to listen to the sounds within ears," said Shigeki Yamagata, General Manager, Information and Media Processing Laboratories, NEC Corporation. Recently, German security firm SRLabs posted online a description of how easily fingerprint sensors can be deceived. The organisation has long warned that the danger with using any biometric marker is that once an identifier has been duplicated, it's impossible to reset like a password. Others have pointed out that while earbud-based checks might be impressive, they might be less convenient. "People always have their handsets with them – that's not the case for their earphones," technology consultant Ben Wood said. "But they do make sense as a way to provide authentication if you are already on a call while using them."


ATOM BANK MOVES TO HEIGHTEN USER INTERFACE

The UK's Atom, a soon-to-be-launched mobile bank, has acquired Grasp – a digital design agency that specialises in building user interfaces for the gaming market. Atom says the acquisition is part of its plan "to create the most engaging user experience in banking." Grasp, a startup headed by Eutechnyx/Zerolight founder Brian Jobling, has worked on gaming projects with some of the biggest consumer brands in the world including James Bond, F1, NASCAR and MTV. "In a single move, this brings together user-interface expertise from the banking industry with experience from the video games industry," says the firm. "This combined experience in 3D technology and design visualisation is driving the differentiation of Atom's customer experience." As part of the deal, Jobling will join Atom as business development director. Edward Twiddy, Atom's Chief Innovations Officer, said: "Our first conversations with Brian about what gaming and the North East development industry could offer started in the spring of 2014, when Atom was just an idea. Now, on the cusp of going live and after several months of testing, we are bringing some of the best in digital design and development into the heart of the family. Their work is our shop window and we're really excited about what has already been built, and all that the future promises."

APPLE'S SUPPORT IN FBI CASE CONTINUES TO GROW

Apple is currently in the midst of a fight against the FBI, which wants to legally compel the tech giant to unlock the iPhone once owned by a terrorist involved in the San Bernardino massacre last December. The FBI claims it cannot access the data on the phone without help, and has asked Apple to create a new version of the phone's iOS operating system that could be installed to disable certain security features. Apple declined, citing its policy never to undermine the security features of its products. Apple also argues that by creating such a security 'backdoor', it would leave all iPhone users vulnerable. In addition, many are sceptical that should Apple abide by this request, it could set a precedent where governments can ask for more 'backdoors' in other devices. A host of supporters from the tech industry stand behind Apple CEO Tim Cook. Industry giants such as Amazon, Facebook, Google and Microsoft, along with 11 other companies, recently filed a joint amicus brief – a court filing that throws their support behind Apple. Twitter, Airbnb, LinkedIn and 13 other companies filed a separate joint amicus brief, and Intel and AT&T have presented their own filings. In addition to support from the tech sector, the United Nations' High Commissioner for Human Rights, Zeid Ra'ad Al-Hussein, has urged the US to avoid crossing a 'key red line' that could jeopardise the quality of life of millions of people around the world. "This is not just about one case and one IT company in one country," Al-Hussein said. "It will have tremendous ramifications for the future of individuals' security in a digital world which is increasingly inextricably meshed with the actual world we live in." The UN High Commissioner is particularly concerned about what could happen in oppressive regimes around the world, as a backdoor would offer them the means for prying into the personal information of their citizens. It could be "a gift to authoritarian regimes" and hackers alike, Al-Hussein said. On the other side, the FBI argues that this is an isolated incident involving a single iPhone 5. In addition, family members of San Bernardino shooting victims are urging that the court "not be led astray by Apple's grandstanding." The dispute continues, and Apple has said it is willing to take the case to the Supreme Court if needed.


www.isqi.org

For information regarding training providers and examinations please contact debbie.archer@isqi.org.uk

iSQI – The leading global certification body for software quality – testing and usability.


How experts are made.



SOFTWARE GLITCH SEES RADAR FAILING ON F‑35 FIGHTER JET

VOLKSWAGEN CHIEF EXECUTIVE LEAVES COMPANY AFTER EMISSIONS SCANDAL Michael Horn, Volkswagen's top executive in the US, has left the company almost six months after the emissions scandal came to the public’s attention. Last September, following the discovery by US regulators, the automaker was forced to admit that 600,000 cars were sold in the US with software designed to cheat emissions tests. VW has confirmed that Michael Horn is leaving "to pursue other opportunities effective immediately."

US UNIVERSITY INTRODUCES NEW CYBERSECURITY PROGRAMME The field of cybersecurity is growing at an extraordinary rate, but there are not enough cybersecurity professionals to meet the demand. Aware that in Colorado, USA alone there are 12,000 unfilled cybersecurity jobs today, the University of Denver has recently added a critical cybersecurity master's degree. Nation-wide, there are expected to be 1.5 million unfilled openings by 2019.


Horn was the public face of the company during the scandal, and admitted last autumn that VW had been "dishonest", adding "we totally screwed up." Addressing a US Congressional committee in October, Horn said: "On behalf of our company, my colleagues in Germany and myself, I would like to offer a sincere apology for Volkswagen's use of a software programme that served to defeat the regular emissions testing regime." Horn has blamed the cheat on "a couple" of software developers in Germany. Some 11 million vehicles worldwide were affected by software designed to cheat emissions testing and VW has set aside €6.5 billion to cover costs.

“President Obama’s latest annual budget proposal includes US$19 billion for cybersecurity,” JB Holston, Dean of the Ritchie School at the University of Denver said. “The nation is in need of more experts as cybersecurity has become a central global concern. We’re positioning DU as a critical platform for driving the public/private cybersecurity ecosystem in Colorado.” The university will be giving accepted students for the 2015‑16 school year a discounted scholarship of nearly 50%, bringing the usual US$57,552 tuition cost down to US$28,800.

Lockheed Martin's F-35 Joint Strike Fighter plane saw new software troubles, as pilots reportedly had to turn the radar off and on in order to get it working. The high-tech, software-driven warplane has been in development since 2001 and has experienced several failures and setbacks. Commenting on the radar fail, US Air Force (USAF) Major General Harrigian told analyst firm IHS Jane's: "What would happen is they'd get a signal that says either a radar degrade or a radar fail – something that would force us to restart the radar." "Lockheed Martin discovered the root cause, and now they're in the process of making sure they take that solution and run it through the [software testing] lab," Harrigian added. USAF expects the bug fixes for the planes to be delivered by the end of March. In Australia, the man who had to sign off on the testing and evaluation of the Joint Strike Fighter for the national defence force, Dr Keith Joiner, now wants the project to be halted. Joiner told Radio National Background Briefing: "Some systems like the radar control are fundamentally worse than the earlier version, which is not a good sign. "The next software version is block 4. It won't be available until 2020. So there'll be nothing but fixing bugs in the original software between 2013 and 2020." The lack of cyber security testing carried out on the plane's software is also adding to concerns. Joiner said: "The only system that has done cyber security, vulnerability and penetration testing is the logistics software. So ordering spares. And it didn't go very well… the most software driven aircraft ever built hasn't yet been tested against cyber security and the modern cyber warfare threats." Previous problems with the F-35 and its three variants have seen it grounded by the USAF over concerns with the engine after a fire.



Featuring stimulating, intriguing articles and features from experienced software testers and leading vendors, you can be sure that you will stay up‑to‑date with the software testing industry.

www.softwaretestingnews.co.uk

INDUSTRY EVENTS

CLOUD EXPO

Date: 12‑13 April 2016 Where: ExCeL, London, UK www.cloudexpoeurope.com

HSBC AND FIRST DIRECT BRING BIOMETRIC BANKING TO THE MAINSTREAM

HSBC and first direct are planning the UK's largest roll-out of voice biometric security technology. HSBC will also introduce Touch ID for mobile banking customers. The move comes after rivals RBS and NatWest have offered fingerprint technology for the last year, and weeks ahead of the launch of mobile-only Atom Bank, which will incorporate a face recognition system.

NEW INITIATIVE TO ENCOURAGE 5000 MORE WOMEN INTO TECH SECTOR

A tech recruitment firm has launched a new initiative to inspire 5000 more women to pursue careers within technology roles in the UK by 2020. In response to dwindling numbers of women in the tech sector, Empiric's Future Tech Girls campaign will work with employers to place students into relevant work experience opportunities.

read more online

★★★

GAMES QA

Date: 26 April 2016 Where: Kings College, London, UK www.tiga.org ★★★

STAREAST

Date: 1‑6 May 2016 Where: Orlando, USA www.techwell.com ★★★

read more online

APPS WORLD NORTH AMERICA

Date: 11‑12 May 2016 Where: Santa Clara, USA www.na.apps‑world.net ★★★

THE NATIONAL SOFTWARE TESTING CONFERENCE

Date: 17-18 May 2016 Where: British Museum, London, UK www.softwaretestingconference.com RECOMMENDED

US DEPARTMENT OF DEFENSE INTRODUCES ITS FIRST CYBER BUG BOUNTY PROGRAMME The Department of Defense has announced that it will invite vetted hackers to test the department’s cybersecurity under a unique pilot programme. The ‘Hack the Pentagon’ initiative is the first cyber bug bounty programme in the history of the US government.

read more online

NISSAN DISABLES APP FOLLOWING SOFTWARE VULNERABILITIES Following concerns that a software fault has left its cars open to hackers, Nissan has disabled the official app for its Leaf electric cars. Security researcher Troy Hunt first found the software flaw and warned the Japanese automaker that hackers could get access to a car, drain its battery and obtain journey data.

read more online

★★★

AUSTRALIAN TESTING DAYS

Date: 20‑21 May 2016 Where: Melbourne, Australia www.testengineeringalliance.com ★★★

NORDIC TESTING DAYS

Date: 1‑3 June 2016 Where: Tallinn, Estonia www.nordictestingdays.eu



THE FUTURE OF TESTING IN A DEVOPS WORLD Software development and testing are in the midst of several paradigm shifts, Matthew Brady, Senior Pre‑sales ADM Specialist, Hewlett Packard Enterprise, reveals.

There are multiple causes for this, which include, but are not limited to, the increased adoption of DevOps, agile development, containers, micro-services, the rise of the API and the growth of mobile. The long-standing approach has been to reserve several types of testing until just before deployment, but testing late is increasingly unsustainable except for existing maintenance activities or where mandated by regulators or customers. Tests still need to be performed prior to deployment to validate system readiness, but issues discovered this late demonstrate fundamental failures in development processes, impact release schedules, increase costs and reduce agility.

KNOWING ONE'S HISTORY “History is not a burden on the memory but an illumination of the soul” – Lord Acton. The history of software testing is extensive. Since at least the 1980s, testing methodologies have separated functional


and non-functional test types, with the latter covering performance, security, reliability, scalability and availability amongst other facets (the '-ilities'). This was because performance and scalability were considered a function of the delivery platform or hosting environment, and because these aspects could not be tested early, as they needed integration, multiple tiers, complete data, scalable systems, etc. Consider a functional requirement: 'be able to login', and a non-functional one: 'up to 1000 users can login in less than 1 second'. Both requirements are equally important in the delivered solution, but the latter could be considered more challenging to deliver, and although you need to be able to login before you can determine how long it takes, the design process should consider both, equally, at the outset. Non-functional testing later in the cycle can leave serious issues undetected until far too late. Performance, scalability and security must be inherent in both the design and system architecture, and should be implemented and tested from the first line of code.
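The login example lends itself to a concrete illustration. The sketch below (in Python; the stand-in login function, user names and worker count are illustrative assumptions, not taken from the article) shows how a functional check and a crude non-functional check can sit side by side from the first line of code:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def login(user: str, password: str) -> bool:
    """Hypothetical system under test; a real check would exercise
    the application's actual login endpoint, not this stand-in."""
    return password == "secret"

def timed_login(i: int) -> float:
    """Return how long one login attempt took, in seconds."""
    start = time.perf_counter()
    assert login(f"user{i}", "secret"), "functional requirement failed"
    return time.perf_counter() - start

# Functional requirement: 'be able to login'.
assert login("alice", "secret")

# Non-functional requirement: 'up to 1000 users can login in < 1 second',
# approximated here with 1000 concurrent attempts across a thread pool.
with ThreadPoolExecutor(max_workers=50) as pool:
    latencies = list(pool.map(timed_login, range(1000)))

slowest = max(latencies)
assert slowest < 1.0, f"slowest login took {slowest:.3f}s"
```

Against a real system the stand-in would be replaced by calls to the deployed service, but even this toy version makes the point: the non-functional assertion can run in the same suite, from the same commit, as the functional one.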


THOUGHT LEADERSHIP

The old adage that you can't bake quality in after the fact is equally applicable to performance, scalability and security (amongst others).

TRANSFORMING DEVOPS INTO DEVTESTOPS

The term DevOps doesn't acknowledge that testing should be integral to the process, with close collaboration between dev and test teams. This enhanced approach could be christened DevTestOps to reinforce the importance of testing throughout the lifecycle. A DevTestOps approach would involve:
• Making developers responsible for all aspects of the application.
• Supporting this by re-tooling dev and test teams with new capabilities.
• Embracing virtualisation.
• Automating security testing and validation.
• Embracing chaos.
• Reducing late-stage testing and using it only for validation.

Considering each of these in turn helps to explain the DevTestOps approach fully.

1. Every component, service or application should be tested as soon as it is in a state to do so, for all aspects necessary for the final delivery. Consider developing a service accessed via an API: it should be tested to verify that it responds to API calls correctly, but it should also be tested to determine how it scales up to run multiple calls, how fast it performs with many sessions (and why performance degrades), how it uses resources, how it responds to calls under impaired network conditions, how it deals with slowdowns, errors or outages of other services it calls, etc. These tests should be performed as soon as possible, preferably just after the first instance of the service is viable, allowing modifications and optimisation to be completed close to the coding cycle. The component or application should be delivered with evidence of the tests performed, along with recommendations for operations on how to host it – for example, memory and processing requirements.

2. Developers will need extensive support to test these other aspects earlier. Collaboration with the test team is critical, and both teams need the right capabilities, including scalable test tools and technologies. Assets created in one cycle should be reusable – for example, unit-level or functional test cases could be reused to test performance or scalability. Test fidelity is also important, and rich UI testing should be favoured over simulation or simple API calls where possible.

3. Virtualisation is key, with some examples below:
• Virtual environments to host components to be tested for performance and scalability.
• Service virtualisation to replace stubs and allow isolation of the component under test from the services it relies upon, with the ability to test slowdowns, failures and outages of the virtualised services.
• Network virtualisation to test the performance and scalability of remote clients or services, including network optimisation analysis to fix slowdowns.

4. Security validation should be integrated within the development process, with components being regularly and automatically scanned for intrusion weaknesses, malicious coding, data exposure and more. This validation can be performed automatically as code is checked in to the source repository, with dynamic analysis aligned with other test cycles.

5. Applications in production are subject to unpredictable usage patterns, and failures or outages of hosting environments, cloud infrastructure, dependent services, etc. are difficult to anticipate. Testing often follows predefined scenarios focused on successful outcomes. Chaos engineering is the approach of becoming 'comfortable being uncomfortable', adding uncertainty to test activities and practising how to address production issues before they happen. It includes taking services offline, adding spikes of traffic and failing over to standby environments during tests, as well as testing for negative outcomes (for example, high-volume failed login attempts).

6. Final-stage testing will still be needed before deployment of the fully integrated system, but with better coverage earlier in the process, the issues encountered should be greatly reduced. Failures detected only in late-stage tests should red-flag the testing done earlier and require enhanced testing earlier to ensure they are not repeated in future iterations.
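The service virtualisation and chaos points above can be made concrete with a small sketch. All of the names below (VirtualService, fetch_rate) are illustrative assumptions, not from any product mentioned in the article; the core idea is simply to replace a real dependency with a virtual one whose slowdowns and outages can be dialled up on demand:

```python
import random
import time

class VirtualService:
    """A minimal service-virtualisation stub: serves a canned response,
    with configurable latency and failure injection."""

    def __init__(self, canned_response, latency_s=0.0, failure_rate=0.0):
        self.canned_response = canned_response
        self.latency_s = latency_s        # simulated slowdown, in seconds
        self.failure_rate = failure_rate  # fraction of calls that fail

    def call(self, request):
        time.sleep(self.latency_s)        # virtualised slowdown
        if random.random() < self.failure_rate:
            raise ConnectionError("virtualised outage")
        return self.canned_response

def fetch_rate(service):
    """Component under test: it should degrade gracefully, not crash,
    when the dependency it calls is slow or unavailable."""
    try:
        return service.call("GET /rate")
    except ConnectionError:
        return {"rate": None, "source": "fallback"}

# Isolate the component from the real service it relies upon, then
# dial up failures to rehearse an outage before it happens in production.
healthy = VirtualService({"rate": 1.25, "source": "live"})
broken = VirtualService({"rate": 1.25, "source": "live"}, failure_rate=1.0)

assert fetch_rate(healthy)["source"] == "live"
assert fetch_rate(broken)["source"] == "fallback"
```

Tools such as commercial service-virtualisation suites do this at the network level with recorded traffic, but the test shape is the same: the component is exercised against failure modes that would be hard to provoke in a shared integration environment.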


MATTHEW BRADY SENIOR PRE‑SALES ADM SPECIALIST HEWLETT PACKARD ENTERPRISE

Matthew Brady has over 25 years’ experience in IT consultancy and sales support for a wide range of clients across many industries. His expertise includes web & mobile performance analysis and monitoring, load and stress testing, functional testing, software quality analysis, network analysis, application development and more. Matthew has a BSc (hons) in Mathematics from Salford University.



A PATH TO HIGH VELOCITY Neil Batstone, Vice President, EMEA, Worksoft, shows how to test 500,000 business process steps in three hours.

In this always‑on, connected world, it seems every week brings another story of a software glitch that brings an enterprise to a virtual standstill. The auto manufacturer that can't move cars out of the factory because of a problem with an inventory management upgrade. The electronics giant that inadvertently re‑labels all of their internal product codes. The electric utility provider whose systems problems cause the lowest customer satisfaction rankings in their industry. All true stories. For every one that makes the news, there are probably hundreds of failures in enterprise apps – large and small – that never make the home page.

“NOT ON MY WATCH”

For many CIOs, maintaining business continuity has become a high priority because introducing innovative digital technologies remains a top priority. Gartner says that there's been more technology change in the last three years than in the prior 20 combined. In other words, CIOs need to be able to change a tyre whilst rolling down the road! For them, the integrity of the business process is vital – before, during, and after the innovation projects that bring new digital, cloud, mobile, big data, web, and other enterprise apps into the organisation.

AUTOMATION MAKES IT POSSIBLE

There's only one way to ensure that every business process and enterprise app works like it should on your watch. Every one of them needs to be tested. And to keep up with the pace of innovation, testing has to be agile. To be agile, it has to be automated. Obviously, if you're validating 500 core business processes every day or testing 500,000 business process steps every night, it can't be done manually. Those days are over. Today, automation platforms have replaced manual labour with digital labour for functional testing and business process validation. Sure, it's an investment in new work practices and some new automation software, but that's small compared to a major disruption in business continuity. Here are four key steps to lock in rock‑solid business execution of enterprise systems:

STEP 1: FOCUS ON THE BUSINESS USER

The first step is to have a firm understanding of how business users actually use enterprise apps and 'how things really work around here.' The problem is that generating this understanding is time consuming and difficult. Process and application knowledge has to come from business users and business analysts, whose time is expensive – and any time spent on this takes away from their primary mission of running the business. Even worse, once this hard‑won information is captured, it can become out‑of‑date in a matter of weeks as processes change. Fortunately, automation can help in a big way to capture processes. With software for automated business process discovery, business users simply turn on a process 'capture' feature from their desktop toolbar when executing a business process in their enterprise application of choice, such as SAP or a web application. When the process is complete, they turn off the capture. Every business process function, keystroke, and transaction has been uploaded into the automation software. In this way, the software captures process information directly from the user's interaction with the system and its underlying business objects. No interviewer. No time delay. High accuracy. Lowest cost. This captured process becomes the basis for functional test automation.
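To make the capture‑and‑replay idea concrete, here is a minimal sketch of what a captured process might look like and how it could drive a test. The step format, the `FakeApp` driver and all names are hypothetical, invented for illustration rather than taken from any product mentioned here.

```python
# Hypothetical shape of a captured business process: an ordered list of UI steps.
captured = [
    {"action": "open", "target": "OrderEntry"},
    {"action": "type", "target": "Quantity", "value": "10"},
    {"action": "click", "target": "Submit"},
]

class FakeApp:
    """Stub driver standing in for the real UI-automation layer."""
    def open(self, target, value=None):
        return True
    def type(self, target, value=None):
        return value is not None
    def click(self, target, value=None):
        return True

def replay(steps, app):
    """Replays captured steps against an app driver; each step becomes a check."""
    results = []
    for step in steps:
        handler = getattr(app, step["action"])
        ok = handler(step["target"], step.get("value"))
        results.append((step["action"], step["target"], ok))
    return results

results = replay(captured, FakeApp())
```

The point of the pattern is that the test script is data, recorded from a real user, rather than code written by hand.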

STEP 2: AUTOMATE END‑TO‑END PROCESSES

Automation is not limited to a single transaction by an individual user. You may have a dozen or more people involved in a complex end‑to‑end business process that crosses multiple apps. Today's automation platforms can 'stitch together' business process segments across users and geographies for a comprehensive, end‑to‑end view of the whole process and every interface. It's not a hypothetical process or the process as originally designed – it's the process as actually executed by real business users every day. This ensures that test automation covers not just the entry application but every back‑end application and integration. It also lets you identify all business process variations and check them too.

STEP 3: TEST CONTINUOUSLY

Because much of the manual effort in capturing process information has been eliminated, it's possible to cover 90% or more of the core business processes. And because automation is fast, you can run your automation portfolio on demand and even on a daily basis. How often should you test? The frequency of testing needs to match the rate of change and digital transformation in your enterprise landscape. If not, you're exposed and falling behind! If new technology or updates are deployed monthly, you need to check all your interconnected business processes and enterprise apps monthly or better. If you have many technology projects, maybe it needs to be weekly. Some companies validate their core business processes Monday, Wednesday, and Friday. And if your enterprise relies heavily on hybrid cloud apps – where you don't control the timing of changes (like Salesforce.com) – then maybe you need to perform daily validation. The same goes for companies that are handling 4000 SAP transports per month. When companies don't match the rate of change with the rate of testing, it often causes news‑making business disruptions. When firms shortcut testing or deploy changes without testing anything at all, there is enormous risk to business continuity. With high‑speed automation, your team can uncover problems before your business users or customers do – to ensure every business process works. The new gold standard is to check everything, every day.

STEP 4: SCALE BY USING THE CLOUD

What most people don't realise is that automation platforms such as Worksoft can distribute automated testing across multiple machines to achieve enormous scale and full enterprise‑level coverage. 'Multiple' doesn't mean two or three. Companies regularly use anywhere from 30 to 150 virtual machines, and some are contemplating 1000 or more in the very near future. And with a public cloud infrastructure, it's possible to spin machines up on demand and spin them down when automated testing is complete – for the most cost‑effective resource management possible. So there you have it. For some of the people we work with, very high velocity business process testing means confidence and ironclad business execution. Others think of it as insurance or a safety net. But whatever you call it, these automation platforms let the CIO say, "Nothing's going to happen on my watch," and mean it.
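The scale‑out idea can be sketched with a worker pool. Threads stand in here for the 30 to 150 virtual machines the text describes, and `run_step` is a hypothetical placeholder for executing one business‑process step; no real tool's API is implied.

```python
from concurrent.futures import ThreadPoolExecutor

def run_step(step_id):
    """Placeholder for executing one automated business-process step."""
    return step_id, "pass"   # a real runner would drive the app and report

def run_suite(step_ids, workers=30):
    """Fans the suite out across a pool of workers; with a public cloud the
    pool would be provisioned on demand and torn down after the run."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(run_step, step_ids))

# 1000 steps spread over 30 workers instead of one machine in sequence.
results = run_suite(range(1000), workers=30)
```

The design point is elasticity: the suite size stays fixed while the worker count is tuned to the deadline.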

NEIL BATSTONE VICE PRESIDENT EMEA, WORKSOFT

As Vice President, EMEA, at Worksoft, Neil Batstone brings over 20 years of high tech leadership experience in sales, operations, services and business development. At Worksoft, Neil ensures EMEA customer success with a strong focus on the United Kingdom, DACH and the Nordics.



INDEPENDENCE IN TESTING SAFETY CRITICAL APPLICATIONS

Odd Ivar Haugen, Principal Engineer, DNV GL Marine Cybernetics Services, asks why independence in software testing is so important. Or: what might happen if the verification effort relies solely on internal testing by the developer?


When someone develops and/or configures a software system, that person wants the system to work. This is one of the main motivations for a developer. This mind‑set may easily be carried into the verification process of the system. Entering the verification stage thinking 'it will work', or even 'it better work!', may lead to blind spots – unconsciously avoiding challenging areas of the software that we might feel are a bit dodgy – resulting in fewer defects being discovered. This is a well‑known effect called confirmation bias, blended with a dash of conflict of interest. If we think, and truly believe, that something works, "people seek data that are likely to be compatible with the beliefs they currently hold."1 Moreover, if finding many defects may delay the delivery date of the system, the motivation for finding defects might be degraded. Independence in the verification effort facilitates a fresh viewpoint on the system and ensures objectivity. The degree of objectivity should rise with increasing system safety criticality. This article discusses independence in safety critical software verification and validation, 'IV&V' for short.

According to IEEE,2 a standard used for critical weapon systems of the US armed forces and by NASA, the FDA, the FAA, and other federal agencies, independence in the verification effort is described with three parameters, and a level of independence for each of those parameters. The three parameters are Technical Independence, Managerial Independence and Financial Independence. The following description of the independence parameters is taken from IEEE.2

TECHNICAL INDEPENDENCE

Technical independence includes both personnel and tools. The important aspect of independent personnel is that they must take an independent viewpoint of the system and formulate their own understanding of the system under test. This means that to be considered an independent verifier, one must analyse the system under test; design, implement, and execute the test cases; and formulate and report possible defects. Note that witnessing a test activity performed by the vendor does not qualify as independent verification. The verification effort should use its own (independent) tools for system analysis, test execution and reporting. Technical independence also covers the input data used during test, and the plant (system under control) simulator if the system under test is a safety critical control system. If tools are shared, or if the personnel conducting the verification are close to the development team, technical independence is compromised.

MANAGERIAL INDEPENDENCE

Managerial independence reflects the verification effort's organisational link to the developer. The verification organisation should independently select the test scope, tools and techniques in verifying the test target. Communication of test results to project stakeholders should not be restricted by the developers.

FINANCIAL INDEPENDENCE

Financial independence requires that the verification budget is not controlled by the developer. The developer should not be able to withhold funds from the verification effort, apply 'adverse financial pressure', or in other ways prevent the verification effort from completing its task.

ODD IVAR HAUGEN PRINCIPAL ENGINEER DNV GL MARINE CYBERNETICS SERVICES

Odd has worked at Marine Cybernetics with 3rd party HIL testing of maritime safety critical control systems for 10 years, with dynamic positioning as the key field of knowledge. His current main field of interest is trying to build a bridge between system (software) safety analysis and software test methodologies. Moreover, Odd advocates for the difference between the two quality characteristics "reliability" and "safety", particularly with respect to large complex software intensive integrated real time control systems.

FORMS OF INDEPENDENCE

There are five forms of independence described in IEEE:2 Classical, Modified, Integrated, Internal, and Embedded IV&V. Different forms constitute different levels of each type of independence (Technical, Managerial and Financial), depicting the rigour with which the independence is implemented. If the verification effort is organised as an external organisation (a different company), using its own (independent) tools, doing its own (independent) system analysis and controlling its own budget, the independence is termed Classical Independence. Classical IV&V is adequate for the most safety critical software systems. For the other forms of IV&V (Modified, Integrated, Internal, and Embedded), one or more types of independence (Technical, Managerial and/or Financial) are compromised to a smaller or larger degree. If the verification effort is organised within the developer's group, although not directly involved in the development, all three types of independence are compromised; this form of independence is called Embedded IV&V. The point is not that a software development team should stop testing their own software; the point is that for safety critical applications, relying solely on Embedded IV&V is not adequate, and does not deliver the required level of confidence in the system.

IV&V OF SAFETY RELATED SOFTWARE

The system software safety process is a sub‑process of the system safety process. The software is said to be safety related if it contains hazardous functions that may be a causal factor in an accident. These software systems must be developed and verified with increased rigour, with independence in the verification effort being one such factor.


FOCUS AREAS WHEN DEVELOPING (SAFETY RELATED) SOFTWARE

One may broadly divide the development of any software into two focus areas: the software product, and the software lifecycle process. Both focus areas consist of artefacts, or work products, that should be subject to verification. Safety standards address both areas; however, independence requirements for product verification are almost non‑existent in standards such as IEC 61508. IEC 61508 focuses on independence in the verification of the development process, not of the product itself. In general, many standards rely too much on the assumption that a rigid development process will guarantee a safe product; however, as we all know: 'The proof of the pudding is in the eating.'

SOFTWARE QUALITY CHARACTERISTICS AND SYSTEM PROPERTIES

Although a software V&V may focus on many software quality characteristics, including Functional Suitability, Performance Efficiency, Compatibility, Usability, Maintainability, and so on,3 the main objective of an IV&V process in a safety critical development project is safety, or 'freedom from risk.' The IV&V has no incentive other than contributing to the safe operation of the system. In contrast, the internal verification must not only focus on a number of other quality characteristics; it may also, deliberately or not, have conflicting interests. Safety is a system property, as opposed to other quality characteristics, such as reliability, which is a component property. Component properties may be verified through component or single‑application testing, while system properties, including safety, should also be verified at a higher system integration level, because hazards often exist in the interaction between sub‑systems. Safety requirements posed upon single components will be less effective in dealing with safety in highly complex, software intensive systems. Accidents happen without component failures, and highly reliable components and redundancy, although still important, are less effective in preventing accidents in such systems.

IV&V EFFORT OF SOFTWARE DEVELOPMENT ARTEFACTS

Software development artefacts consist of safety policies, coding standards, and the (safety) analysis methodology to be used, including the level of rigour with which these should be applied for a particular software module or application. In some safety standards, these artefacts are the only ones to be verified by an independent organisation. We can all agree that by using a rigid, well‑organised development process, based on well‑founded safety policies, the probability of developing unsafe software decreases. The problem, however, is that we humans don't always do what we are told, so by looking only at the 'check‑list', how certain are you that the items actually have been performed, or how well each item has been performed? As I sometimes tell my three boys, we adults sometimes behave like kids in disguise: big bodies but immature between the ears. Telling my son to go to his room to do his homework, can I rely on him actually doing it, and think no more of it? Even though I believe he will, I want to check the quality of his work, and to what extent he has carried out his assignments. "Simply following a process does not mean that the process was effective, which is the basic limitation of many process assurance activities."4 Independent testing of the product is like enabling a feedback loop from the development process, checking how effective the development process has been, and possibly updating it based on this feedback.

IV&V EFFORT AND SOFTWARE PRODUCT ARTEFACTS

Software consists of many artefacts, or test items, produced in different phases of the development process. Test items can be documents, such as design requirements, produced early in the development phase, or operator manuals, often produced just prior to release. The source code is of course another test item category – a very central one, I might add. These test items are closely linked to the developed product. Static testing of software requirements and design documents is very important in discovering flaws as early as possible, decreasing the cost of re‑work. Independent V&V increases the confidence in the correctness and completeness of the documents. The fresh viewpoint increases the probability of discovering omissions, which is a known source of unsafe behaviour of software.5 My impression is that omissions are especially difficult to discover for a person too close to the software development process, which increases the value of an independent verifier. Ideally, all defects are identified within the current development phase, leaving only defects introduced within the active phase for the verification effort to discover. Unfortunately, this is not the case in real life. The verification effort is not perfect, and can never find all defects; some defects will always find their way into the next phase and into the next produced item. As stated earlier, a difficult type of error to discover early in the verification process is omissions – what is missing? The items produced along the development process transform their physical (from paper to code) and functional properties. The level of complexity will also increase. The test process (method, tools, documentation) for each test item must adequately reflect its physical and functional properties, complexity, safety level, and risk factors. The test method must change as the test items transform. Finding all defects by reviewing the system documentation, however important, is impossible. Early in the development/integration process, the system exists as nothing more than mental models on a 'piece of paper' that must be interpreted by humans – developers, managers and software testers – and projected onto the finished system. These cognitive processes will inevitably lead to misunderstandings and defects such as omissions. Omissions may be explained by a lack of tacit knowledge: knowledge presumed by the author(s) of the requirements, and therefore not made explicit. Omissions in the requirements leading to coding defects can then be explained as the result of a communication gap between the author(s) of the requirements, and the developer(s) of the source code. The question will then be whether we can close the gap by increasing the level of detail, and the length, of the requirements (a longer "string"6). Or perhaps a document, such as the software requirement, can never describe a complex system in such a way that it can be coded without flaws? And, will the software tester

be able to project the requirements correctly onto the end system and then decide whether it is correct or not? I think that the requirements must be transformed into something more tangible, like running code, to make the requirements (and defects) more visible, also for the software tester (changing the physical form of the [communication] entity5). Of course, requirements are also transformed into records in a database. This is still not enough, and it can even make the communication gap wider. After all, prose suits us humans relatively well for visualising thoughts, ideas, and systems for that matter. I mean, how well would we visualise the content of The Lord of the Rings by reading the books as standalone records in a database, like bullet points? We should not underestimate the power of good (requirement) prose. When the system, or test items, transform from paper to lines of code, further to running programs, and finally to the complete integrated system, they become more and more concrete (and complex). Increased concreteness enables a better understanding of the system (safety) properties and therefore a better chance to discover (safety critical) defects – not only defects introduced in the current development phase, but also missed defects from earlier phases, such as requirements and design. Why is this relevant when discussing independence in software testing? The independence in the verification effort should be maintained throughout the development lifecycle, not suddenly drop from "Classical IV&V"2 early in the development phase, when verification is conducted as document reviews, to "Internal IV&V"2, where all independence parameters are compromised, just when the work products start to get integrated and complex.

INDEPENDENT V&V AND INTEGRATION LEVEL

During the software lifecycle the software product (and system product) exists at different integration levels. A test item can be both a (standalone) unit and a large, complex integrated system. When the target software is going to be part of larger systems (systems of systems), there is a need for a system integrator, controlling the interaction between sub‑systems, often supplied by different vendors. System integrators are the companies putting their names on the end product, like Audi in the car industry, or Boeing in the airplane industry.

THE MARITIME AND OFFSHORE INDUSTRY

In the maritime and offshore industry there are no such strong system integrators as in the car and airplane industries. The shipyards building the ships and offshore drilling rigs traditionally have a stronger focus on keel and steel, not on highly integrated, safety critical, complex, software intensive systems. As these systems already exist on board most ships today, there is a need for an independent V&V effort, focused on the safety aspect and capable of testing these systems at the system integration level, prior to installation on board the vessel.


References
1. Kahneman, D., 'Thinking, Fast and Slow', Penguin Group (2011).
2. IEEE 1012, 'Standard for System and Software Verification and Validation' (May 2012).
3. ISO/IEC 25010: 'Systems and software engineering - Systems and software Quality Requirements and Evaluation (SQuaRE) - System and software quality models' (March 2011).
4. Leveson, N., 'Engineering a Safer World', The MIT Press (2011).
5. Ericson II, C. A., 'Software Safety Primer', CreateSpace Inc. (2013).
6. Collins, H., 'Tacit and Explicit Knowledge', The University of Chicago Press (2010).

Further reading ISO/IEC/IEEE 29119‑1..4: 'Software and systems engineering - Software testing' (September 2013).



KEEPING PACE WITH THE CHANGING FACE OF RETAIL

Chris Addis, VP & General Manager EMEA, SOASTA, looks at the emergence and impact of digital performance management.


Change is inevitable. Nowhere is this more apparent than in the retail sector. The growth of m‑commerce alongside and, in many cases, beyond ecommerce, coupled with the move towards promotional events such as Black Friday and flash sales, has forced retailers to take stock of their online strategies. The changing face of retail is mirrored in the changing nature of testing. The two are intertwined. As retailers invest in digital transformation programmes, so the notion of testing is evolving towards digital performance management (DPM). As the retail sector continues to change, can digital performance management help to anchor retailers' digital flotilla in a sea of change?

AN EVOLVING RETAIL LANDSCAPE

The 2015 Christmas trading figures told their own story about the changing face of the retail sector. While retailers struggled to increase the number of people through their shop doors, it was a different story as far as online trading was concerned. Most large retailers reported an increase in online trading to some degree. House of Fraser reported a 40% boost in its online sales during Black Friday and a 61.8% jump in online sales in the seven days before Christmas. John Lewis saw a 31% increase in sales generated from mobiles and tablet devices, and online sales accounted for a significant proportion of its overall revenue over the festive season. Andy Street, Managing Director of John Lewis, was reported as saying that the role of the shop was absolutely critical in driving those online sales. This suggests that customers continue to attach a high value to attributes such as experience and service. Many may suggest that the answer is simply to replicate these attributes online, but this is to overlook the fact that in an online, digital environment, service and experience are defined in a very different way. This is not about replicating the in‑shop experience per se, but rather translating it so it centres on and around the base needs and key considerations of the consumer when they shop online – namely time, convenience, speed and availability. In short, customer service and experience in a digital environment centre on managing expectations, and it is this that digital performance management delivers.

THE PATH TO DIGITAL PERFORMANCE MANAGEMENT

It would be fair to say that the current view that prevails about testing is somewhat clouded by the belief that testing is complex, time‑intensive and a somewhat exclusive affair. Testing is generally regarded as the preserve of large companies with deep pockets and access to datacentres with row upon row of servers poised to perform load and stress tests on the IT infrastructure. This was certainly the case back in the 1990s, when client server was all the rage and testing took weeks and months to set up, run and report. Companies had the luxury of having time on their side, as the web was still in its infancy, ecommerce was still a novelty and mobile phones were, well, just phones. Fast forward 20 or so years and it is consumers calling the shots and setting the agenda as social media, smartphones and tablets dominate the way individuals engage and interact with retailers. Time is no longer a luxury and testing has to evolve to embrace an environment based around smartphones, tablets and apps. For retailers that maintain legacy testing technology, the challenge is trying to figure out how to adapt an inflexible, expensive and linear testing tool designed in the 1990s to solve very different problems to the demands of a 21st Century environment. Those demands dictate that testing can no longer be a static exercise conducted in the confines of a lab but an ongoing, real‑time event that is anchored within the consumer environment. Fortunately, the advent of cloud computing has fundamentally changed the testing landscape. Retailers, irrespective of their size, can deploy scalable, cost‑effective and dynamic testing platforms that deliver meaningful data and insights into not only the performance of their website or app but how that performance affects consumer behaviour and the way their customers interact with them online. In an age of the smartphone and tablet, continuous testing becomes the norm rather than the exception.
It is the conflux of changing consumer habits and expectations coupled with

Time is no longer a luxury and testing has to evolve to embrace an environment based around smartphones, tablets and apps

CHRIS ADDIS VP & GENERAL MANAGER EMEA SOASTA

Chris was SOASTA’s first employee in Europe and has seen the company expand its European footprint in that time. Chris brings a vast level of industry knowledge and experience having held senior sales and leadership roles at Rightscale, Gomez, S1, SAP and IBM.

T E S T M a g a z i n e | M a r c h 2 01 6


22

R E T A I L

S E C T O R

By embracing digital performance management, Nordstrom is able to proactively tune performance front to back, gain a deeper understanding of how real user experiences impacted business

the availability of cloud‑based technologies that has led to the emergence of digital performance management. Where once companies focused on testing in isolation, now the emphasis is on understanding the impact that a website’s performance has on end user behaviour and managing those expectations. Testing has given way to a more detailed and analytical approach that seeks to better align the digital estate with the needs and expectations of its end users. Understanding the subtle difference between the overall speed of your site and the time it takes for specific webpages to load is fundamental to understanding end user behaviour. If a retailer can establish which pages need to load faster and how their end users navigate through their site, they can better anticipate and plan for ongoing as well as seasonal traffic. That is where digital performance management comes into its own – delivering detailed, actionable data and intelligence that allows retailers to develop and refine their digital estates to deliver the online equivalent of customer service and


experience – performance. It’s something that a number of retailers in the US have been quick to embrace as they transition their businesses to increasingly digital estates. Nordstrom is one example of a retailer that has taken the notion of customer service and successfully translated it to its online operations.

THE NORDSTROM EXPERIENCE Customer service has always been a hallmark of the Nordstrom brand. When the retailer noticed a change in customer satisfaction survey scores for online performance, it immediately took notice. Synthetic monitoring and internal testing weren’t giving Nordstrom the full picture – it needed to understand customer‑perceived performance and obtain actionable data to meet expectations. In order to extend the same level of top‑quality service customers expected in‑store to online shopping, Nordstrom started to look

for a performance management solution that provided an end‑to‑end view of server‑ and client‑side performance, rooted in real customer experiences. What it invested in was a real user monitoring (RUM) solution. Nordstrom was moving heavily into cloud technologies and SOASTA provided the scale and support it needed. The engagement began with the vendor’s CloudTest On‑Demand services on Nordstrom’s mobile POS system. After the success of that project, Nordstrom implemented the cloud solution across its entire network. Though the organisation was supportive of testing in production, it was truly convinced of the business value at the 2014 Anniversary Sale. The Anniversary Sale is Nordstrom’s biggest event, with traffic peaks four to six times higher than usual. Going into the 2014 Anniversary Sale, Nordstrom was able to find two critical defects in production that it had not been able to see in the performance test lab. That demonstrated why it needed to test in production, and it began testing regularly in production with CloudTest,



about 10 times a year. They also started using the built‑in RUM capabilities of the mPulse platform. The mPulse platform summarised graphical information on performance and also gave key business metrics, especially conversions as a function of response time, so Nordstrom could see how the customer experience was affecting customer behaviour. Since the mPulse information was hosted by SOASTA, Nordstrom didn’t have to build the infrastructure to store the data in‑house, resulting in a significant cost saving.

Perhaps the most important application Nordstrom used the mPulse platform for was in addressing the customer satisfaction survey feedback. The company investigated and found that it was front‑end performance that was degrading. Previously, it had primarily focused its resources on back‑end, server‑side performance. The findings prompted a shift in the culture and mindset within the retailer. Nordstrom’s senior program manager of ecommerce operational intelligence was tasked with using RUM to marry client‑ and server‑side data to make data‑driven decisions about how online performance affects the business. To do this, Nordstrom customised its use of mPulse to meet its unique needs. Typically, much of the customer experience is not captured when performance is measured by page load time alone, so Nordstrom did some work to shift the tagging and configure mPulse to give it actionable information. The ability to derive deeper analytics with access to raw mPulse data, and then combine that data with other sources, proved invaluable. Prior to deploying the DPM solution, there wasn’t a good metrics‑based sense of customer‑perceived performance, as Nordstrom was relying on customer satisfaction surveys to measure online performance. With mPulse, Nordstrom set service‑level objectives for mobile and web pages, identified pages not meeting those objectives, and plans to use this data to drive development work to fix those pages.
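Conversions as a function of response time is, in essence, RUM beacon data bucketed by page load time. The sketch below is illustrative only – it is not mPulse’s implementation, and the beacon data and bucket width are invented:

```python
from collections import defaultdict

# Hypothetical RUM beacons: (page load time in seconds, did the session convert?)
beacons = [
    (1.2, True), (1.8, True), (2.4, False), (3.1, False),
    (0.9, True), (2.2, True), (4.5, False), (3.8, False),
]

def conversion_by_load_time(beacons, bucket_width=1.0):
    """Bucket beacons by load time and compute the conversion rate per bucket."""
    totals = defaultdict(int)
    conversions = defaultdict(int)
    for load_time, converted in beacons:
        bucket = int(load_time // bucket_width)  # bucket 0 = 0-1s, 1 = 1-2s, ...
        totals[bucket] += 1
        if converted:
            conversions[bucket] += 1
    return {b: conversions[b] / totals[b] for b in sorted(totals)}

for bucket, rate in conversion_by_load_time(beacons).items():
    print(f"{bucket}-{bucket + 1}s: {rate:.0%} converted")
```

On real traffic, a curve like this makes the business case for performance work concrete: it shows the conversion rate falling away as pages slow down.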
By embracing digital performance management, Nordstrom is able to proactively tune performance front to back, gain a deeper understanding of how real user experiences impact business, and utilise real‑world data to detect and resolve problems well ahead of customer satisfaction survey results.

SUMMARY If there are any lessons to learn from Black Friday or the Christmas trading season, it is that managing expectations is central to a successful and profitable online operation. In the digital environment, customer experience and service are driven by a deep understanding of page load times, availability and how these impact end users. There’s an old adage: you can’t manage what you can’t measure. While this still holds true, the management and the measurement are now combined in digital performance management. As retailers adjust to a growing digital environment, they would be equally prudent to adjust their testing regimes towards one that is more holistic, insightful and able to manage the great expectations of the consumer.

QA, LOC & CS SUMMIT

LONDON, 26 APRIL

SPEAKERS INCLUDE: MARCIA DEAKIN, NEXTGEN SKILLS ACADEMY MICHAEL SOUTO, LOCALISEDIRECT SYLVIA FERRERO, MEDIA-LOC STUART PRATT, PKR DIGITAL MARK ESTDALE, OMUK

HEADLINE SPONSOR

NETWORKING DRINKS SPONSOR

EVENT PARTNERS

MEDIA PARTNERS


WHERE DO WE GO FROM HERE? The video game experience has been revolutionised over recent decades. As a consequence, localisation for console games is in constant evolution, Nadège Josa, Senior Project Manager – Localisation Services, Sony Computer Entertainment Europe (SCEE) Limited, explains.



L O C A L I S A T I O N

In the early days, games required little localisation input, as they featured little on‑screen text and few menus or instructions for the gamer. With the move to contemporary games whose diverse, creative and compelling scripts and animation engage players in the experience, providing localisation that supports this evolution became vital. Localisation broadened its scope and sphere of action, from translation to culturisation and transcreation. Source scripts in Japanese, US or UK English were adapted into market variants, enabling the end user to experience a quality localisation suited to their own culture, environment and knowledge.

AS GAMING EVOLVES SO MUST LOCALISATION Nowadays, games are developed with ever more capable tools, ever finer artistic skills and inspired scriptwriting. Some franchises even marry cinematography and interactivity, giving end users a fantastic new approach to gaming by involving them emotionally in the experience. The impact on localisation is evident: scripts to localise are more elaborate and intricate, and feature increasingly complex branching narratives. More characters feature in the storyline. Audio performances require top‑quality execution from actors, with emotionally‑charged deliveries.

As localisation gets in motion alongside development milestones, it is naturally drawn into agile processes: developers send numerous batches of source text and audio, each ‘finalised’ on completion of their work. This strategy constrains localisers to react rather than plan, leaving us no alternative in crunch time but to mirror the fragmentation. As a result, consistency suffers, continuity of work is broken up, resources aren’t judiciously allocated to the same project throughout its lifecycle (due to availability and other projects’ bookings), translation styles vary, inconsistencies crop up, context is misinterpreted, and translators and actors work on excerpts of scripts without necessarily being given the opportunity to grasp the underlying plot or context.

So do agile methodologies disserve localisation? Agile methodology has many benefits, but it also brings the challenges described above, which offer new opportunities to explore. And with them, smart solutions open new horizons. In Localisation Services at SCEE, we feel a conscious desire to explore smarter ways to create more time‑ and cost‑effective strategies. No two games are alike, yet every title invites bolder and more daring approaches. Not all new initiatives bring positive results; however, lessons learnt bring invaluable experience of how we can improve our services.

So what are the results of recent exploration? In Localisation Services we use the Localisation Asset Management System (LAMS); cloud‑based, it serves primarily as a repository for all source and localised text and audio assets. It also provides a plethora of smart features, one of them being automatic subtitle creation from translation strings and audio files.

HOW IS LAMS USED?

Developers input the source text and audio into LAMS. Traditionally, the project manager in Localisation Services handles all interactions with LAMS: downloads, uploads of all text and audio files, and so on. To compensate for vendors not having full access to the script and storyline, and hence translating without a full project overview, we decided to open up LAMS to vendors. Whilst this meant that project managers would delegate responsibility for downloads, uploads and file checking before delivery, it also meant we were giving translators a full overview of the script for read‑only reference and precious context. Opening up LAMS also meant giving vendors more ownership and accountability (LAMS flags up length issues, file naming inconsistencies and so on), allowing vendors to fix bugs immediately and reducing the number of context issues found during the localisation QA cycle.

Another solution explored to mitigate inconsistencies and style issues was to design schedules specifically to commit a single translator to the main original translation batch (where suitable). Working on one translation batch of final assets only would also minimise asset wastage by not starting translations too early. Typically, because of time constraints on the large volume of text to translate and adapt to audio constraints, this work would be allocated to two translators to hit the deadline. Allocating a single translator to immerse themselves in the material and create the backbone of the story proves instrumental in improving the translation work whilst, again, reducing bugs in the later localisation QA cycle.
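LAMS is an internal system, so its validation rules aren’t public. As a minimal sketch of the kind of automated flags described – length issues and file naming inconsistencies – with an invented subtitle limit and an invented naming convention:

```python
import re

MAX_SUBTITLE_CHARS = 42  # hypothetical per-line subtitle limit

# Hypothetical naming convention: <project>_<locale>_<string id>.wav
FILENAME_PATTERN = re.compile(r"^[a-z0-9]+_[a-z]{2}(-[A-Z]{2})?_\d{4}\.wav$")

def check_asset(filename, translation):
    """Return the issues a repository could flag on an asset before delivery."""
    issues = []
    if not FILENAME_PATTERN.match(filename):
        issues.append(f"naming inconsistency: {filename}")
    for line in translation.splitlines():
        if len(line) > MAX_SUBTITLE_CHARS:
            issues.append(f"length issue ({len(line)} chars): {line[:20]}...")
    return issues

print(check_asset("mygame_fr_0042.wav", "Bonjour, commandant."))  # no issues
print(check_asset("Track 42.wav", "x" * 60))                      # two issues
```

Surfacing such flags to vendors at upload time is what lets them fix problems immediately, rather than waiting for the localisation QA cycle to find them.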

The impact on localisation is evident: scripts to localise are more elaborate and intricate, and feature increasingly complex branching narratives

NADÈGE JOSA SENIOR PROJECT MANAGER ‑ LOCALISATION SERVICES, SONY COMPUTER ENTERTAINMENT EUROPE (SCEE) LIMITED

With over 15 years of experience, Nadège has worked in various localisation roles within Sony. After studying Applied Foreign Languages, specialising in translation, she started her career in the first Sony Localisation Quality Assurance group before moving on to Localisation Project Management. A Senior Project Manager since 2011, Nadège strives to find smart solutions to the everyday new challenges of the industry.




Best practices often recommend that developers provide vendors with one batch of final assets to optimise localisation services. Yet this is extremely challenging in an environment where most developers follow agile methodologies



We also looked into working with a single recording session for the main audio and a single recording session for pick‑ups, thus minimising fragmentation to optimise effort, cost and time.

OPTIMISING LOCALISATION IN AN AGILE DEVELOPMENT PROCESS Not a novelty? True, best practices often recommend that developers provide vendors with one batch of final assets to optimise localisation services. Yet this is extremely challenging in an environment where most developers follow agile methodologies. So how do we implement this ourselves, with the buy‑in of developers and producers? We needed to forecast a volume scope for source text and audio together with developers, then create realistic schedules with production. Typically, localisation starts early in a game’s development to enable gradual translation, adaptation to source audio, recordings, integration, localisation testing, focus groups and so on. This time, we would push and compact schedules and not follow the traditional internal milestone criteria. This meant reviewing the project lifecycle process and approaching the schedule not from the early phase but from the end phase: working back from the master submission, allowing for the localisation testing schedule, for developers’ integration time for localised assets, and for localisation recording and translation, leaving time strictly for the translation in one go and a single recording session, with very little room for error. The result is work on more finalised source assets that do not require re‑recordings or edited audio.

In SCEE, we are also looking at a ‘revolutionary’ step: giving our translators (who are outsourced, in‑country vendors) the chance to review their own localisation in‑game before (or after) recordings. Translators have no need for testing skills; the intention is not to remove testers’ input from the project lifecycle. With this path, not only would translators be able to review their own translations and subtitles in‑game against the source audio (adapting their script with visual context, adding subtitles to character personas or plot, removing incorrect meanings or mistranslations), they would also polish the text of any errors (spelling, grammar, etc.), making scripts as immersive, creative and emotionally engaging as possible prior to audio recording.

STRENGTHENING LOCALISATION AT SCEE

To achieve this, we are looking into two options. The first is a new tool created by our Test Automation team with the support of the LAMS team: the Passive Capture Tool. Playthrough videos are linked to dialogues and their associated LAMS script lines. By using each string’s unique name in LAMS, the idea is to allow the translator to quickly navigate to a particular asset, watch the relevant recording footage and verify the translation against visual support. The second option is to have translators verify their localisation on Test Kits, using debug features to jump between sequences. This approach depends on the code being stable enough to run on Test Kits, which is the hardware most vendors have access to – hence the need for a schedule that caters for this process.

Finally, Localisation and Localisation QA are two separate entities in SCEE, so collaboration with our World Wide Studios QA Localisation colleagues is vital. To strengthen co‑operation and grow both disciplines, we have been actively implementing joint meetings ahead of localisation; transition meetings when our test team colleagues start their first localisation sweep; weekly meetings for progress updates; and testers’ meetings to get live feedback on quality. We have been arranging for Localisation Services project managers to sit with lead testers to enhance communication: avoiding time wasted on back‑and‑forth email, enabling live discussion of issues cropping up when localisation and localisation testing happen concurrently, and achieving a better overall sharing of responsibilities where the two disciplines overlap.
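The Passive Capture Tool itself is internal, but the navigation idea can be sketched as an index from each LAMS string’s unique name to the footage and timestamp where the line occurs. All names and data below are hypothetical:

```python
# Hypothetical index: LAMS string name -> (video file, timestamp in seconds)
capture_index = {
    "CH01_VILLAIN_0010": ("playthrough_ch01.mp4", 754.2),
    "CH01_VILLAIN_0011": ("playthrough_ch01.mp4", 759.8),
}

def jump_to_line(string_name):
    """Find the playthrough footage and timestamp for a given LAMS string."""
    try:
        video, seconds = capture_index[string_name]
    except KeyError:
        return None  # line not yet captured in any playthrough
    minutes, secs = divmod(seconds, 60)
    return f"{video} @ {int(minutes):02d}:{secs:04.1f}"

print(jump_to_line("CH01_VILLAIN_0010"))  # playthrough_ch01.mp4 @ 12:34.2
```

The design point is that the translator never needs a playable build: a stable string name is enough to pull up the exact moment the line is spoken.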

CONCLUSION In this ever‑changing industry, these gradual adjustments build up the path towards a new localisation horizon that focuses on challenging and optimising our services as well as our wider practices, to eventually provide an increasingly immersive and top quality gaming experience.



TESTING’S NEW REALITY Over the last few months Syed Ali has been testing using Oculus Rift and other VR systems. In this article, he covers his experiences and lessons learned.

Virtual reality (VR) has been threatening the digital world for years, long before any of us would ever have referred to it as a ‘digital world’. I remember being taken to a trade show as a child and trying out a VR system. The headsets were huge in comparison to Oculus Rift; there was a whole pod‑like structure you had to climb into. I still remember being wowed when surrounded by this (primitive) virtual environment. It was a game where you had to battle in tanks and, it being the 1990s, it was polygon city. Since those days, VR has always been on the cards but has never quite matched up to our expectations. Now it seems technology has finally caught up to make the VR experience of your dreams possible. Oculus Rift has been leading this charge but isn’t on its own: Sony is releasing a VR headset to partner with the PS4, and HTC is releasing the Vive headset, which reportedly has superior technology to Oculus Rift’s. Of course, the expected price tag is also superior. So when I got the chance to provide testing for a company developing a 4K media player in VR, I jumped at the opportunity.



V I R T U A L   R E A L I T Y   T E S T I N G

Even before I entered their offices, I had already started to envision how I would test in VR. How would I take everything I've learnt about testing and apply this to a completely new domain?

TESTING IN VR Putting on the headset takes you into another world, and at the start you have to take it in and let it almost overwhelm you. You cannot objectively test the experience if you haven’t first let yourself be immersed in it. Even as you’re immersed, however, your mind is still trying to break down everything you see. You are trying to compartmentalise the different aspects of the experience, and then a very simple question comes to mind: is it more comfortable to look up or to look down? It seems simplistic, but a number of factors are influenced by this. In order to design appropriate user interfaces, we must first understand how positioning UI elements can affect the comfort level of the user.

It turns out that looking down is very uncomfortable compared to looking up. Your presence in the virtual environment can be adversely affected if your chin hits your chest as you look down, because you become aware of your body being outside of the virtual environment you are seeing. It’s also not as simple as putting content higher up so the user looks down less. If the content is higher up but too close in the z‑plane, the user can become claustrophobic because the content is on top of them. Claustrophobia might be the perfect feeling for a certain section in a game, but it’s the last feeling you want the user to experience when they’re selecting content from a video wall.

Brightness and contrast are another big design consideration. We have all seen how overly bright web pages with very little contrast put off users. Now extend this to a user who is surrounded by an overly bright UI with very little contrast. Users will not suffer through this; they will simply take off their headset and do something else.
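Observations like these can be codified as placement checks that run over a UI layout. The thresholds below are illustrative assumptions only, not published ergonomic limits:

```python
import math

# Hypothetical comfort thresholds, informed by the observations above:
# looking down tires faster than looking up, and content too close in the
# z-plane feels as if it is on top of the user.
MAX_PITCH_UP_DEG = 20.0    # comfortable upward gaze
MAX_PITCH_DOWN_DEG = 12.0  # downward gaze is less comfortable
MIN_Z_METRES = 0.75        # closer than this risks claustrophobia

def ui_placement_issues(x, y, z):
    """Flag comfort problems for a UI element at (x, y, z):
    head at the origin, +y up, +z straight ahead."""
    issues = []
    pitch = math.degrees(math.atan2(y, z))  # positive above eye level
    if pitch > MAX_PITCH_UP_DEG:
        issues.append(f"too high: pitch {pitch:.1f} deg")
    elif pitch < -MAX_PITCH_DOWN_DEG:
        issues.append(f"too low: pitch {pitch:.1f} deg")
    if z < MIN_Z_METRES:
        issues.append(f"too close: z = {z:.2f} m")
    return issues

print(ui_placement_issues(0.0, 0.3, 2.0))  # comfortable, slightly above eye level
print(ui_placement_issues(0.0, 0.8, 0.5))  # flagged: too high and too close
```

Even with made-up numbers, encoding the rule makes it discussable: the team can argue about the thresholds instead of re-litigating each layout by feel.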

PROTECTING THE USER There is also the consideration that a bad experience doesn't mean the user will consider an alternative. A bad experience in VR can mean someone is put off the entire world of VR. As testers, we need to pre‑empt those experiences and help to make sure they


don't happen. When a user may have to pay £500+ to experience VR, we need to make every experience wow them. Someone recently said: "Building content for VR is a bit like skydiving and constructing the parachute on the way down." We're in a situation where there are no tried and tested standards. Oculus has built a set of guidelines for VR content creators, but these can only go so far. We are literally building the future, and we need to envision how to test it before it exists!

VR TESTING RIGS As well as redesigning our approaches to testing to cope with VR, we also have to rethink the way we set up our testing rigs. We've all been in the situation where a tester reports a bug and the bug cannot be reproduced on the developer's machine. Now let's think about how that can manifest when testing in VR, using a real‑life occurrence. I had reported an issue with the positioning of content in the UI. The dev sent me the latest build to re‑test the issue and came around to my side of the office to watch me test. Watching together, we realised there was a big difference in the way our respective headsets were calibrated. We realised we needed to set up a calibration screen for users, and that we needed to calibrate the different rigs in the office. There is no physical way they can mirror each other, because calibration depends on physical characteristics such as interpupillary distance, but we can be aware of this and develop and test accordingly.
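One lightweight way to catch this class of ‘works on my rig, not on yours’ problem is to record each rig’s calibration settings and diff them whenever a bug won’t reproduce. A minimal sketch, with invented profile fields and tolerance:

```python
# Hypothetical per-rig calibration profiles; real headsets expose settings
# such as interpupillary distance (IPD), which cannot simply be mirrored.
rig_profiles = {
    "tester_rig": {"ipd_mm": 63.5, "eye_to_lens_mm": 12.0},
    "dev_rig":    {"ipd_mm": 58.0, "eye_to_lens_mm": 14.5},
}

def calibration_deltas(rig_a, rig_b, tolerance_mm=1.0):
    """Report the settings that differ by more than tolerance_mm, so a bug
    report can note the calibration mismatch up front."""
    a, b = rig_profiles[rig_a], rig_profiles[rig_b]
    return {
        key: (a[key], b[key])
        for key in a
        if abs(a[key] - b[key]) > tolerance_mm
    }

print(calibration_deltas("tester_rig", "dev_rig"))
```

Attaching a delta report like this to a bug makes the difference between the rigs explicit instead of leaving both sides to assume their setups match.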

SUMMARY We have to realise that we cannot rely on what has worked for testing other types of content. We have to continually question things in order to find the best way to work in the context of VR. Questioning what we do has always been the hallmark of a good tester, but now we have to raise that bar if we want to test VR to the standards required.

We're in a situation where there are no tried and tested standards. Oculus has built a set of guidelines for VR content creators, but these can only go so far. We are literally building the future and we need to envision how to test it before it exists!

SYED ALI FREELANCE SOFTWARE TESTER AND PROJECT MANAGER

Syed Ali is a context‑driven tester with a background in sociology. With an interest in technology since a young age, he is now using all his experience in order to further software testing as a discipline. Currently focusing on testing in VR, he is building resources to help the wider testing community. His next engagement will be as a speaker at the Global Testing Retreat in Pune, India.



O U T S O U R C I N G

Andy Robson, Owner, Testology Ltd, addresses the changing QA challenges facing those working in the gaming sector.

The industry has seen a few notable QA challenges in the last few years. The first has centred on platform diversification and growth within these platforms, with mobile a significant contributor. Work is now focused on mobile devices, consoles, PC/Steam platforms, browser‑based products on desktop and mobile, physical toys, hardware that collaborates with applications, VR, and others I’ve missed, I’m sure. And these platforms are no longer limited to singular industries; they’re shared across a multitude.

PLATFORM DIVERSIFICATION So, for example, we can test browser‑based video games one day, and then slots or websites using the same browser considerations the next. Platforms are expanding in number, as well as the type



of product they support. QA has to take account of all of these things. Some 23 years ago, a PC and a couple of consoles (Nintendo and the Master System) were all we had to work with, so the industry is diversifying through platform growth and sharing more synergy with other product types and industries. But this means that we have to be more aware of time, of resource distribution, and of focus when planning test phases for our clients, or when suggesting how we can support and supplement their existing QA departments.

INCREASED SIZE AND COMPLEXITY OF GAMES Secondly, one of the toughest things the industry has faced is the ever increasing size of games now being released on console, PC and, at times, mobile too. Many products are incredibly open‑ended, expansive and feature‑rich beasts that are both online and offline and heavily reliant on complex, large‑scale sets of options. The testing coverage required is significantly greater than in the early years of games development, and this complicates the testing process considerably. I personally think this has had huge ramifications for the industry and the amount of development time dedicated to the QA process. QA teams are simply not getting enough coverage on titles, and games are being released before they’re ready in order to meet imposed deadlines. This really frustrates QA teams across the gaming industry, as scope is simply underestimated during planning, along with how involved a proper QA period is. It’s long‑winded, repetitive and time consuming. Huge games need huge amounts of testing.

THE GROWTH IN MOBILE With mobile being the most significant platform to develop in recent years, the products have grown in scope in the same way. Applications are consistently released, with successful IPs shipping new features, updates and content bi‑weekly or monthly. Title development seems to be endless on this platform for many of our clients. So, with the sheer quantity of titles and developers, we are now testing 30‑35 projects a day, as opposed to three or four major titles a few years ago. Some require one day of testing every few weeks,

some require hundreds of man‑days every month. There’s diversity in the requirements, but success for a title means the IP needs to be sustained to satiate the user base. Our management structure, scheduling model and client liaisons need to be spot on to make sure we manage our own QA departments while managing the dynamic expectations of our clients and their products. A lot of QA departments I know have struggled with this adjustment. We’ve managed to create a flexible QA business that’s suited to changes in requirements, changes in the industry and fluctuations in workload. But it’s always exciting to see the industry being affected and influenced by technology and developments in platforms. It’s only natural that our business should be influenced by it too!

Mobile is not just about the products, applications or games, either. It’s a convoluted and saturated market from a hardware and software perspective. We face the challenge of making sure we’re market‑representative and supportive of our clients’ compatibility needs. With iOS, we manage the limited hardware releases, software updates and beta releases relatively painlessly. It’s the Android market that can be the tricky one. We currently have over 300 physical devices and would never consider emulation as an option. The scale of procurement was a challenge initially, but now it’s more or less a matter of making sure we’re expert consumers. Developers just don’t have the number of physical devices we have, or the desire to purchase them, so companies like Testology are a time‑ and cost‑effective solution for ensuring full compatibility of their products.

WHERE WE’RE HEADED For me, I don’t really think technology like virtual reality (VR) will be hugely impactful on the gaming market. It’s too expensive and the player experiences leave a lot to be desired from what I’ve seen. But, mobile VR might be the answer… Generally, I think platforms will continue to grow and influence the type of products we test. Mobile will get bigger as a platform and, perhaps, even become the gamers’ console – it’s an attractive proposition when most daily requirements are on someone’s mobile, anyway. Some smartphones are more powerful than a console and, with improving network speeds and better battery life, games of console quality will be neatly stored on your favourite mobile device. Portability won’t compromise quality or overall experience and AAA games will be ‘mobile.’

QA teams are simply not getting enough coverage on titles, and games are being released before they’re ready in order to meet imposed deadlines

ANDY ROBSON OWNER TESTOLOGY LTD

Andy has been working in the industry since 1994, when he joined Bullfrog Productions and ran the testing department, working closely with Peter Molyneux. In 1998 he followed Molyneux to the newly formed Lionhead Studios, again as Head of Testing. During this time, Andy worked on over 30 AAA titles spanning all platforms. He now works closely with development teams and publishers across the games, digital media and gambling sectors.

T E S T M a g a z i n e | M a r c h 2 01 6



DRIVING DOWN RISK Jane Such, Head of the Assurance Practice, Certeco, explains why an enterprise wide test strategy should be the goal.


T E S T   S T R A T E G Y

Rarely a week goes by without a tech failure of monumental proportions hitting the headlines. Pre‑Christmas, it was all about the Cyber Monday IT chaos: the annual internet shopping free‑for‑all saw retail websites fall over and payment systems such as PayPal collapse as shoppers raced to get online bargains. And in the last month or two, we have also witnessed a few banking system collapses, with customers of major UK banks left unable to access bank accounts or conduct online banking. Twitter also suffered a major outage back in January, when it was offline for a few hours, frustrating its hundreds of millions of users. Some industry analysts have attempted to pin down the amount global businesses lose through IT failures – some estimates put it at around US$3 trillion.

So is IT failure what we should start to expect? Are tech failures – which are largely caused by undetected bugs and inadequate testing and quality assurance (QA) – just part and parcel of the business landscape? A major issue is that QA and testing are often siloed and parcelled out to specific projects rather than centrally managed. The approach to testing is generally piecemeal, disparate and not cohesive. But in the same way that organisations have a single financial strategy, an HR policy or a focused approach to procurement – in a bid to create alignment, stop duplication of effort and streamline costs – they need to take the same approach to testing and QA.

A STRATEGIC APPROACH In a bid to raise testing and QA out of the tactical doldrums, organisations need to realise how much they would benefit from having an enterprise‑wide test strategy in place, one which becomes the driving force behind the implementation of an enterprise‑wide test process. The strategy is a powerful thing, but it is important that it becomes a living, breathing document and doesn’t just sit on the shelf gathering dust. A major element of the strategy is that testing becomes a repeatable, industrialised

process. If this happens, it frees testers up to focus on the more strategic elements of their role, such as flagging up defects and challenging specifications, and also to focus on innovation, implementing improvements, or spending more time on areas of high risk or complexity. If testers get the chance to bring their knowledge and experience to bear because they have been freed up by the industrialisation of the testing process, that can only be a good thing, and it helps elevate testing to a strategic enabler as opposed to leaving it as a tactical, tick‑box function. The strategy is the first step in giving testing a framework and a real foundation.

HOW IMPORTANT IS SPONSORSHIP? The importance of ‘buy‑in’ from the top can’t be overestimated. It helps to give the strategy the impact and gravitas it needs for

the team to start believing in it. It’s important to have senior input when socialising the testing strategy across the organisation – a ‘lunch and learn’ session introduced by a board member, for example, giving their view on why organisation‑wide quality assurance is so important, is a really powerful way of getting the message across.

But in the same way that organisations have a single financial strategy, an HR policy or a focused approach to procurement, in a bid to create alignment, stop duplication of effort and streamline costs, they need to take the same approach to testing and QA

JANE SUCH HEAD OF THE ASSURANCE PRACTICE CERTECO

Jane heads the assurance practice at Certeco. Prior to this position, Jane led CSC’s regional testing centre of excellence, where she was responsible for the post‑merger functional integration of two test consultancies. Before CSC, Jane was head of test and QA at multinational DIY retailer Kingfisher, where she created the company’s first enterprise‑wide testing and QA function.

THE CORNERSTONES OF TESTING STRATEGY Outlining the governance of testing in projects is critical. This involves pinning down operational governance – status, weekly updates and so on – test plans and test exit reports (looking at the entry and exit criteria, for example, to assess whether they’ll succeed or fail). Assessing how testing is performing as a whole as a service to the business,


project after project, is vital in making it a successful business function. Issues such as how good the testing team is at finding defects, and how efficient the development team is at fixing them, are important to measure. It is only when assurance becomes a measurable function that it becomes possible to improve the areas that aren’t doing so well.

This leads into the quality assessment framework. Gathering the facts and figures needed to measure quality is important in being able to measure the testing goals – for example, reducing cost, improving quality or getting a product to market quickly. It might be that one measure is the number of defect reports being raised – but if the tester working on it doesn’t understand the business functionality very well, they will raise reports erroneously, which skews the measured effectiveness of the function. It is important to have a proper understanding and a holistic approach, where it’s not just about metrics, but about the people, culture and framework around those metrics. In the case of the tester who is mistakenly raising reports, it would mean they need more training.

Formal alignment of the test process to the software development lifecycle (SDLC) ensures that testing is involved early. But it is also very important to remember that an organisation might well have multiple flavours of SDLC – a combination of waterfall and agile, for example. So it’s important to align testing to all of these, which means that testing will be involved early, at the requirements and analysis phase.

The testing strategy also needs to cover the approach you will take to testing: for example, whether it will be a risk‑based approach, a model‑based approach, or a test‑early approach, where you determine the level of static testing as opposed to dynamic testing.
And the approach to testing will also be dictated by the type of change the organisation is going through – for example, business as usual (BAU) change will involve ‘light touch’ testing, whereas bigger transformational change will involve more in‑depth, risk‑based testing.
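The risk‑based approach mentioned above can be made concrete with a simple scoring model. The sketch below is illustrative only – the two‑factor likelihood × impact scheme is a common convention, and the test names and values are invented for the example, not taken from any particular organisation:

```python
# Risk-based test prioritisation: score each test by the likelihood and
# business impact of failure in the area it covers, then run the
# highest-risk tests first. The values below are illustrative.

def risk_score(likelihood: int, impact: int) -> int:
    """Both factors on a 1-5 scale; a higher product means riskier."""
    return likelihood * impact

tests = [
    {"name": "checkout_payment", "likelihood": 4, "impact": 5},
    {"name": "profile_avatar_upload", "likelihood": 2, "impact": 1},
    {"name": "order_history_export", "likelihood": 3, "impact": 2},
]

# Sort so the riskiest tests are executed (and therefore can fail) earliest.
prioritised = sorted(
    tests,
    key=lambda t: risk_score(t["likelihood"], t["impact"]),
    reverse=True,
)

for t in prioritised:
    print(t["name"], risk_score(t["likelihood"], t["impact"]))
```

For BAU change the same list can simply be cut off at a score threshold, giving the ‘light touch’ subset; transformational change runs the whole ranked list.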

THE DANGER OF MULTIPLE TEST STRATEGIES Another key factor when developing the test strategy is to define the test levels, establish who owns the enterprise‑wide test strategy and work out its objectives.


Defining this upfront ensures an end‑to‑end, progressive approach to testing and mitigates risk. The problem with large change programmes is that the systems integrator will often write the test strategy, but they will only consider their own remit within the programme and not what the wider organisation might be doing from a testing perspective. This means the organisation ends up with several testing strategies and runs the risk of duplicating effort, or perhaps of not doing some things at all. It’s often the case that organisations end up repeating up to 50% of tests because the systems integrator hasn’t performed as many as they should have. All of these factors can pose serious risks. Having an enterprise‑wide test strategy in place means that everyone is swimming in the same direction, knows exactly what they are doing and what they are responsible for. It can even become part of the contract with the systems integrator.

THE BREAD AND BUTTER STUFF There are some tactical testing activities that all organisations carry out, and the enterprise‑wide test strategy can become a home for them. These include: the test tools needed at an organisational level; the incident management facilities in place; other complementary strategies, such as a test environment strategy or a release strategy; and the approach to performance testing and test automation.

WHAT IS TO BE GAINED FROM AN EWTS? For a start, avoiding technical glitches – or doing your level best to avoid them – is the primary aim of an enterprise‑wide test strategy (EWTS). If one is not in place, how can a team effectively perform an upgrade or implement a new application? Cross their fingers and hope for the best? That’s not good enough when the consequences can have such a negative effect on the operating business. In the same way that a company might future‑proof against a potential catastrophe in disaster recovery or reputational terms, companies have to think about potential technical pitfalls. And a fully comprehensive enterprise‑wide test strategy with the appropriate governance in place – even though it won’t solve all your problems – is an excellent place to start.



EXPLORING THE OPTIONS WITH EXPLORATORY TESTING

When it comes to choosing the testing methodology for a project, research shows that no one method is better than the rest. More often than not, it comes down to the informed judgement of an experienced tester. Ulf Eriksson, Founder of ReQtest, reviews exploratory testing tips and tricks.



What to know before beginning? If you are new to exploratory testing, it is a good idea to research the topic further, to understand the concept and to see how exploratory testing compares to other testing methodologies. All testing involves a certain amount of exploratory testing – we just don’t label it as such. In fact, testers do exploratory testing all the time, albeit subconsciously, as part of a traditional test strategy. For example, during the course of a scripted test cycle, a tester might have an idea for a test case or scenario that isn’t part of the original plan, and they may test it anyway because they believe it’s worth the effort. This is exploratory testing. Exploratory testing doesn’t preclude traditional testing; in fact, many find that using more than one testing method on a project tends to deliver best‑in‑class test quality. A quick Google search will show that even huge, and hugely successful, corporations such as Microsoft use both exploratory and scripted test cycles on the same project, as per best practice.

ASK IF EXPLORATORY TESTING IS RIGHT FOR YOU AND YOUR TEAM A number of key factors need to be considered before deciding to follow the exploratory testing route, chief among which are:
• You and your team have sufficient knowledge and experience of the system being tested. Why? Because it is only when testers are familiar with the system being tested that they can come up with scenarios that stretch it in the right way. For example, exploratory testing is especially powerful when it’s aimed at enhancements to a product that is already being used by customers: a live product is usually very well tested, and the project team is intimately familiar with the system.
• You want to start catching bugs right away, spending very little time and effort on planning and preparation.
• There is a reduced need for efficiency, repeatability and reliability of this testing cycle and its results.
• Your team is comfortable working with relatively thin documentation, and fully aware that this will likely build ‘as you go along’, as you test. Agile projects lend themselves very well to exploratory testing, given their central principle of iterative development and therefore many test cycles.
• There is a more comprehensive test cycle scheduled for an appropriate time. Exploratory tests help teams get back to the development board quickly by catching major bugs; however, traditional testing is key to truly comprehensive test coverage. For example, if a change deals with customer‑facing systems, it is essential to also use traditional testing as a supplement to exploratory testing, as this will help fulfil any mandate for in‑depth testing, marginalise the risk of system failure, protect the reputation of the company and prevent any adverse impact on customers and their operations.
If you can answer yes to all the points above, then exploratory testing is right for you.


CREATING A TEST CHARTER This may sound counter‑intuitive, especially given we just said ‘thin documentation’, but a little preparation goes a long way in maximising the value you get out of exploratory testing. Exploratory testing does not mean no documentation – rather, the documentation tends to be lean, and thus costs much less in time and money than more traditional forms of test documentation. Instead of writing vast test plans, test cases and scripts, create a test charter, which would include:
• Scope and approach: a high‑level description of the system, components and functionality that require testing.
• Expected results: when testers know what they’re looking for, the chances of meandering from the intended test route are greatly reduced. This brings some structure to – without defeating the purpose of – exploratory testing.
• Method for recording tested cases and results: identify a process for recording the test effort and its results.
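A charter can be as small as a handful of fields. As a minimal sketch, the structure below mirrors the three bullets above; the field names, the default time box and the example values are all illustrative assumptions, not a prescribed format:

```python
# A test charter condenses scope, expected results and recording method
# into a page or less. The fields mirror the three charter bullets; the
# concrete values are purely illustrative.
from dataclasses import dataclass

@dataclass
class TestCharter:
    scope: str                  # system, components and functionality in scope
    expected_results: str       # what "good" looks like for this session
    recording_method: str       # how tested cases and results are logged
    time_box_minutes: int = 90  # exploratory sessions work best time-boxed

charter = TestCharter(
    scope="Invoice module: creation, editing and PDF export",
    expected_results="Totals, VAT and currency formatting match the spec",
    recording_method="Screen recording plus a one-line note per test idea",
)

print(charter.scope)
```

A team could keep one such charter per session, which also gives the later scripted test cycle a ready‑made starting point.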

ULF ERIKSSON FOUNDER OF REQTEST

Ulf Eriksson is one of the founders of ReQtest, an online bug tracking software hand‑built and developed in Sweden. As the author of a number of whitepapers and articles, mostly on the world of software testing, Ulf has recently written an e‑book titled Beyond the Hype: Software Testing Explained, which is a compendium of his experiences in the industry.

SELECTING THE TOOLS TO BE USED FOR TESTING Apart from traditional testing tools, use visual tools to help significantly improve the effects of exploratory testing. Visual tools help record each test case as it is executed, and provide a highly reliable form of


audit and reference. Select a visual testing tool that complements your current tool suite, and you’re all set.

SETTING A TIME‑FRAME As with anything, having a deadline to work to helps testers focus on the most important scenarios and keeps them within the defined scope. Most importantly, deadlines prevent over‑testing.

LOGGING AS YOU EXECUTE Log test cases and results as you test. The act of recording your actions stimulates your brain to come up with new follow‑up test ideas based on each result, and improves test‑related learning. Logging also creates an audit trail and reference for your efforts, which is helpful when you want to go back and review individual scenarios.
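Logging as you execute can be very lightweight – one timestamped entry per test idea is enough. A minimal sketch, in which the entry fields and example test ideas are assumptions for illustration rather than a standard format:

```python
# A minimal exploratory session log: append one timestamped entry per
# test idea as it is executed. Field names are illustrative only.
from datetime import datetime, timezone

session_log: list[dict] = []

def log_test(idea: str, outcome: str, note: str = "") -> None:
    session_log.append({
        "when": datetime.now(timezone.utc).isoformat(timespec="seconds"),
        "idea": idea,
        "outcome": outcome,  # e.g. "pass", "fail", "follow-up"
        "note": note,
    })

log_test("Paste 10k characters into the search box", "pass")
log_test("Search with trailing whitespace", "fail",
         "results differ from the trimmed query")

# The log doubles as the audit trail and as seed material for a later
# scripted test cycle.
failures = [e for e in session_log if e["outcome"] == "fail"]
print(len(session_log), "tests logged,", len(failures), "failures")
```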


At the end of an exploratory test cycle, you should have the following output as reference:
• A list of test cases and outcomes, including a screen capture or video of each test case.
• A report with key findings and recommendations.
It is worth remembering that this logging can provide a strong foundation for a formal scripted test cycle later in the project, and help speed up its preparation phase.

SUMMARY When done right, exploratory testing can provide a much‑needed boost to a team’s productivity. Adding a burst of exploratory tests up front in your testing cycle could result in significant benefits, including: • Faster discovery of major bugs. • Decreased lead time for beginning testing. • Most importantly, testing normally untested parts of the system, and therefore discovering hidden bugs that can cause a lot of heartburn.



I AM A TESTER, NOT A DOCUMENT WRITER I am a tester. I need to use my brain more than my hands, laments Pratik Shah, QA Lead, Advanced Computer Software (ACS).

There is a difference between test case writing and test case designing. No organisation (or project team) will admit that it is spending a lot of time on test case writing – in other words, documenting detailed test cases. Everyone thinks that the time they allot to test case writing is justifiable. When I started my career as a software tester in 2005, I joined the same lot.


I started writing detailed test cases for the so‑called genuine reasons explained to me at the time:
• Writing test cases is an equally important activity from the billing point of view in the customer/vendor relationship.
• They provide evidence of exactly what steps were followed to validate a particular event.
• Detailed documented test cases are easy for anyone to follow (a new team member, or someone exposed to a particular functionality for the first time).
• Detailed documented test cases with steps help track which particular step is failing.



CURRENT REVOLUTION For the last six or seven years, I have been working in an agile world, which has changed my thought process completely. Primarily, it has strengthened the vendor/customer relationship by holding customers equally responsible for timely software delivery and quality. This is a phase of revolution. There is now a lot of risk analysis documenting what software engineers are doing: how long it will take to complete; what the cost of a particular activity is; and what could be compromised if it is not done at all.

Additionally, there is trust between team members and between vendor and customer. Instead of showcasing lots of productivity metrics and documents, the vendor is asked to focus on the timely delivery of a quality product. The customer is involved from day one in day‑to‑day activities, so they always know what is going on and how a project is being executed. The important key everyone holds in this pattern is: only do work if it adds value. Value to the business, value to the delivery, value to profitability, value to the organisation and value to the customer. If adding value demands changing the traditional ways of working, why not?

IT IS TIME TO STOP WRITING EXHAUSTIVE TEST CASES So, coming back to the main point: writing detailed test cases. Is it really needed? Does it really add value? What is the return on investment of writing down detailed, stepwise test cases? The important thing is to design enough test variations to cover the maximum risk and achieve maximum coverage, followed by test execution for the same. Why am I advocating that testers stop writing exhaustive test cases?
1. Test execution is not a DIY activity that we expect just anyone to carry out. When we expect someone to execute the test cases, the first thing we check is whether the tester has some product knowledge, domain knowledge or knowledge of the changing features. We don’t expect a tester who has just joined the team to start executing test cases from day one without any help. At least, I would not take that risk.
2. Concentrating on writing business‑level test cases will help teams to design more scenarios, because the efforts wasted in

writing details are saved. A tester will have to read less and do more.
3. Test cases prepared in a business language are far less tedious to work with than detailed ones.
4. One can quickly read through them and figure out which main variations are covered and which are outstanding.
5. When a defect is fixed, a tester has to verify the whole scenario by following the steps in the defect report, so there is no point in writing the steps down in the test cases and similar steps in the defect report too.

RECOMMENDATIONS What I ask my team to follow is:
1. Spend the maximum time designing test variations. Analyse, discuss, mind‑map.
2. Drop all the variations, in a readable form, onto a piece of paper. That’s it – we are done. One may go a step further and drop all the test cases into the test management tool, to keep a repository of test cases and to build regression packs by reusing some of them. That is fine.
3. I do not recommend any specific method for capturing the test ideas (or test scenarios). Based on the requirement, I keep changing the method I use to design test scenarios. I have used spreadsheet tables, marking the variations just with ‘Yes’ or ‘No’, or with ‘1’s and ‘0’s. I have designed test variations using flow charts. One can use decision tables or data flow diagrams. These are the traditional methods for designing test cases, and I stop there. I do not document anything further than this.
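The ‘Yes’/‘No’ spreadsheet described above is effectively a decision table, and enumerating its combinations can even be automated. A minimal sketch – the factors and their values here are hypothetical, chosen only to illustrate the technique:

```python
# A decision table as a dict of factors; itertools.product enumerates
# every combination so no variation is silently skipped. The factor
# names and values are hypothetical examples.
from itertools import product

factors = {
    "user_type": ["guest", "registered"],
    "payment": ["card", "invoice"],
    "discount_applied": ["yes", "no"],
}

variations = [
    dict(zip(factors, combo))
    for combo in product(*factors.values())
]

# 2 x 2 x 2 = 8 one-line test ideas, each readable at a glance.
for v in variations:
    print(v)
print(len(variations), "variations")
```

Each resulting line is a business‑level test idea, not a stepwise script – exactly the summary level the article argues for.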


CHALLENGES IN ADOPTING THE REVOLUTION There are many challenges facing testers who wish to challenge the status quo, such as convincing top management not to expect lots of documents from the testing team, and to forget about the test case writing SLA agreed with the customer. Additionally, the test variations and test coverage have to cover the maximum risk, so that nobody compromises on quality. Finally, if you are already writing detailed test cases, the transition to summary‑level test cases becomes all the more challenging.

PRATIK SHAH QA LEAD ADVANCED COMPUTER SOFTWARE (ACS)

Pratik Shah has been working in software testing for the last 11 years. He is currently a branch head for the QA department of a UK‑based multinational in India.




The National Software Testing Conference is back at the British Museum! The National Software Testing Conference is the premier UK‑based conference, providing the software testing and QA community at home and abroad with invaluable content:
✓ Revered industry speakers
✓ Practical presentations
✓ Best practices and lessons learned from industry peers
✓ Executive workshops, facilitated and led by key figures
✓ A market‑leading exhibition
“It was an amazing event! As with the European Software Testing Awards earlier, the team pulled out all the stops. Very professional, great venue and fantastic networking opportunity. Thank you.” Karen Thomas, Practice Manager, Barclaycard

GOLD SPONSOR

SILVER SPONSOR

EVENT PARTNERS

www.softwaretestingconference.com

EXHIBITORS



Featured speakers

Rod Armstrong Programme Quality Manager EasyJet

Deb Bhattacharya Scrum Master and DevOps Specialist HSBC Commercial Banking

Delia Brown Head of Test Home Office Technology

Andy Chakraborty DevOps & Security Consultant

John Clapham Agile & DevOps Consultant Cotelic

Steve Connell Director of QA and Test Design & Consultancy Services Home Office Technology

Mike Dilworth Agile & DevOps Transformation Sainsbury’s

Peter Francome Head of Test, Quality and Delivery Innovation Virgin Media

Paul Gerrard Consultant Gerrard Consulting

Sally Goble Head of Quality Guardian News & Media Limited

Andrew Hardie DevOperative at AgileSphere Ministry of Justice

Arun Jayabalan Test and Release Manager Public Sector, London

Shane Kelley Head of Digital Change William Hill

Eran Kinsbruner The Mobile Evangelist Perfecto Mobile

Myron Kirk Head of Test CoE Boots

Alan Richardson Independent Test Consultant

Inès Smith Quality Assurance and Controls Lead UBS

Geoff Thompson Consultancy Director Experimentus

Paula Thomsen Head of Quality Assurance Aviva

Keith Watson Agile Delivery Manager Ordnance Survey

Stephen Williams VP Engineering Ticketmaster International

Martin Wrigley Executive Director AQuA

MORE TO BE ANNOUNCED

Presentation topics include ★ Case studies ★ Changing trends ★ Test data management ★ Test automation ★ The Art of Questioning to improve Testing, Agile, and Automating

★ Testing: Evolution or Revolution? ★ Agile ★ What skills will testers need? ★ DevOps ★ MicroServices Testing and Automation strategy

★ Mobile testing ★ Transforming culture ★ Scaled DevOps Frameworks ★ Continuous delivery ★ Behaviour driven change implementation

“Fantastic conference focused at testing professionals. Good opportunity to network, new cutting‑edge products and ideas. Looking forward to next year.” Nadine Abley, GTF Test Manager, JP Morgan

www.softwaretestingconference.com



In 2015, the National Software Testing Conference saw over 270 attendees from scores of different businesses that included easyJet, Credit Suisse, Direct Line, Mail Newspapers and Sony.

“As a leading provider of industry recognised software testing and agile certifications, iSQI were very warmly welcomed by delegates at the NSTC. We look forward to next year.” Kyle Siemens and Debbie Archer, iSQI GmbH

www.softwaretestingconference.com


The National DevOps Conference – 17–18 May 2016 – The British Museum, London

Come learn from peers who have successfully begun their DevOps journey!

BRINGING TWO IT CONFERENCES UNDER ONE ROOF! Also taking place at the British Museum this May is The National DevOps Conference.

The National DevOps Conference is the event for those seeking to bring lean principles into the IT value stream and incorporate DevOps and continuous delivery into their organisation. It is targeted towards C‑level executives interested in learning about the professional movement, and its cultural variations, that assist, promote and guide a collaborative working relationship between Development and Operations.

Come, learn and network. At The National DevOps Conference, you can hear from peers who have successfully begun their DevOps journey and from industry practitioners sharing advice and knowledge; join in executive workshops; network; and much more.

GOLD SPONSOR

SILVER SPONSOR

EVENT PARTNERS

www.devopsevent.com

EXHIBITORS


PROFESSIONAL VIEWPOINT TEST Magazine asks: As more organisations transition to an agile environment, how will the testers’ role be redefined?

ASHLEY PARSONS SENIOR TEST MANAGEMENT CONSULTANT TEST DIRECT www.test‑direct.com

Regardless of delivery approach, the requirement for testers to verify solution quality and assure the end user experience remains paramount. There has been an evident shift in focus over the last few years towards engaging with testing earlier in the project lifecycle. Driven, at least in part, by the rapid rise of digital technologies, the transition to agile delivery is redefining the role of the tester along more technical lines. At Test Direct, this ‘shift left’ engagement of the tester has enhanced the importance of specialist testers within agile teams. Significantly, this approach now allows for the early adoption of non‑functional testing and automation within an agile environment. Gone are the days of non‑functional testing being executed as a final phase, if at all, with the primary focus on manual functional testing. Modern testers need to be increasingly aware of, and competent in, more technical testing disciplines. This ‘shift left’, and the opportunity to develop testing best practice within agile, means it is an exciting time for our industry. We now have the platform to deliver more effective testing, faster than ever before, and testers have to evolve to meet this challenge. Welcome to the age of the agile technical tester!


ALAN RICHARDSON INDEPENDENT TEST CONSULTANT www.compendiumdev.co.uk

In many ways the tester's responsibilities remain unchanged. The tester must still apply their skills to expose risks, coverage gaps, assumptions and ambiguities. As a tester on agile projects, I had to ensure that my testing skills remained at the top of their game. I also learned to test faster, to fit into the shorter feedback cycles, and to use the team's communication tools to track and manage testing. No two agile implementations are the same. Testers have to adapt to their organisation's specific implementation and learn to use a 'whole team' approach to mitigate risk, rather than trying to do so single‑handedly. Testers apply their risk identification skills to the agile process as well as to the system: identifying gaps in the automated coverage, and ambiguities in the story definitions and acceptance criteria, and then helping the team adopt mitigating test approaches. Testers succeed when they redefine their own role.



TEST Magazine asks: Why should performance testing be higher on the agenda?

DAVID BUCH VP PRODUCT RADVIEW www.radview.com

Performance testing needs to be higher – and earlier – on the agenda. Every website has an audience that it cannot afford to disappoint or lose: a few seconds of load time can mean millions of dollars on your bottom line. Performance testing is key to retaining agility and to saving time and money early in the development cycle. After each iteration, you should run performance tests to discover bugs and system issues that will grow if not dealt with. Waiting until the end to run performance tests will cause unnecessary delays. During these early pre‑release stages, your system isn’t ‘properly cooked’, and you need a flexible tool that will help you manipulate partially working features and components. Scripting is the only way to tweak your test scenarios at the micro level. This is vital for staying agile – your initial and early performance tests may reveal bugs that you would otherwise discover at a later stage, when the price of fixing them will be much higher. The earlier you know about a memory leak or a CPU usage issue, the better. If you don’t run frequent performance tests, you are back to waterfall and are not doing DevOps!

VALÉRY RAULET CHIEF TECHNOLOGY OFFICER TESTHOUSE www.testhouse.net

High user expectations demand that today’s applications be interactive, efficient and multi‑channel. The challenge, therefore, is to deliver applications more quickly whilst still meeting these demands. Agile development has driven the need to improve software delivery. With the advent of DevOps, it has become mandatory to embed test automation into the build/deploy process, and tools such as Visual Studio and TFS/VSTS are providing the technology to support that need. With automation at the heart of this testing revolution, it will not be long before performance testing becomes omnipresent.

MATTHEW BRADY SENIOR PRE‑SALES ADM SPECIALIST HPE www.hpe.com/software

The accelerating adoption of DevOps in some sectors is driving a sea change in test activities, which are moving closer to development, becoming more automated and being executed more frequently (or continuously), with a corresponding reduction in extended test phases later in the lifecycle. However, the danger of testing becoming part of the development process is that the best characteristics of a structured testing approach are compromised by a developer‑centric focus on specific environments, platforms or frameworks. Test tools invoked from the developer’s IDE are not necessarily the only, nor the best, solution, as they rarely represent the real user experience, and the corresponding abandonment of black box and functional test methodologies is a major concern. Furthermore, there is an increasing trend towards limited or no performance testing prior to release, at times replaced with scalability testing in production. It is surely better to develop optimal code, designed and tested to be fit for purpose early, than to try to resolve newly discovered limitations after deployment. Performance, scalability and security validation need to be embraced throughout the development process, with close collaboration between developers and testers – perhaps renamed a DevTestOps approach.



Bring your teams together and deliver the right software faster

Too often, business and delivery teams feel divided. Both are aligned in delivering what your organization needs, but while business teams talk in requirements, developers think in terms of stories and tasks. Atlas translates business needs into iterative Agile delivery targets, in a language that everyone understands. This means everyone gets a clearer view of the project, delivery timescales, the evolution of individual stories and how each one contributes to requirements and the business needs they represent. Keep your teams in sync and ensure projects are delivered with greater speed and confidence, with Atlas.

Discover more and take a FREE 30-day trial at Borland.com/Atlas

Copyright © 2016 Micro Focus. All rights reserved. Registered office: The Lawn, 22-30 Old Bath Road, Newbury, Berkshire, RG14 1QN, UK.

