TEST Magazine May 2016


MAY 2016

DIGITAL ASSURANCE FOR

FINANCIAL SERVICES

TEST MANAGEMENT TEST FOCUS GROUPS SYNDICATE SUPPLEMENT

UX AND USABILITY TESTING

SPECIAL

ETHICAL HACKING



CONTENTS

TEST MAGAZINE | MAY 2016

COVER STORY: FINANCE SECTOR – 28

NEWS
Software industry news – 5

THOUGHT LEADERSHIP
Juxtaposing the micro and macro in assurance – 10

SYNTHETIC DATA
Getting ready for the EU General Data Protection Regulation – 12

TEST AUTOMATION
Reactive automation – 15

TEST MANAGEMENT
Transforming a test manager into a test leader – 16

AGILE
Why collaboration is key for QA teams in an agile world – 20

RETAIL SECTOR
Top performance from the testing team during Black Friday – 24

FINANCIAL SECTOR
From quality assurance to ‘digital assurance’ – 28

SECURITY TESTING
Hacking from the inside out – 36

USER EXPERIENCE TESTING
Breaking the surface – 38

OPERATIONAL TESTING
Validation of operational readiness of high availability platforms – 42

PREVIEW
The National Software Testing Conference 2016 – 45

SUPPLEMENT
TEST Focus Groups Syndicate Supplement – 49



EDITOR'S COMMENT

ARE BOTS THE NEW APPS?

CECILIA REHN, EDITOR OF TEST MAGAZINE

Bots have been getting everyone's attention lately. With headlines like Facebook’s plan to use business bots to make money on its 900 million‑user chat app Messenger and billion‑user WhatsApp, it is not hard to see why. Recently announced at Facebook’s annual developer conference F8 in San Francisco, the social media giant has opened up Messenger for developers, letting businesses and brands build their own small artificial intelligence software programs to interact with Messenger users. Currently, more than one billion messages are sent between businesses and users via the app, Facebook says. It’s easy to understand why a myriad of businesses would be interested in tapping this market. Several enterprises are already on board with Facebook’s bot idea, and more are joining at a fast rate. Facebook has already announced partnerships with companies such as Uber, Lyft, and KLM Royal Dutch Airlines. The social media company says it now has more than 40 partners for Messenger, including CNN, eBay, Walmart, and the NBA. Through Messenger’s Send/Receive software tool, these firms and others can build bots that are not only used for chatting, but can also send and receive images, product carousels and other rich content, letting their customers see products and services for sale in a new way.

Facebook isn’t the only company to have invested in bots. Kik has a bot store, and so has Telegram. Popular workplace chat service Slack has had bots included in its communication service for some time. It is getting easier to envisage a future where bots on messaging apps are used by end users on a daily basis. By keeping users inside its apps, Facebook can make the shift from a social experience into a transactional one feel seamless. A customer wanting a taxi won’t need to fire up the Uber app; they can just order a car from inside Messenger or Slack. This signals the next level of user experience and convenience.

So are bots the new apps? Well, only if the bot‑business‑hungry companies – Facebook, Slack, Microsoft, Kik and the like – can convince enough developers to build bots to fill their stores. But the signs of a growing trend are there; Microsoft has been touting its bot‑building tools, and Slack has created a US$80 million fund to invest in bot start‑ups. They could be on to a winner. It’s clear why developers have taken a fancy to bots – they are simpler to build and distribute than apps, and hence more cost‑effective. And once a bot exists, it’s easy to integrate with messaging apps.

Naturally, the question we’re asking here at TEST Magazine is about quality and testing: if bots are looking to dethrone apps, how will this affect testing teams and service providers? Who will ultimately take responsibility for the quality of a bot? As we all know, the world of applications and new devices has been a game changer, and it's exciting to see more innovation coming our way, pushing the boundaries of our industry.

In this issue of TEST Magazine we’re covering a multitude of topics to help ensure we all learn from testing and QA challenges – current and future. Read on for coverage of test management strategy, what the EU General Data Protection Regulation will mean for testers, a great think piece on the relationships between QA and testing and UX and usability, and much more. This issue is also accompanied by the TEST Focus Groups Syndicate Supplement – a collection of the debate outcomes following the one‑day event last March, which saw over 100 senior testing and QA professionals gather in London.

Finally, if you are reading this issue at The National Software Testing Conference, don't hesitate to say hi to me or a member of the TEST Magazine team. I’ll be hosting an interactive Q&A panel at the end of each day, so do come along!

MAY 2016 | VOLUME 8 | ISSUE 2

© 2016 31 Media Limited. All rights reserved. TEST Magazine is edited, designed, and published by 31 Media Limited. No part of TEST Magazine may be reproduced, transmitted, stored electronically, distributed, or copied, in whole or part without the prior written consent of the publisher. A reprint service is available. Opinions expressed in this journal do not necessarily reflect those of the editor of TEST Magazine or its publisher, 31 Media Limited. ISSN 2040‑01‑60

GENERAL MANAGER AND EDITOR
Cecilia Rehn
cecilia.rehn@31media.co.uk
+44 (0)203 056 4599

ADVERTISING ENQUIRIES
Anna Chubb
anna.chubb@31media.co.uk
+44 (0)203 668 6945
Mostafa Al-Baali
mostafa.al-baali@31media.co.uk
+44 (0)203 668 6943

PRODUCTION & DESIGN
JJ Jordan
jj@31media.co.uk

EDITORIAL INTERN
Jordan Platt

31 Media Ltd, 41‑42 Daisy Business Park, 19‑35 Sylvan Grove, London, SE15 1PD
+44 (0)870 863 6930
info@31media.co.uk
www.testingmagazine.com

PRINTED BY Pensord, Tram Road, Pontllanfraith, Blackwood, NP12 2YA

@testmagazine
TEST Magazine Group

cecilia.rehn@31media.co.uk




INDUSTRY NEWS

VOLVO’S SELF‑DRIVING CARS WILL BE TESTED IN LONDON IN 2017

The Swedish carmaker plans to run a trial in 2017 that will see self‑driving versions of its 4x4s on the roads of London. The test, run by Volvo and the Thatcham Research Centre, is called Drive Me London, and will use real families driving autonomous cars on public roads during the trials. Tests with the same vehicles have been conducted in Gothenburg since 2014, and the carmaker plans to run a parallel trial in that city in 2017. Driverless technology has the potential to massively reduce the number of car accidents and to cut congestion on roads, according to Volvo. President and Chief Executive Håkan Samuelsson said: “Autonomous driving represents a leap forward in car safety. The sooner AD cars are on the roads, the sooner lives will start being saved.” The UK plans to be at the forefront of intelligent mobility, with more than half of new cars sold in the UK already having autonomous systems, according to an analysis by the Society of Motor Manufacturers and Traders. Commonplace automotive tech includes collision warning systems and autonomous emergency braking.

STUDENTS INVENT GLOVES THAT TRANSLATE SIGN LANGUAGE

Two University of Washington undergraduates, Navid Azodi and Thomas Pryor, have invented 'SignAloud' gloves that translate American Sign Language into speech and text. Their invention is a pair of gloves that can recognise hand gestures corresponding to words and phrases in American Sign Language, allowing deaf and hard‑of‑hearing users to be understood by their peers. Each glove contains sensors that record hand position and movement and then send that data wirelessly via Bluetooth to a central computer, which, if the hand gesture is recognised, translates it into spoken word through a speaker. Wanting to create a device with real‑world impact, the pair set out to build one that would translate American Sign Language instantaneously, while being ergonomic enough to use in everyday life. “Our gloves are lightweight, compact and worn on the hands, but ergonomic enough to use as an everyday accessory, similar to hearing aids or contact lenses,” said Pryor.

AUSTRALIA INVESTS IN CYBERSECURITY

Australia’s Prime Minister Malcolm Turnbull has outlined a cybersecurity strategy intended to push for a more co‑ordinated global approach to the protection of online data. During a speech in Sydney, the former online entrepreneur said that hacking attacks cost Australia AUS$1 billion a year, and he unveiled a list of measures to secure online data, from appointing his own special cybersecurity advisor to having internet safety taught in schools. “There’s no global institution or infrastructure more important to the future prosperity and freedom of our global community than the internet itself,” Turnbull said, adding that the internet has spread “almost entirely without the direction or control of government”. “The same qualities that enable us freely to harness cyberspace for prosperity can also provide an avenue for those who may wish to do us harm,” he said as reasoning for his campaign.



INDUSTRY NEWS

APPLE ADVANCES HEALTH APPS WITH CAREKIT

Apple has introduced CareKit, a new open source software framework designed to help developers enable people to actively manage their own medical conditions. iPhone apps using CareKit make it easier for individuals to keep track of care plans and to monitor symptoms and medication, providing insights that help people better understand their own health. With the ability to share information with doctors, nurses or family members, CareKit apps help people take a more active role in their health. Now that CareKit is released, the developer community can continue building on the first four modules designed by Apple: a care card to help people track their individual care plans and action items; a symptom and measurement tracker; an insight dashboard which maps symptoms against the action items; and a connect module allowing people to share information with designated recipients. Developers of health and wellness apps can build these CareKit modules into apps for Parkinson’s patients, post‑surgery progress, home health monitoring, diabetes management, mental health and maternal health. Some use cases include:
• Sage Bionetworks and the University of Rochester are using CareKit to turn the mPower ResearchKit study into a valuable tool to help better inform patients about their condition and care providers about treatment.
• The Texas Medical Center is designing apps to guide and support care pathways for its 8 million patients to improve their health through enhanced connectivity.
• Beth Israel Deaconess Medical Center will provide patients with more insight into their own chronic care management through home health monitoring devices that securely store data in HealthKit.
• Start, by Iodine, helps people on antidepressants understand whether their medication is working for them, and helps their doctors deliver more informed care.
• Glow, Inc. will incorporate CareKit modules into its pregnancy app, Glow Nurture, to guide women through a healthier pregnancy.


MILLENNIAL WOMEN ARE CLOSING THE GENDER GAP IN THE WORKPLACE

Accenture has looked at gender equality in the workplace through three specific areas: how women use education in preparing for the workplace; how they do at finding and keeping a job; and how they do in advancing their careers. According to Accenture's research, the most significant accelerator of gender equality in the workplace is digital fluency. The extent to which individuals, women especially, embrace and use digital technologies to become more knowledgeable, connected and effective is helping to significantly close the gender gap and level the playing field for women in the workplace. “If governments and businesses can double the pace at which women become digitally fluent, we could reach gender equality in the workplace by 2040 in developed countries and by 2060 in developing countries,” said Accenture. “If you are digitally fluent, it can provide a positive effect throughout your entire career lifecycle, and the effect benefits women more than men."

GOOGLE COULD FACE A US$7 BILLION FINE FROM EU REGULATORS FOR ANTI‑COMPETITIVE PRACTICE

The European antitrust regulator recently served Google with a formal complaint, citing its belief that Google has abused its dominant position within the industry by imposing restrictions on Android device manufacturers and mobile network operators. For an offence such as this, Google could have to pay the EU penalties of up to 10% of the company’s global revenue, which for the web giant could be as much as US$7 billion. The case focuses on two complaints:
• Google requires manufacturers to pre‑install Google Search and Google’s Chrome browser and make them the default, and it also offers manufacturers financial incentives to make those services exclusive.
• Google penalises manufacturers if they sell smartphones that run non‑Google versions of Android.


TESTING RECONSIDERED
(Advertisement: CA Test Data Manager)

31.1% of projects will be cancelled before they ever get completed [1]. Only 16.2% are completed on‑time and on‑budget [1]. $312BN is spent on debugging per year [2]. The average cost overrun is 189% [1].

WHY? Testing is too slow, too manual, and lets an unacceptable number of defects through:
• Poor quality requirements: 56% of defects stem from poor quality requirements [3], and 64% of total defect costs originate in the requirements analysis and design phase [4].
• Manual test case design: 6 hours to create 11 test cases with 16% coverage (Grid‑Tools audit at a large financial services company: HTTP://HUBS.LY/H01L2BH0).
• Unavailable or missing data: up to 50% of the average tester’s time is spent waiting for data, looking for it, or creating it by hand (Grid‑Tools experience with customers).
• Testing cannot react to change: two testers spent two days updating test cases after a change was made to the requirements (audit at a large financial services company: HTTP://HUBS.LY/H01L2HP0).

A NEW APPROACH TO TESTING – CA Test Data Manager lets you find, create and provision the data needed for testing:
1. Build better requirements: 4 hours to model all business requirements as an active flowchart and make them “clear to everyone” [5].
2. Automatically generate and execute optimized tests: 2 business days to go from scratch to executing 137 test scripts with 100% coverage [5].
3. The right data, to the right place, at the right time: 60% improvement in test data quality and efficiency within 3 months using synthetic data generation (Grid‑Tools case study at a multinational bank: HTTP://HUBS.LY/H01L2G50).
4. Auto‑update test cases and data when the requirements change: 5 minutes to update test cases (Grid‑Tools audit at a large financial services company: HTTP://HUBS.LY/H01L2C_0).

FOR MORE INFORMATION: HTTP://WWW.CA.COM/US/PRODUCTS/CA-TEST-DATA-MANAGER.HTML

[1] Standish Group’s CHAOS Manifesto, 2014 – HTTP://HUBS.LY/H01L2JK0 | [2] Cambridge University Judge Business School, 2013 – HTTP://HUBS.LY/H01L2KY0 | [3] Bender RBT, 2009 – HTTP://HUBS.LY/H01L2L80 | [4] Hyderabad Business School, GITAM University, 2012 – HTTP://HUBS.LY/H01L2MC0 | [5] CA a.s.r. case study, 2015 – HTTP://HUBS.LY/H01L2NJ0

Copyright © 2015 CA, Inc. All rights reserved. All marks used herein may belong to their respective companies. This document does not contain any warranties and is provided for informational purposes only. Any functionality descriptions may be unique to the customers depicted herein and actual product performance may vary. CS200-160313


INDUSTRY NEWS

DATA STOLEN FROM DATING WEBSITE BEAUTIFULPEOPLE.COM

BeautifulPeople.com is a dating website aimed solely at individuals who are ‘beautiful’, and the details of more than a million of its members have been discovered unencrypted online. Details included height, weight, job and phone numbers. Security expert Troy Hunt has disclosed that these details have now been sold on the black market, putting the security of the site's members at risk. The site has said that the stolen data only belonged to members who joined before July 2015, and that no passwords or financial information were included. “The breach involves data that was provided by members prior to mid‑July 2015. No more recent user data or any data relating to users who joined from mid‑July 2015 onward is affected,” BeautifulPeople said in a statement.

BANGLADESH BANK HACKERS EXPLOIT SWIFT SOFTWARE BUG

According to reports, Bangladesh’s central bank was left US$81 million down after hackers may have exploited a bug in SWIFT banking software. The malware used, evtdiag.exe, had been designed to alter code in SWIFT’s Alliance Access software, according to investigators at the British defence contractor BAE Systems, allowing the attackers to delete outgoing transfers and intercept incoming requests. The hackers were also able to hide the heist from officials by altering account balances. The malware even went as far as interfering with a printer in order to ensure that paper copies of transfer requests did not unveil the attack once it was underway. BAE claimed that even though the malware was created specifically for this attack, it has the potential to be altered for similar attacks in the future. "I can't think of a case where we have seen a criminal go to the level of effort to customise it for the environment they were operating in," BAE Head of Threat Intelligence Adrian Nish told media. "I guess it was the realisation that the potential payoff made that effort worthwhile."

GOOGLE AND UBER PARTNER TO MAKE DRIVERLESS CARS A REALITY

Alongside Ford, Volvo and Lyft, Google and Uber will lobby US lawmakers and regulators on some of the legal barriers that are stopping self‑driving cars becoming a reality. Former US National Highway Traffic Safety Administration official David Strickland will be the spokesman for the coalition. "Self‑driving technology will enhance public safety and mobility for the elderly and disabled, reduce traffic congestion, improve environmental quality, and advance transportation efficiency," the group said in a statement. Carmakers have been worried that the government is moving too quickly to regulate this space without consulting industry players first. On the roll‑out of self‑driving cars, Strickland said, "What people are looking for is clear rules of the road of what needs to be done for (fully autonomous) vehicles to be on the road," emphasising that the companies want to deploy them safely. "Nobody wants to take a shortcut on this."

MIT LAUNCHES EXPERIMENTAL BUG BOUNTY PROGRAMME

The Massachusetts Institute of Technology has announced a new experimental bug bounty programme to keep the school and research institution's websites and software safe from attack. Awareness of cybersecurity continues to grow, and bug bounty programmes are being adopted by more and more organisations, including software firms like Google and large institutions such as the US Department of Defense. Now MIT is exploring whether offering rewards to researchers and security specialists who find security flaws in products and services is worthwhile. Currently, the academic institution's bug bounty programme is not open to the public, and no outside rewards are being issued, as it is in alpha testing mode. The programme is accepting applicants who are MIT affiliates, students or academic staff, and rewards are only being offered in TechCASH, MIT's version of money for campus cards. Bug hunters are also being asked not to publicly disclose vulnerabilities "before they have been completely resolved", and not to use "noisy" automated scanners while breaking in. A handful of domains are currently the targets for vulnerability discovery, including MIT's student portal, the Atlas service platform and the Stellar domain, which hosts learning modules. In addition, the bug bounty website itself is also on the list.


INDUSTRY NEWS
www.softwaretestingnews.co.uk | www.devopsonline.co.uk

UK GOVERNMENT WEBSITES FAIL TO MEET USER EXPERIENCE STANDARDS

VisibleThread, after analysing up to 300 pages on each GOV.UK website, found that 92% of Central Government and 66% of Local Government sites do not meet the recommended readability standards. “UK government recognised the need to communicate clearly and took the significant step of publishing the ‘Writing for GOV.UK’ guide,” said Fergal McGovern, CEO of VisibleThread. Read more online.

TEST IN THE DEVOPS WORLD

Test is like the Rodney Dangerfield of DevOps – it gets no respect. But that is all about to change. In a recent Gartner survey, over 50% of respondents listed test automation as a top enabler of DevOps success. This is really illustrative of the recognition that DevOps cannot succeed if only one part of the DevOps continuum is automated. Read more online.

DIGITAL FITNESS TECHNOLOGY IS MAKING BRITAIN HEALTHIER

The widespread use of fitness applications and wearable technology has boosted the health and activity levels of members of the British public, according to new research. Findings have revealed that 82% of those who filled in the survey now use some kind of fitness technology. Read more online.

DEVOPS PLATFORM MARKET TO GROW BY 19.42% BETWEEN 2016 AND 2020

The global DevOps PaaS (Platform as a Service) market is projected to grow between 2016 and 2020 at a CAGR of 19.42%. New research from analyst firm Research and Markets shows that DevOps is shifting from being an emerging trend in the IT market to one that is gaining popularity at a rapid pace. Read more online.

INDUSTRY EVENTS

THE NATIONAL SOFTWARE TESTING CONFERENCE (RECOMMENDED)
Date: 17‑18 May 2016 | Where: British Museum, London, UK | www.softwaretestingconference.com

AUSTRALIAN TESTING DAYS
Date: 20‑21 May 2016 | Where: Melbourne, Australia | www.testengineeringalliance.com

NORDIC TESTING DAYS
Date: 1‑3 June 2016 | Where: Tallinn, Estonia | www.nordictestingdays.eu

SOFTWARE TESTING CONFERENCE NORTH (RECOMMENDED)
Date: September 2016 | Where: TBA | north.softwaretestingconference.com

STARWEST
Date: 2‑7 October 2016 | Where: Anaheim, CA, United States | www.techwell.com

DEVOPS FOCUS GROUPS (RECOMMENDED)
Date: 18 October 2016 | Where: Park Inn by Radisson, London, UK | www.devopsfocusgroups.com

QA&TEST 2016
Date: 19‑21 October 2016 | Where: Bilbao, Spain | www.qatest.org



Siva Ganesan, Vice President and Global Head of Assurance Services, Tata Consultancy Services, reveals QA’s unique role in being able to align small details to an organisation’s larger vision and mission.


Over the last few years, there has been a lot of focus on ‘big picture thinking’ in various aspects of quality assurance (QA) – from a customer experience perspective, with the ability to view problems from the end consumer’s vantage point, to a productivity and process efficiency perspective, with, for example, shift left becoming the norm. QA practitioners at all levels are now expected to have a 360˚ view – of the business, processes, market, technology and end customer – with a long‑term perspective, tracing parts and their impact on the whole, and identifying patterns in complex problems. This demands not just greater understanding of requirements, design, architecture, engineering or instrumentation, test processes, metrics, and analytics, but also the ability to collaborate and be involved at all stages of the development lifecycle. With big picture thinking, QA practitioners play a role in straddling assurance with business, technology, development, and operations.

THE LARGER OBJECTIVE

QA folks are expected to have an eye for detail. Often, test case execution takes up a lot of their time. But if, while working on these smaller elements, they can retain the larger objective, they stay aligned with the project and company’s vision and mission. That’s when the company and other stakeholders view QA as a partner in success and not simply ‘bug detectors’. When this happens, QA folks are called upon to participate in ideation and innovation dialogues, respond to customer queries and offer their viewpoint on strategic matters. They start contributing to product development and design, customer presentations, proposals, and much more. Yes, the big picture view (and thinking) is certainly important, not just to the enterprise, but also to a QA professional. But there is a catch!

A SHARED VISION

If the big picture isn’t communicated and disseminated effectively, it creates pockets of individual excellence – very often not useful to the enterprise at large. Practitioners need to not just see the larger view, but also show it, share it, and inspire others to appreciate it. We need to be thinkers with the ability to articulate and demonstrate a shared vision. Here are a few guidelines on how QA folks can concentrate on the big picture and articulate it well:
1. Address specifics all the time, but remain focused on business results. Don’t get lost in detail.
2. Ensure brevity and clarity in written and verbal communication. Verbose, cluttered communication often creates confusion, and fails to create interest and impact.
3. Periodically, disconnect from project details, and connect with your company’s vision, mission, pain areas, objectives and long‑term aspirations.
4. Practice zooming into detail, and zooming out to see the overall landscape, iteratively, until both the macro and the micro are juxtaposed in your mind.
The last point about the macro and the micro being juxtaposed is really the key. QA experts need to strike the right balance between the micro and the macro, between the details and the big picture, between the trees and the woods. Achieving this balance is extremely critical. So while the current wisdom aggressively promotes ‘go macro’, I’d like to bring us back to some of the merits that the ‘micro picture’ traditionally offers, and why juxtaposing them is important. Before I go on, let me explore the word ‘juxtapose’ a little. It means putting things which are not similar close to each other in order to create an interesting effect. This is really the crux of this article: how do we ensure that the micro and macro views are brought close together for maximum impact on the organisation? The QA reality is that both the micro and macro aspects exist in the development lifecycle, and both must be carried out with the utmost rigour simultaneously.

Some psychologists advise activating the right brain first, focusing on holistic thinking, followed by the left brain for sequencing and details. Unfortunately, this is easier advised than applied, especially in the current agile scenario where there isn’t much time for waterfall‑ish sequencing. Hence the need to juxtapose. Take the example of a common test case. It must be granular, accurate, well‑articulated, clear, and functionality focused – the micro. But a good test case should also be traceable back to functional requirements, and the functional requirements traced back to solution objectives, customer intent, aspirations and business mission – the macro. One may ask, “Is this too much to ask of a micro‑level deliverable such as a test case?” Yet test cases have to be granular, and must cater to every distinctive nuance of the ecosystem in which the enterprise functions. At the same time, a robust assurance framework comprising proven processes, strategies, methods and architecture must support the enterprise goals and vision. The end result: QA and testing teams achieve a synergistic balance between big picture and granular detail. In the earlier days, the macro aspect was conspicuous by its absence. Of late, while the macro has gained prominence, somewhere the micro is losing relevance. This is having an impact on overall quality and output, and must not be allowed to happen.

QA experts need to strike the right balance between the micro and the macro, between the details and the big picture, between the trees and the woods. Achieving this balance is extremely critical

COLLABORATION IS KEY FOR THE FUTURE

Big picture thinking cannot be facilitated by vision or good articulation alone. It must also be supported by work and process ethos. Armed with this juxtaposition and synergistic approach, QA teams have helped many businesses eliminate the ‘they’ and ‘us’ line of thinking, and embrace the ‘we’ approach. When roles and responsibilities are aligned to business results, it motivates everyone to perform and achieve more. In this context, I am reminded of a few lines by Anne Michaels, a Canadian author, poet and musician: “Trees carry the memory of rainfall. In their rings we read ancient weather – storms, sunlight, and temperatures, the growing seasons of centuries. The forest shares a history, where each tree is remembered…” While we cannot miss the woods for the trees, neither can we afford to ignore the trees, for without them, there would be no woods.

SIVA GANESAN VICE PRESIDENT AND GLOBAL HEAD OF ASSURANCE SERVICES TATA CONSULTANCY SERVICES

Siva runs the Assurance Services Unit (ASU) business for Tata Consultancy Services (TCS), which serves a large, diversified customer base globally. With over 25 years in the IT industry, Siva has considerable experience in building relationships ground up and helping customers unlock tremendous value from their existing testing estate.



EXECUTIVE WORKSHOP:

TEST DATA MANAGEMENT AND REACTIVE AUTOMATION

TEST Magazine, in partnership with CA Technologies, hosted an industry roundtable in March 2016, where senior testing and QA professionals from end user organisations got together to debate synthetic data generation and reactive automation. Attendees included heads of testing and QA directors from the financial, ecommerce and education sectors; industry consultant Paul Gerrard; Huw Price, VP at CA Technologies; Tom Pryce, Product Marketing Manager at CA Technologies; and Cecilia Rehn, Editor and General Manager at TEST Magazine.


SYNTHETIC DATA

GETTING READY FOR THE EU GENERAL DATA PROTECTION REGULATION

Attendees at the TEST Magazine roundtable highlighted the potential impacts of the new EU General Data Protection Regulation and its likely effect on test data management. The EU General Data Protection Regulation (GDPR) was initially proposed in 2012 to unify and strengthen existing legislation, and was approved by the European Parliament in April. There is therefore a sense of urgency in the IT community, as the two‑year implementation window means it is time to act now. The regulation will apply to any organisation worldwide processing data in the EU, meaning that even if the UK voted to leave the EU, UK firms who handle EU data would still be held accountable. Headlines concerning the GDPR have traditionally focused on the numbers: firms found breaking the law face fines of €20 million or 4% of annual turnover (whichever is higher), and the issues arising from the ‘Right to Erasure’: to withdraw consent (Article 7) or for data to be forgotten unless there is a legitimate reason to keep it (Article 17). But what about testing?

MASKING OK UNTIL IT GOES WRONG

It has been the norm to use masked production data in test and development environments. A head of testing from a smaller organisation at the roundtable lamented the fact that “there just isn’t enough time or money to invest in anything other than masked data.” However, organisations will now have to put the privacy of their customers’ data first, which is worrisome for many. The GDPR requires data to be stored for a limited period of time and for no longer than it is useful to the organisation. Data will also have to be stored in such a way that it cannot be directly identified by reverse engineering data masking (data obfuscation), or pseudonymisation, as the GDPR calls it. This raises a whole host of new questions, such as:
• If information is left in as a form of compromise, such as inter‑column relationships, is this GDPR compliant?
• As the definition of ‘personal information’ continues to grow to include anything related to genetic, mental, economic, cultural or social identity, how easy is it to mask all of this content while retaining the referential integrity needed for testing?
• Crucially, as the GDPR demands, is the masked data safe? Is it possible to reverse engineer data from complex relationships using a piece of external information?
“The GDPR defines when consent is and is not needed, but organisations will have to ask themselves, ‘What in our test environments do we need consent for?’” says Tom Pryce, Product Marketing Manager, CA Technologies. “Additionally, the GDPR includes language emphasising that should individuals request to see how their personal data is used, or have it deleted, organisations must comply ‘without delay.’” Roundtable delegates were in agreement that little or no specific preparation has been carried out in advance of the GDPR in their organisations.
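To make the reverse‑engineering concern concrete, here is a minimal Python sketch of pseudonymisation over an invented customer record. The field names, the secret and the hashing scheme are illustrative assumptions for this article, not a description of any particular masking tool; the point is that hashing the direct identifier still leaves indirect identifiers behind.

    import hashlib

    # Invented example records; the fields are illustrative only.
    customers = [
        {"id": 1, "name": "Alice Smith", "postcode": "SE15 1PD", "salary": 52000},
        {"id": 2, "name": "Bob Jones", "postcode": "NP12 2YA", "salary": 31000},
    ]

    def pseudonymise(record, secret="rotate-me"):
        """Replace the direct identifier with a keyed hash.

        The indirect identifiers (postcode, salary) are left intact --
        exactly the 'inter-column relationship' compromise discussed
        above, which combined with external data may re-identify a person.
        """
        masked = dict(record)
        token = hashlib.sha256((secret + record["name"]).encode()).hexdigest()
        masked["name"] = token[:12]
        return masked

    for row in (pseudonymise(c) for c in customers):
        print(row)

If an attacker already knows one customer's postcode and salary from an external source, the hashed name offers no protection at all, which is why the GDPR continues to treat pseudonymised data as personal data.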

Data will also have to be stored in such a way that it cannot be directly identified by reverse engineering data masking (data obfuscation), or pseudonymisation, as the GDPR calls it

HUW PRICE VICE PRESIDENT CA TECHNOLOGIES

Huw joined CA Technologies in 2015 as Vice President of Continuous Delivery, when specialist testing vendor Grid‑Tools was acquired into the CA DevOps portfolio. During his 30 year career, Huw has gained a deep understanding of the challenges faced by modern organisations, and, with an understanding of the science of testing, how to solve them.



It is clear that organisations will need to know exactly where personal data is, when the data was collected, who’s using it, and for what purpose – a question that many organisations are struggling to find time to answer

TOM PRYCE
PRODUCT MARKETING MANAGER
CA TECHNOLOGIES

Tom Pryce is a Product Marketing Manager, having been a technical and content writer for testing specialists Grid‑Tools, a recent addition to the CA Technologies family. His interests include test data management, test case design, and test automation, with a particular focus on the role of continuous delivery in enabling modern organisations to become leaders in fast‑emerging, technology‑driven markets.

One senior testing manager from a high street bank noted the reluctance from their end to implement change in time for the deadline: “We’re aware of the GDPR, it’s being talked about. But I fear that with all the other regulation and scrutiny the banking sector faces, this will be a case of waiting to see if someone gets fined, before action is taken.” The ‘without delay’ clause is particularly troubling, the roundtable participants agreed, because if it becomes well‑known amongst European citizens, it could cripple organisations struggling to sort through data stored in different locations.

THE CHANGING NATURE OF CONSENT

The GDPR highlights why organisations might need to reconsider their test data management strategy, especially when it comes to consent. According to the new rules, individuals must be informed of their right to withdraw consent. In addition, the ‘opt out’ rules of the past are now gone; consent must be clearly distinguishable from other matters in a written document and must be provided “in an intelligible and easily accessible form, using clear and plain language.” “The practice of testers copying and sharing masked data ad hoc, and keeping it on their machines indefinitely, is just not viable and increases significantly the risk of running afoul of the GDPR,” said Huw Price, VP at CA Technologies. It is clear that organisations will need to know exactly where personal data is, when the data was collected, who’s using it, and for what purpose – a question that many organisations are struggling to find time to answer.

Role‑based approaches to limiting data access were also brought up during the roundtable. Are these measures enough? Arguably not, given that data access must be restricted based on the task being performed (and if there is consent for it), and to a limited number of individuals, for a limited amount of time.
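As a sketch of the distinction being drawn here, the following Python fragment grants access based on the task being performed, the purpose the data subject consented to, and an expiry time, rather than on role alone. The record layout, names and seven‑day window are invented for illustration; they are not taken from any real access‑control product.

    from datetime import datetime, timedelta

    # Invented consent record: data subject 42 consented to "billing-test" use only.
    consents = {42: {"billing-test"}}

    # Invented grant: access is tied to a task, a purpose and an expiry,
    # not just to the holder's role.
    grants = [
        {"user": "tester-7", "subject": 42, "purpose": "billing-test",
         "expires": datetime.utcnow() + timedelta(days=7)},
    ]

    def may_access(user, subject, purpose, now=None):
        """Allow access only if a live, purpose-specific grant exists AND
        the data subject has consented to that specific purpose."""
        now = now or datetime.utcnow()
        has_grant = any(g["user"] == user and g["subject"] == subject
                        and g["purpose"] == purpose and g["expires"] > now
                        for g in grants)
        has_consent = purpose in consents.get(subject, set())
        return has_grant and has_consent

    print(may_access("tester-7", 42, "billing-test"))    # True
    print(may_access("tester-7", 42, "marketing-test"))  # False: no consent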

COMPLIANCE AND THE FINES LANDSCAPE


Once in force, the GDPR promises to levy heavy fines on non‑compliant firms, so for those looking to avoid punitive damages, investment in solutions such as synthetic data generation is appealing. The CA Test Data Manager solution creates synthetic data with all the characteristics of production, but none of the sensitive content. The synthetic data can be fed into multiple test environments at once, and from a compliance perspective, this data can then be stored alongside existing, masked production data in a central Test Data Warehouse. From here, re‑usable data sets can be provisioned to authorised individuals on demand. Access can therefore be granted only if data will be used for a reason for which consent has been given, rather than granting it solely on a role basis.
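The roundtable did not go into implementation detail, but the underlying idea of synthetic generation can be sketched in a few lines of Python: derive only aggregate characteristics from production (ranges, distributions, formats) and generate fresh records from them. The distribution parameters and field formats below are invented for the example; real tools such as CA Test Data Manager work from far richer models of the production schema.

    import random
    import string

    random.seed(7)  # reproducible test data

    # Aggregate characteristics observed from production -- illustrative
    # numbers only, carrying no individual's actual data.
    SALARY_MEAN, SALARY_SD = 40000, 9000
    POSTCODE_AREAS = ["SE", "NP", "EH", "M", "B"]

    def synthetic_customer(idx):
        """Generate a record with production-like shape and ranges,
        but no sensitive content to mask or reverse engineer."""
        outward = f"{random.choice(POSTCODE_AREAS)}{random.randint(1, 20)}"
        inward = f"{random.randint(1, 9)}" + "".join(
            random.choices(string.ascii_uppercase, k=2))
        return {
            "id": idx,
            "name": "Customer-" + "".join(random.choices(string.ascii_uppercase, k=6)),
            "postcode": f"{outward} {inward}",
            "salary": max(12000, int(random.gauss(SALARY_MEAN, SALARY_SD))),
        }

    test_data = [synthetic_customer(i) for i in range(1000)]
    print(test_data[0])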


TEST AUTOMATION

REACTIVE AUTOMATION

The second roundtable session saw attendees discussing ‘reactive automation’ – automation that can keep up with changing user or business requirements. Attendees agreed that across testing departments there is a large amount of manual testing, although there is a desire to switch to automation in order to eliminate testing bottlenecks and meet the increased pressure for short delivery time to market. As buzzwords such as continuous delivery and DevOps take the stage, it is sometimes unclear how automation is going to help. Traditionally, test automation has been focused on replacing manual test execution, but if this remains the sole focus, many other tasks will remain too slow and manual, while test automation itself can introduce additional labour.

PROBLEMS WITH TRADITIONAL TEST AUTOMATION

As organisations look to adopt the high levels of test automation they seek, issues can, and most likely will, arise. As pointed out during the roundtable, manual scripting can hinder automation, and maintenance is a major pain point for many. Although organisations can have their pick of good automation frameworks, these tend to rely on scripting, which results in time wasted on manual test case design. “One team we worked with took 6 hours to write just 11 tests,” said Pryce. In addition, the issue of test automation maintenance needs to be addressed in many organisations. It is often the case that test scripts cannot be traced back to the inert requirements from which they originated, so testers have to manually check and update the pile of existing tests to reflect a change made to the system. All of this eats up precious time and resources, leaving many to wonder what the time‑saving capabilities of automation were all about. “We need to shift testing away from being a tools problem into thinking about it as a modelling problem,” said Paul Gerrard, industry consultant. “Through improvements in modelling and in the requirements phase, we can eliminate mistakes in code and in automation.” Successful test automation should free up testers to focus on exploratory work, build test models and inform future testing. If automation is the future, how does reactive automation fit in? “Reactive automation is the reason I came to this roundtable,” said one QA Director in the ecommerce space. “It’s an unknown for us right now, but we need to look into it.”

PAUL GERRARD CONSULTANT GERRARD CONSULTING

Paul Gerrard is a consultant, teacher, author, webmaster, programmer, tester, conference speaker, rowing coach and publisher. He has conducted consulting assignments in all aspects of software testing and quality assurance, specialising in test assurance.

REACTIVE AUTOMATION

Reactive automation sees test automation encompassing test creation and maintenance to help speed up delivery and increase test coverage. This can only be achieved if ‘active’ requirements are used for the automated testing. CA Agile Requirements Designer, an end‑to‑end requirements gathering and test case design tool, allows organisations to run automated tests derived automatically from requirements modelled as unambiguous flowcharts. These tests are updated automatically when the requirements change, solving the issues of time wasted on maintenance and manual test generation. In addition, optimisation techniques can be implemented to generate the smallest number of tests needed for maximum coverage, shortening test cycles.
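As a toy illustration of the approach (and emphatically not the algorithm inside CA Agile Requirements Designer), the Python sketch below stores a requirements flowchart as a directed graph and derives one test case per path through it. Re‑running the generation after the model changes is what removes the maintenance burden described above. The login flow and its edge labels are invented.

    # A requirements flowchart as a directed graph: node -> [(step label, next node)].
    flow = {
        "start":     [("open login page", "login")],
        "login":     [("valid credentials", "dashboard"),
                      ("invalid credentials", "error")],
        "error":     [("retry", "login"), ("give up", "end")],
        "dashboard": [("log out", "end")],
    }

    def all_paths(flow, node="start", end="end", path=None, seen=None):
        """Enumerate simple paths from start to end; each path is one test case."""
        path, seen = path or [], seen or set()
        if node == end:
            yield path
            return
        for label, nxt in flow.get(node, []):
            if (node, nxt) in seen:   # do not revisit an edge: keeps cycles finite
                continue
            yield from all_paths(flow, nxt, end, path + [label], seen | {(node, nxt)})

    # Re-running this over an updated model regenerates the whole suite --
    # there are no hand-maintained scripts to patch.
    for i, test in enumerate(all_paths(flow), 1):
        print(f"Test {i}: " + " -> ".join(test))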

If you are interested in participating in future Executive Workshops, please get in touch.

Contact the Editor, Cecilia Rehn ‑ cecilia.rehn@31media.co.uk

CECILIA REHN EDITOR AND GENERAL MANAGER TEST MAGAZINE / 31 MEDIA

Cecilia came on board as Editor for TEST Magazine in 2015 following a 5 year stint in oil and gas publishing. She oversees all content in the TEST Group portfolio including the National Software Testing Conference, the European Software Testing Awards, Software Testing News, TEST Focus Groups, and the newly launched DevOps Online news portal. Cecilia has a BA (Hons) in History from The University of Sheffield, and studied abroad in New Mexico, USA as part of her degree.



TRANSFORMING A TEST MANAGER INTO A TEST LEADER Rajesh Mathur, Head of Testing, Australian Health Practitioner Regulation Agency; Paul Seaman, Principal QA, SS&C HiPortfolio; and Lee Hawkins, Principal Test Architect, Dell Software, explain how testing management is not what you might think – thinking isn’t optional.


TEST MANAGEMENT

One of the ironies of the software testing industry is that a lot of people outside the industry (and also a lot of people inside the industry) believe that testing is easy. Testing can be easy for certain software products – for example, applications that:
• have simple architectural designs;
• are used sparingly;
• are not mission, life or business critical;
• do not interact with other applications or environments, or whose interaction and integration is minimal;
• have minimal usability and accessibility requirements, and may have bugs that ‘may not bug someone who matters.’
Such applications are usually free, open source or come as freebies with other software products. An example is Notepad, which has minimal functionality and comes free with other Microsoft products. On the other hand, testing can be very complex. Think about all the other software that you use, interact with, or depend on while at home or work, while driving, travelling by air, etc. The list of complex software we interact with is almost endless. However, when many people talk about software testing, they generalise the subject and call testing easy. This generalisation naturally leads to the belief that anyone can test. If you share this belief, please read on. The authors suggest you read Perfect Software and Other Illusions about Software Testing by Jerry Weinberg. It might change your perceptions and thinking.

TEST MANAGEMENT MYTHS AND PERCEPTIONS

Since many people believe testing is easy, some testers or technical people we meet also feel that test management is easy and that anyone can do it. Most of the people who say such things do not really understand what they mean by testing or test management. It is very important to understand what we mean when we use these terms. In the words of Michael Bolton: “Words are powerful tools for understanding and clarifying ideas, but like all tools, they must be used skilfully to achieve their purposes and to avoid trouble.” Here are some of the myths of test management that we have often heard from test professionals; in this article we will examine some of them.
• I don’t need to know testing to become a test manager.
• Test management is all about organising resources. (The authors of this article prefer to use ‘people’ or ‘team members’, not ‘resources’.)
• As a test manager, I do not have to actually test.
• Knowing about testing is not that important as a manager, because anyone can test.
• I am a test manager and it’s easy, because all you have to do is assign resources to projects.
• As long as I follow best practices, it will be all good.
• Test strategies and plans are based on templates, so as long as you have a template, planning is easy.
• The availability of detailed requirement documentation makes a test manager’s job easier; testers can simply write test cases based on requirements.
• Following standard testing processes helps you deliver good testing, and vice versa.

Since many people believe testing is easy, some testers or technical people we meet also feel that test management is easy and that anyone can do it

SKILLSETS FOR TEST MANAGERS

Test management, like any other management discipline, requires a balanced and relevant skillset. Skills that help make a good test manager include:
• Leadership and management: dealing with people (people management), setting priorities, delegating, motivating and developing people, coaching, listening; demonstrating that you trust your people to understand problems and provide great solutions.
• Critical thinking: understanding the mission of the project and devising approaches appropriate for solving the problems; recognising and negating the pitfalls and biases that problems pose, and drawing meaningful conclusions when needed.
• Project management: you don’t have to create project plans, but learning how to decipher them or add to them is a good skill. Other project management skills that are useful to a test manager are scoping, planning, co‑ordinating, budgeting, organising and understanding risks.
• Communication and collaboration skills: as a test manager, an important part of your job is communication. You communicate with your team members and with peers such as development managers, architects, database administrators, infrastructure people, support teams, and management teams. Good collaboration skills help you value and build relationships with these people. Forming positive alliances and mutual understanding is important when compromise and negotiation are required.
• Testing: an important skill for test managers is to understand testing. Creating a test plan based on a requirements document and a test plan template is not test management. You must understand testing and you must be ready to roll up your sleeves!
It is clear that test management is much more than just the resource management some of the test managers we have met or worked with seem to think it is.

So what makes a good test manager? It is a combination of people skills combined with test skills. The balance is important

RAJESH MATHUR, HEAD OF TESTING, AUSTRALIAN HEALTH PRACTITIONER REGULATION AGENCY

An international speaker, blogger and writer, Raj has spoken at various conferences and has written articles for numerous publications. He is a dedicated context‑driven tester who believes less in dogmatism and more in the pragmatism of testing. Raj has held senior test management positions with Cathay Pacific Airways, Nokia and Boots (UK) amongst others, living and working in the US, UK, India and Hong Kong, China.

LEADERSHIP

PAUL SEAMAN PRINCIPAL QA SS&C HIPORTFOLIO

A former primary school teacher, with an Economics and Finance degree, Paul has 16 years of software testing experience including leading and managing test teams and coaching. His last 3 years have been spent engaging in, and leading, an agile transformation. He has a strong interest in psychology, thinking skills and complexity management. Paul is a member of the Association for Software Testing and is an active organiser of the Test Engineering Alliance Melbourne (TEAM) meet‑up.


So what makes a good test manager? It is a combination of people skills and test skills, and the balance is important. The context of the engagement matters, and the balance will change as a test team matures. The one thing that stays constant is the need to have a ‘people first’ attitude. Management is fine for handling management responsibilities (reporting and the like); beyond that, you must embrace leadership. When most people complain about their manager, they are not complaining about management. They are really complaining about too much management and not enough leadership. A leader is a person who distributes empowerment through trust – someone who trusts you to solve problems using the skills you have (or ones they will actively encourage you to develop). They are most definitely not a micromanager, and they know how to create an environment in which failure is safe. A manager, on the other hand, talks about how people (they would call them resources) should feel empowered but does not give them the permissions to actually be empowered. They micromanage and assign blame. There is no safe way to fail, and the acceptable solutions are your manager’s solutions. Leaders motivate; managers suck motivation out of people. Daniel Pink, in his book Drive: The Surprising Truth About What Motivates Us, talks about motivation models.

A summary is presented below.

MOTIVATION

• Motivation 1.0 – These are your basic instincts. Humans have had these since the dawn of time. This is the drive to survive.
• Motivation 2.0 – The recognition that people respond to reward and punishment (controlled motivation). In the early 1900s, Frederick Winslow Taylor was a notable contributor in this area. This approach hinges on rewarding desired behaviour and punishing other, unwanted, behaviour. It is a command‑and‑control approach and appears to still be the predominant form of motivation used by managers.
• Motivation 3.0 – Tapping into people’s intrinsic (autonomous) motivation, the desire to do a great job. Allowing people to utilise their sense of autonomy, allowing them to self‑direct. This requires resisting the urge to control people. If you want people to succeed, excel and engage, then you must give them room to do so. Managers must learn to manage less and lead more.

TEST SKILLS


Good test managers follow good practices of management. While people management skills are really important as a leader, another important requirement in becoming a good test manager is becoming a skilful tester. It is strongly recommended that you maintain a healthy interest in continuously improving your testing skills. Imagine you decide to learn how to drive a motorcar. You have a friend, and your friend’s grandfather has decided he will help you. He’s been driving for years, so you’re confident that he’ll know what you need to learn. Experience is really important, right? The morning of the first lesson arrives. You sit in the driver’s seat of your car imagining yourself out on the road. Your friend’s grandfather arrives, gets into the passenger seat and says, “You know I’ve been driving for over 60 years.” “Awesome,” you respond, “you must have driven a lot of cars.” The answer comes back: “No, still driving my first Model T. I take it out for a short drive every decade or so on a private property.” Is grandpa really the right guy to be guiding you and teaching you driving skills? Just about every industry we can think of has examples of people who think knowledge at a point in time (especially certification) equips them for life, and that skill, practice and acquiring new knowledge and skills are not important. This is a bad attitude and a great way to make yourself redundant. You really want to make sure that you don’t roll up to work a ‘Model T driver’ when your team are all suited‑up ‘Formula 1 racers’. Experience is important, but the right experience is far more useful. Documentation and metrics (by which we mean metrics supported by a clear context that enables them to tell the underlying story) are useful; if you are moving into a test manager role, producing them is likely to be one of the first items added to your ‘to do’ list. But improving your team’s testing capabilities – creating a capability of finding important bugs fast – is probably the most important task. Documentation and metrics do not make your stakeholders happy; high quality software does. How you do that depends on your testing and people management skills. As a manager you might simply embark on a ‘certification collection’ exercise and tell your stakeholders, “My test resources are really good. They are all certified and we use only best practices.” As a leader you might talk to your people and discover areas where they feel development is required. You might also consider skills that do not have ‘test’ as part of their description, such as courses that focus on teaching, mentoring, coaching, thinking, analysing, team building, leading, etc. The absence of a certificate at completion will be outweighed by the value of the knowledge brought back to the test team. As a leader you’ll tell your stakeholders, “We have a really broad experience base. The people in the test team are broad thinkers, they love analysis and problem solving. We are one of the happiest and strongest teams I have ever worked in.” A good way to improve testing skills is to attend courses (in person or online), conferences and webinars, and to keep informed of industry best practices through wider reading and research.

While people management skills are really important as a leader, another important requirement in becoming a good test manager is becoming a skilful tester

CONCLUSION

Through our experience, we believe that, amongst other things, good test managers are rounded individuals. They manage when required, but otherwise lead, and are good at leading by example. Their 'people first' approach engages those who work with them and encourages those same people to work with real passion, because their input is highly valued. The testers experiment and innovate because they are led by someone who makes it safe for them to fail and supports moving forward from the failure. We are not being critical of people who do not demonstrate these skills. We are, however, suggesting that if this article makes you feel like you manage and never lead, it is time to reconsider your approach. To see a full list of recommended resources from the authors, please visit: www.softwaretestingnews.co.uk/transforming-a-test-manager-into-a-test-leader.

LEE HAWKINS PRINCIPAL TEST ARCHITECT DELL SOFTWARE

Lee is responsible for testing direction and strategy across the Dell Software group. Lee has been in the IT industry since 1996 in both development and testing roles, and his testing career really started in 2007 after attending Rapid Software Testing with Michael Bolton. He is the co‑founder of the TEAM meet‑up group in Melbourne and regularly speaks at international testing conferences.



WHY COLLABORATION IS KEY FOR QA TEAMS IN AN AGILE WORLD Greg Law, CEO and Co‑Founder, Undo Software, debates the need for test departments to embrace, rather than fear, change.


AGILE

The entire software industry is currently undergoing major, ongoing change. The rise of agile development, test‑driven development, continuous integration and continuous deployment are all transforming how software is created and provided to customers. At the same time, the spread of software into more and more of the devices around us, and the interconnectivity of those devices, means that the sector is growing in scale, complexity and importance. These factors are having an enormous impact on testing and QA departments. If more software is being produced and needs to be deployed in shorter and shorter timeframes, traditional testing methodologies have to change – hence the rise of automation in the testing process (and corresponding worries among testers that their jobs will disappear).

EMBRACING THE CHANGE

Rather than fear change and automation, however, test departments need to embrace them. With QA at the start of its own automation journey, it is worth looking at the impact automation has had on other industries. Automation tends to reduce the number of people employed in an industry – for example, in 1900, 41% of the US workforce was involved in agriculture; by 2000 it was 1.9%. Yet agriculture is now many times more productive than it was when it relied on brawn rather than brain. The same applies to the disruption caused by robots to factory jobs in the 20th century. In both cases, those who survived and thrived were the ones who embraced automation. So how do test teams adapt to the new DevOps world? In my experience there are three areas to focus on:

BE PART OF THE PROCESS

Agile is here to stay, and there is no point in denying that change is happening. So test departments need to understand agile, and look at how they can automate and work with developers to deliver the right services to meet overall business needs. In this world, testers have to become developers of a sort, scripting for automated test tools if they want to remain relevant. At the same time, test and QA teams need to retain their independence from the development process by challenging and testing development assumptions. This is one of the key strengths of having a separate function, and it has to be preserved. Testers therefore need to work together with developers, but keep a certain distance from them, and testers should not be afraid to use their skills to ask potentially awkward questions.
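To make that “scripting for automated test tools” concrete, here is a minimal sketch of the kind of check a tester might now be expected to write – a browser test automated with Selenium WebDriver and pytest. The URL and the CSS selector are hypothetical placeholders, not a real application.

```python
# A scripted browser check of the kind testers increasingly write.
# Assumes Selenium and pytest are installed and chromedriver is on PATH.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By


@pytest.fixture
def browser():
    driver = webdriver.Chrome()
    yield driver
    driver.quit()


def test_search_returns_results(browser):
    browser.get("https://example.com/search?q=testing")  # hypothetical URL
    results = browser.find_elements(By.CSS_SELECTOR, ".result")
    assert results, "expected at least one search result"
```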

ACCEPT NEW TECHNOLOGY

Test automation brings a whole new set of opportunities and challenges. As I mentioned, testers will need to be able to script tests to run on these tools, but more importantly they need to be able to understand, and communicate, the results. The combination of agile, test automation and potentially unlimited compute resources via the cloud means more (and more detailed) tests are being run, more frequently, on more complex software. A software project of a given size can easily run two or three orders of magnitude more tests every day than the equivalent project would have run ten years ago. This means there is a consequent growth in test failures, which threatens to overwhelm QA teams. If many thousands of tests run every hour and 0.1% of them fail, triaging these failures can quickly become a nightmare. (And a failure rate as low as 0.1% is rare; I have spoken to companies where more than 10% of their overnight tests fail.) Test teams therefore need to look at new technology that can help them not just to automate their tests but also to deal with the resulting failures. Tools such as Jenkins can help with more basic failures, allowing QA teams to focus their efforts on more complex or unpredictable issues. Software is becoming available to support test teams with these tougher cases as well, and can add greatly to productivity and speed.
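The arithmetic behind that triage problem is worth making explicit; a quick back-of-envelope sketch (the test volume is illustrative, not from any particular company):

```python
# Daily triage load implied by the failure rates quoted above.
tests_per_hour = 10_000

for failure_rate in (0.001, 0.10):  # the optimistic 0.1% and the reported 10%
    failures_per_day = tests_per_hour * 24 * failure_rate
    print(f"at {failure_rate:.1%}: {failures_per_day:,.0f} failures to triage per day")

# At 0.1% that is 240 failures/day -- already a full-time job to inspect by
# hand. At 10% it is 24,000/day, impossible without tooling that groups,
# de-duplicates and pre-diagnoses failures.
```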

WORK MORE CLOSELY WITH DEVELOPERS

Software developers are QA’s customers, so it is imperative that test teams provide a service that meets their needs.

At the same time, test and QA teams need to retain their independence from the development process, by challenging and testing development assumptions

GREG LAW CEO AND CO‑FOUNDER UNDO SOFTWARE

Greg is a coder at heart, but likes to bridge the gap between the business and software worlds. He has over 15 years of experience in the software industry and has held development and management roles at companies including the British computer maker Acorn, as well as fast‑growing startups NexWave and Solarflare. Greg holds a PhD from City University, London, and was nominated for the 2001 British Computer Society Distinguished Dissertation Award.





At the same time testers need to be proactive and fight for their right to exist. For example, if you wait to be asked to attend development meetings there is a risk that the invitation will never arrive

In pre‑automated days, QA knew that simply handing over a list of red/green pass/fails was never going to help engineering find and fix the root cause of a problem. That’s why they added verbose bug reports to give as much detail and context about a problem as they could. Obviously, this approach doesn’t scale in the automated test world, which leads back to the point that using tools that can provide more detail on what went wrong is highly valuable, even if it is as basic as “this was the commit that caused the code to fail.” Talk to developers in their own language and give information in a usable form if you want to remain relevant and valued. At the same time, testers need to be proactive and fight for their right to exist. For example, if you wait to be asked to attend development meetings there is a risk that the invitation will never arrive. So talk to developers, get involved early and use your skills to provide a higher‑level service that is valued by the whole company. If there are daily stand‑up meetings, make sure you attend them.
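As an illustration of that “which commit broke it” information, here is a minimal sketch that automates the search using git’s built-in bisect command. It assumes you are inside a git work tree and that the test command exits non-zero on the failure; the tag name and test command in the usage comment are hypothetical.

```python
# Drive `git bisect run` to find the first commit at which a test fails.
import subprocess


def find_breaking_commit(good: str, bad: str, test_cmd: list) -> str:
    subprocess.run(["git", "bisect", "start", bad, good], check=True)
    try:
        # git checks out successive commits and runs test_cmd at each one;
        # the output names the first bad commit.
        result = subprocess.run(["git", "bisect", "run"] + test_cmd,
                                capture_output=True, text=True, check=True)
        return result.stdout
    finally:
        subprocess.run(["git", "bisect", "reset"], check=True)


# Hypothetical usage:
# print(find_breaking_commit("v1.2.0", "HEAD", ["pytest", "tests/test_checkout.py"]))
```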

SUMMARY

I’m currently seeing a big change when I talk to customers and prospects. Three years ago I was speaking to development teams; now I’m increasingly talking to smart QA departments who understand that they need to be closer to developers, and who want the tools to deliver this change. Obviously, the bad news for testers is that automation is likely to reduce their numbers, but arguably those who remain will move up the stack and be seen as more strategic and important to the entire agile development process. The choice may seem stark, but if they embrace change, test departments can survive – and thrive – in an agile world.


SOFTWARE TESTING OUTSOURCING
Delight your users with a consistent, high-quality experience – everywhere, every time.

We hate bugs as much as you do – and go all out to catch them. Get your mobile, web, desktop, and wearable apps tested by our award-winning team. Contact us for a 4-week risk-free trial of our software testing services.

Why Mindfire?
• 16 years of award-winning experience
• 1000+ global projects delivered for startups and enterprises
• Market-proven automated and manual testing experience
• ISTQB Platinum Partner
• Well-equipped QA lab with prevailing hardware and software
• Specialists serving the testing needs of small & midsize businesses
• Strong Selenium, SoapUI, WebAPI, Protractor, QTP, Appium, Robotium, JMeter skills
• 90+ certified test engineers with 5+ years’ average experience
• 4 weeks risk-free trial

www.mindfiresolutions.com | +1 248.686.1424 | sales@mindfiresolutions.com

Founded in 1999, Mindfire Solutions is an award-winning provider of software development and testing services to the global market with 650+ talented software engineers at 3 centers in India. For its people and its work, Mindfire has won coveted international awards such as the Deloitte Technology Fast50 India Award 2013 and 2014, the Dun & Bradstreet Fastest Growing SME 2013 Award, the Red Herring Top 100 Asia Award and Zinnov GSPR 2014. Mindfire has been recognized with ISO 9001:2008 and ISO 27001:2005 certification, is a continuous member of NASSCOM, and has established a strong track record of 2000+ projects successfully delivered for 500+ technology clients.


FROM QUALITY ASSURANCE TO ‘DIGITAL ASSURANCE’

Our world is digital like never before, and the digitisation of everything is revolutionising business, explains Vaios Vaitsis, Founder and CEO of Validata Group.





Looking ahead, the world will soon be filled with everyday objects connected to the internet. Gartner asserts that by 2020, 25 billion ‘things’ will be connected to the internet, making businesses vulnerable to the risks inherent in digitisation. Cost, time‑to‑market and customer experience are top priorities, with customer experience being today’s imperative. SMAC (social, mobile, analytics, cloud) and digital technology are changing the behaviour of organisations across all industries, and as a result a dramatic change can be seen in end customer experiences, operational processes and business models for engagement. Although many companies understand that they need to adapt to the evolving use of technology by their customers, they have yet to realise how fast they have to address these changes. Mobile web adoption is growing eight times faster than internet adoption did in the early 2000s, and today’s customers demand an ‘anything‑anytime‑anywhere’ service with the best experience. As an example, the users of mobile applications expect them to change almost daily. The market dynamics that rely on mobile apps mean that clients expect new features, offers and opportunities all the time.

QUALITY IMPERATIVE TO DELIVER A FIRST‑CLASS CUSTOMER EXPERIENCE

The need to modernise and innovate in areas such as mobile applications, self‑service websites, social media analytics and multi‑channel e‑commerce platforms is hugely important in order to remain competitive and meet the constantly changing demands of the business. Organisations usually fail to consider quality as one of the most important aspects of delivering a first‑class customer experience. That, along with a lack of investment and accountability for ensuring it, often means that organisations fall short of the results they want and, even worse, put their reputation at risk. The goal is to get ahead, acquire new customers and new business, and then stay ahead. To win this race you need a strategy that raises the bar on the capabilities your core banking transformation project can deliver, as technology continues to evolve. Clearly, business wants IT responsiveness and what is called ‘true agility’. This doesn’t necessarily mean hundreds of software releases every day, but businesses need to be confident that when the market changes, IT will be able to respond. When the time comes, the business will want to turn ideas into software delivery as quickly as possible. QA teams should be tuned in to technological changes, be able to innovate quickly, and come up with optimal solutions for new testing challenges. By implementing agile/Dev‑Test‑Ops principles and by automating the application lifecycle, QA can ensure process improvement metrics and establish ‘continuous testing and monitoring’ that reduce errors and accelerate time‑to‑value, realising true ‘continuous delivery’ and enabling the digital transformation. While automation has been key to achieving increased testing coverage and effectiveness, it should not be limited to test execution alone, but should extend through the whole testing lifecycle – starting from requirements analysis. Would you consider building a bridge if there was any room for error? So why would you think that a complex digital transformation can be delivered without solid foundations?

The need to modernise and innovate areas such as mobile applications, self‑service websites, social media analytics and multi‑channel e‑commerce platforms is hugely important in order to remain competitive

DIGITAL ASSURANCE

The increased complexity of software applications is one of the big challenges digital transformation brings to the table. QA now needs to support multiple new applications built on disparate technologies. The interaction of these applications, as well as the requirement to test them on different device configurations, places more demands on QA teams. This means that, at any point, QA teams need to be able to test APIs, regressions, performance and UIs on various operating systems and devices. A well‑planned QA strategy involving service virtualisation, test automation and effective test data management should be implemented. The assurance organisation (QA and testing) plays a vital role in certifying the digital transformation. Agile ‘digital assurance’ is revolutionising the QA industry. The quality process put in place is an indication of a company’s digital maturity and of its progress through the digital transformation. A successful digital assurance initiative will help an organisation achieve reduced time to market and better quality while ensuring scalability. With the industry being as competitive as it is, it is important for both new and older banks to take note of others’ IT failures and ensure that they have the right working software in place to suit the demands of modern banking systems.
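As a small illustration of the API side of such a strategy, here is a sketch of an automated contract check that a digital assurance suite might run on every build – the endpoint and field names are hypothetical:

```python
# Guard the API contract that web and mobile channels both depend on.
# Assumes the `requests` library and a pytest-style test runner.
import requests


def test_account_api_contract():
    resp = requests.get(
        "https://api.example-bank.com/v1/accounts/123",  # hypothetical endpoint
        timeout=5,
    )
    assert resp.status_code == 200
    body = resp.json()
    # Regression guard: fields every channel relies on must stay present.
    for field in ("accountId", "balance", "currency"):
        assert field in body, f"missing contract field: {field}"
```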

VAIOS VAITSIS FOUNDER AND CEO VALIDATA GROUP

With 35 years' experience in the software industry, Vaios is a serial entrepreneur in the enterprise software industry, responsible for establishing and growing market leading IT companies. Before setting up Validata, Vaios was the Founder, President and CEO of I+C=S software, which developed a platform and applications for the energy sector.




Financial services organisations must invest more in quality assurance to retain customer loyalty and maintain market share. They need to rethink their digital QA strategies to remain competitive





THE FINANCIAL SERVICES SPACE IN 2016 AND BEYOND

Financial services organisations must invest more in quality assurance to retain customer loyalty and maintain market share. They need to rethink their digital QA strategies to remain competitive, and move toward integrating the software development supply chain into the business. Considering the financial services space in 2016 and beyond, agile is the way to go. Testing needs to happen in real time, in an agile manner, to ensure quality products are delivered. With automated tools, techniques, development and ‘continuous testing’ methodologies available, QA teams are better positioned to respond to digital technology challenges within short time frames, protecting and even enhancing brand value and eliminating reputational risk through a superior customer experience. QA and testing teams need to employ a new way of thinking based on ‘shift left’ practices, simply by putting testing and automation early in the development lifecycle. By doing so, they will ensure that quality is built in from the start.

SUMMARY

As the ‘continuous delivery’ concept establishes itself in the software delivery process, application lifecycles shorten and the number of releases increases, so the emphasis must be placed on increasing capacity and quality across the organisation. But when it comes to realising a full continuous delivery process, ‘continuous testing’ is the missing link that closes the circle. Continuous testing involves applying the methods and concepts of agile development to the testing and QA process, resulting in a more efficient testing process and bridging the gap between Dev and Ops. Continuous testing is only going to become more essential as time goes on and technology continues to advance. However, it is more than just automation; it is the final step in the continuous delivery process, augmenting software quality processes and ensuring speed, agility and risk management. An effective continuous testing strategy forces an organisation‑wide cultural change to align Dev, Ops and QA/testing as part of the true DevOps philosophy, enabling organisations to go through a smooth digital transformation.



TOP PERFORMANCE FROM THE TESTING TEAM DURING BLACK FRIDAY


Rob Hornby, Senior Test Manager, John Lewis, gives insight into the award‑winning Peak Scalability and Click & Collect testing project at one of Britain’s largest retailers.




Each year, the company prepares through a cross‑functional initiative to ensure that everything is ready from the first customer touchpoint – the website – through to the actual picking up of an ordered item from a Waitrose or John Lewis store

ROB HORNBY SENIOR TEST MANAGER JOHN LEWIS

Rob Hornby is a test manager who has worked in the industry for over 10 years, mainly in e‑commerce‑based retail and travel. He is now heavily focused on the internal transformation of John Lewis towards a more engineering‑led culture and approach across delivery.


Click & Collect is a popular way for retailers to bridge the gap between their online and physical presences, often offering customers a quicker way to receive goods ordered over the internet than having them delivered to the home. For John Lewis, the Christmas retail period is divided into three peaks: Black Friday, Christmas and Clearance. This article delves into the work of the testing department during the Black Friday 2014 period, summed up as the Peak project, which was the overall winner at The European Software Testing Awards in 2015. It also gives an overview of the retailer’s continued work in 2015. In 2014, the five weeks in the lead‑up to Christmas, including the phenomenon of Black Friday, were a period of significantly heightened website activity compared with the rest of the year. Thanks to the success of the testing department’s Peak project work, John Lewis saw a successful trading period where:
• Online sales accounted for 36% – up from 32% the previous year.
• Click & Collect accounted for 56% of those sales.
• Supply chain picked up 54% more parcels than in previous years – 87% more during peak Black Friday sales delivering into a Saturday.
Mark Lewis, Online Director at John Lewis, said in the company’s trading update for the week to 27 December 2014: “It was the week when our Click & Collect service broke records. We had billed 2014 as the Click & Collect Christmas and it certainly came to pass.”

METHODOLOGIES BEHIND A SUCCESSFUL PROJECT

THE PEAK PROGRAMME

John Lewis recognised that the peak period of Christmas is critical for trading success. Each year, the company prepares through a cross‑functional initiative to ensure that everything is ready from the first customer touchpoint – the website – through to the actual picking up of an ordered item from a Waitrose or John Lewis store. A programme of work was created and run to support the run‑up to peak. The focus was on two key areas: the website, ensuring that it could meet the expected customer volume and order rates; and the supply chain, ensuring that orders could then be successfully delivered to customers, and Click & Collect orders to John Lewis and Waitrose stores. The key was for this to be delivered at scale.

THE WEBSITE

The website is the entry point to the John Lewis experience. The marketing teams drive traffic to the website through a co‑ordinated campaign for Peak. For 2014, this focused around ‘Monty the Penguin’. Traffic then builds throughout November with spikes for sales such as Black Friday. To ensure that the website could cope with the expected demand, John Lewis undertook several preparations. The company formed a cross‑functional engineering team made up of data scientists from the insight team, operations, test, networks and infrastructure. A test model was formed utilising site analytics (Adobe) and operational tools (Splunk), using a common currency across teams of TPS (transactions per second). The common currency allowed the team to clearly communicate website performance across both business and IT in a non‑technical manner. The testers built a set of performance scripts which mirrored live site behaviour, and a scaling factor was agreed between the test and live environments. These tests were then run and analysed against actual live behaviour to ensure they were consistent with the measured live and predicted peak TPS. The testing department worked closely in partnership with HP to ensure its SaaS platform provided burst capacity, when required, up to the predicted live peak volumes. A programme of exploratory testing of the website was run in different configurations, and optimising changes were made across network, server and application layers (especially caching strategies). This testing also had to contend with new features and functionality being delivered to the live platform on a monthly basis. The monthly change was tracked and trended to ensure that performance was maintained and issues resolved over subsequent releases. The commercial teams worked to manage webpage budgets to ensure page size and load were optimised for content in the absence of a full CDN (content delivery network). The company also managed testing environments to ensure the testing day was maximised, through a 20‑hour schedule running over six months.
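A sketch of that ‘common currency’ idea – translating analytics figures into the TPS target that both business and testers work to. The numbers and the scaling factor here are invented for illustration, not John Lewis’s:

```python
# Convert predicted peak page views/hour into a transactions-per-second
# target, then scale it down to the agreed test-environment ratio.
def target_tps(page_views_per_hour: float,
               transactions_per_page_view: float = 1.4) -> float:
    return page_views_per_hour * transactions_per_page_view / 3600


live_peak_tps = target_tps(3_600_000)  # hypothetical predicted live peak
test_scale = 0.25                      # hypothetical test:live scaling factor

print(f"live target: {live_peak_tps:.0f} TPS")
print(f"test-environment target: {live_peak_tps * test_scale:.0f} TPS")
```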





• Time‑boxed testing based on priority, and smart working patterns utilising John Lewis offshore test teams, maximised the available time for testing.
• Regression testing capabilities in the team were improved through a regression scrum that built regression tests based on business processes, ensuring that tests were optimised and automated, reducing execution times while maintaining – and actually increasing – coverage.

Figure 1. Page views/hour – Black Friday 2013 versus 2014 (hourly page‑view totals for the two years; chart not reproduced).

John Lewis developed an email marketing campaign that was linked to the final website performance results, ensuring that traffic was driven to the website in a controlled manner while maximising volume.

WEBSITE OUTCOME

The result was successful peak trading, demonstrated over Black Friday. The marketing teams were able to release their most aggressive campaigns. The website remained up throughout the 2014 sales period with full availability. Sales throughout the week across John Lewis were significantly up on the previous year’s Black Friday. The Guardian detailed this sales success: “Overall John Lewis sold an average of one tablet computer every second and a Flatscreen 40‑inch voice‑command TV every minute from the moment 24 hours of promotions began at midnight last Thursday. Internet traffic was up by more than 300% in the early hours of Friday as consumers logged on to snap up discounted clothing, handbags and electrical goods.”1 The success of the project has led to this becoming a full‑time team that acts as guardian of website performance, working with project teams to keep performance a key focus. TPS has become a key website KPI; it is reviewed against business growth predictions and tested monthly, both as part of ongoing release change and in live tests.


THE SUPPLY CHAIN

The supply chain is integral to a successful order proposition, especially Click & Collect. There is no point in taking orders if you can’t fulfil them; that leads to strain on stores and call centres, and reduces customer retention. This was recognised early in the planning for the peak period. As with online, small cross‑functional teams were identified based on relevant skill sets, and small projects established to ensure the supply chain was optimised. Some key testing activities were undertaken:
• As with the website, peak tests were run against live supply chain systems to ensure the performance of key systems at predicted order volumes. This testing was in partnership with logistics partners such as Metapack – it was key that non‑John Lewis systems were part of the testing approach.
• New testing adopted risk‑based approaches and minimum‑path coverage techniques to rapidly test a number of small projects aimed at giving customers greater visibility of their orders, letting them purchase items later on the website with next‑day delivery guaranteed, and improving the tracking of parcels within the internal fulfilment process.
• Bottlenecks were removed by using automated tooling to inject orders and run complex batch suites in test environments (a minimal sketch of such order injection follows below).
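As a sketch of the order‑injection idea in the last bullet: a script that pushes synthetic orders onto a message queue so that downstream fulfilment and batch systems can be exercised at volume. The queue name, payload fields and the choice of RabbitMQ are hypothetical, not John Lewis’s tooling:

```python
# Inject synthetic Click & Collect orders into a test queue.
# Assumes a local RabbitMQ broker and the `pika` client library.
import json
import uuid

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="test.orders")

for _ in range(10_000):
    order = {
        "orderId": str(uuid.uuid4()),
        "items": [{"sku": "ABC-123", "qty": 1}],  # hypothetical SKU
        "fulfilment": "click_and_collect",
    }
    channel.basic_publish(exchange="",
                          routing_key="test.orders",
                          body=json.dumps(order))

connection.close()
```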

SUPPLY CHAIN OUTCOMES

Dino Rocos, Operations Director, John Lewis, commented: “There was clearly huge customer anticipation of Black Friday in 2014 and we knew we had high expectations to meet, both in terms of the products we had on offer through our ‘Never Knowingly Undersold’ commitment, and in ensuring that we fulfilled customer orders as promised.” Had the website crashed or run slowly during the 2014 Black Friday sales, or had a backlog of online order deliveries built up, John Lewis would have classed this as failing to meet customer expectations. The avoidance of these potential issues highlighted how John Lewis succeeded through close working between teams across the business. From the marketing department through to the logistics team, there is a recognition that optimisation of performance and functionality provided a much greater capacity to fulfil customer propositions such as Click & Collect. “While the sales figures are attention‑grabbing, for me our biggest achievement was delivering an operation which ran like clockwork,” Rocos said. “We picked and packed 87% more online parcels on Saturday than we did last year, and to have delivered successfully on customer expectation is a testament to the work of our Partners both on Black Friday itself and in our forward‑planning. Our website coped well with exceptional demand.”


Figure 2. Orders per hour on Black Friday 2014 (hourly order totals across the day; chart not reproduced).






The John Lewis team at The European Software Testing Awards in November 2015.

KEY TESTING SUCCESS

Within the testing practice, it is all about how testing teams can work more closely with both commercial and IT areas. Although the company employs successful testing techniques such as performance engineering, test automation and agile testing methods, it is more about how they are employed in a team environment. The success of the Peak project and Click & Collect is down to how the organisation is self‑motivated through Partner (employee) ownership. Morale is high at John Lewis, thanks to the unique Partner ownership. The Partnership ensures all employees are looked after, and a transparent, democratic leadership instils a corporate culture of good work that pays off – all Partners receive percentage bonuses depending on the retailer’s overall performance.

SUPPORTING THE IT FUNCTION

John Lewis has recognised that IT as a function is at the heart of how to deliver great customer service. The goal is to become more agile, both in practice and in culture. “Our challenge going forward is less testing,” said Alex Wotton, Head of Testing and Environment Practice at John Lewis. “We want to transform so that automation becomes the norm.” It’s an engineering challenge as much as it is one of mindset. John Lewis is investigating and investing in continuous integration, continuous delivery and back‑end automation. “We also need to change the skill sets of our testers,” Wotton added, “to make sure they can cope with an agile, DevOps approach.”

LOOKING AHEAD

Thanks to the peak programme in 2014, the retailer was more than ready to handle the same influx of orders in 2015. There was not a spike in order volume, but thanks to the work carried out earlier, and to continuous testing, the platform was much smoother and ran with few issues. The trend in the industry is to take this performance‑engineered approach into the wider delivery. Better non‑functional requirements, capacity planning and baselining of production systems are all areas of focus, along with mindset changes in teams to ensure they are thinking about performance early in the delivery lifecycle. Smaller, more optimised changes are a focus within agile teams. In addition, testing needs to account for industry trends around the move to mobile apps, big data and heightened security, all of which require a more engineered solution, including in how testing is carried out.

SUMMARY

In summary, John Lewis delivered a successful Peak project in 2014. The retailer’s strategy ensured successful delivery:
• The website remained operational through peak trading periods, including the highly competitive Black Friday.
• Collaboration of teams across John Lewis ensured that the end‑to‑end journey was optimised, with clear measures of success put in place.
• Cross‑functional IT teams formed an engineered approach to improving performance across the website.
• The delivery of many of the projects within a short period of time was testament to the hard work of the Partners and third parties within John Lewis. The Partner‑owned nature creates a unique culture and desire to drive success.

Reference
1. ‘John Lewis breaks its all‑time sales record in Black Friday week’, The Guardian, http://www.theguardian.com/business/2014/dec/02/john‑lewis‑breaks‑all‑time‑sales‑records‑black‑friday‑week


Accelerate the SDLC and Release with Confidence

Interested in learning more? Visit us at the National Software Testing Conference May 17-18 where we will be speaking about "Evolving from Automated to Continuous Testing for Agile and DevOps"


HACKING FROM THE INSIDE OUT

Find your vulnerabilities, before anyone else does, Ryan O’Leary, Vice President, WhiteHat Security, implores.


The proliferation of the internet has fundamentally changed so much about the way we live our daily lives, not least because it has put access to millions of people and things right at our fingertips. From healthcare to commerce, public services and beyond, being connected has enriched our quality of life like never before. But it’s also exposed businesses and consumers alike to unprecedented levels of risk. Threats are no longer posed solely by those countries or cybercriminal networks with the financial means to carry out attacks. This both raises the stakes and levels the playing field for attackers and defenders.

LOWERING THE BARRIERS

The barriers to entry are lower than they’ve ever been. Underground markets sell automated attack toolkits that require no deep technical experience, along with everything from information‑stealing malware and DDoS services to bulletproof hosting and online hacking tutorial courses. Whether the end goal is geopolitical advantage, economic espionage or denial of service, the bad guys are certainly capable of relentlessly targeting your organisation. Now many doom‑mongers warn that this state of affairs will eventually lead to some kind of catastrophic internet outage. I’m less sure. It’s more likely that if we allow the black hats to thrive unchecked then the public will simply begin to trust – and therefore use – the internet less. This might not sound particularly dramatic, but it could have a crippling effect on commerce, healthcare, banking and the provision of government services. Technology innovation would slowly grind to a halt.

LAWS DON’T WORK


It’s more likely that if we allow the black hats to thrive unchecked then the public will simply begin to trust – and therefore use – the internet less

So what do we do? Unfortunately, more laws will not do the trick. The internet is transnational. This means there will always be some countries where online criminals can escape scrutiny, and there will always be state‑sponsored operatives to whom the authorities turn a blind eye. In 2014, the US indicted five Chinese military hackers because they were deemed to have been directly attacking the commercial interests of US firms, rather than committing traditional cyber espionage focused on stealing state secrets. It was, in the end, little more than a symbolic gesture. The truth is that no CEO can expect their government to step in to protect their interests. Large regimes – and an increasing number of smaller states gearing up their own capabilities – will continue to quietly condone such attacks if it’s in their interests. The conventional rules of warfare no longer apply in cyberspace. Extradition remains nearly impossible and attribution even harder: there are simply too many ways to obfuscate the trail of digital crumbs leading back to an attacker. The bottom line is this: UK business leaders must understand that when it comes to cybersecurity, you’re on your own.

FIGHTING BACK

So how do we fight back? Well, there’s no silver bullet – there can’t be for something as broad and complex as internet security. But it is time to realise that the best form of defence is attack. Security firms such as WhiteHat Security specialise in testing the defences of businesses so they can better understand where their cyber weaknesses lie, and then take steps to remediate. You’d be amazed at how easy they can be to crack. And these are major organisations – some of the biggest and best‑known brands in the world, responsible for billions of customers. If you haven’t been hacked yet, you’re not looking hard enough. It’s what we like to call the “hack yourself first” approach, and it’s the first step towards effective self‑defence. Having been shown how easy it is to infiltrate their networks, steal customer data and IP or disrupt key systems, these firms can then take steps to do something about it. Unfortunately, many others prefer to stay in the dark, hoping and praying they’ll be OK. But luck is not a security strategy. Businesses must call on the experts to hack their systems to test for weaknesses, and the more that do this, the better. It’ll help secure customer and corporate information and maintain public confidence

in the internet, which is vital to its continued success. But as an industry we also need to teach more people how to hack. You might think this sounds crazy, but it’s quite the opposite. The more people understand how the bad guys think and act, the better our national security and economic wellbeing.

RYAN O’LEARY VICE PRESIDENT OF THE THREAT RESEARCH CENTER WHITEHAT SECURITY

Ryan joined WhiteHat Security as an ethical hacker in 2007 and has since developed a breadth of experience finding and exploiting web application vulnerabilities and configuring automated tools for testing. Under Ryan’s leadership, the team has built a one‑of‑a‑kind database that combines details of more than 26 million vulnerability patterns with proprietary algorithms to assess the threat level.

SUMMARY

The white hat community has forever been on the back foot, reacting to a seemingly more agile, unpredictable enemy. By hacking from the inside out, businesses can finally start taking the fight to the bad guys for a change. It’s long overdue.




BREAKING THE SURFACE

Mark Galvin, Systems Assurance Manager, University of Cambridge, dives into the relationships between QA and testing, and UX and usability.




Many people say that when dealing with user topics, a story is a powerful tool. Before electricity, paper, scrolls and cave drawings, we were telling stories (my kids would say I’ve witnessed all of these eras). Humanity, we are informed, has been conditioned to tell and listen to stories for millennia. James Whittaker’s ‘The Storytelling Manifesto’1 is, in this author’s opinion, a highly recommended and inspiring business article. So, perhaps, I should start this article with a story: A few months ago, my manager came to me with a vision. I am not normally one to question my manager, but my first reaction was mild confusion. He wanted me to head up a User Centric Design Office. The office would be a small number of user experience practitioners, who would provide guidance, direction and tools for the rest of the Division – putting users at the heart of everything we do. “But I am a quality professional,” I informed him. “My background is in quality and testing. I’m not a user experience designer.” “I know,” he replied. “But it’s bigger than that, and your background in quality is why it is a good idea.” After a short amount of further dialogue, and some thinking time, I realised that my manager was on to something. This seemed to be further supported when I read an article called ‘QA & UX’ by Jakob Nielsen (just in case you don’t know of him, Dr Nielsen has been called ‘the world’s leading expert on user‑friendly design’). I came to realise that the relationship between QA and testing is very similar to that between UX (user experience) and usability. Both areas seem to suffer from misconception. Testing and usability can be victims of late and limited involvement. QA and UX, if done wrong, can be perceived as ‘experts’ slowing delivery, disempowering staff and stifling creativity. It made me think deeply about the relationships between QA and testing, and UX and usability.

PERFECT ON PAPER, BUT…

A few years back, I was desperately waiting for an iPad alternative to use at work. When the MS Surface was released, I was sold – this was it, this was the magic solution. On paper, they’d taken everything Apple had done and made it better. They added a detachable custom‑made keyboard option. They added an integrated kickstand (wonderful). They loaded it with Office software out of the box (brilliant). In terms of usability and my own user experience, it did what I needed fantastically well. In meetings I found myself next to iPad users hunched over a screen, prodding away with sore fingers as words slowly appeared on a rustic notepad – while I was touch‑typing, head up, using a keyboard to type into a cloud‑synched OneNote and using full versions of Office. However, I was in a minority. For most, Surface RT was a great product ruined by an awful OS. My son got one for school. He has a number of accessibility requirements, and while it worked brilliantly for voice recognition and Office applications, it could not install anything other than a limited number of apps from a poorly supported store. When he needed a visual scanning aid, the device could not provide it and it became redundant. His machine could not let him perform the tasks he needed to do. He was not alone in this experience. A great product with decent usability (in many areas) and high quality, one that passes tests with flying colours, might still suffer from very poor overall user experience. And so it was with Surface RT. It was rushed. It wasn’t marketed correctly and it didn’t take account of the entire user experience. Windows RT has since been consigned to the great OS graveyard, while the Surface itself has flourished in its absence.

Testing and usability can be victims of late and limited involvement. QA and UX, if done wrong, can be perceived as ‘experts’ slowing delivery, disempowering staff and stifling creativity

AN ECHO OF GLORY IN FAILURE

Not so long ago there was a real buzz in the industry as well as amongst consumers. Google X had made science fiction a reality. But many perceived the Google Glass adventure as a failure. Why did it ‘fail’? It was very expensive. It could only be purchased in limited numbers by a select few. The marketing was confusing (the idea was that this would generate exclusivity and product desire, and it did generate staggering media interest globally), yet there was no clear product release strategy. There were hardware issues: breakages, heat problems, low‑res cameras, limited applications, poor battery life – all limiting use and frustrating the experience. Security was an issue, but not in the usual IT sense – personal security. Being confronted by strangers (who were just curious, or concerned that their privacy might be compromised) posed risks to the user that they would not experience otherwise. Out‑of‑the‑box user configuration was disappointing, especially when compared to something like a Kindle e‑reader – not great for a high‑end device. Google stopped the Explorer program and took it back to the labs. Many saw this as a disaster for Google, but is this true? How much did they learn through the Explorer program? I’d say they learnt more about marketing, product requirements, hardware, software, design, media manipulation, social interaction, usability and, most importantly, the overall user experience than most companies learn in decades. These lessons would have been difficult to learn just through design meetings, or testing in a controlled lab. It was something that required a firm grasp of the daily life of a user and how the device assisted (or hindered) it. Google used this; they assessed the overall user experience and refused to release something that was simply not ready – and by doing this they have protected the product and their reputation. Will it come back? Probably in some form – and if it does it will be infinitely better.

MARK GALVIN SYSTEMS ASSURANCE MANAGER UNIVERSITY OF CAMBRIDGE

Mark is responsible for quality across the core IT systems of one of the oldest and most valuable institutions in the world, the University of Cambridge. Mark is also head of the software testing team and the Build and Development Academy. He has over 20 years’ experience in software testing and has twice been a judge at The European Software Testing Awards.







Sometimes, dare I say it, IT should be just an enabler, essential, but almost invisible in the everyday lives of our users. It is the means, not the end


QUALITY EXPERIENCE

It’s hard to believe, but some people still mix up QA and testing. Yet it is easy to demonstrate that testing is just one part of quality – even in more recent delivery methodologies, it’s still the part that tends to be done late and can be expensive, especially if quality is lacking elsewhere. Quality, on the other hand, is the responsibility of everyone, not just the testers. Quality is better if everyone is involved and passionately working towards it. It starts from the conceptual idea through to ongoing support and eventual replacement. Sure, having a small number of experts (or standards) to draw on is great, but unless quality is baked into the mixture from the start, no amount of late, hastily‑applied icing is going to disguise a teeth‑shattering sponge cake. Likewise, UX is not just usability testing – how a user feels when using a product or service is much more than the buttons, pages, eye‑tracking and response times. Sometimes we get caught up in this or, worse, we try to justify how much user involvement we’ve strived for (feel free to tell me how agile or DevOps solves everything at this point). But sometimes it’s about what a user is trying to do, what they want to achieve, how they feel – and a webpage or app might be just a small part of that. Sometimes, dare I say it, IT should be just an enabler: essential, but almost invisible in the everyday lives of our users. It is the means, not the end. Famously, Amazon changed their entire business when they received a complaint that their packaging was hard to open. It wasn’t enough to have a website that was thoroughly tested and highly usable, nor was it sufficient to have competitive prices and deliver quality products – the entire user experience was lacking due to one aspect, so they rectified it.

EVERYONE NEEDS TO OWN THE USER EXPERIENCE

There is no point in having a great application if the user later has to go through a five‑stage log‑in process an hour after they’ve left the system. No point in having the best webpage usability if the user doesn’t have the right support when they need it, or their product arrives late and damaged. Technology is an integral part of our lives today and it is fast becoming a necessity rather than a choice. We all have a part to play in user experience, from the business leaders through to operational support; it is not the remit of a small number of experts but the responsibility of us all. There are experts out there, but there are not enough of them, and a small number of people cannot achieve truly great user experience alone. Since writing this article, the structure of my division has changed, but the vision remains valid and central to our future. We are the writers of the story for our users; the end point is where we choose it to be. We must consider the overall user experience. Quality and UX should be central to our thinking. In the words of Dr Nielsen: “UX would be much improved if we all acknowledged that QA is the foundation for user confidence and customer satisfaction.”2

References:
1. ‘The Storytelling Manifesto’ by James Whittaker, July 7, 2014. https://www.linkedin.com/pulse/20140707182256‑46939713‑the‑storytelling‑manifesto
2. ‘QA & UX’ by Jakob Nielsen, February 17, 2013. https://www.nngroup.com/articles/quality‑assurance‑ux/




VALIDATION OF OPERATIONAL READINESS OF HIGH AVAILABILITY PLATFORMS

Dan Bucureanu, Test Leader, European Commission, discusses how technological platforms (in this case web application platforms) can be validated against their requirements, to gain a basic guarantee of the design and basic confidence when operating the platform.



99.999, or ‘five 9s’, is a number which comes up a lot in most information systems, and it tends to be the commonly accepted quality standard for product/service availability. It basically means that the provided service is allowed to fail for about five minutes over the course of one year. This concept applies to network systems, business applications, critical systems, stock exchange systems, military command centres, and the list goes on. Think of a bank or a mobile service provider: what would happen if the core system went offline for one hour? Most probably there would be a huge loss of revenue as well as a big hit to the company’s reputation. To understand the 99.999 rule, think of it this way: you have one device with a vendor‑given availability rate of 99.9% (in one year it will be offline for about 8.7 hours). To achieve the 99.999 rule there has to be an identical device which takes over the work of the one that fails, at the moment it fails. So what happens when you have the two systems and, by some unfortunate chance, the redundant system does not come online, or takes a lot more time to fail over than in the design? This is where platform operational readiness testing comes into play.
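The arithmetic behind those figures is easy to check; a short sketch (idealised – it assumes perfect, instantaneous failover, which is precisely the assumption operational readiness testing challenges):

```python
# Downtime implied by an availability figure, and the theoretical
# availability of a redundant pair of independent devices.
HOURS_PER_YEAR = 365 * 24  # 8760

def downtime_hours(availability: float) -> float:
    return (1 - availability) * HOURS_PER_YEAR

print(downtime_hours(0.999))     # single 99.9% device: ~8.76 hours/year
print(downtime_hours(0.99999))   # 'five 9s': ~0.09 hours, roughly 5 min/year

# Two independent 99.9% devices with perfect, instant failover:
pair = 1 - (1 - 0.999) ** 2      # 0.999999
print(downtime_hours(pair))      # ~0.009 hours, roughly 32 seconds/year
```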

TESTING OPERATIONAL READINESS

There are three main decisive factors in the testing of operational readiness:
1. The system.
2. The processes.
3. The team.

THE SYSTEM



When thinking of the system as a decisive factor in the testing of operational readiness, the aspects listed in Table 1 need to be taken into account. One important check here: if you have an active/active configuration and one of the nodes goes down, verify that the system still handles the entire volume of transactions with one node out of service. Performance aspects that should be tested both individually and as a whole include:
1. Performance of the network (firewalls, public and private links, interconnect switches).

Think of a bank or a mobile service provider, what would happen if the core system would go offline for one hour?

Figure 1. An example of a platform.

2. Performance of the web server.
3. Performance of the application servers.
4. Performance of the database (in different configurations).
5. Performance of login (especially where federation and a less common protocol, such as STORK + SAML, is used).
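A minimal sketch of probing the tiers above individually – single health-check requests with timings, to which a real exercise would add sustained load. All URLs are hypothetical:

```python
# Time a health-check request against each tier of the platform.
# Assumes the `requests` library and a health endpoint per tier.
import time

import requests

TIERS = {
    "web server": "https://www.example.org/healthz",
    "application server": "https://app.example.org/healthz",
    "database (via app)": "https://app.example.org/healthz/db",
    "login / federation": "https://idp.example.org/healthz",
}

for name, url in TIERS.items():
    start = time.perf_counter()
    resp = requests.get(url, timeout=10)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{name}: HTTP {resp.status_code} in {elapsed_ms:.0f} ms")
```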

THE PROCESSES

In the matter of processes, it has to be verified that the processes that govern change and business continuity are in place and that they are applied. When performing such a test, you can take a process (following the exact steps from the manual) and perform it. Some examples of what can be tested:
• Deployment of a new application.
• Migration of a database.
• Upgrade of the OS.
• Upgrade of an application.
• Failure in one of the core systems.

THE TEAM

One important aspect to test is the readiness of the team. In this manner you are testing the skills of the team members as well as the process that is in place in case of failure.

DAN BUCUREANU TEST LEADER EUROPEAN COMMISSION

Dan is an IT expert with more than 10 years’ hands‑on experience with all types of testing procedures. He is currently serving as lead test architect for the European Commission’s Taxation and Customs Department. He is certified by The Open Group with the Master IT certification, and is a Certified Scrum Master, Certified Test Manager and IBM IT Expert in the area of continuous delivery of services.




One important aspect to test is the readiness of the team. In this manner you are testing the skills of the team members as well as the process, which is in place in case of failure



Table 1. Aspects to verify in a platform.
1. Performance – the ability of the system to respond in a given time.
2. Scalability – the ability of the system to be extended without modifying the applications.
3. Security – the ability of the system to provide identification, authentication and authorisation, as well as intrusion detection, etc.
4. Availability – the ability of the system to continue normal operations in case of a failure of a component or an upgrade of running applications.
5. Disaster recovery – how fast the system will recover from a potential disaster.
6. Monitoring – the ability of the system to have its resources monitored.
7. Audit – the ability of the system to show who did what, and when.
8. Backup and restore – the ability of the system to be restored to a previous state.
9. Access – the ability of applications/users to access the resources on the servers.
10. Error handling – the ability of the system to throw errors which can be quickly and easily understood.

When performing this verification there are two very important aspects to consider. First, the team does not need to know that they will be tested, so do not inform them that you will simulate a failure. Secondly, try to learn what went wrong after the test, find improvements in the process and add skills to the team. I like to perform the test in the following way: I will kill a process or drop a table in the database, start the stopwatch, and stop it once the issue has been fixed. After that I will look at aspects such as:


• How long did it take until the failure was found?
• Did the alarm go off?
• Did the appropriate team member resolve the issue?
• Did we stay within the SLA?
• Did we lose any data in the process?
• And so on.
By following the above steps you will not have a 100% error‑free system, but you can prepare for a failure. And when it inevitably comes, you will know that you have been there before, know how to fix it, and know how long it takes. A minimal sketch of such a timing drill follows.
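This sketches the stopwatch part of the drill, assuming the platform exposes a health endpoint. The URL is hypothetical, and the failure injection itself – killing the process or dropping the table – is left as a comment:

```python
# Measure time-to-recovery after an injected failure.
import time

import requests


def time_to_recovery(health_url: str, poll_seconds: float = 5.0) -> float:
    start = time.monotonic()
    while True:
        try:
            if requests.get(health_url, timeout=2).status_code == 200:
                return time.monotonic() - start
        except requests.RequestException:
            pass  # still down; keep polling
        time.sleep(poll_seconds)


# inject_failure()  # e.g. kill a process or drop a table on the primary node
# print(f"recovered in {time_to_recovery('https://app.example.org/healthz'):.0f} s")
```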


“Fantastic conference focused at testing professionals. Good opportunity to network, new cutting‑edge products and ideas. Looking forward to next year.” Nadine Abley, GTF Test Manager, JP Morgan


www.softwaretestingconference.com



THE NATIONAL SOFTWARE TESTING CONFERENCE

Cecilia Rehn, General Manager and Editor, TEST Magazine, gives her highlights ahead of The National Software Testing Conference.

I am pleased to say that The National Software Testing Conference has returned to The British Museum for a third year running, with a bigger programme than ever before, full of exceptional speakers. With more than 50 speakers covering everything from test automation, test management culture, UX, agile and mobile application testing to DevOps, there will be something for everyone! The conference will open with a keynote from Mike Shaw, who leads HPE Software’s thought-leadership work around digital disruption and its impact upon IT. The two-day programme will see speakers from a range of different verticals, with keynotes from Paula Thomsen, Head of Quality Assurance at AVIVA, on diversity within assurance teams, and Shane Kelly, former Head of Digital Change at William Hill, who will be talking about digital change and test transformation.

Other presenters include:
• Peter Francome, Head of Test, Quality and Delivery Innovation, Virgin Media.
• Inès Smith, Quality Assurance and Controls Lead, UBS.
• Ajit Dhaliwal, Director, IT Delivery, Deployment and Testing, Vodafone Limited.
• Sally Goble, Head of Quality, and Jonathan Hare-Winton, QA Automation Engineer, Guardian Media.
• David Rondell, Head of Digital Platform QA, Royal Bank of Scotland.
• Arun Jayabalan, Test and Release Manager, Public Sector.
• Rod Armstrong, Programme Quality Manager, easyJet.
• Dan Giles, CSI Manager, Sony Computer Entertainment Europe.
• John Stinson, Test Automation Architect, Investment Bank.

Martin Wrigley, Executive Director, Application Quality Alliance (AQuA), will be hosting a session comprised of panel discussions and presentations on the state of mobile testing and the forthcoming challenges in the arena. The winner of The Lifetime Achievement Award at the 2015 European Software Testing Awards, Geoff Thompson, Director and Managing Consultant, Experimentus Ltd, will be presenting on the meaning of shift left, and the benefits of a shift-left approach to delivery. “This is a fantastic opportunity to share my thoughts and insights with other experienced testing professionals from across the industry and in return hear their views and opinions. I am excited to share my ideas and experiences so that others can benefit by putting them into practice within their organisations,” said Myron Kirk, Head of Test CoE, Boots, whose presentation will leave attendees asking: Will central test teams still exist? What skills will testers need? Is this evolution or a revolution?

NETWORKING AND NEW BUSINESS OPPORTUNITIES

The National Software Testing Conference provides an ideal environment for learning, networking and developing skills. “The British Museum is unrivalled as a venue,” John Stinson, Test Automation Architect, Investment Bank, said. “The conference is a great chance to make contacts with people in similar roles from other industries.” Rod Armstrong, Programme Quality Manager, easyJet, notes that it’s important for individuals working within the industry to attend conferences such as this as “it’s an opportunity to network, meet new people at your level and above and come away feeling inspired.” “Testing and quality is going through a period of unsurpassed change,” observes Peter Francome, Head of Test, Quality and Delivery Innovation, Virgin Media. “Where software is the organisation, whether it’s building digital trust or protecting brand value, coupled with the threat and opportunity agile and DevOps present to the testing community, it’s never been more important to understand where the market and profession is developing.” Paula Thomsen, Head of Quality Assurance, AVIVA, adds that it’s important for senior IT professionals to take the time to attend conferences, step away from the day-to-day tasks and see what’s out there. It’s a chance to borrow ideas, learn from others’ failures and successes, and be pointed in the right direction in terms of further skills development. “Attendance at conferences such as The National Software Testing Conference is essential for anyone who wants to own their own career,” she said.



The National Software Testing Conference checklist

Check out the speakers and presentation topics in the programme
You’ll see each presenter’s biography and presentation topic, ensuring you are well equipped to decide which presentations are the most important to you. You can find the conference programme at www.softwaretestingconference.com/conferenceprogramme.

Ask your questions during the Q&A panels
Each conference day will conclude with an interactive Q&A panel – where you’ll be able to ask the panellists for insight on pressing matters.

Visit the market-leading exhibition
Don’t forget to take in the market-leading exhibition, where you’ll be able to source key information and new products from leading vendors. There will be plenty of time to walk around during the two-day conference.

Enter the treasure hunt
And when you’re walking around the exhibition, don’t forget to participate in the Treasure Hunt for the chance to win two VIP tickets to The European Software Testing Awards! The Treasure Hunt form can be found in your delegate bag.

Share your opinion in the roundtables
There is also a series of workshops taking place during the conference – you can find more information about the topics in the programme. Spaces are limited, so make sure you don’t miss out on this great opportunity to learn from your peers and share your views.

Get out there and network
Apart from gleaning knowledge imparted by the excellent speakers, The National Software Testing Conference has been designed to offer prime networking opportunities. Whether it is making new acquaintances during the Gala Dinner or conversing over new business opportunities on the exhibition floor, we encourage you to get out there and speak to the other attendees!

Enjoy the Gala Dinner on the 17th of May…
Ticket prices include attendance at the Gala Dinner, which will be held at The Grand Connaught Rooms – a short walk from the British Museum. Come along for a night of entertainment, prizes, networking, delicious food and drinks.

…And come back for more on the 18th
Make sure you wake up in time for day two of the conference – there’ll be just as many exciting presentations, talks and roundtables to attend!

www.softwaretestingconference.com


The National Software Testing Conference
17-18 May 2016 | The British Museum, London

As well as bringing you excellence, best practices and practical advice from the software testing and QA scene, The National Software Testing Conference will include two DevOps streams for those interested in seeing how the new cultural IT movement will impact upon IT teams and change organisations from within. Presentation topics range from case studies of successful DevOps implementations to transformation strategy, culture and mindset shifts, continuous delivery, microservices, and more.

Taking the stage on the first day will be Stephen Williams, VP Engineering, Ticketmaster International, who will be presenting on a DevOps case study. “My talk is on how we defined a DevOps strategy and programme within Ticketmaster, which allows flexible planning of how geographically distributed teams progress,” Williams explained.

Another speaker, Keith Watson, Agile Delivery Manager at Ordnance Survey, will be “mentioning some hints and tips on implementing DevOps” during his presentation. “In particular, how to make the first steps on the DevOps journey, how to present a compelling business case to convince stakeholders to invest in ideas and how to work with the politics of an organisation,” he said.

Other key speakers include:
• Andrew Hardie, DevOperative at AgileSphere, Ministry of Justice.
• Milan Juza, Agile Transformation Director, Barclaycard.
• Mike Dilworth, Agile & DevOps Transformation, Sainsbury's.
• Deb Bhattacharya, Scrum Master and DevOps Specialist, HSBC Commercial Banking.
• Marco F. Delaurenti, Lead DevOps, Blockchain.
• Feidhlim O'Neill, Group Head, TechOps, Wonga.
• Pallavi Jain, DevOps Team Lead, Sportingbet.
• Seb Chakraborty, CTO, Centrica Connected Home.
• James Chapman, Interim Head of Engineering and Testing & Agile Transformation Coach, River Island.


Come learn from peers who have successfully begun their DevOps journey!


WHY IS DEVOPS IMPORTANT TO INCORPORATE WITHIN ORGANISATIONS?

The culture and methods of DevOps are being adopted by many organisations because they are demonstrating real business value by saving time, increasing productivity and improving speed to market of new ideas. Discussing why DevOps should be integral to an organisation, Watson said: “Because of the need to have specialist skills in an IT organisation to deliver software, barriers between development and operations can build up very easily. The way to develop modern software is by having these teams work together. You can achieve this using a combination of culture change, business transformation and DevOps automation philosophy.”

STRENGTHENING YOUR OWN DEVOPS KNOWLEDGE

Williams shares his advice on expanding your DevOps knowledge: “It’s always great at a conference such as this because you have such a wide array of people from different businesses, different viewpoints and different business domains and problems. Understanding other people’s approaches and the problems they’ve solved can only be a good thing for expanding and strengthening one’s own knowledge and understanding of the DevOps domain space.

“It’s also a great opportunity to network, making those relationships that can help you draw on expertise post conference, as well as other potential business opportunities.”



Syndicate Supplement | May 2016

Software knights of the roundtable

The first annual TEST Focus Groups event took place in late March, and saw over 100 delegates congregating in syndicate rooms to debate topics and issues important to them. Senior testing and QA professionals were able to discuss and learn from each other on varied topics including agile, big data, test data management and more. We launched the TEST Focus Groups to help our readers who wish to discuss their challenges in a meaningful and structured manner, with a view to finding pragmatic and workable solutions to what are invariably complex issues. We’ve put together this Syndicate Supplement to ensure that learning from the day is shared with the wider testing community, in the hope that it will help move the market forward. The Focus Groups will be returning in October, but with DevOps topics, which we’re very excited about. The DevOps Focus Groups will take place on the 18th of October, and include topics such as managing culture shifts, automation, cloud technologies and more. If you’re interested in attending future roundtable discussions, please don’t hesitate to reach out to me.

CECILIA REHN
EDITOR OF TEST MAGAZINE

cecilia.rehn@31media.co.uk

CONTENTS | SYNDICATE SUPPLEMENT | MAY 2016

ANALYTICS
Using analytics to improve testing and application delivery – 2

TEST DATA MANAGEMENT
The future of test data management – 6

PERFORMANCE TESTING
Continuous performance testing in an agile environment – 10

BIG DATA
Simplifying big data testing – 14

AGILE
Accelerating software delivery – new models – 18

MOBILE APP TESTING
Challenges for mobile testing in the enterprise – 22

TESTER TRAINING AND DEVELOPMENT
What learning and development activities are on a tester’s CV? – 26

PHISHING
Big game phishing – 30

GENERAL MANAGER AND EDITOR Cecilia Rehn cecilia.rehn@31media.co.uk +44 (0)203 056 4599

ADVERTISING ENQUIRIES Anna Chubb anna.chubb@31media.co.uk +44 (0)203 668 6945

PRODUCTION & DESIGN JJ Jordan jj@31media.co.uk

EDITORIAL INTERN Jordan Platt

31 Media Ltd, 41‑42 Daisy Business Park, 19‑35 Sylvan Grove, London, SE15 1PD +44 (0)870 863 6930 info@31media.co.uk www.testingmagazine.com

PRINTED BY Pensord, Tram Road, Pontllanfraith, Blackwood, NP12 2YA

@testmagazine TEST Magazine Group

© 2016 31 Media Limited. All rights reserved. TEST Magazine is edited, designed, and published by 31 Media Limited. No part of TEST Magazine may be reproduced, transmitted, stored electronically, distributed, or copied, in whole or part without the prior written consent of the publisher. A reprint service is available. Opinions expressed in this journal do not necessarily reflect those of the editor of TEST Magazine or its publisher, 31 Media Limited. ISSN 2040‑01‑60


Using analytics to improve testing and application delivery

Oded A. Tankus, Project Manager, Assure, discusses analytics and its implications for testing process improvement and application delivery.


ANALYTICS

The TEST Focus Groups conference was conducted on March 22nd at the Park Inn Hotel, London. Led by Assure’s CEO, David White, three roundtable discussions were conducted on the subject of ‘Using Analytics to Improve Testing and Application Delivery.’ Assure was approached to lead these roundtable discussions, and welcomed 28 professionals who came from a variety of industries and mostly held executive and managerial roles (68%); 50% of the participants came from the financial industry. In the three roundtable sessions we touched on all aspects of analytics in the testing process and its implications for testing process improvement and application delivery. This article summarises some of the topics that were discussed in the form of Q&A, and describes the best practices, insights and experiences of the participants from the perspective of the metrics used and the insights gained in improving the test process and shortening the application delivery time without compromising quality.

ORGANISATIONAL AND MANAGEMENT SUPPORT

How can we get organisational and management support for the QA analytics initiative?

The organisation defines a framework that drives the QA improvement function. Management understands that their success is directly proportional to the quality of the delivered applications. Management always needs to balance committing resources to deliver quality applications against lowering QA costs. Budgets are justified by continuously demonstrating the quality of application delivery through efficient quality processes.

THE ANALYTICS PROCESS

How can we streamline the analytics process, maximising the extraction of meaningful insights?

The analytics process must be structured and well planned. The following are some of the points discussed:
1. Analytics process components:
   • A communications platform.
   • A feedback system.
   • Training.
   • A text analytics engine.
2. Real time versus right time – information and insights must be provided at the right time, and not necessarily continuously.
3. Insights – the process of interpretation must be standardised so that different managers come to the same conclusion after analysing the data.

ANALYTICS

What should be in the analytics toolbox?

Tools that extract meaningful insights and support:
• Descriptive analytics – creating simple counts, distributions and visualisations describing your data (a toy example follows this list).
• Predictive analytics – predicting organisational and process behaviour, e.g., can you predict at the beginning of a release how it will end?
• Prescriptive analytics – prescribing corrective actions and suggesting mitigations after identifying aberrations, risks, etc.
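As a toy illustration of the descriptive layer, simple counts and distributions can be computed directly from exported defect records. The record fields below (severity, status) are illustrative assumptions rather than any specific tool's export format; the sketch is in Python.

    from collections import Counter

    defects = [
        {"id": 1, "severity": "high", "status": "open"},
        {"id": 2, "severity": "low",  "status": "closed"},
        {"id": 3, "severity": "high", "status": "rejected"},
    ]

    def distribution(records, field):
        # Descriptive analytics at its simplest: how many records per value.
        return Counter(r[field] for r in records)

    print(distribution(defects, "severity"))  # Counter({'high': 2, 'low': 1})
    print(distribution(defects, "status"))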

ODED A. TANKUS PROJECT MANAGER ASSURE

Oded has over 30 years’ experience in data sciences and analytics, BI implementation and methodology. He specialises in application development, software quality assurance and project management metrics, and he has an M.Sc. in Applied Statistics.


DASHBOARDING

What is the best way of developing and designing a dashboard?

‘Dashboarding’ is defined as the development of an effective dashboard – a tool that supports the way to gain insights, allowing the information consumer to think business instead of data. Some considerations for dashboarding:
1. Follow an accepted methodology for dashboarding.
2. The dashboard must support a specific process/goal in order to be useful.
3. Do not mix strategic metrics with operational metrics, since this causes ambiguity and confusion.
4. Provide drill down capabilities to better focus on and target your issues. Define threshold indicators and colour coding to add intelligence (a minimal sketch follows this list).
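A minimal sketch of point 4 above: a threshold function that maps a metric value to a red/amber/green indicator for a dashboard tile. The metric and the threshold values used are illustrative assumptions.

    def rag_status(value, amber_at, red_at, higher_is_worse=True):
        # Map a metric value to a red/amber/green dashboard flag.
        if not higher_is_worse:  # e.g. pass rates, where lower is worse
            value, amber_at, red_at = -value, -amber_at, -red_at
        if value >= red_at:
            return "red"
        if value >= amber_at:
            return "amber"
        return "green"

    # e.g. a 12% defect reopen rate against 5% (amber) / 10% (red) thresholds
    print(rag_status(0.12, amber_at=0.05, red_at=0.10))  # "red"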

METRICS METHODOLOGY

What is the best way to develop ‘meaningful’ metrics?

The de‑facto methodology that provides the most cost effective list of metrics is the GQ(I)M (goal‑question‑indicator‑metric) methodology (sketched below), where:
• Goals need to be defined based on organisationally adopted quality frameworks (TMMi, ISO, IEEE, etc.).
• Questions need to be defined so that their answers meet the goals.
• Defined metrics are a natural outcome of the questions.
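To keep the chain auditable, the goal-question-(indicator)-metric trail can be held as plain data, so every metric can be traced back to the question and goal that justify it. A minimal sketch; every goal, question and metric name in it is an illustrative assumption.

    gqim = {
        "goal": "Improve delivered application quality (per an adopted TMMi goal)",
        "questions": [
            {
                "question": "Are defects escaping to production?",
                "indicator": "Trend of escaped defects per release",
                "metrics": [
                    "defects found in production per release",
                    "total defects found per release",
                ],
            },
        ],
    }

    # Each metric is traceable back to its question and goal.
    for q in gqim["questions"]:
        print(gqim["goal"], "->", q["question"], "->", q["metrics"])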

DATA

What are the major issues and misconceptions surrounding the data asset?

The two major issues are:
1. Data quality and standardisation:
   • Organisations are not aware of the low quality of their data.
   • Analytics will not solve the data quality issue, but only highlight where data quality is weak.
   • Improve the quality of the data asset and form a trustworthy base for analytics.
   • Data quality issues are treated through process improvement initiatives.
2. Data structures:
   • Data must have a stable and consistent unified data structure, defined in entity‑relationship diagrams reflecting business entities, relationships and attributes.
   • Data structures inherently contain natural hierarchies, which are translated to and defined for filtering and drill down capabilities when dashboarding.

PROCESSES

On which processes do organisations focus when considering the analytics initiative?

The three major quality processes that directly impact the analytics initiative are:
1. Requirements management – requirements must be modelled. Requirements modelling techniques include use cases and ERDs. In the absence of good requirements, test case documentation is used.
2. Test automation – used to decrease the delivery time of software by decreasing the time and effort allocated to quality. Effective test planning can dramatically simplify the test automation effort. Considerations for test automation include:
   • Identify high risk business areas.
   • Do not try to automate everything. Define clear criteria for automation.
   • Automate test cases that always pass or always fail, and test cases with a high number of runs.
3. Defect management – effective management of the defect lifecycle will vastly increase the quality of delivered applications. Metrics in this domain are usually filtered by defect severity and include (see the sketch after this list):
   • Counts and average times between defect statuses, to identify bottlenecks.
   • End‑to‑end times along the defect status network, e.g., open‑close.
   • Counts and average times for cycles of inefficiency on the network – e.g., the number of defects and average times on the open‑reject‑open path.
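As a sketch of those defect-lifecycle metrics, average transition times can be computed from a status-change event log. The event format here (defect id, status, timestamp) is an illustrative assumption, not a specific tool's schema.

    from collections import defaultdict
    from datetime import datetime

    events = [  # one defect's history: the open-reject-open inefficiency cycle
        ("D-1", "open",   datetime(2016, 3, 1)),
        ("D-1", "reject", datetime(2016, 3, 3)),
        ("D-1", "open",   datetime(2016, 3, 4)),
        ("D-1", "closed", datetime(2016, 3, 9)),
    ]

    durations = defaultdict(list)
    last_seen = {}
    for defect, status, when in sorted(events, key=lambda e: (e[0], e[2])):
        if defect in last_seen:
            prev_status, prev_when = last_seen[defect]
            durations[(prev_status, status)].append((when - prev_when).days)
        last_seen[defect] = (status, when)

    for transition, days in sorted(durations.items()):
        print(transition, "average days:", sum(days) / len(days))
    # ('open', 'reject') and ('reject', 'open') together expose the
    # open-reject-open path the article flags as an inefficiency cycle.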

CONCLUSIONS

The above focused on a small portion of the subjects raised in the roundtable discussions. The major areas of discussion focused on data quality, test automation and the tools and techniques surrounding dashboarding, metrics programmes and analytics. Most organisations have implemented basic quality metrics for descriptive analytics; only a few have evolved to predictive and prescriptive analytical capabilities.



The future of test data management

Iain Chidgey, VP and General Manager, International, Delphix, reviews the key challenges facing testers and developers following the TEST Focus Groups event.


TEST DATA MANAGEMENT

In today’s culture of immediacy, speed to market has never been more important to ensure organisations can conquer the competition and deliver on the promise of high quality business applications. Yet, grappling with a plethora of issues, the software testing world is rife with delay‑inducing challenges. From access to quality data, difficulties in ensuring collaboration between teams and deploying staff with the correct skills to effect change, many have hailed agile and DevOps methodologies as the answer. When speaking to a number of industry insiders, however, we found out exactly what bottlenecks are currently holding them back from success.

GAINING ACCESS TO ACCURATE, HIGH QUALITY DATA IS LIKE CHASING GOLD DUST

While having access to secure data on demand is a critical success factor in delivering timely and accurate releases, confessions from the testing community have shown that testing environments are regularly limited due to data issues. In fact, recent research from Delphix has shown that staff are waiting up to a week or longer to refresh non‑production environments from production. These difficulties in managing test data have also led to bugs creeping into development cycles and impacting the bottom line.

WORKING TOGETHER DOESN’T ALWAYS GO TO PLAN

Enabling teams to achieve mutual goals is also an area for improvement, with feedback indicating that when it comes to testing and development teams, it’s traditionally a finger‑pointing exercise for blame when processes and outcomes don’t align. Getting data into the hands of those who need it, when they need it, is an ongoing challenge, and collaboration traditionally has not been considered a prerequisite for success. As time is lost while teams work independently without common goals, a cultural dilemma is arising.

AUTOMATION IS THE AIM, NOT THE REALITY

While collaboration between development and operations is important, leading organisations also encourage developers to embrace ops functions and deliver scale through automation. From software architecture and design to system admin and production support, success is synonymous with a style of IT management and implementation that places emphasis on automation and iterative delivery of software. Unfortunately, legacy infrastructure often doesn’t support modern approaches and deployment automation, and there is still a large portion of the market that has yet to fully embrace this practice and remove delay and inefficiency.

IAIN CHIDGEY VP AND GENERAL MANAGER INTERNATIONAL DELPHIX

In his 20 years of experience in the IT industry, Iain has held senior sales roles for various high growth software organisations, including Portal and Oracle both in the EMEA and the US. He has a BSc with Honours in Systems Modelling from Sheffield Hallam University.


THE NEED FOR SPEED MEANS SACRIFICING ON SECURITY – OR DOES IT?

Despite facing unprecedented risk of data loss, teams are opting for agility over security, deciding to move faster rather than waiting for data to be masked and secured for use in non‑production environments. As businesses opt to expand the population of employees who have access to full production data, this creates challenges in ensuring data is safeguarded and protocols are adhered to.

MEASUREMENT IS THE MISSING LINK

While automation and collaboration are challenges impacting speed to market, they aren’t themes that can provide a tangible metric for measurement. Across the board, the way we measure productivity and success today is varied and wide ranging. From test coverage to customer feedback and the volume of bugs found in production, teams struggle to define and measure their productivity effectively.

THE GOLDEN TICKET

Of all the challenges our industry leaders have addressed, the issues all have one underlying connection – data. The ability to copy, secure and deliver data on demand is a critical success factor for business. Only by making the underlying data more agile can businesses reduce delays. This is where data virtualisation can step in to do the heavy lifting.


By virtualising at the data level, copies no longer need to be duplicates; rather, data blocks can be shared. This means environments can be served up in minutes, not months. Data sets can be refreshed and reset on demand, and environments can be bookmarked and shared between users. This reduces the time it takes to provision data for applications and limits the hand‑offs between teams competing for data access. In turn, this means self‑service and automation take precedence and empower users to copy and share data without fear. This fosters collaboration, as data can finally be in the hands of those that need it, when they need it. As this approach means data can be masked during data delivery without any need for manual intervention, organisations can also finally control data access and ensure security policy is up to scratch without enduring bottlenecks to speed.
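As a toy model of the block-sharing idea described above (and only that – this is not how any particular product is implemented), a 'virtual copy' can be thought of as a table of pointers to shared blocks, cloned in constant time and diverging only on write:

    class VirtualCopy:
        """Toy copy-on-write data set: copies share blocks until one writes."""

        def __init__(self, blocks):
            self.blocks = dict(blocks)  # block id -> data; copying the dict copies pointers only

        def clone(self):
            # Provisioning a new environment copies references, not data.
            return VirtualCopy(self.blocks)

        def write(self, block_id, data):
            # Copy-on-write: only this copy diverges; others keep sharing.
            self.blocks[block_id] = data

    prod = VirtualCopy({0: "customer rows", 1: "order rows"})
    test_env = prod.clone()                 # served up in minutes, not months
    test_env.write(1, "masked order rows")  # masking touches one block, in one copy
    print(prod.blocks[1], "|", test_env.blocks[1])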



Continuous performance testing in an agile environment

Henrik Rexed, Performance Specialist, Neotys, explains the power of performance testing in agile.


PERFORMANCE TESTING

Let’s face it: agile is a fact of life. Perhaps you’re not a ‘full agile’ shop, and maybe you’re not doing continuous integration or even talking about DevOps yet, but the reality is that pressures are increasing to realise many of the benefits, like quality and speed, that are inherent in agile or ‘agile‑like’ development methodologies. When you start becoming more agile, developers churn out code at a rapid pace, and many testers struggle to keep up. Furthermore, testers in agile teams often have responsibility for automated testing, unit testing and regression testing, as well as load and performance testing. In this environment, you need to be able to keep up with the speed of development while also meeting heightened expectations of quality.

Performance testing in agile is even more powerful than in standard development environments because you can:
1. Avoid late performance problem discovery: when load and performance testing are pushed off until the end of a development cycle, there is often little to no time for developers to make changes. This can cause teams to push back release dates and delay getting features out the door that customers need.
2. Make changes earlier, when they are cheaper: by including load and performance testing in continuous integration testing processes, organisations can catch performance issues early, before they get much more costly to fix. This is especially true on agile teams, where discovering a performance problem weeks later could mean that it actually occurred several builds ago, which makes the task of pinpointing the root cause a nightmare.

In the same way that combining agile with load testing can provide unique benefits, it can also present your teams with unique challenges they may not have experienced in the past, such as:
1. Shorter development cycles require more tests in less time: with agile, development cycles are much shorter, and load and performance testing can get pushed off until the last day of a sprint, or sometimes it’s done every other sprint. This can often result in code being released without being adequately tested, or user stories slipping to the next release once they have been tested. Conceptually the solution is to do the testing earlier in the development cycle, but that’s easier said than done, with many teams lacking the resources and tools to make it happen.

2. Developers need feedback now: agile developers need to know more than just the fact that their code is causing performance issues: they need to know when their code started causing problems and what story they were working on when the issue started. It’s a huge pain for developers to be forced to go back and fix code for a story they worked on weeks or months ago. It also means they can’t spend time working on getting new features out the door. Detecting performance issues early in the cycle, so you can deliver important feedback to developers quickly, is crucial to saving costs.
3. Automating the handoff from Dev to Ops can feel risky: while DevOps and continuous deployment are still fairly young practices, the fear felt by operations teams that new changes in code will slow down or even crash the application when it is deployed in production has been around forever. Automating some of the testing in the continuous integration process can help to ease some of this fear, but without adequate performance testing included, the risk is still real. Ops teams know well the huge impact application downtime can have on the business.


OVERCOMING CHALLENGES

The following best practices can help you maximise the advantages – and overcome the challenges – of load testing in an agile environment.

PUT PERFORMANCE SLAS ON THE TASK BOARD

Every application has minimum performance service level agreements (SLAs) to meet, but agile teams are often more focused on adding features and functionality to an application than optimising the application’s performance. User stories are typically written from a functional perspective without specification of application performance requirements. Performance needs to be somewhere on the task board if the team is going to give it attention.
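Where the task board makes an SLA visible, an automated check can make it enforceable. Below is a minimal sketch of expressing a story's response-time SLA as a test; the endpoint, the 800 ms figure and the use of Python's requests library are all illustrative assumptions, not taken from the article.

    import time
    import requests  # assumed available; any HTTP client would do

    SLA_SECONDS = 0.8  # e.g. 'search responds within 800 ms' from the story card

    def check_sla(url, sla=SLA_SECONDS):
        # Time one request and fail loudly if the SLA is breached, so the
        # story cannot be marked 'done' while performance is off the board.
        start = time.perf_counter()
        response = requests.get(url, timeout=10)
        elapsed = time.perf_counter() - start
        assert response.ok, f"{url} returned {response.status_code}"
        assert elapsed <= sla, f"{url} took {elapsed:.2f}s against a {sla}s SLA"

    # check_sla("https://example.com/search?q=test")  # hypothetical endpoint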

WORK CLOSELY WITH DEVELOPERS TO ANTICIPATE CHANGES

One of the benefits for testers working in an agile environment is that they typically learn about updates on development tasks during daily stand‑ups or similar meetings.

HENRIK REXED PERFORMANCE SPECIALIST NEOTYS

Henrik has been orchestrating and conducting performance tests for over 10 years, delivering large-scale cloud testing in the most demanding business areas, such as trading applications, video on demand (adaptive streaming) and sports websites. Prior to Neotys, Henrik worked as a .NET architect for Logica and as a performance testing expert on large accounts in a variety of industries, including insurance, automotive, retail and energy.


In order to get the maximum benefit from this level of collaboration, testers should constantly be thinking about how the stories that are currently being coded will be tested. Will these require completely new load tests? Will they cause errors in current test scripts? Can you get away with slight modifications to current test scripts if you plan ahead? Most of the time, these are small changes, so testers can stay ahead of the curve if they keep engaged with the team.

INTEGRATE WITH BUILD SERVER

Even if you aren’t completely on the agile bandwagon yet, you probably have a build server that kicks off some automated tests: unit tests, smoke tests, regression tests, etc. In the same way that performance goals need to be added to the task board, performance tests should be among the tests that occur with every build. This can be as simple as setting up a trigger to have the build server kick off the test, but could include displaying test results within the build tool, depending on how sophisticated the integration is. Ideally, you want the person who kicked off the build to instantly see the results and know which changes went into that build, so they can be fixed if there is a performance issue.

CI + NIGHTLY BUILD + END OF SPRINT LOAD TESTING

The difference between continuous integration builds, nightly builds, and the builds produced at the end of sprints can be huge. We’re talking the difference between a single change committed to a version control server, versus all the changes committed in a day, versus all the changes committed during a sprint. With this in mind, you should adjust your load tests to the type of build you’re running. The best practice here is to start small and internal. For CI builds that are getting kicked off every time someone commits a change, you want these tests to run quickly so that you can get results back to that developer about how his/her changes affected the system. Consider running a small performance test with the most common scenarios covered, with the typical load on your application being produced from your own internal load generators. For nightly builds, ramp it up to include more of the corner case scenarios and increase the load to what you see at peak times, to see if any performance issues were missed during the CI tests. At the end of the sprint, you’ll want to go all out: consider generating load from the cloud to see what happens when users access your app through the firewall. Make sure every SLA on the constraints list is passed, so that every story completed during the sprint can be marked as ‘done.’

SUMMARY

Agile development is increasing the productivity of teams and the quality of applications. When adding load and performance testing into this process, careful planning should occur to ensure that performance is a priority in every iteration. To ensure you are getting the most value from combining agile methodologies and load testing, it is advised you:
• Make sure performance SLAs are on your task board or constraints list to guarantee the code associated with a task performs well before that task is marked ‘done.’
• Collaborate with developers to anticipate when new code changes require changes in performance test scenarios.
• Have performance tests automatically kicked off with every new build and track performance trends from build to build.
• Ramp up complexity and load for tests with the size of the build to keep build times down (CI => performance smoke test, nightly build => full load test, end of sprint => stress test with cloud load generators); a configuration sketch follows this list.
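As a sketch of that ramp-up advice, the mapping from build type to load profile can live in a small piece of configuration that the build server's trigger reads. Everything here – the profile numbers, scenario names and the run_load_test stub – is an illustrative assumption, not a specific tool's API.

    PROFILES = {
        "ci":            {"scenarios": ["most_common"],                 "virtual_users": 50,   "source": "internal"},
        "nightly":       {"scenarios": ["most_common", "corner_cases"], "virtual_users": 500,  "source": "internal"},
        "end_of_sprint": {"scenarios": ["all"],                         "virtual_users": 5000, "source": "cloud"},
    }

    def run_load_test(build_type: str) -> None:
        # Look up the profile for this build type and hand off to the load
        # tool; a real integration would fail the build on SLA breaches.
        profile = PROFILES[build_type]
        print(f"[{build_type}] {profile['virtual_users']} virtual users, "
              f"scenarios={profile['scenarios']}, generators={profile['source']}")

    run_load_test("ci")  # e.g. called from the build server's post-build trigger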


The Fastest, Most Automated Load & Performance Testing Solution on the Planet. Period.

• Test 5‑10x faster than with other solutions.
• Reduces script maintenance time by 90%.
• First to support HTTP/2 apps.
• Integrations with CI servers for agile and DevOps.
• Fully integrated mobile testing.

www.neotys.com


Simplifying big data testing


While the trends around the use of big data are morphing fast, so are the accompanying challenges. Big data testing is one of the key challenges in a successful big data implementation. Umesh Kulkarni, Head of Presales and Business Solutions and Sagar Pise, Senior Technical Architect, L&T Infotech, explain.


BIG DATA

Often, organisations struggle to define the testing strategy for structured and unstructured data validation, working with non‑relational databases, setting up an optimal test environment and finding the right talent to test complex ecosystems. The key questions we posed during our TEST Focus Groups roundtable discussions were: how do organisations devise a one‑stop big data testing solution in the knowledge that each big data project has many different aspects and priorities for its quality assurance? And how do they do it in a simplified and structured way?

BIG DATA VERSUS ‘LOTS OF DATA’

At the outset of our discussions, it was important to clarify what is meant by a ‘big data’ problem, and identify what makes a ‘big data’ problem different from a ‘lots of data’ problem. The three key characteristics of big data are:
• Volume – data above a few terabytes.
• Velocity – data needs to be processed quickly.
• Variety – different types of structured/unstructured data.

High‑level indicators that an organisation is dealing with big data are:
• Traditional tools and software technologies are failing to process the data.
• Data processing needs a distributed architecture.
• Data processing involves structured, semi‑structured and unstructured data sets.

As discussions continued, and the roundtable participants shared their experiences of how their respective organisations are leveraging big data, it was acknowledged that there has been a major change in how data is generated and consumed. Previously, only a few entities or companies generated data and the rest of the world consumed that data. Today, everyone is generating data and everyone is consuming data. And whilst most organisations have started focusing on big data testing, very few have a clearly defined big data test strategy.

BIG DATA FLOW AND ASSURANCE CHECKPOINTS

When implementing a big data testing strategy, it is important to understand the big data flow and the assurance checkpoints. Data coming from heterogeneous data sources, such as Excel files, flat files, fixed length files, XML, JSON and BSON, binary files, MP4, Flash files, WAV, PDF files, Word documents, HTML files, etc., needs to be dumped into big data stores in order to process it further and get meaningful information out of it. Before moving these data files into big data stores, it is always preferable to verify source file metadata and checksums as the first level assurance checkpoint (a minimal sketch follows this section). Once heterogeneous source files are dumped into big data stores, as part of pre‑big data processing validation, it is important to verify whether files are dumped as per the dumping rules, and that they are dumped completely, without any data discrepancy in terms of extra, duplicate or missing data. Once data dumping is completed, an initial level of data profiling and cleansing is done, followed by actual functional algorithm execution in distributed mode. Testing of cleansing rules and functional algorithms are the main assurance checkpoints at this layer. After the execution of data processing algorithms, the cream of the data obtained is then given to downstream systems, such as the enterprise data warehouse for historical data analysis and reporting systems. Here report testing and ETL testing act as assurance checkpoints. All of these high‑level assurance checkpoints will ensure:
• Data completeness – wherein end‑to‑end data validation among heterogeneous big data sources is tested.
• Data transformation – wherein structured and unstructured data validations are performed based on business rules.
• Data quality – wherein rejected, ignored and invalid data is identified.
• Performance and scalability – wherein the scalable technical architecture used for data processing is tested.
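A minimal sketch of that first-level checkpoint: verify a source file's checksum against a manifest before ingestion. The manifest format and file name are illustrative assumptions.

    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        # Stream the file so arbitrarily large sources can be checked.
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_before_ingest(path, manifest):
        # manifest: file name -> expected checksum, supplied by the source system.
        if sha256_of(path) != manifest[path]:
            raise ValueError(f"{path}: checksum mismatch, refusing to ingest")

    # verify_before_ingest("trades.csv", {"trades.csv": "<expected sha256>"})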

TRADITIONAL DATA PROCESSING VERSUS BIG DATA PROCESSING

UMESH KULKARNI HEAD OF PRESALES AND BUSINESS SOLUTIONS L&T INFOTECH

Umesh is Head of Presales and Business Solutions for L&T Infotech’s ‘Testing Service Line’. He is involved in providing strategic direction to various consulting assignments, researching new tools and technology, defining software testing processes, and assisting organisations in optimising testing efficiency through a mix of QA tools and methodologies. He has 20 years’ industry experience, 15 of which have been spent in the software testing domain.


SAGAR PISE SENIOR TECHNICAL ARCHITECT L&T INFOTECH

Sagar has more than 13 years of experience in the IT industry and has strong experience in database internals, air traffic control systems, automation testing and big data testing. He currently manages the Big Data Testing Solutions team within the Solutions Group of Testing Service Line at L&T Infotech.

Discussions around the differences between traditional data processing and big data processing models raised questions with respect to tools and technology, skill sets, processes and templates, and cost. When considering the big data characteristics of volume, velocity and variety, the following challenges and recommended approaches to deal with them were highlighted at the roundtable sessions:
• 100% test coverage versus adequate test coverage: focus is required on adequate test coverage, by prioritising projects and situations.
• Setting up a production‑like environment for performance testing: it is recommended to simulate and test‑stub components which are not ready and available during the testing phase.
• High scripting effort while dealing with a variety of data: this requires strong collaboration among testing, development, IT and all other teams, along with a strong resource skill set.

KEY COMPONENTS OF A BIG DATA TESTING STRATEGY

The recommended big data testing strategy should include the following:
• Big test data management.
• Big data validation.
• Big data profiling.
• Big data security testing.
• Failover testing.
• Big data environment testing.
• Big data performance testing.

EMERGING BIG DATA SKILL SETS AND ONGOING TRAINING

During the final part of the discussion, there were different views on the technical skill set required for big data testing. As big data testing is still an emerging area, it was commonly agreed that a big data tester needs strong technical skills, at least until the industry has standard big data testing tools. Along with this, big data testers need a strong knowledge of the business domain and processes in order to carry out effective big data testing. A typical QA‑BA model will play a major role in bridging business domain and process knowledge gaps. Additionally, due to the frequent changes in big data technologies and the constantly growing big data tools stack, ongoing training will be necessary.



Accelerating software delivery – new models

Colin Deady, Principal Consultant, Capita IT Professional Services, discusses how organisations can adopt the concept of agility to achieve shorter delivery cycles.


AGILE

Organisations wanting to adopt more flexible testing methodologies and achieve shorter delivery cycles face a crucial question: how far has agile drifted from the concept of agility itself? It’s the difference between agile with a capital ‘A’ and agility, where the ‘a’ is lowercase; the difference between following doctrine and proactively and positively absorbing the inevitable change that all projects experience. Although more organisations claim to want to become agile, cost constraints and deadlines can see project teams falling back on unhelpful behaviours rather than examining ways to apply creativity to current challenges. Essentially, organisations often attempt to use agile principles to shift left, but end up bouncing right when things go wrong, with the agile methodology incorrectly blamed. Capita sessions at the recent TEST Focus Group event facilitated discussions between senior testing and QA professionals about the practical application of agile techniques, as well as user and business expectations of agile as a methodology.

EXPECTATIONS OF AGILE

Is your organisation even set up to be agile? We have worked with many businesses considering just this question. To ensure genuine agility, businesses need to embrace conversation, collaboration and working cross‑organisationally. Testers and other members of the development team should be brought into the project process earlier, and the concepts of agility explained and promoted throughout the relevant parts of the business. Those organising the work and running procurement departments should understand that being agile might entail earlier investment and some new overheads, while still delivering better outcomes at a lower cost overall.

THE PEOPLE YOU NEED

The ideal agile team needs complementary skills from the technical and business sides of an organisation, and for those with specific skills to adapt and collaborate with others with different talents. A successful and effective agile team needs ‘T‑shaped’ people, with a broad range of shallow skills and deep niche vertical skills in their chosen specialisms. ‘Linear’ people – i.e., those without the broader skillset outside their speciality – may be easier (and cheaper) to find, but this often means that important aspects of agile delivery, such as test automation and test‑first approaches, are left by the wayside. A successful agile team (not just its testers) immerses itself completely in the act of deliberate discovery: the active attempt to uncover ‘unknown unknowns’ at an early stage of each iteration, and the search for features of business value to implement, while simultaneously reducing the extent of those unknowns via effective communication and collaboration. Excluding testers from early conversations (intentionally or unintentionally) means that when the output is translated into documentation, these documents may lose the creative essence of the conversation itself, and will certainly prove harder to test, with testing falling back into a reactive function. And when documentation becomes defensive, as suppliers, developers and those assessing business needs state their own case ‘louder’, it becomes impossible to read in a meaningful way. Smaller, more intelligible documents and communication make for more effective, more agile work. Behaviour driven development comes into its own here, as conversations lead to living documentation that genuinely reflects business need. This is what it means to collaborate: don’t keep secrets, especially where those unknowns could have a material impact on delivering successful outcomes.

WHAT DO ORGANISATIONS NEED TO DO?

Agile projects intentionally look to refactor their outputs as the team works to create the best solution. This implies – in fact necessitates – that continuous implementation involving testing is required. This repeated examination of what’s been created doesn’t sit well with traditional testing models that focus on manual‑heavy approaches. Agile projects depend on automated testing to keep up with the pace of the work; organisations transitioning to agile need to budget the money and effort to get test automation up and running at all levels. Agility also means being able to respond rapidly and positively to inevitable change, which can be dependent on wider agility in supporting capabilities.

COLIN DEADY PRINCIPAL CONSULTANT CAPITA IT PROFESSIONAL SERVICES

Colin Deady runs Capita IT Professional Services’ behaviour driven development (BDD) practice. A software tester by trade, he now has many years of agile delivery using BDD under his belt and is a firm advocate of genuine agility within teams. Colin extols an approach he terms zero‑known defects, which, when used alongside BDD, results in very high quality software releases.


In our TEST Focus Group we extensively discussed the common challenge of test environments. If you are working in iterations that last only a few weeks, then having to wait a month or longer for someone to provide an environment clearly does not work. Many teams now look at self‑provisioning via the use of cloud services or similar, although some discipline is required here. This is a classic example of expanding the team to ensure it covers all the necessary skills and capabilities. When thinking about agile versus agility, we also need to ask what people actually think agile means. People often consider the agile methodology to be synonymous with scrum, whereas this couldn’t be further from the truth. Many different agile methods exist and, unsurprisingly, each has different strengths. It’s also not the case that you must strictly adhere to the agile method you have chosen; again, agility means positively responding to change. Tweaking your method to best fit your circumstances is a central cornerstone of true agility. For example, Capita has many teams deployed to various client sites, each demonstrating agility and each having implemented Kanban, DSDM, scrum, etc. in slightly different ways. Plans and deadlines (yes, even when using agile) are the very essence of projects, and team sizes and goals will change as the project evolves. Our clients have successfully combined other approaches to agile with test‑first approaches such as test driven development and behaviour driven development. Our advice is: when considering agile, don’t stop your research after the first page of Google results.

SUMMARY

Agile can regain its agility as long as those using the methodology embrace its inherent flexibility. Our conversations with delegates at the TEST Focus Group event reinforced Capita’s way of thinking: collaboration and conversations can make or break an agile programme.


Increase testing quality and capacity at low cost

Software testing experts

Technology evolves every day, and organisations rely on IT to grow their business and remain competitive. Software testing has to adapt to respond to market needs, and finding the right testing partner is crucial. Testers have to implement innovative ways to deliver working software outside of traditional environments or traditional approaches. Security has also become critical for organisations; they need to control their data and protect their systems to avoid cyber-attacks. Security has to be embedded in the software development lifecycle from the outset. At Capita IT Professional Services, our testing experts have developed innovative tools to respond to market changes and new client needs. At any stage of the engineering lifecycle, we can test mobile applications, implement agile methodologies, put automated tools in place, run penetration tests and more. With over 15 years of testing experience, we are perfectly equipped to help organisations improve system quality and reduce company risk at low cost.

Capita IT Professional Services 17 Rochester Row, London, SW1P 1JB Email: Marketing.itps@capita.co.uk Web: www.capita-itps.co.uk


Challenges for mobile testing in the enterprise

Eran Kinsbruner, Mobile Evangelist, Perfecto, highlights the key takeaways from the 2016 TEST Focus Groups.


MOBILE APP TESTING

There could not be a more challenging time to be in app development or testing. With the constant and rapid proliferation of new devices, operating systems and mobile browsers, testing has to account for an astonishing number of combinations and scenarios. For enterprise teams moving to agile development and integrating automated testing into their process, it can be difficult to devise a test strategy that delivers great user experiences across all platforms. To explore some of these topics and learn more about what testers are facing today, we recently held three roundtable discussions at the 2016 TEST Focus Groups, a one‑day event for professionals in the testing industry. During our roundtables, we focused on strategies and challenges specific to mobile testing and cross‑platform support and testing. These are the key takeaways and insights from our discussions.

TO AUTOMATE OR NOT TO AUTOMATE

The percentage of organisations doing test automation continues to grow, but there are still major challenges that need to be addressed. When it comes to setting the number of releases each year, every organisation approaches it differently, based on unique needs and obstacles. The needs of a long‑established company with a large customer base are going to be quite different from the needs of a small, hungry start‑up. During the roundtable, it was revealed that release schedules run anywhere from bi‑monthly to multiple releases every day, with one participant sharing that her company has hundreds of releases a day. That said, many of the larger organisations that are doing multiple releases every day aren’t running automated tests against all of them. It’s a great idea in theory to be constantly testing if you’re practicing continuous deployment, but for many, 100% automation is simply unrealistic. Some companies use a different approach, putting off automated testing altogether for as long as they can, saying they prefer to get feedback from end‑users. These companies also justify putting off automated testing because they’re in the process of updating features; setting up automation feels like a waste, they say. This approach is a gamble, but many organisations take that gamble precisely because they use continuous delivery. Their logic is that because they’re able to fix bugs in less than five minutes, where’s the harm?

ERAN KINSBRUNER MOBILE EVANGELIST PERFECTO

Eran Kinsbruner is the Mobile Evangelist at Perfecto. Formerly CTO for mobile testing and a Texas Instruments project manager at Matrix, Eran has been in testing since 1999, with experience that includes managing teams at Kulicke & Soffa, Sun Microsystems, General Electric and NeuStar.




DEVICE COVERAGE AND USER CONDITION TESTING

Another common problem occurs when a company transitions to agile development. They introduce automation as part of the team's deliverables while still having a large backlog of legacy systems. And while they’re trying to set up automation on the new systems, they simply don’t have the resources to get through the backlog. And don’t forget about testing for outdated devices or browsers – just because users should move on, doesn’t mean they will (or can). For example, it’s not unheard of for users in the banking or federal industry to still be using company‑issued BlackBerrys or Internet Explorer 7 for security reasons. You can’t tell users to upgrade their device when it’s against company policy to use anything else. You simply have to make sure old devices and outdated software are supported.


Participants also shared being frustrated that it’s not possible to include everything in their regression tests. They’d all like to find a way to avoid manual regression testing and leave their testers to focus on exploratory testing only. And yet, many companies are approaching testing from a lean UX perspective, where everything is end user focused. That's one reason that testing for real user conditions, based on an understanding of the organisation's target personas, is so important. When used well, persona traits and real user condition testing can provide test coverage for a large percentage of a user base – creating huge value for the entire business. Of course, setting up devices to include every user condition, such as network latency and text message interruptions, can be tricky, especially when you consider that many organisations are testing between 10 and 15 different browsers across mobile and desktop. But new approaches, such as using Perfecto’s Continuous Quality Lab to set up and test real user conditions, can help cut down on laborious tasks.
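As a sketch of persona-driven coverage, the device and condition mix for a target persona can be enumerated as combinations and fed to whatever test harness is in use. All the personas, devices and conditions below are illustrative assumptions, not Perfecto's data.

    from itertools import product

    personas = {
        "commuting_bank_customer": {
            "devices":       ["iPhone 6 / iOS 9", "Galaxy S5 / Android 5.0"],
            "network":       ["3G with high latency", "office wifi"],
            "interruptions": ["incoming SMS", "none"],
        },
    }

    for persona, traits in personas.items():
        for device, network, interruption in product(
                traits["devices"], traits["network"], traits["interruptions"]):
            # Each combination becomes one real-user-condition test configuration.
            print(persona, "->", device, "|", network, "|", interruption)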

SUMMARY We learned a lot about the state of the industry at the TEST Focus Groups sessions. Many of the testing challenges that enterprises encounter are the very same issues that we focus on solving at Perfecto. That’s why, each quarter, we release The Digital Test Coverage Index, which aggregates data from testing on 5000 devices in the Perfecto CQ Lab with market share data to give readers benchmarks for determining the right mix of devices, operating systems and form factors for testing. We’d like to thank everyone who participated in the roundtable discussions. It was a great conversation with many important takeaways.



www.perfectomobile.com/testcoverage

[Infographic: The Digital Test Coverage Index – ‘Mobile Test Coverage – U.S.’ and ‘Mobile Test Coverage – EU5’. Each panel lists recommended devices (e.g., Apple iPhone 5S/6/6S/6S Plus, Samsung Galaxy S4–S6, Apple iPads, LG G4, HTC One (M9), Sony Xperia Z3) with recommended OS version, screen size, screen resolution, PPI and release date, tiered as Essential, Enhanced and Extended.]


What learning and development activities are on a tester’s CV?

BCS, The Chartered Institute for IT were pleased to welcome three energetic groups of participants to discuss all things learning and development for testers. Tia Shortall, Marketing Manager, BCS Learning & Development, highlights the topics discussed and debated at the TEST Focus Groups.


Every single participant was able to identify a development activity that they’ve undertaken that they felt worthy of putting on a CV. On average, our focus group participants were each generating 2‑3 activities that they’ve completed during their careers that they’ve found relevant and valuable. These could be formal certifications, but also included training courses and mentoring. The most common answer was the ISTQB® Foundation and the BCS/ISEB Intermediate certifications, with over half of participants saying they’d gained these. Where things got more interesting was when we turned this question on its head and asked what recruiters would like to see on a candidate’s CV. Certifications now featured more heavily than informal training, with participants generating a longer list of 5‑7 requirements each. We started to see a shopping list of potential qualifications and training opportunities emerge, across a broader spectrum of non‑technical as well as technical skills. ISTQB certifications continued to feature heavily, but with Intermediate being the sought‑after standard and agile tester certifications also coming up repeatedly. Also making the list was PRINCE2® project management.

WHICH AREAS DID PARTICIPANTS FEEL WERE MOST IMPORTANT FOR TESTERS TO INVEST TIME IN DEVELOPING?



It was clear that there are some hot topics that are increasingly sought after amongst testers. There was a lot of discussion around agile, and almost all agreed that testers needed to be confident operating in an agile environment. Hiring managers would look for agile tester certifications as evidence of this, especially when coupled with experience on agile projects.

TIA SHORTALL MARKETING MANAGER BCS LEARNING & DEVELOPMENT

Tia is the Marketing Manager for BCS’ portfolio of Professional Certifications. She works alongside IT professionals and learning and development specialists to promote professional development throughout the IT industry.






Mobile testing and test automation topped the list of ‘hard to fill’ skills, with security testing also mentioned. Our discussions took us to user experience (UX) and usability, and our groups agreed that responsibility for UX spans the life of the project from design, through development, to testing. Everyone felt that knowledge in this area would be valuable to testers, and the BCS Foundation level UX certification was seen as beneficial.

WHAT ROLE DO CERTIFICATIONS PLAY FOR TESTERS?

Certifications were described as outwardly‑visible proof of knowledge. Most participants felt that ISTQB Foundation is a minimum to get a foot through the proverbial door. They felt this provided a ‘rubber‑stamp’ of knowledge, but that the interview process then needed to examine experience and soft skills. Certifications are seen as a visible way of demonstrating that candidates have the knowledge to do the job, and of building this over time. Hiring managers felt that certifications give their team credibility in the wider organisation and help them to be seen as experts. However, our groups stressed that not all certifications are created equal. The examination method plays an integral part in the perceived value of a qualification. Many in the group felt that ‘certificate of attendance’ training contributed little to proving an individual’s competence, while at the other end of the spectrum portfolio‑ or scenario‑based examinations were seen as a very good indicator.

WHAT CHALLENGES DO TESTERS FACE WHEN DECIDING ON TRAINING AND DEVELOPMENT ACTIVITIES?

We saw a spectrum of approaches from employers, with some testers applying for a slice of a departmental training budget while others receive a substantial training allowance per head to ‘spend’ as they see fit. What all participants shared was a need to be able to demonstrate tangible value from the training that they and their teams undertake. Can you show what has changed as a result of your newfound knowledge? This was the question that all were being asked, and all found it difficult to justify investment in training if they couldn’t answer it.



SO WHAT’S THE ANSWER? WHAT TRAINING AND DEVELOPMENT IS MOST VALUABLE TO TESTERS?


It’s never quite as simple as that, is it? There’s no doubt that a well‑rounded software tester will have certifications that evidence their strong technical skills. The ISTQB Foundation and the BCS Intermediate certifications remain the accepted standard. Certifications in emerging areas – such as mobile and test automation – are very sought after, and agile tester certifications would certainly make the list. However, soft skills such as team leadership and collaboration are equally important, as is a broad range of experience across a variety of projects. And there’s one must‑have that everyone agreed on: testers need to be constantly learning, because testing doesn’t stay still for long. For those interested in pursuing certifications and continuous professional development, BCS, The Chartered Institute for IT has been guiding IT professionals for over 30 years, and has delivered over 105,000 professional certifications to date.



MAKE SOFTWARE TESTING CERTIFICATIONS WITH BCS PART OF YOUR PLAN

Make your career plan future-proof. As software needs get more complex, so does the role of the Software Tester. BCS, The Chartered Institute for IT is at the forefront of technological change. We can give you unrivalled support and prepare you for the future with our suite of ISTQB® software testing certifications and unique BCS certifications. To find out how BCS membership and professional certifications can keep you up to date, visit bcs.org/futureproof

ISTQB® is a Registered Trade Mark of the International Software Testing Qualifications Board.


Big game phishing


Noel Hannan, Infosec Consultancy Manager, Capita IT Professional Services, gives his advice on how organisations and professionals alike can avoid becoming victims of phishing.



Anyone who relies on email for professional communication will have had experience of phishing. We may think that, with our knowledge of testing and tech, we are far too savvy to be fooled by a phishing email, but these types of messages still get through. Why? Because of the way we use information to do business. Even the most up‑to‑date technical operations won’t stop well‑crafted phishing emails, because they are indistinguishable from normal emails. While software can successfully block spam, organisations still allow users to open attachments and send hyperlinks in emails, because SharePoint, Dropbox and similar services are commonly used methods of sharing information. All the variables that allow phishing to continue to take place are inherent in how we use email. But even if we cannot technically stop phishing, we can educate people to look out for it by providing security awareness training for individual users within organisations.

STAGING PHISHING EXERCISES

Capita can stage a phishing exercise in which we work with an organisation to craft emails that specifically target sections of the business where there may be a weakness, or that could be considered valuable targets. For example, if a phishing attack compromised the MD or CEO’s email, or that of a system administrator, the consequences could be catastrophic. Any phishing attack is all about trust models. What sort of messages do we inherently trust, and how could they be used to fool us? All of us receive enormous amounts of messaging on a daily basis. When you stop to consider your daily professional email traffic, Skype, voicemails, personal emails and LinkedIn updates – plus of course social media updates from Facebook, Twitter, WhatsApp, Facebook Messenger, etc. – you may receive 300‑400 messages in some form every day. You may feel confident that you know what’s important; however, as the amount of information you receive increases, this triage becomes harder. The time you are able to spend looking at each message and deciding whether to open it, action it or ignore it reduces to maybe only a few seconds. And at traditionally busy times of the year (the first Monday back at work after Christmas and New Year, for example), when you’re keen to clear your inbox, you may not be able to give effective consideration to all your emails.

WHERE CAN PHISHING OCCUR?

Now think about doing the same thing on your phone, in a crowded train or waiting at the bus stop on the way to work. Phishing is even easier on phones because we’re used to seeing non‑graphics‑based emails on a phone – and there are fewer protective measures on a phone, too. If you use your phone or tablet to manage multiple email accounts in a single inbox, how easy is it to analyse each message? People fall for phishing attacks not only because phones and tablets lack effective filtering, but because our brains can’t always filter the level of information presented to us. Smishing – phishing by SMS – is perhaps even more dangerous. A text may arrive purporting to tell you that your parcel is on its way and asking when you’ll be home to collect it, but what if that message was in fact from a potential burglar weighing up when you’ll be out? LinkedIn is a massive resource for phishers and hackers, as it can give attackers knowledge of what lies beyond a company firewall. For example, a hacker might want to identify a company’s back office personnel system. By analysing LinkedIn profiles for that company’s employees, the hacker could easily discover whether personnel have SAP or Oracle experience, and could then identify the system used. Arguably, phishing isn’t going to disappear until we end our dependence on email as a primary mode of communication. This could be a generational thing – younger generations rely on instant messaging, which may take the place of professional emails in the future. But we need to deal with phishing here and now, and that means educating people.


NOEL HANNAN INFOSEC CONSULTANCY MANAGER, CAPITA IT PROFESSIONAL SERVICES

Noel has over 20 years’ experience in the IT industry and has spent 10 years specialising in security. He has worked for Capita since 2009. Noel is a former CLAS consultant and is a CESG Certified Professional IT Security Officer and Security and Information Risk Advisor.






HOW TO AVOID BECOMING A VICTIM

Capita has found that one of the most effective methods of security awareness training is to conduct a phishing exercise which then produces a metric demonstrating its effectiveness. We staged an exercise in a public sector agency in which we sent users a fake discount shopping email, which arrived on the first Monday back at work after Christmas. Several staffers clicked through the link we sent, giving us vital information about how to further educate users about vigilance regarding phishing. This personal vigilance comes from security awareness, which in turn comes from security education. A phishing exercise can highlight areas of weakness, and a security awareness programme can run in tandem with this. For example, an organisation could run a security awareness course, then tell staff that phishing exercises will be taking place throughout the year and that their department/sector will be scored according to response. An alternative approach could divert any users who actioned a deliberately planted phishing email to a mandatory security awareness test, although of course this could have a negative impact in a busy operational environment. Whichever approach an organisation takes, Capita will offer support through security awareness training to prevent further susceptibility.
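Scoring a department according to response, as suggested above, ultimately reduces to a click‑through metric per department. Below is a minimal sketch in Python, assuming a hypothetical CSV log of exercise results; the file name and the department/clicked column names are invented for illustration and are not part of any Capita tooling.

# Minimal sketch: scoring departments on a simulated phishing exercise.
# Assumes a hypothetical CSV log with invented columns: department,
# recipient, clicked (yes/no). Column names are illustrative only.
import csv
from collections import defaultdict

def department_click_rates(log_path):
    sent = defaultdict(int)
    clicked = defaultdict(int)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            dept = row["department"]
            sent[dept] += 1
            if row["clicked"].strip().lower() == "yes":
                clicked[dept] += 1
    # Click-through rate per department: lower is better.
    return {dept: clicked[dept] / sent[dept] for dept in sent}

if __name__ == "__main__":
    rates = department_click_rates("exercise_log.csv")
    for dept, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{dept}: {rate:.0%} clicked")

A report like this gives the organisation a baseline, and repeating the exercise later in the year shows whether awareness training has actually moved the numbers.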

CONCLUSION

Our key message is that we don’t want people to be frightened of doing their jobs effectively. Remember, only a tiny proportion of emails are malicious. Your inbox is not a minefield – even though in reality organisations are under constant attack, the vast majority of emails you receive every day are legitimate.

T E S T M a g a z i n e | Sy n d i c a t e S up p l e m e n t | M a y 2 01 6



Bring your teams together and deliver the right software faster

Too often, business and delivery teams feel divided. Both are aligned in delivering what your organization needs, but while business teams talk in requirements, developers think in terms of stories and tasks. Atlas translates business needs into iterative Agile delivery targets, in a language that everyone understands. This means everyone gets a clearer view of the project, delivery timescales, the evolution of individual stories and how each one contributes to requirements and the business needs they represent. Keep your teams in sync and ensure projects are delivered with greater speed and confidence, with Atlas.

Discover more and take a FREE 30-day trial at Borland.com/Atlas

Copyright © 2016 Micro Focus. All rights reserved. Registered office: The Lawn, 22-30 Old Bath Road, Newbury, Berkshire, RG14 1QN, UK.

