MARCH 2017
SPEAKING THE USER'S LANGUAGE | PERFORMANCE PERCEPTIONS | THE NATIONAL SOFTWARE TESTING CONFERENCE & THE NATIONAL DEVOPS CONFERENCE SPECIAL
CONTENTS | TEST MAGAZINE | MARCH 2017
COVER STORY: GAMING SECTOR
NEWS
Software industry news ............................................. 5

THOUGHT LEADERSHIP: APPLICATION PERFORMANCE
Operation Flashback for software quality assurance ................ 10

AGILE
Embracing agile ................................................... 12

PERFORMANCE AND LOAD TESTING
Performance perceptions ........................................... 16

SUPPLIER PROFILE
A new outsourcing option in a post‑Brexit era ..................... 20

GAMING SECTOR
Mobile games under the microscope ................................. 24
Securing games against the criminals – and not the players ....... 28

EVENT PREVIEW
The National Software Testing Conference and The National DevOps Conference ... 30

QA AND UX
Speaking the user's language ...................................... 34
Overcoming poor usability & user requirements ..................... 40

Testing a fantasy sports platform ................................. 44
TEST Magazine | March 2017
EDITOR'S COMMENT
PREVENTING THE BREXIT BUGS
Last summer, the UK IT sector seemed unanimous: remaining in the EU was the preferred outcome in the Brexit referendum.[1] Fast forward nine months, and the reality of the Leave vote is beginning to settle in. UK Prime Minister Theresa May has said she will trigger Article 50 – beginning the formal two-year separation process – by the end of March 2017, meaning the UK will be expected to have left the EU by the summer of 2019.

We've now had nine months of media speculation and political policy debates. One key storyline has been the pressing IT skills gap, and the potential brain drain the UK could experience as a result of losing EU membership. Research has shown that British tech employers are reliant on overseas talent, with 45% of recent vacancies filled by foreign workers.[2] There is a real concern that there won't be enough home-grown talent, and that EU citizens will feel other countries better serve their careers.[3]

We've also seen reports of increased tech prices and cries of 'Brexploitation'.[4] In 2016, the pound was ranked the worst-performing major currency, having plunged against the dollar. As a result, prices from the likes of Apple (a 20% increase on products), Microsoft (a 13% mark-up for enterprise software products and 22% for cloud services) and HP (a 10% price hike across the Personal Systems portfolio) have gone up, affecting consumers and enterprises alike.

More intangible than personnel and hardware, the Brexit decision has created legal uncertainty around the implementation of current and upcoming EU law in the UK. The arrival of the General Data Protection Regulation (GDPR), affecting all organisations handling EU data, could introduce difficult compliance decisions.[5] The GDPR is notorious for its heavy fines – up to €20 million or 4% of annual worldwide turnover, whichever is greater.
Even if Article 50 is triggered this March, the GDPR, which will enter into force in May 2018, will affect the UK, since it is applicable in every EU member state and the UK won't officially be 'out' until 2019. And, as many businesses in the UK handle, and will continue to work with, EU data, it is likely that parts of the Regulation will still need to be complied with. As the GDPR implementation date nears and the Brexit outcome is unlikely to hinder its effectuation, there is a sense of urgency to act now, in order to avoid heavy fines in the future.

With Brexit, there may be new treaties, regulations or resolutions that take the place of EU governance within the UK. These potential changes are unknown and unpredictable. But what we know for certain is that change will require updates, online and offline. There is the potential for a huge peak in regression testing, and investment in fixing 'Brexit bugs' will come to the forefront for many. Software testing and quality assurance teams will have to navigate these ambiguous seas, offering their unique perspective and expertise in ensuring delivery meets new compliance requirements, as well as business and user expectations.

There will be a period of uncertainty and adjustment, but the UK IT sector will have to move forward and embrace the new status quo. This must not be seen as a time to retreat into the trenches. UK IT needs to come out swinging, proving its status as a preferred destination by virtue of its talent, enthusiasm, innovation and strong network, both with Europe and beyond. When surrounded by matters out of your control, it can be a great opportunity to focus inward. It all comes down to the individual: invest in yourself, through training, networking and up-skilling, and prove yourself invaluable. We will all weather this storm together and come out stronger for it.

CECILIA REHN
EDITOR OF TEST MAGAZINE
cecilia.rehn@31media.co.uk
MARCH 2017 | VOLUME 9 | ISSUE 1
© 2017 31 Media Limited. All rights reserved. TEST Magazine is edited, designed, and published by 31 Media Limited. No part of TEST Magazine may be reproduced, transmitted, stored electronically, distributed, or copied, in whole or part without the prior written consent of the publisher. A reprint service is available. Opinions expressed in this journal do not necessarily reflect those of the editor of TEST Magazine or its publisher, 31 Media Limited.
ISSN 2040‑01‑60

GENERAL MANAGER AND EDITOR: Cecilia Rehn, cecilia.rehn@31media.co.uk, +44 (0)203 056 4599
EDITORIAL ASSISTANT: Jordan Platt, jordan.platt@31media.co.uk
ADVERTISING ENQUIRIES: Anna Chubb, anna.chubb@31media.co.uk, +44 (0)203 668 6945
PRODUCTION & DESIGN: JJ Jordan, jj@31media.co.uk

31 Media Ltd, 41‑42 Daisy Business Park, 19‑35 Sylvan Grove, London, SE15 1PD
+44 (0)870 863 6930 | info@31media.co.uk | www.testingmagazine.com

PRINTED BY Pensord, Tram Road, Pontllanfraith, Blackwood, NP12 2YA
References
1. 'UK Tech Companies Large and Small Agree EU Membership is Good for Business', https://www.techuk.org/member-eu-survey/eu_membership_survey
2. 'UK tech industry not immune to Brexit, trade group warns', https://www.theguardian.com/politics/2017/jan/24/uk-tech-industry-not-immune-to-brexit-trade-group-warns
3. 'A post-Brexit view of the UK tech sector', http://www.softwaretestingnews.co.uk/a-post-brexit-view-of-the-uk-high-tech-sector/
4. 'Brexploitation? Adobe gets creative with price hikes', https://www.theregister.co.uk/2017/02/03/never_knowlingly_undersold_adobe_joins_brexploitation_gang/
5. 'Brexit and EU privacy rules', http://www.softwaretestingnews.co.uk/brexit-and-eu-privacy-rules
INDUSTRY NEWS
DENMARK TO APPOINT A 'SILICON VALLEY AMBASSADOR'
Denmark is to treat Silicon Valley as if it were a country in its own right, planning to appoint an ambassador to the region – a sign of how powerful and influential tech companies are becoming. The new digital ambassador, also referred to as the 'Silicon Valley Ambassador', will be appointed to work alongside the world's biggest technology companies, such as Apple, Google and Microsoft, although the details of the role have not yet been released. When questioned about the relevance of such an ambassador, the country stated that tech companies, due to their influence, are just as important and have just as much impact on society as other countries.
SANSAR IS SECOND LIFE'S THIRD
The company behind the successful online fantasy game Second Life is undertaking a brand new social virtual reality venture. The virtual world of Second Life launched to great success in 2003, hitting as many as 1 million monthly users, but user experience issues plagued the fantasy world – which is why, for the past four years, Project Sansar has been in development. San Francisco-based Linden Lab has stated that Project Sansar will contain many of the elements that made Second Life such a cultural phenomenon, the difference being that VR has come a long way since the early 2000s, meaning it can solve the problems its predecessor had.
Sansar will be available on Oculus Rift and HTC virtual reality headsets, allowing players to fully immerse themselves in the new reality of the community‑built world. The release date has not yet been announced, but Project Sansar will be finalised and released to the public later this year. So far, Linden Lab and a select few partners have been developing a range of experiences for the platform in order to bring people together to work and play, helping make the world of Sansar far bigger than it would have been had Linden Lab built it all internally. With consumer VR still in its infancy – only 2 million headsets shipped worldwide in 2016 – there's still a long way to go before it is used in a meaningful, popular manner.
RURAL INDIA GETS CONNECTED
The Indian government is implementing a new initiative, dubbed 'Digital Village', to supply free Wi‑Fi to over 1000 rural villages throughout the country. Over the first six months of 2017, tower‑mounted Wi‑Fi hotspots will be installed in remote locations across India, making sure that villagers can get connected. Statistics show that there are over 900 million unconnected Indians, and the Digital Village initiative will help domestic internet providers and tech companies, such as Facebook and Google, tap into the potential of these newly connected citizens.
NO MORE IN‑FLIGHT ENTERTAINMENT
Because it believes 90% of its passengers carry smart devices on board its aircraft, American Airlines is eradicating in‑flight entertainment screens. The move has been implemented in its new order of 100 Boeing 737s, which will be in service by the end of this year, saving the US$3 million cost of installing the systems on every jetliner. Getting rid of the backseat technology not only saves on installation costs, but on fuel costs too: under-floor wiring and specifically modified seating are required to fit the tiny TV screens into the back of the seats, and all of this adds bulk and weight to the aircraft, upping fuel consumption. Passengers won't be left on their long‑haul flights with zero in‑flight entertainment, however. American Airlines will allow passengers to stream movies, TV shows and the like via the plane's Wi‑Fi, but customers will be charged for logging onto the internet, currently US$16 plus tax for a day pass.
A solution for customers who do not wish to pay for internet access on board is to pre-download films and TV shows onto their smart devices. The move from American Airlines marks a shift from freely providing entertainment to commoditising the modern customer's desire to stay connected, and it will be interesting to see which airlines follow suit.

SOFTWARE PROTECTION FOR GREAT WHITE SHARKS
Collecting data on rare and elusive species, such as the great white shark, can be difficult and dangerous, but a marine biologist, an applied mathematician and a software developer have developed a custom piece of software that could be the solution to this problem. Dr Sara Andreotti, a marine biologist, has spent her career tracking the population of great white sharks along the South African coastline. She did this by manually inputting data on each shark's individual dorsal fin into her personal computer. This information was used to develop the software, called Identifin. It compares a semi‑automatically drawn trace of the back edge of a dorsal fin to existing images in the database, allowing researchers to match data and rank it in order of probability; a direct match is featured at the top of the database, allowing them to identify the shark. The software used to keep track of the great white shark population shows promise for more generic use: safely monitoring other elusive animals without harming them, potentially helping to avert the extinction of specific species.
TOYS ARE LEAKING CHILDREN'S DATA
Over half a million people's personal data was leaked in the UK after fluffy internet‑connected toys were compromised. The CloudPets stuffed toys leaked over 2 million voice recordings of children and adults who had played with the toys, along with email addresses and passwords. CloudPets has denied that the voice recordings were 'stolen', acknowledging instead that each recording has an individual URL, making it accessible to anyone who knows that specific URL; security experts, however, have called the company's password security lax. The personal information taken from the database and leaked was held on the internet with no usernames or passwords locking it out of public view. A week after the hack took place no databases were accessible to the public, but users were not informed of the attack. John Madelin, CEO at IT security experts RelianceACSN, told The Guardian: "Connected toys that are easily accessible by hackers are sinister. The CloudPets issue highlights the fact that manufacturers of connected devices really struggle to bake security in from the start. The 2.2m voice recordings were stored online, but not securely, along with email addresses and passwords of 800,000 users. This is unforgivable."
GOOGLE BRINGS CSI TO LIFE
CSI fans everywhere should rejoice, as Google has developed a new AI system able to enhance an eight‑pixel square image. The 'enhancement' increases the resolution of the image 16‑fold, effectively restoring lost data, though Google researchers have described the new system as 'hallucinating' the extra information needed to restore an image. The system was taught by being shown images of faces, so it could learn typical facial features. Using the software embedded within, the AI system simply redraws its best guess of what the original facial image was.

AMAZON WEB SERVERS CRASH
Many websites and apps took a hit on 28 February due to Amazon's S3 cloud service experiencing an outage. Eventually resolved a few hours later, the outage caused problems for Medium, Slack, Quora, Giphy, Business Insider and more. The issue also affected internet-connected devices, such as smartphone‑controlled light switches, going to show how heavily the web relies on key players such as Amazon. Tens of thousands of web services use the Amazon Simple Storage Service (S3) for hosting and backing up data, thanks to its simplicity and ease of use.

TECHNOLOGY KNOWS NO AGE
An 81‑year‑old Japanese woman has launched her first iPhone app, showing that technology knows no age and anyone, at any age, can be tech‑savvy. The app teaches people the traditions of Japan's Hinamatsuri (Girls' Day). Wakamiya, the tech‑savvy 81‑year‑old, created the app to inform people of the accurate way of staging their traditional doll displays during the festival. Wakamiya started developing her skills at the age of 60, and believes that you're never too old to stay ahead.
INDUSTRY EVENTS
www.softwaretestingnews.co.uk | www.devopsonline.co.uk

TEST FOCUS GROUPS
Date: 21 March 2017
Where: Park Inn by Radisson, London, UK
www.testfocusgroups.com
RECOMMENDED ★★★
HONDA LAUNCHES CONNECTED CAR SERVICES IN EU
Honda is leveraging IoT solutions from Cisco Jasper and Bright Box to deliver the MyHonda Connected Car platform. The platform provides a suite of powerful services that enhance the driving experience. MyHonda utilises telematics solutions to deliver a variety of connected services that increase driver safety, simplify vehicle ownership, and enable new experiences for drivers.

GOVERNMENT'S DIGITAL STRATEGY TO BOOST AI SECTOR
New measures to support Britain's world-leading AI sector are to be announced as part of a bold digital strategy to boost growth and deliver a thriving, outward-looking digital economy that works for everyone. Accenture has estimated AI could add in the region of £654 billion (US$814 billion) to the UK economy by 2035.
QAI QUEST 2017
Date: 3–7 April 2017
Where: Chicago, IL, United States
www.qaiquest.org/2017
★★★

MOBILE DEV + TEST
Date: 24–28 April 2017
Where: San Diego, CA, United States
www.mobiledevtest.techwell.com
★★★

NATIONAL SOFTWARE TESTING CONFERENCE
Date: 22–23 May 2017
Where: Millennium Gloucester Hotel London Kensington, UK
www.softwaretestingconference.com
RECOMMENDED ★★★
GOOGLE'S RECENT CLOUD OUTAGE CAUSED BY UPDATES STUCK IN CANARY TEST DEPLOYMENT
At the end of last month, newly created Google Compute Engine instances, cloud VPNs and network load balancers were left unavailable for just over two hours, in a wide-scope incident. The affected servers had public IP addresses, but couldn't be reached from the outside world or send any traffic.

ENTRIES FOR THE DEVOPS INDUSTRY AWARDS ARE NOW OPEN!
From the creators of The European Software Testing Awards comes the DevOps Industry Awards! The new awards show is set to take the industry by storm, shining a light on both Dev and Ops teams who are bridging the gap between the two cultures. Entries are now open! The awards ceremony will take place in October.

NATIONAL DEVOPS CONFERENCE
Date: 24–25 May 2017
Where: Millennium Gloucester Hotel London Kensington, UK
www.devopsevent.com
RECOMMENDED ★★★

EXTENT CONFERENCE
Date: 29 June 2017
Where: London, UK
www.lseg.com/extent-2017
OPERATION FLASHBACK FOR SOFTWARE QUALITY ASSURANCE Rajat Satender, Account Manager, Tata Consultancy Services (TCS) and Ed Reid, Head of Testing Services at British Gas – Centrica, highlight how British Gas achieves predictable, repeatable technical regression and verification test results and zero post‑implementation defects in its migrated applications.
APPLICATION PERFORMANCE

The UK's largest energy and home services company, British Gas is one of the UK's 'Big Six' gas and electricity players. Serving around 12 million homes in the UK, the company has a prized reputation it's determined to safeguard. In the competitive gas and electricity business, manufacturing, generation, distribution and sales are complex and critical enterprise processes. British Gas is no exception: the firm's Quality Assurance & Control (QA&C) department, which addresses testing requirements across the organisation, is rigorous by design. So, with new software being implemented continuously to meet the growing demands placed on the business, business‑critical technical regression testing (TRT) and technical verification testing (TVT) for application performance is more critical than ever. Thus, to ensure that software changes do not negatively impact production performance, two rounds of TRT and another two of a TRT‑TVT combo are run for each of British Gas' six or more annual large-sized application releases. However, in 2016, QA&C found some TRT results weren't matching up with each other. It was possible that critical releases might go into production without performance variations being addressed.
IN WANT OF REPEATABLE REGRESSION TESTING
British Gas called in Tata Consultancy Services (TCS), which had earlier helped modernise its supply chain to manage the exponential growth in smart meter installations. British Gas needed a repeatable regression testing mechanism that delivered predictable regression and application performance results, with low‑cost test data preparation. So the question was: "Should application performance variations be pinned to the released code or to the test data‑sets?" The IT stack – Oracle, with Red Hat Enterprise Linux, HP Performance Centre and TRT packs for SAP and Digital – was complex. Some TRT scripts, TCS noticed, were data‑burning. TCS analysis revealed that application performance variations were larger when these scripts were used. This suggested that the variations, wherever they existed, might be caused by the test data, because data‑burning scripts can be used only once. So TCS decided to use Oracle Flashback tools to view past states of database objects and their metadata so that transactions could be rolled back, while the database
remained online. Using this ‘time travel’ back to the desired point in the past would help remove any data variability, make the tests repeatable and allow accurate identification of performance degradations in the release code.
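Conceptually, the repeatability problem and the Flashback fix look like the sketch below. This is an illustrative Python model, not the TCS implementation: the in-memory store, the 'restore point' and the data-burning test are hypothetical stand-ins for the Oracle database, its restore points, and the TRT scripts.

```python
import copy

class SnapshotStore:
    """Toy key-value store with a 'restore point', loosely mimicking the
    Flashback pattern described above: snapshot state before a
    data-burning test run, then roll back so the run is repeatable."""

    def __init__(self, data=None):
        self.data = dict(data or {})
        self._restore_point = None

    def create_restore_point(self):
        # Analogous to creating a restore point: capture current state.
        self._restore_point = copy.deepcopy(self.data)

    def flashback(self):
        # Analogous to flashing back to the restore point: discard all
        # changes made since the snapshot, without taking the store offline.
        if self._restore_point is None:
            raise RuntimeError("no restore point set")
        self.data = copy.deepcopy(self._restore_point)

def data_burning_test(store):
    # A 'data-burning' script consumes its test data as it runs,
    # so a naive rerun sees different data and different timings.
    consumed = store.data.pop("unused_voucher", None)
    return consumed is not None

store = SnapshotStore({"unused_voucher": "ABC123"})
store.create_restore_point()

assert data_burning_test(store) is True   # first run consumes the data
assert data_burning_test(store) is False  # naive rerun: data already burnt
store.flashback()
assert data_burning_test(store) is True   # after flashback, the run repeats
```

With identical data on every run, any remaining variation between runs can be attributed to the release code rather than the data-set, which is the property the TRT comparison needed.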
METICULOUS PLANNING AND EXECUTION This was a task that called for intricacy, precision, safeguards and user control. So TCS and British Gas set out to get it right first time. The implementation was doing something never done successfully before – implementing Flashback in a shared pre‑production environment. What was more, Flashback was being run on multiple databases and associated systems that had to be synced continuously with the data, to prevent critical data losses. There was no time for redeployment because running Flashback locked all users out each time. Any glitches would adversely impact the release cycle. Neither was it possible to accurately predict the data volumes the TRT runs would produce.
A REWARDING ORCHESTRATION However, with the two teams orchestrating operations with clockwork precision, TRT repeatability succeeded. The Flashback restoration took approximately 120 minutes. Data generated in the system was of a manageable size (134 GB). And users continued working seamlessly afterwards. The results exceeded expectations. Post‑Flashback restoration, the number of transactions with over 25% variance was 45% lower than monitored in previous comparison runs not using Flashback, while transactions with more than 2 second response time deviations saw a 63% reduction. Since then, British Gas has estimated response time variance tolerances accurately. It compares and precisely predicts results after new code deployment in pre‑production. All build release changes, TRT and TVT runs and system configuration changes are tested. Production performance incidents have reduced considerably. And Go/No‑go decisions are taken very confidently now. The cost savings expected are difficult to predict at this moment, but are expected to be considerable. But the gains in reputation and credibility? Enough to leave one all smiles, shall we say?
ED REID HEAD OF TESTING SERVICES BRITISH GAS – CENTRICA
Ed joined British Gas 17 years ago and progressed quickly into management roles. He currently heads Testing Services and has stewarded the transition from T&M to an outsourced Test Factory Model. Ed has been instrumental in transforming testing into a globalised function within Centrica.
RAJAT SATENDER ACCOUNT MANAGER TATA CONSULTANCY SERVICES (TCS)
With over 14 years of experience, Rajat has built and led the TCS Performance Testing Centre of Excellence and currently leads the British Gas account. Rajat is an industry subject matter expert in performance testing and has implemented a number of strategic initiatives and value adds, Flashback being one of them.
EMBRACING AGILE Should testers only test? And how agile are you? asks Niranjalee Rajaratne, Head of Quality Assurance, Third Bridge.
AGILE

Many organisations are in the process of transforming into agile; some are already fully agile, while others are still analysing the possibility of becoming so. Whether the preference is Kanban or Scrum/XP, as people in testing and quality assurance it's important for us to understand the impact on the testing role and the transformation required within testing to work with these changes. Once, at a networking event, someone asked me: "Working in agile or in a traditional waterfall model, do the testing activities really have to change? We still need to do the test analysis, design, execution, the required level of exploratory testing, regression, performance, usability – and the list goes on. What would be your thoughts on this?"
BEING AGILE
Many organisations are adopting proactive quality management to mitigate risks early in the development cycle. Unless the objective is purely technical refinement, the software development focus is on user behaviours rather than getting bogged down in verifying a single piece of functionality. In agile we talk a lot about shared responsibilities: everyone has a part to play in ensuring and maintaining the required levels of quality. By adopting appropriate development approaches such as BDD, TDD, ATDD and pair programming, developers can produce well‑tested, self‑documented code, and the quality of the code is improving. Alongside that, automation is making it easier to control repetitive manual testing. The growing adoption of distributed computing architectures such as microservices, continuous delivery pipelines and cloud deployments brings a new set of challenges, hence the levels of testing are not only different but also increased. Now more than ever, identifying the right skill set and shaping the role of testers is becoming important. Being agile means being able to move quickly and easily. In agile ways of working, tasks are divided into shorter phases of delivery with continuous assessment and adoption into development plans. In Scrum, every sprint has a definition of done which is identified and agreed at planning; the team works towards this definition, and the software is built incrementally and delivered continuously. In a fast‑moving, fast-paced working environment it can be challenging for testers to assure the required quality levels, or they might end up being
checkers in validating the implementation of the user stories. Let’s recall the 'Manifesto For Agile Software Development': We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value: • Individuals and interactions over processes and tools • Working software over comprehensive documentation • Customer collaboration over contract negotiation • Responding to change over following a plan
That is, while there is value in the items on the right, we value the items on the left more. In practice, though, the relative value of the left and the right often gets muddled: some teams go to extremes on the left, ignoring the value of the right altogether. It is important to understand what agile means for your working environment and to find the balance between the right and the left.
ASSURANCE OF SOFTWARE QUALITY IN AGILE
In a nutshell, for agile to work effectively it requires collaboration with all stakeholders, regardless of role or position. To assure quality throughout the software development lifecycle, continuous reviews need to happen, replacing comprehensive documents that are likely to change with lean coffee sessions, and rigid plans with dynamic, pragmatic decision making. Hence testers and quality assurance personnel have more to do than just test the implemented code against a requirement. To maintain quality and keep the consistency of fast delivery, it's necessary to embed quality into every step of software development. That said, in agile, quality is no longer the sole responsibility of the testing or quality assurance team: everyone is responsible and everyone works towards a single goal. There is no longer the practice of having a separate testing team assuring software on an agreed timeframe; everything is done together as a team, breaking the silos. Agile, as well as waterfall, encourages automation. However, it seems to be
NIRANJALEE RAJARATNE HEAD OF QUALITY ASSURANCE THIRD BRIDGE
Niranjalee is passionate about QA, and is a quality evangelist who has spent large parts of her career focusing on testing and quality assurance to ensure it's seen as everyone's responsibility. She values continuous development and believes that, to be successful, one needs to take on challenges that push one's limits. She has over a decade of QA and testing experience and holds an MBA from the University of Sunderland, UK.
becoming more prominent in agile. So, what happens to testers and quality assurance personnel without such a technical skill set? There might not be a straightforward answer; it depends on the culture of the organisation you are in. Testers and quality assurance personnel need to get a better understanding of how automation works and how it can benefit the assurance practice. Automation is a continuous exercise, and quality advocacy should come from the testing and quality assurance teams. Even if you are not technically versed, you should still be able to provide recommendations for automation where necessary. For that, yes, you need to understand the concepts, be aware of the technology, do your research, and be agile within and beyond. Testing and quality assurance requires a unique set of skills, but it's not uncommon for some in IT to have a different perception of it. This is quite common when someone mixes up user testing with functional testing and assurance. In agile, the development team is only complete when there is an (experienced) agile tester or quality assurance person in the team, allowing testers and developers to work closely together in a timely manner. You will need to learn how and why the organisation uses a certain technology stack, how the domain is designed, and how to work in a collaborative code base and with DevOps operations. It is good to take a context-driven approach to testing, focusing on risk-based, exploratory and regression testing as well as performance.
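For those wanting a feel for the development-level automation discussed above, here is a minimal illustration of the test-first (TDD) style in Python. The price-rounding requirement and the `round_price` function are hypothetical, invented purely for the example.

```python
import unittest

def round_price(pence: int) -> int:
    """Round a price in pence to the nearest 5p (a hypothetical requirement)."""
    return 5 * round(pence / 5)

class RoundPriceTest(unittest.TestCase):
    # In TDD, these tests are written first and fail ('red'); only then is
    # round_price implemented to make them pass ('green'), before refactoring.
    def test_rounds_down(self):
        self.assertEqual(round_price(102), 100)

    def test_rounds_up(self):
        self.assertEqual(round_price(103), 105)

    def test_exact_multiple_unchanged(self):
        self.assertEqual(round_price(95), 95)
```

Run with `python -m unittest` to execute the tests. In a BDD style, the same expectations would instead be phrased as given/when/then scenarios readable by the whole team, which is why these approaches help embed quality, and shared responsibility for it, into every step.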
LOOKING AHEAD
By many observable indicators, agile is going to be in practice in the software industry for at least the foreseeable future. And not only in software development: it seems the C‑level itself could benefit from an agile way of working. Will there be a different manifesto? Probably, one day. Many organisations have shown that adopting an agile way of working helps create a dynamic, positive culture with autonomy, opening new avenues that lead to growth in innovation and creativity. I am not trying to say that agile fits every organisation – some would certainly benefit from the old-fashioned waterfall model – however, if you are in testing or quality assurance, you should be responsive to changes in the software world and gather the necessary skills to play an effective role in the field.
PERFORMANCE PERCEPTIONS As the high street continues to migrate online and with increasing customer expectations of intuitive, accessible – and most importantly fast – online service, investment in performance and load testing is critical.
PERFORMANCE AND LOAD TESTING
Discussing changes in the online retail space, Editor of TEST Magazine Cecilia Rehn recently interviewed Elliot Kimmel, Technical Test Consultant at Shop Direct, and Paul Smith, Performance QA Engineer at ASOS.com, to find out their views on key changes in performance testing, tools, and hopes for the future.
How did you start out working in testing/QA? Has it always been in performance testing?
Elliot Kimmel: A graduate trainee scheme introduced me to test automation. I eventually went into performance testing, where I have approximately 12 years' experience. I have also been involved in other test areas such as non-functional and technical testing.
Paul Smith: For me it was a different route. I started out in science and made the transition to technology via a role conducting UAT for a software house that produces tools for scientists in the pharmaceutical industry. After that, I moved into e-commerce, and quickly branched into performance. Here I realised the importance of critical thinking, experimental design, dealing with complex systems, as well as data handling and interpretation. The attributes required for performance engineering were a great fit with my scientific background.
How has performance testing matured and/or changed?
PS: There's now a greater appreciation of the groundwork that goes into conducting an effective test. There used to be a lot of emphasis on what load testing tools you knew how to use, and what technologies you'd previously tested, but these alone are not going to deliver effective tests.
EK: Organisations have started to grasp that they need to do more performance testing. They are realising that it is not just infrastructure that can be the bottleneck, but also code and configuration. Agile methodologies mean earlier performance testing in a project is more likely, which is good, but this brings its own challenges. Fitting performance testing into sprints and adding it to continuous integration is not straightforward.
PS: Now we expect a lot more from our performance specialists. We need them to show that they can collaborate with a variety of stakeholders across your business, clearly communicate the meaning of results
(in both technical and business terms), and demonstrate critical thinking so that they can quickly assess what is important when developing an approach.
What do you consider to be the biggest challenge or hindrance to your job?
EK: Currently, some organisations think that performance testing can be reduced, as many bottlenecks can be avoided by throwing more tin at the solution instead. They assume that infrastructure is readily available and cheaper in the cloud or in virtual solutions. This is not necessarily the case, as there are licensing costs to take into account, and it can be a dangerous view: the organisation may reduce the performance testing effort and miss critical defects. The educational role, towards the end client, should always be at the forefront of a good performance tester's work.
PS: I would say the challenge of realistic test environments: data (content and volume), infrastructure, hosting, network, security, and so on. If our stakeholders hope to gain confidence that everything will run smoothly at peak, then limiting the discrepancies between test and production environments will help produce more realistic results.
What opportunities/lessons are out there for organisations who currently aren't investing as much into performance testing?
PS: A number of studies have shown a strong correlation between performance and conversion on retail websites, so the message has to be that a slow site will not realise its true potential, and can undermine efforts elsewhere in the business. People are more demanding than ever, and slow is not acceptable – it will cost you.
ELLIOT KIMMEL TECHNICAL TEST CONSULTANT SHOP DIRECT
Elliot is a non-functional test consultant with experience in a variety of roles, including analyst, lead and management. His main speciality is performance testing, and he has recently moved into a more generic technical testing role to fit in with an agile programme.
T E S T M a g a z i n e | M a r c h 2 01 7
EK: I agree. They should look at their own peak times and think about what their loss would be if they could not provide their service(s). This often justifies the cost of the performance testing effort.
What are the key technologies that have made your job easier?
EK: The integration of performance counters into test tools, network-sniffing tools such as Wireshark and Fiddler, better performance test tool analysis modules, and text comparison tools have all assisted me in my career. The development of technical sites on the internet, and text editors such as Notepad++, has been very useful.
PAUL SMITH PERFORMANCE QA ENGINEER ASOS.COM
Paul has been working in software quality for more than 11 years, focusing on performance for the last 8 years. The ASOS performance engineering team is working to build a culture of performance that puts the quality of the customer experience at the heart of everything, throughout the entire delivery lifecycle.
PS: For me, log querying tools, alongside application performance monitoring (APM) tools, are giving us great, actionable insight into application performance issues. Coupled with real user monitoring (RUM) tools, we now have a more comprehensive understanding of all layers of the system under test than ever before. RUM is also helping to increase the focus on client-side performance: an individual customer's experience, on their particular device, from their specific location.
What are you hopeful about for the future?
PS: With the continuing growth in online retail, the phenomenal pace of change in technology, and the emphasis on customer experience (with speed and convenience being big factors – think rapid delivery, drones and so on), I think we can be excited about the fascinating variety of challenges ahead!
EK: As businesses mature they are forced to understand every part of their business more, and that includes IT in every aspect. With the help of good performance testing consultants, they will eventually realise the many benefits that good performance testing can bring.
What is the dream tool/service that you would like to see invented?
EK: Many server operating systems run Unix/Linux, which is not very intuitive. An API where you could run commands in a more user-friendly way would be very useful.
PS: It will be very interesting to see how machine learning will influence performance engineering, as a tool that can aid interrogation of vast amounts of information and variables, and make insightful recommendations on how to optimise. It would be amazing to see a tool that leverages machine learning in this area.
A NEW OUTSOURCING OPTION IN A POST‑BREXIT ERA Designated as the ‘automation specialist of choice’ by Old Mutual S.A., veteran software testing group DVT is setting its sights on the UK, eager to help new clients in a post‑Brexit world.
SUPPLIER PROFILE
Now that 2017 is fully underway, Editor of TEST Magazine Cecilia Rehn caught up with Chris Wilkins, CEO of DVT, and Bruce Zaayman, Director of DVT United Kingdom, to discuss how this South African powerhouse is poised to help UK businesses optimise automation this year.
DVT is well known as one of the largest privately-owned software testing groups in the southern hemisphere, but can you give us an introduction for our European audience?
Chris Wilkins: DVT started in Cape Town in 1999 and we have built the group up to a staff of 600 professional software developers, testers, business analysts, project managers and architects. At heart we are a software development company, and over the last 10 years we've recognised that software testing is becoming more and more important, so we built up a very large and very competent testing team. This is made up of 200-250 testing professionals, and includes our Global Test Centre facility in Cape Town, one of the largest specialised testing facilities in the southern hemisphere. Our clientele spans from large finance and insurance firms and media companies down to smaller organisations such as Doddle. Our focus in testing is automation; we believe that the world will slowly move towards automation, and that outsourcing software testing, commoditising it, making life easier and allowing enterprises to focus on the more specialised, and possibly more interesting, QA jobs is the way to go.
What are the main services that DVT provide?
Bruce Zaayman: We provide agile software development, testing, consulting and training. We have also built our own test automation framework, which means our clients don't have to pay any licence fees for their testing projects. The main reason we developed the Java-flavoured UTA-H (Unified Test Automation – Hybrid) framework is that a lot of companies don't want to spend the money on the big players – the HP titles, or the CA type tools.
For that reason, it is based on Selenium WebDriver, saving costs as we're not limited to a licence for one individual machine. If we need to run through a massive amount of work in a short amount of time, we spin up a couple of VMs and can run on double or triple the number of machines to reduce the time. That's a major selling point, and I think it's something our clients look for.
CW: Everybody wants flexibility; everybody wants scalability. We, as a company, are pragmatic delivery specialists; we're not trying to play in that big generic space. We're not looking for these massive deals; we're just saying 'we can get the job done for you.' The framework's been built with that in mind, to get the job done, and it's 80/20. Once the process is more or less 80% complete, the learning curve has been so dramatic that it makes that last, more challenging 20% that much quicker and easier.
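The scaling argument here – the same suite finishing faster when fanned out across more machines – can be illustrated with a toy scheduler. Nothing below is DVT's UTA-H framework; the case names and the sleep standing in for a Selenium scenario are invented for illustration.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_case(name):
    # Placeholder for a Selenium WebDriver scenario; the sleep
    # stands in for browser start-up, clicks and assertions.
    time.sleep(0.05)
    return name, "passed"

cases = [f"checkout_flow_{i:02d}" for i in range(12)]

# Baseline: run the suite serially on one "machine".
start = time.perf_counter()
for case in cases:
    run_case(case)
serial = time.perf_counter() - start

# Fan the same suite out across 4 workers (VMs, in DVT's setup).
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(run_case, cases))
parallel = time.perf_counter() - start

print(f"serial: {serial:.2f}s  parallel: {parallel:.2f}s")
assert all(status == "passed" for status in results.values())
```

With 12 equally sized cases and four workers, wall-clock time drops to roughly a third, which is the whole economic case for spinning up extra VMs instead of buying per-machine licences.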
BZ: We also use other tools for automation frameworks. We are agnostic: if a client has a tool, then we are more than happy to augment that team with our service offerings and our skills. To us, an automation specialist is not just a functional software tester with some tech background; we use Java-development type resources. We run test automation from a development point of view, and find that this flexible and scalable approach works very well.
What can a South African venture offer to the UK/European market?
CW: Post-Brexit, we think Britain is looking at being more of a global citizen again, and we believe South Africa is a culturally and economically sound partner. In terms of IT outsourcing, we believe the Indian model, although effective for some companies, is not specialised or boutique enough for most. And when you consider the euro's recent increase, other Eastern European options have become more costly. In contrast, the South African rand is extremely competitive, which means there is a strong case that partnering with a Cape Town-based firm can be a cost reduction and cost mitigation exercise as well as a strategic one. However, we consider our strengths to be based on more than economics. When it comes to cultural familiarity, there's a strong link between Britain and South Africa. We're part of the same Commonwealth, share a common language and the same time zone, so you can pick up the phone and talk to someone straight away. A lot of Brits travel to and from South Africa, and a lot of them have families there as well. So there's a strong sense of it being part of the British framework.
CHRIS WILKINS CEO DYNAMIC TECHNOLOGY HOLDINGS
Born in Brighton and raised in South Africa, Chris founded DVT in Cape Town in 1999 and has been at the helm of DVT and DTH ever since. He has a degree in management accounting, and a background in software development. His current passion is establishing a successful UK‑SA business model that challenges the Indian and Eastern European status‑quo in post‑Brexit Britain.
And of course there are loads of South Africans working in London and in the UK. These cultural links are so important for IT outsourcing in particular, where miscommunication could have huge ramifications for a project. South Africans' first language is English, they are educated in a system that reflects the British educational system, and our best practice – the way we do things, the way we work, the methodologies, the jargon – is exactly the same. On the whole I think communication is as easy as it can get. We are a much easier country to work with than any of the other primary sources of offshore work at the moment in Eastern Europe and India.
A key part of DVT's business is your Global Testing Centre. How does this support your offerings and clients?
CW: The Global Testing Centre is a natural extension of our testing service. It's all about having your testing carried out remotely, so you don't have to hold onto the headache of staff, and you don't have to manage your peaks and troughs as large projects come and go in quick succession. Our clients don't have to worry about finding very specialised skills for 10 hours a month; we'll find them internally. So the logistical benefits are enormous; it just takes away the nuisance. We will also make sure that the bridge between the clients and the test centre is built and maintained, and that just the right flow of communication goes on between them. Every client is on a different maturity curve when it comes to software testing, and we provide a tailored, bespoke service.
BRUCE ZAAYMAN DIRECTOR DYNAMIC TECHNOLOGY HOLDINGS UK
Bruce, a technical engineering specialist for DVT, is passionate about promoting the company's Cape-based Global Software Testing Automation Centre – the only one of its size in Cape Town – and innovative, custom software solutions. Formerly Practice Head of DVT's Quality Assurance division, Bruce is now leading the way for DVT in London, introducing UK-based enterprise and fast-growing technology clients to the exceptional quality and potential of its South African-based products.
Because DVT's focus and expertise have been on automation, we can consult and advise on how to tackle the more emotional aspects of automation with your staff: how to take them down that road, how to get them onto that first rung of the ladder, and then how to keep investing so that over time your automation gets faster and faster. We ensure clients can get product to market faster and, most importantly, that no one is holding up all the expensive software developers waiting for testing to finish. Our Global Test Centre can facilitate all of that.
BZ: The GTC is structured into pods of 30-50 people, run by senior technical managers. This structure ensures that there is always senior technical knowledge onsite, in close contact. All resources allocated to clients have
senior oversight. South Africans, in general, are very positive about working with international clients and forging global business links. So we ensure we have talented staff onsite with the technical knowhow to support clients, and the enthusiasm to go above and beyond.
CW: Enterprise firms like the GTC because we have the size and scale of a larger organisation, and start-ups like us because we have agility in that centre and can move around quickly. Also, the really good news is that we always have 10 to 15 people available at short notice. We would encourage any new client to work with us on an initial proof of concept, which we can often turn around in a couple of days or weeks. This is an investment by DVT into a client, to demonstrate the way we work and the kind of experience they might get if they signed us up as a more strategic partner.
You've recently partnered with British TSG; what does this partnership look like?
CW: We were initially introduced through mutual acquaintances 18 months ago. This partnership makes sense: TSG went through an MBO last year, so with new ownership and invigorated management, they are tackling the market with fresh eyes. They're British owned and British managed, with blue chip clients. As specialists in the UK market, with high-end consultative skills, TSG really complements our proposition as an outsourcing destination. Partnering with TSG gives us close proximity to the client and senior boots on the ground, whilst we give TSG scale, flexibility and dynamism, all in the same language and time zone. Working closely together from TSG's City offices, we serve as their preferred offshore partner. I think every British software vendor or testing specialist needs this flexibility for their clients; to stay competitive, it's an absolute necessity. TSG is our partner of choice. We don't want a shotgun approach to partnerships.
We'd rather have just one very good partner, and of course, we want to accelerate this business together now and win new UK clients.
What are your thoughts on trends in outsourcing for 2017 and beyond?
CW: It is clear that organisations will need to invest in various different avenues to tackle testing challenges, including cost-effective outsourced partners and a serious focus on automation. What it boils down to is that we need fewer and fewer people to do more and more testing work. With the legacy that surrounds an enterprise today, there's an enormous amount of software – lines upon lines of code. I think automation is critical, otherwise costs, time and effort will balloon out of proportion, and you will not be able to keep up with more agile competition.
Figure 1. DVT's Global Test Centre facility in Cape Town, one of the largest specialised testing facilities in the southern hemisphere.
We're offering a specialised solution; we're not trying to do mass-produced stuff. South African outsourced staff have opinions; they're not just going to sit, do as they're told and say 'yes'. They will question and talk. So I think we could be a very refreshing option for people wanting to outsource. We're a good company to work with if you want to outsource gradually: first with a manually oriented approach, then transitioning across into a more automated environment. So we tick the boxes on both sides, and over the 10 years of developing our QA competency, being software development specialists, we've introduced all the learnings, techniques and best practice – not best practice in just a global generic sense, but best practice in the way we feel software should be tested.
Another key concern for organisations is how to cope when you need a specialised skill, opinion or consultancy for just a few hours a week or a month. If you're not outsourcing, you've got to go and find that skill somewhere if you don't have it in-house. We've got a big test team, and an extended team through the company as well, so we're more than likely to find it internally. Increasingly, organisations are finding that this can be a big advantage: immediate access to specialised knowledge and insight can clear log jams very quickly.
You've been in the industry for a long time; how do you think testing and QA is changing?
CW: I don't think it's changing fast enough. The extraordinarily high amount of software code out there means that regression testing is, or should be, one of the primary focuses for the enterprise, not just for quality, but also for speeding up the entire delivery lifecycle. Automation testing products are reaching a better level of maturity. For the first time, in the last few years, we're seeing that these products really can do the job, which means that test automation will start coming into its own in the next five years. So we believe in automation and offshoring, but with a more boutique flavour; not the mass-production 'throw 20 more people at the project' ideology that's been adopted by other jurisdictions. That is a tired tactic, and we need a sharper, more adaptable approach now. And of course Brexit is going to introduce its own peak of regression testing, where small code changes will have to be made to accommodate compliance with whatever Brexit regulations are agreed upon.
So, where is it going? There's more formality around it. I think everyone agrees that getting an expensive Java developer to test is crazy; you need a separate team with a separate responsibility, staffed by people trained to test, not to code. There's going to be some tension in the collaboration between those two teams. Software developers write their own code, so they're inclined to test it and say it's okay quite quickly. And it's not only the code; it's the UX as well, which is becoming more important as end users' expectations change. We're looking forward to showing the UK what we've got, and helping this market navigate post-Brexit uncertainty with a strong, neighbourly partner!
For more information about DVT please visit: www.dvtsoftware.co.uk
MOBILE GAMES UNDER THE MICROSCOPE Ville‑Veikko ‘W’ Helppi, Head of Demand Generation, Bitbar, examines the value of mobile game testing and test automation.
GAMING SECTOR
When compared with native mobile apps, mobile games are in a totally different ballpark when it comes to design, development and testing. Yet the same virtues – high quality, robustness, compatibility across devices, and user experience – are the common factors in whether an app or game will succeed with users. The fact remains that mobile game testing, and automating it across all possible device variants, OS combinations and form factors, is very different from regular app testing. Much mobile game testing is still carried out manually, which neither produces results efficiently nor provides a thorough view and understanding of bugs and other issues with mobile games. Effective mobile game testing should instead be built on a well-structured and systematic approach, the use of test automation frameworks, real mobile devices, and seamless integration with the development process and tools.
FACTORS FOR MOBILE GAME SUCCESS
There are a few things commonly accepted as the most important reasons why mobile games make it big time – or fail.
USER RATINGS
Competition in mobile games is fierce today. Users have a very short attention span for a game that doesn't provide a great user experience, lags, or simply isn't working as expected. Frankly, nobody wants to download badly performing, buggy or dubious games from the app markets. In addition, users are very forthright with their passionate opinions on games, and their comments and feedback end up in front of hundreds of millions of potential users.
PERFORMANCE
Good, slick performance provides a better gaming experience, and gamers are more likely to promote games that work well. The real performance of a mobile game can only be measured on real devices. The majority of mobile games are graphics-intensive and essentially stress two parts of the hardware: the CPU and the GPU. Optimising any mobile game across different device variants, chipsets and form factors requires testing on real devices.
DEVICE COMPATIBILITY
Way too many mobile games are still limited to high-end devices, even though they could run fine on mid-range and even low-end phones if tested properly. Scaling the graphical content, limiting certain operations, or simply allowing the game to fully utilise the hardware are probably the easiest ways to get a game running on any of those devices. However, this still requires the use of physical, real devices in testing.
LEARNING FROM TESTS ON REAL DEVICES
At Bitbar, we conducted over 184 million tests on real devices last year. This could not have been done without test automation. Especially during the past two years, test automation has come into play and helped game developers quickly and easily see how their games work across all possible device variants. This has produced tonnes of great data on how games can be optimised for end user devices.
During 2016, we provided a large and diverse mobile device farm to Android and iOS app, game and web developers. This group of users includes Fortune 500 companies and a dozen of the top 20 mobile game development companies, as well as SMBs and indie developers. Their test runs hammered thousands of our devices every day, producing an enormous amount of data, results, screenshots, videos and performance statistics showing how their mobile apps and games work on these different handsets. The total number of test runs exceeded 184 million unique device runs. The majority were carried out on Android devices, but there was a significant, quarter-on-quarter growth trend in testing on iOS devices. This shows that 'fragmentation' is far from resolved on both major mobile platforms – and testing must take place before top companies can push their apps and games to end users.
VILLE‑VEIKKO ‘W’ HELPPI HEAD OF DEMAND GENERATION BITBAR
W is passionate about anything embedded software and hardware, with a hands‑on approach. As a networker and tech‑savvy person, he loves anything about mobile platforms, devices, apps, games and more, and is always eager to share his experiences and thoughts. W holds two Masters of Science degrees (Technology and Economics/BusAdmin) from the University of Oulu, Finland.
Figure 1. Why do Android games fail? Research by Bitbar.
Figure 2. Mobile games real device tests statistics for 2016.
DIFFERENCES BETWEEN MOBILE APP AND MOBILE GAME TESTING
What are the major differences between testing a mobile application and a mobile game? There are a few important differences that developers, testers and QA folks must know when using test automation for mobile games.
USER INTERFACE AND ITS FUNCTIONALITY
A user interface and its overall functionality will directly affect how successful your app or game will be. These two things, which encompass visual appeal and gameplay, are the most important things to get right – and you must ensure that device fragmentation doesn’t break any of these. Various things in the UI need to be tested: • UI layouts and elements: Games especially are typically targeted at a high number of
different screen resolutions and screen types. Regression testing should be done each and every time the UI’s layout changes to ensure that the game works. • Menu structure and functions: Testing menu structures, functionality and behaviour can be automated with instrumentation and the help of different test automation frameworks. • Screen orientation: Surprisingly, so many apps and games out there get this wrong. If a screen’s orientation changes during an interaction, for example, what happens? What is supposed to happen? Does the app or game work well in both landscape and portrait modes? • Screen resolution: A lot of screen resolutions exist, especially on Android, and auto‑scaling will usually help developers. However, test your game across these resolutions to ensure that the graphics do not stretch.
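The resolution point above can be made concrete with a letterbox-style scaling check. This is a generic sketch – the resolution list and the 1% tolerance are assumptions, not any framework's API – verifying that assets scaled for each target screen keep their aspect ratio rather than stretching:

```python
# A few common Android screen sizes (width, height); illustrative only.
RESOLUTIONS = [(1280, 720), (1920, 1080), (2560, 1440), (1440, 3120)]

SRC_W, SRC_H = 1920, 1080  # the asset's native size

def scale_preserving_aspect(src_w, src_h, dst_w, dst_h):
    """Fit src into dst without distortion (letterbox scaling)."""
    scale = min(dst_w / src_w, dst_h / src_h)
    return round(src_w * scale), round(src_h * scale)

for w, h in RESOLUTIONS:
    sw, sh = scale_preserving_aspect(SRC_W, SRC_H, w, h)
    # The scaled asset must fit on the target screen...
    assert sw <= w and sh <= h, (w, h)
    # ...and its aspect ratio must not drift (i.e. no stretching).
    assert abs(sw / sh - SRC_W / SRC_H) < 0.01, (w, h)
    print(f"{w}x{h} screen -> {sw}x{sh} asset")
```

A regression suite would run a check like this (against real rendered output, not arithmetic) for every resolution in the device pool, which is exactly where a real-device farm earns its keep.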
GRAPHICS PERFORMANCE
Good graphics performance of any mobile
game is tightly related to good user experience. Stunning graphics, pictures, animations and all those beautiful effects will make a mobile game shine, but if performance lags, they are pretty much useless. In gameplay, gamers want to see constant progress, feel that the game is fully implemented for their device, and that nothing slows the experience down. Performance needs to be consistent across all the device variants among your users; because of this, use test automation on as many real devices as possible. To determine how well your game responds to various levels of usage, including performance and battery usage, consider creating tests that last for hours. To determine whether your game runs effectively under a heavy load for a long time, run load (or stress) tests. These performance tests will measure, for example, how responsive your game is on real devices.
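One concrete form such a performance test can take is frame-time analysis over a long capture. The sketch below is a generic, hypothetical example – the 60 fps budget and the "twice the budget" spike threshold are assumptions – which, given per-frame timestamps recorded on a device, reports the average frame time and counts lag spikes:

```python
import statistics

FRAME_BUDGET = 1.0 / 60  # ~16.7 ms per frame for a 60 fps target

def analyse_frame_times(timestamps):
    """Return (average frame time, spike count) for a run, where a
    spike is any frame that took more than twice the budget."""
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = statistics.mean(deltas)
    spikes = sum(1 for d in deltas if d > 2 * FRAME_BUDGET)
    return avg, spikes

# Synthetic capture: a steady 60 fps run with one 100 ms stall.
timestamps, t = [], 0.0
for i in range(120):
    t += 0.1 if i == 60 else FRAME_BUDGET
    timestamps.append(t)

avg, spikes = analyse_frame_times(timestamps)
print(f"avg frame time: {avg * 1000:.1f} ms, lag spikes: {spikes}")
```

Run over an hours-long soak test on each real device, the spike count is often a better lag signal than the average, because a smooth mean can hide the stutters players actually notice.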
USABILITY AND GAMING EXPERIENCE
Testing usability, navigation flow and user experience simply cannot be done on a desktop with a mouse and keyboard. It is advisable to forget emulators and use only real devices. To test how usable and entertaining your app is, consider these two important things: • User interaction and responsiveness: Testing performance is critical because this will make or break the user experience. Performance lag, for example, is easy to expose with real devices. • Background events: Interruptions, battery consumption and the effect of battery chargers on overall performance and usage all have a significant impact on the user experience – and entertainment value.
MULTIPLAYER FEATURES
Nowadays, multi‑user support is common in both apps and games. Testing multi‑player capabilities is important and is naturally more challenging, requiring real users to measure performance. A typical case is a game communicating with the back‑end server. In this case, connectivity is essential, to synchronise the back‑end with devices that need to get information about the gameplay. You should test a multitude of different scenarios, many of which could severely affect the game’s experience, resulting in negative feedback and the game being uninstalled by users.
Integration with social networks is another important factor. Being able to share something across an ecosystem, with friends or just with oneself, is essential in many apps. This can be tested thoroughly with real Android and iOS devices, using the real back‑end or services provided by the platform, to assess functionality, performance and user experience.
THE FUTURE IS TEST AUTOMATION
Currently, the trend is towards open source test automation frameworks, with Appium and Calabash the top choices for cross‑platform testing, and XCUITest (iOS) and Espresso (Android) for platform‑specific test automation. Perhaps surprisingly, some of these frameworks have become extremely popular in mobile game testing due to easy setup, ease of use, and significant value. Any of these frameworks can be coupled with image recognition, and many of the top mobile game companies have adopted image recognition in their testing procedures. When this is done in an automation context, the results are remarkable compared to old‑fashioned manual testing. With the help of test automation and image recognition, a mobile game can be tested on hundreds of different device variants, each with a different OS, completely different hardware and a different form factor. Mobile games are set to grow in popularity and it is imperative that game developers invest in adequate test automation on real devices to ensure their hard work and efforts don’t get uninstalled at the first hint of a lag.
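The image-recognition approach described above can be illustrated with a toy sketch. Real setups pair a framework such as Appium with a vision library (OpenCV is a common choice); the pure-Python template matcher below, with its tiny synthetic "screenshot", is only a stand-in for that idea – all names, values and thresholds are illustrative, not any framework's actual API.

```python
# Sketch: locating a UI element (e.g. a "Play" button) in a game screenshot
# by sliding a template over the image and scoring each position.

def match_score(screen, template, top, left):
    """Mean absolute pixel difference between the template and the screen
    region whose top-left corner is (top, left). 0.0 means identical."""
    h, w = len(template), len(template[0])
    total = sum(
        abs(screen[top + r][left + c] - template[r][c])
        for r in range(h) for c in range(w)
    )
    return total / (h * w)

def find_template(screen, template, tolerance=5.0):
    """Return the best-matching (top, left) position of the template,
    or None if no region is within the tolerance."""
    h, w = len(template), len(template[0])
    best = None
    for top in range(len(screen) - h + 1):
        for left in range(len(screen[0]) - w + 1):
            score = match_score(screen, template, top, left)
            if best is None or score < best[0]:
                best = (score, (top, left))
    return best[1] if best and best[0] <= tolerance else None

# Tiny synthetic grayscale "screenshot" (0-255) with a bright 2x2 button at (1, 2).
screen = [
    [10, 10, 10, 10, 10],
    [10, 10, 200, 200, 10],
    [10, 10, 200, 200, 10],
    [10, 10, 10, 10, 10],
]
button = [[200, 200], [200, 200]]

assert find_template(screen, button) == (1, 2)          # button located
assert find_template(screen, [[0, 0], [0, 0]]) is None  # no match within tolerance
```

In a real device farm, the "screenshot" would come from the device under test and the comparison would tolerate scaling and anti-aliasing differences across the hundreds of device variants mentioned above.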
T E S T M a g a z i n e | M a r c h 2 01 7
SECURING GAMES AGAINST THE CRIMINALS – AND NOT THE PLAYERS Aaron Lint, VP of Research, Arxan Technologies, considers the role of security in modern‑day gaming.
Whether it be digital pirates and cyber criminals, or hobbyists simply looking for a challenge – for as long as the gaming industry has existed, it has battled people determined to break its games. Even before the advent of the internet, piracy was a nemesis for the gaming industry. Developers and publishers have employed creative techniques to discourage attacks and maintain game
integrity. One of the more interesting pieces of technology from the mid‑1980s was the Lenslok, a plastic prism that users held up to their screens in order to decode a passcode that enabled the use of the software. Users with illegitimate copies, distributed without the physical device, would not be able to enter the correct code – or so the developers thought. A few changes to the machine code of the game rendered the system useless in defending against piracy.
The virtual smorgasbord of downloads available on the web has massively accelerated the rate at which games can be cloned and shared. Between leaked early releases, broken licensing, and modified games, finding almost any game is only a simple search away. In a survey of 50,000 gamers by PC Gamer last year1, 90% of respondents admitted they had pirated games at some point, while 35% said they were still actively pirating games today.
Attackers are also using tampered copies of games as a vector for cybercrime. Given the huge explosion in mobile gaming, criminals are capitalising on the overwhelming desire for popular apps to trick users into downloading malware. These malicious copies can cause serious harm to users by infecting their devices and stealing their credentials and personally identifiable information. Fake versions of Super Mario Run, one of the most anticipated games of 2016, were found to be hiding the Marcher banking Trojan ahead of the game’s official Android launch.2 Harmful to the bottom line of companies and frustrating to legitimate players, users deploying cheating techniques can ruin a game’s economy. Bots, which play games with superhuman skill, have become a common nuisance in online competitive games, allowing players to overachieve in battles and mine in‑game currencies.
SECURITY VERSUS PLAYABILITY
Just as the capabilities of the pirates and criminals have taken a drastic leap forward, the industry has moved on from the days of plastic lenses. There are now more sophisticated ways of keeping the mechanics of games safe and sound. These efforts have run afoul of legitimate players at times, and game security remains a controversial issue in many circles. Super Mario Run again proves a good example. The smartphone debut of the famous Italian plumber was met with much praise, and Nintendo reported more than 40 million downloads in the first four days after launch alone. However, reception was dampened by the US$9.99 price point and the decision to implement always‑on DRM, requiring a constant internet connection. The primary complaint of players is that adding security techniques ruins performance, especially for graphically intensive games. That alone has been enough to motivate certain gaming communities to attempt to remove protections, even from legally purchased games.
KEEPING OUT THE ATTACKERS
It is difficult to balance the concerns of the developer with the experience of legitimate players. Policies that are too strict or create too much difficulty for genuine players have been shown time and time again to damage company reputation and title sales. However, studios cannot afford to forego a security implementation completely. The first few weeks after a title launch are crucial to the profit realised by that title, and if a cracked copy arrives in that timeframe, the results can be devastating to the bottom line.
An effective way to achieve this balance is to deploy security measures that normal players are not even aware of, by embedding them directly into the game’s code during development. Code obfuscation is one such technique, increasing the effort required to reverse‑engineer the game logic. Hiding clear‑text string encodings, stripping unused program symbols from application binaries, and renaming easy‑to‑understand program symbol names all make the code more difficult to understand and exploit.
It has also proven very effective to embed security routines that make the application self‑aware, able to identify and counter threats as they occur. These routines can verify the integrity of the application and its resources, detect OS‑based attacks, and prevent attackers from attaching to the process to exploit it.
Another prime target for attackers is the cryptographic keys that protect game assets and control entitlements and licensing. With these keys, an attacker can decipher and modify game graphics and assets and even spoof client‑server communication. White‑box cryptography is one way to achieve a balance of performance and security in this area.
These advanced hardening techniques can be safely combined with other application and network security measures to create multiple layers of protection for the code and data at the heart of the game.
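Two of the hardening ideas above – hiding clear-text strings and verifying application integrity – can be shown in miniature. Shipping games apply these protections to native code with dedicated tooling; the Python sketch below, including the XOR key, the "secret" string and the fake code snippet, is purely illustrative of the concepts, not a real protection scheme.

```python
import hashlib

# 1. Hiding clear-text strings: ship an XOR-encoded form and decode only at
# the point of use, so the literal never appears in the binary.
KEY = 0x5A

def encode(s):
    return bytes(b ^ KEY for b in s.encode())

def decode(blob):
    return bytes(b ^ KEY for b in blob).decode()

SECRET = encode("premium-unlock-flag")   # what would ship in the binary
assert b"premium" not in SECRET          # the literal is no longer visible
assert decode(SECRET) == "premium-unlock-flag"

# 2. Integrity self-check: the application verifies a digest of its own
# code/resources at runtime and can react if they have been tampered with.
def digest(data):
    return hashlib.sha256(data).hexdigest()

shipped_code = b"if player.coins >= 100: unlock()"
expected = digest(shipped_code)

tampered = shipped_code.replace(b">= 100", b">= 0")
assert digest(tampered) != expected      # modification detected
```

A single XOR is trivially reversible, of course – the point of real obfuscation tooling is to layer many such transformations so that the effort of unpicking them outweighs the reward.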
No matter the methodology developers choose to defend against the attackers, they should always ensure that they don’t end up blocking legitimate players along with them.
References 1. ‘PC piracy survey results: 35 percent of PC gamers pirate’, http://www.pcgamer.com/pc-piracy-survey-results-35percent-of-pc-gamers-pirate/ (26 August 2016). 2. ‘Android Marcher now posing as Super Mario Run’, https://www.zscaler.com/blogs/research/android-marcher-nowposing-super-mario-run (5 January 2017).
AARON LINT VP OF RESEARCH ARXAN TECHNOLOGIES
Aaron brings 10 years of industry and academic experience in information security and cryptography. His deep technical expertise is in reverse engineering, compilers, linkers, and operating systems. He holds a BS in Computer Science from The Ohio State University and an MS in Computer Science from Purdue University. In his spare time, he works as a football official and enjoys attending hacker conferences around the world.
Supported by TEST Magazine and produced by the organisers of The European Software Testing Awards, The National Software Testing Conference gives delegates access to high value presentations, management strategy and case studies with practical takeaways. All delegates will also receive a printed copy of The European Summit Report, covering trends and learning derived from hundreds of entries into The European Software Testing Awards programme.
Who should attend?
HIGH VALUE PRESENTATIONS FROM EXPERIENCED DIRECTORS, MANAGERS AND HEADS OF TESTING
The National Software Testing Conference is open to all, but is aimed at and produced for those professionals who recognise the crucial importance of software testing within the software development lifecycle. The content is therefore geared towards C-level IT executives, QA directors, heads of testing and test managers, and senior engineers and test professionals.
Experienced presenters include:
• Adithya Alladi, QA Engineer, Gamesys
• Dan Ashby, Head of Software Quality & Testing, AstraZeneca
• Nathan Barguss, Lead Engineer, Sony Interactive Entertainment Worldwide Studios
• Sudeep Chatterjee, Global Head of QA and Testing, Lombard Risk
• Ajit Dhaliwal, UK Head of Delivery, Release and Testing, Vodafone
• Amparo Marin, Certification Governance Director, Banco Santander
• Niranjalee Rajaratne, Head of Quality Assurance, Third Bridge
• Laksitha Ranasingha, Tech Lead, Home Office
• Leigh Rathbone, Testing Principal, Shop Direct
• David Rondell, Head of Quality Engineering, Tandem Bank
• Matt Ryan, Lead Test Engineer and Joanne Griffith, Senior Test Engineer, BBC
• Richard Self, Big Data Evangelist and Research Fellow – Big Data Laboratory, University of Derby
• Nicky Watson, IT Director and Suresh Chandrasekaran, Vice President, Credit Suisse
• Steve Watson, Director Quality Engineering, RBI Technology
• Chris Wilkins, Founder & CEO, DTH
• More to be announced!
Register today to attend the conference, including a networking drinks reception! Call on +44 (0) 870 863 6930 or email registrations@softwaretestingconference.com
www.softwaretestingconference.com
Millennium Gloucester Hotel London Kensington
Returning to London after a successful run in 2016, the National DevOps Conference is the event for those seeking to bring lean principles into the IT value stream and incorporate DevOps and continuous delivery into their organisation. The National DevOps Conference is targeted towards C-level executives interested in learning about the professional movement and its cultural variations that assist, promote and guide a collaborative working relationship between Development and Operations.
A conference for curious minds
The National DevOps Conference is open to all, but is aimed at and produced for those professionals who are interested in learning about the practices and culture behind the DevOps movement, in an effort to implement change in their own IT infrastructures. The content is therefore geared towards C-level IT executives, directors, managers, and senior engineers and professionals.
TWO FULL DAYS OF PRESENTATIONS FROM SENIOR SPEAKERS
The National DevOps Conference speaker line-up includes:
• Andy Callow, Head of Technology Delivery, NHS Digital
• Tom Clark, Head of Common Platform, ITV
• Dan Cuellar, Principal Development Manager, FoodIT
• Dan Fiehn, Group IT Director/CIO, Markerstudy Group
• Andrew Hardie, DevOps Expert, formerly of HSBC and Ministry of Justice
• Richard Haigh, Head of Delivery Enablement, Paddy Power Betfair
• Amy Harms, Head of Site Reliability Engineering and IT, WorldRemit
• Paul Houghton, Technology Delivery Manager, NHS Choices
• Stephen Lowe, Director of Technology, Integration, Paddy Power Betfair
• Steve McDonald, Head of Software Development, Tesco PLC
• Dinesh Sharma, Head of Development, Gamesys
• Emilio Vacca, Director Mobile Channel, Telegraph Media Group
• Keith Watson, Director of DevOps iHCM, ADP
• Nicki Watt, Lead Consultant and Rafal Gancarz, Lead Consultant, Opencredo
• More to be announced!
Register today to attend the conference, including a networking drinks reception! Call on +44 (0) 870 863 6930 or email registrations@devopsevent.com
www.devopsevent.com
SPEAKING THE USER’S LANGUAGE Launched in 1995, Dictionary.com is the de facto lexicon for 70 million users every month. The service provides millions of English definitions, spellings, audio pronunciations, example sentences, and word origins. 5.5 billion word searches are conducted every year, and over 100 million users around the world have downloaded its app.
Keeping users happy, engaged and returning is key at the world's leading digital dictionary. Cecilia Rehn, Editor of TEST Magazine, interviewed Dictionary.com’s Director of Product Quality Engineering, Kenneth Toley, to find out more about how the QA function is adapting to automation and new skill sets.
How did you get started in quality assurance?
Actually somewhat by accident. My educational background included interactive media, computer science and technical writing but after about three years as a writer I wanted a more technical role, which returned me to interactive media. My first opportunity in the career shift was Developer Support for Macromedia Flash. At that time Flash was unique and highly expressive software. The role afforded me interaction with customers from all over the world every day. Not just on support calls but user forums, conferences, as a consultant on projects and as a trainer for our overseas partners. I loved working with our software and would make it a point to get a freelance project every year so I could work with features we had not yet released in a professional context. Needless to say, I had a strong, customer‑focused perspective and that tech writing experience turned out to be pretty valuable for publishing tutorials, writing up detailed defects, design change and new feature requests for the Engineering team. After three years we were acquired by Adobe and I was asked to join the
Quality team where I could have more direct influence on the product. I should say Quality at Macromedia and Adobe was pretty amazing. We had the notion of multidisciplinary teams working together for the user and the SyncDev process before it was very popular. The role of QA was broader than the scope of bug hunting and spec validation. We collaborated with developers on specifications, and with UX and Product on design feature workflow and concept. QA developed our test case management system and several tracking and reporting tools. We also had a large role to play on user forums, beta tests, and the success of product conferences. That experience, and some of the great things we were able to accomplish while I was there, gave me a love for Quality Assurance and Engineering, as well as an appreciation for a well formed team with a great process.
What does your Director of Product Engineering role entail?
Dictionary.com is a surprisingly small company given its age and success. To continue to be successful in a competitive landscape requires some reinvention. I was hired in 2015 as part of an effort to modernise and streamline the engineering team. Because of our size many people don multiple hats. I have a team of QAE dispersed among three web teams and one mobile team. I try to keep a count of two QA per team. My job is to instill best practices for assuring
KENNETH TOLEY, DIRECTOR OF PRODUCT QUALITY ENGINEERING, DICTIONARY.COM
Kenneth has over 10 years’ experience in software in various industries, from enterprise database software to casual online gaming, throughout the San Francisco Bay Area. He currently tackles the challenge of adoption and scale of test automation, along with implementation of development best practices and process for agile teams. Kenneth is a practitioner of martial arts, husband, and father of two.
Figure 1. Dictionary.com has been growing its audience since 1995 and is now the de facto lexicon for 70 million users every month.
and measuring quality, and streamlining the development process as it relates to quality and efficiency. I introduce improvements or requirements to the code pipeline supporting development, code coverage and security metrics. I introduce tools and best practices for load and performance testing. I advocate for education and training that benefits the team on everything from test automation to machine learning and natural language processing meet‑ups. I have helped shape our agile development process and configure our collaboration tools such as Jira and Confluence. I help shape the product discovery process and ensure QA is fulfilling their role in the process. I work with the dev team building solutions for continuous integration, code deployment, and reporting. In addition, I manage some of our source control systems and do my best to advocate for engineering best practices. This is aside from weekly one‑on‑one meetings with each team member, as well as a ‘Quality All Hands’ meeting, ‘Engineering All Hands’ meetings, and everything I can do to keep our teams in sync with interdependencies. If I find a gap I get it filled or fill it myself. I also try to keep the team very forward thinking about what the accomplishment of immediate goals enables us to do in the near future, not to mention what opportunities will help with career development.
How is the QA function at Dictionary.com set up?
I try to keep two QAE per team. I try to balance the more technically skilled with those with little coding experience. I have advocated for the adoption of behaviour driven development and QA participation in our Product discovery process so they have a closer relationship with Product and Development, helping catch quality issues early and through the process of development rather than just after code has been completed. QA leads the responsibility of end‑to‑end test automation and manual testing. Developers lead integration testing of the business logic and unit testing. In
Figure 2. With Dictionary’s new approach to product discovery, the team will have the opportunity to shift from a quality‑problem‑focused approach to one where QA better provides value across testing and development. Pictured: Vijaya Hiremath, Quality Assurance Engineer.
several teams – eventually all – Dev and QA collaborate on writing test scenarios with product sign‑off, resulting in ‘executable specifications’ which run as failing tests until feature development is complete. It’s the beginning of a test‑first approach that we acquired from an excellent training session provided by Xolv.io. All of the QA are ramping up on test automation if it is not yet part of their daily tasks, and a couple of the QA team do some feature development, tools development and some release engineering work. Where possible I have pushed for the use of tools and APIs which can be applied to broad targets. In this way I am getting QA to be interchangeable from team to team. Aside from being flexible with resources and assignments, it means we can let QA move to different teams to get a different experience with different products and team culture.
What are the biggest changes you’ve implemented?
The biggest change has been a new focus on test automation and the evolution of a culture of quality. Test automation is normally the biggest change I am asked to make when working with
a new team. Often Quality teams have a notion they need test automation to scale but also a fear ‘the company’ wants test automation to replace them. They understand it’s an important shift in the industry but they are not sure they can learn to write code or expand their technical depth. Like most companies Dictionary.com had a couple of motivated QA doing their best to build a test automation suite but, as I am sure you are aware, without enough investment from the management team, engagement of all team members and the right focus on objectives, test automation projects tend to stagnate and fail. As part of Dictionary’s commitment to innovation we needed to reinvent our approach to quality, engineering, and product development. We introduced best practices, process and tools best suited for our technology stack, even as our stack was in transition. With the support of a great management team we invested significantly in training not just QA on automation or developers on technologies, but the entire team to varying degrees across disciplines so QA, Developers and Product Owners had enough knowledge and empathy for each other’s roles on the team to bond and collaborate as a team to resolve problems.
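The ‘executable specifications’ Toley mentions – scenarios written with Product before the feature is built, which run as failing tests until development is complete – can be sketched in miniature. The case-insensitive lookup feature and all names below are hypothetical illustrations, not Dictionary.com’s actual code:

```python
# Sketch: a Given/When/Then scenario agreed with Product up front.
# Until lookup() is implemented, running the spec raises AssertionError;
# once the feature is complete, the same spec passes unchanged.

def lookup(word, dictionary):
    """The feature under development: case-insensitive word lookup."""
    return dictionary.get(word.lower())

def spec_lookup_is_case_insensitive():
    # Given a dictionary containing the word "test"
    dictionary = {"test": "a procedure to establish quality"}
    # When the user searches with different capitalisation
    result = lookup("TeSt", dictionary)
    # Then the definition is still found
    assert result == "a procedure to establish quality"

spec_lookup_is_case_insensitive()   # passes now the feature is implemented
```

In practice teams express such scenarios in a BDD tool’s Given/When/Then syntax rather than bare functions, but the contract is the same: the specification is executable, and its passing is the definition of done.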
That said, I knew QA needed to demonstrate their commitment to these changes. It’s one thing to ask a developer to change their code, process, and workload. But you will have more success when QA is crossing the aisle to help them do so. This is in part why it was very important to me to grow all of our manual‑only testers into automation engineers, and give our tech‑savvy QA more technical growth opportunities in testing and development. From my point of view, the least value a QA brings to the software development process is their ability to find bugs. Confirming software works as designed is not a good use of their expertise. Test automation can do that very well. A tester’s attention to detail, ability to empathise with the user’s point of view, and ability to devise strategies to predict what can go wrong is their real value – and not something you can fully capitalise on unless they are involved as early as feature concept and through validation post‑release. QA can help Product write better user stories, help design with usability, and help developers implement features correctly the first time, avoiding the drag of defects found late in the cycle. This broad scope of the role of Quality is also a change, and quite a challenge to implement. I am proud to say that, despite some of them never having written a line of code before, there has not been a mass exodus from the team since I set that goal. In fact they really dug in with training, helping each other and collaborating with developers
Figure 3. It was important to grow all of the manual‑only testers into automation engineers, and give the tech savvy QA more technical growth opportunities in testing and development. Pictured: Marina Zabryanskiy, Quality Assurance Engineer.
to grow their skills, ensuring we reached this objective last year.
How is the QA function helping ensure Dictionary.com remains competitive?
Well, as I mentioned, the company has been doing some reinvention. Process, technology, modernisation and teamwork – that is the goal. The benefit of accomplishing this is the ability to adapt faster to our users’ needs and technology choices. Dictionary.com has a strong brand and the trust of our audience. They expect our product to have quality. But they expect us to be where they are, as their context and platforms and needs shift. Given a shifting landscape, the team has been open to new technologies and relationships that will help keep us ahead. For example, Dictionary.com started using Buddybuild to streamline our CI pipeline for mobile before their initial service offering, providing feedback to the company and helping them improve their public API for build status and metrics. We are also working with Appdif, whose AI bots ‘automate’ the task of test automation. Again, as a reference customer we have helped a great team evolve a product for testing performance issues before release into one which helps identify and avoid problems during the development process, when it is most cost‑effective to fix them. Tools and partnerships like these are always very exciting, as they help us innovate and find ways of improving quality without sacrificing velocity.
How involved is QA in owning user experience?
We have a lot of empathy for our customers. We have friends and family who use our product every day, and when something goes wrong we take it personally. (Sometimes they are actually the first to report a problem.) At the same time we respond to direct user feedback, analytics, surveys etc. and work with our product owners to get improvements into our products where we can.
With Dictionary’s new approach to product discovery, however, I believe we will have the opportunity to shift from a quality‑problem‑focused approach to one focused on how we better provide value to the user. I will be honest here, I am thinking about the best aspects of my experience at Macromedia and Adobe, which gave me my love of QA. I am doing everything I can to recreate that experience at Dictionary, as it did amazing things for the product and our customers.
What are you excited about for the future at Dictionary.com?
Well, I am super proud of my team and how far so many of them have come with test automation. As a manager I am actually looking forward to hearing about the goals they will be setting for themselves in 2017, given the skills bump and confidence I have seen so far. Many of us have been following news about machine learning and natural language processing. Dictionary’s experiment with a Facebook bot was a great learning experience for everyone. It field‑tested some of our new processes, and from it came a wellspring of ideas for improving it – not to mention what we learned about user engagement that could help with new product features. The Internet of Things is an area I have my eye on. I think we will see users on new platforms, wearables and there‑ables, that will inspire a lot of creativity on the team as
we take our relationship with the user away from the Desktop. I am hoping there is opportunity in the coming years to leverage deep learning platforms. Just recently I attended a lecture on the research into improving speech recognition for second‑language speakers who have accents and shifted grammar. Being part of a company whose goal is to help people communicate with confidence, a system like that would be of incredible value paired with our lexicon. I am also inspired by how emerging artificial intelligence may change the field of quality and testing. Again, I am a big fan of automation and have no illusions about how it may impact the broad employment market. However I think embracing it is the key to evolving. I look forward to the task of writing automated tests, and even the development of software, being less recognisable as coding and more akin to training your dog. Think of the things we could accomplish.
OVERCOMING POOR USABILITY & USER REQUIREMENTS Sophia Segal, Senior Computer Systems Analyst, Loblaw Companies Limited, discusses how concise usability and user requirements are critical in determining the success of a product or software application.
A software application or product can be easily adaptable, effortless, and accessible, but if it fails to meet user requirements then it contains absolutely no value. More businesses are realising that the success of a product or software application is determined by how accurately it meets usability and user requirements. A staggering 70% of software projects fail due to poor requirements, costing US$45 billion annually in rework spend.1
AMBIGUITY AND USER REQUIREMENTS Too frequently, user requirements are ambiguous and open to interpretation, which often leads the IT team to unintentionally misconstrue them, causing requirement errors to fall through the software lifecycle and grow into defects.
This is disastrous because these defects surface after implementation, when costs are 10 times greater to find and resolve the same defect. It is therefore imperative to elicit and define ambiguous requirements concisely, with meticulously defined acceptance criteria, so that expected results are transparent, with no unexpected surprises after deployment. Businesses are steadily relying on the expertise of requirements subject matter experts (SMEs) to lead them to product success, by eliciting and validating unambiguous requirements by means of user acceptance testing (UAT). UAT is a critical phase of waterfall and agile projects. If carried out competently, it alerts the IT team to vulnerable gaps and areas that fail to meet the requirements of the users. If UAT is postponed or mismanaged, defects become extremely expensive and burdensome to fix and can escalate to critical risk, fast.
Table 1. Non‑functional requirements are found in the following requirement categories.

Interface – Ability to send the approved purchase order information from the ordering system to the warehouse management system. Some of the questions that need to be answered are: Does the application need to interface with other applications? Are there any systems or processes that depend on the solution? Interface elements should be clear and easily understood by the user.

Performance – Depending on the nature of the solution in the desired state, the metrics used to describe performance requirements may differ. If the solution is user‑driven, then performance requirements on online response times should be described. If the solution represents system‑to‑system interactions, then the average number of transactions per day may be more appropriate. The requirements need to consider both peak and off‑peak hours, and need to be defined for both system‑to‑system and user interactions.

Accessibility – The solution must include layout functionality to support visually and hearing impaired users, or the solution must support full functionality without a mouse. Does the solution have to comply with physical restrictions? Are there language conditions?

SOPHIA SEGAL, SENIOR COMPUTER SYSTEMS ANALYST, LOBLAW COMPANIES LIMITED
Sophia has over 14 years’ consulting expertise, leading complex software projects from translation of user requirements into systems design. She has taught domestically and internationally to corporate clients on good practices for developing user requirements and is a renowned speaker on requirements management and business‑critical risk.
USABILITY AND USER REQUIREMENTS IN SOFTWARE DEVELOPMENT A user requirement can signify: “what does the user need the software or product to be able to perform?” Usability can signify: “can this need be implemented in a user friendly manner?” Both user requirements and usability closely intersect in the real world, but it is evident that they both have the main objective of meeting users’ requirements.
USER REQUIREMENTS (OR FEATURES) ARE MADE UP OF FUNCTIONAL AND NON‑FUNCTIONAL REQUIREMENTS
The only way to understand what makes a product easy to use is by eliciting users' requirements and answering questions
such as: How will users work with the product? What are high‑priority features for users? Which functions are important to the users accomplishing their task? How should these features be structured? What are we trying to achieve with this screen functionality? Once you have a list of requirements or features, describe how they should function in order to support a user’s workflow, and describe the steps that users take to complete a task. It is valuable to determine how one feature will interact with other features. Sometimes the interaction of two or more features can create new design problems related to functionality. These are all functional requirements.
TESTING USER REQUIREMENTS UAT is a disciplined testing practice that determines if requirements are performing
to the users' expectations. The user requirements must satisfy pre‑defined acceptance criteria before the user will sign off and accept the product. Requirements SMEs play a very important role in managing UAT by designing test cases from concise requirements, which form the foundation for test scenarios. Each test scenario has pre‑defined acceptance criteria providing clear, measurable benchmarks that the requirements must satisfy. Each test scenario simulates an aspect of the product's functionality by capturing, in sequence, all the steps the user executes
in UAT, validating whether the software application or product functions as intended and, depending on the results returned, passing or failing the test case. These test cases can be very comprehensive, with step‑by‑step instructions (such as eye‑tracking or hand‑movement ergonomic verification), or they can be gathered more broadly at a higher user level. If a test case fails, a defect is raised. Defects are captured in UAT because the product or software application is tested and examined from a user perspective, derived exclusively from user requirements. Any defects not resolved in UAT are treated as risks. User feedback from UAT leads to refinements of the product or software under test. This process of refinement, combined with a competently managed UAT, is critical in validating whether a system or product has accurately satisfied its users' requirements.
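The pass/fail flow described above can be sketched in a few lines of code. This is a minimal illustration under stated assumptions, not any specific tool's API: the `TestCase` and `Defect` structures, the step names and the acceptance criteria are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """A UAT test case: ordered steps plus a measurable acceptance criterion."""
    name: str
    steps: list            # step descriptions, executed in sequence by the user
    expected: str          # pre-defined acceptance criterion
    actual: str = ""       # observed result recorded during UAT

    def passed(self) -> bool:
        return self.actual == self.expected

@dataclass
class Defect:
    """Raised when a test case fails; unresolved defects are treated as risks."""
    case: str
    expected: str
    actual: str

def run_uat(cases):
    """Evaluate each case; any failure produces a defect record."""
    return [Defect(c.name, c.expected, c.actual) for c in cases if not c.passed()]

signup = TestCase(
    name="New user sign-up",
    steps=["Open sign-up page", "Enter details", "Submit form"],
    expected="Confirmation email sent",
    actual="Confirmation email sent",
)
payout = TestCase(
    name="Withdraw winnings",
    steps=["Open wallet", "Request withdrawal"],
    expected="Funds queued within 24h",
    actual="Error 500",
)
print([d.case for d in run_uat([signup, payout])])  # → ['Withdraw winnings']
```

The point of the sketch is the shape of the artefacts: each scenario carries its acceptance criterion with it, so pass/fail is a mechanical comparison rather than a judgment call at sign-off time.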
CONCLUSION

From my experience as a Senior Computer Systems Analyst and Team Lead, even with an experienced development team and plenty of resources, a project may still fail if the product or software application fails to meet user expectations. UAT, led by experts with the right knowledge, is critical to accurately validating whether a product or software application meets its users' requirements. The practice uncovers critical functionality gaps and system vulnerabilities, while mitigating risk and reducing extremely high rework costs.
TESTING A FANTASY SPORTS PLATFORM

Georg Hansbauer, Founder and Chief Technical Officer of Testbirds, shares a UX case study from Fanto, a company that provides a free, daily fantasy sports (DFS) online platform and app for gambling.
Fanto is a company that focuses primarily on simplicity. This is reflected in the way its entire fantasy sports platform functions. From a quick and easy sign-up process to a liberal player-picking system that allows users to jump into the world of fantasy sports without extensive research and planning, the company strives to provide a gambling experience that can be enjoyed by all manner of people. To achieve this, however, Fanto had to address and overcome a number of challenges during the development process:
ORGANISATIONAL BLINDNESS
Organisational blindness is a common issue that arises during all stages of development. Often, developers approach and use their technology in a way that differs from their end users. This results in an undesirable situation where end users find a digital product unintuitive and difficult to use, while its creators are unable to pinpoint the cause of the negative feedback. To counter this, comprehensive, target-group-relevant testing is a necessity.
DEVICE DIVERSIFICATION
Technology today is extremely fragmented. Developers have to ensure that their digital products and software function on all manner of devices, from different PC operating systems such as Windows and macOS, to all versions of the iPhone and over 24,000 unique Android devices. For an internal team, it is generally impossible to own all the devices necessary for extensive testing. Yet extensive testing across a large range of devices is a must to compete in an increasingly congested marketplace.
THE TESTING PROCESS

Testbirds' testing process focuses on providing clients with end results that promise to take the quality of their technology to the next level. To achieve this, clients can choose a group of testers drawn from their target audience out of a crowd of over 200,000, with more than 60 demographic criteria to help narrow down their specific end users. In addition, to address the issue of device diversification, Testbirds offers its clients access to over 450,000 devices. The proprietary testing platform, the 'Nest', allows for the creation, execution and analysis of 21 different crowd- and cloud-based testing solutions.

Fanto had reached a stage of development where it had conducted a number of internal tests and continuously applied fixes in an agile manner to ensure that its DFS platform was functioning well and, from an internal perspective, had a high level of usability. However, to gain further insight into the minds of its users, strengthen investor and internal confidence through comprehensive and extensive testing, and ensure the very best for customers, the fantasy sports company chose to invest in a UX study with Testbirds.
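Narrowing a large crowd down to a project's target group with demographic criteria can be pictured as a simple filter. To be clear, the field names, criteria and tester records below are invented for illustration and bear no relation to Testbirds' actual data model.

```python
# Hypothetical tester pool; fields are illustrative, not Testbirds' schema.
testers = [
    {"id": 1, "age": 24, "country": "UK", "gambling_experience": True,
     "devices": ["iPhone 7"]},
    {"id": 2, "age": 52, "country": "UK", "gambling_experience": True,
     "devices": ["Windows PC"]},
    {"id": 3, "age": 31, "country": "DE", "gambling_experience": False,
     "devices": ["Android tablet"]},
    {"id": 4, "age": 29, "country": "UK", "gambling_experience": True,
     "devices": ["Android phone", "Mac"]},
]

# Criteria in the spirit of the Fanto project described later in the article.
criteria = {
    "age_range": (18, 40),
    "country": "UK",
    "gambling_experience": True,
}

def matches(tester, c):
    """True if the tester satisfies every demographic criterion."""
    lo, hi = c["age_range"]
    return (lo <= tester["age"] <= hi
            and tester["country"] == c["country"]
            and tester["gambling_experience"] == c["gambling_experience"])

selected = [t["id"] for t in testers if matches(t, criteria)]
print(selected)  # → [1, 4]
```

In a real crowdtesting platform the same idea is applied over dozens of criteria at once, which is what makes a 200,000-strong crowd usable for a narrowly targeted study.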
TEST SETUP

Together with one of Testbirds' experienced project managers, Fanto handpicked a crowd of 38 testers that reflected the players on its platform. All testers were asked to have previous online gambling experience,
GEORG HANSBAUER FOUNDER AND CTO TESTBIRDS
Georg is responsible for the development of services and IT infrastructure as well as finance and quality assurance at Testbirds. He has gathered extensive experience in the field of enterprise testing – from automated tests for complete IT service desks to load testing – and has been in charge of various IT projects for international corporate groups.
Figure 1. Fanto’s dashboard.
with a particular interest in testers who had previous knowledge of daily fantasy sports. Testers had to be between 18 and 40 years old and reside in the United Kingdom. Finally, Fanto required both female and male testers to take part in the project to ensure it gained insight into the desires and necessities of all users. As Fanto is looking to succeed as an online platform as well as an app, it was extremely important to ensure that both are intuitive to use and offer a great user experience on multiple devices. The testing process therefore took place over a large number of devices: iPhones, Android tablets and smartphones, and Windows and Mac computers, as well as a number of different operating system versions and browsers.
This allowed Fanto to optimise the DFS platform across a huge number of devices, browsers and operating systems. The testing project comprised 10 use cases consisting of qualitative and quantitative questions. To receive accurate results, it is imperative that each question is formulated in a manner that guides testers in the right direction without influencing the results. This was achieved by carefully curating a specific list of questions, which was delivered to one of Testbirds' project managers. Together, the use cases were optimised and improved until they could ensure a successful testing phase. This in turn led to an efficient and productive testing phase that yielded a number of important results.
RESULTS

The results received from the UX study were varied, especially considering that the app and website are currently two different products at two different stages of development. Fanto's expectation was that the website was already quite far along, and the feedback reflected this. Quantitative results
Figure 2. Fanto’s Add a Player page.
for the online platform generally ranged between 6 and 10 (with 10 being the highest), with the majority on the higher side (a solid 8 or 9). The app, on the other hand, is a much newer endeavour for Fanto. It often received lower scores, which reinforced expectations that the app was still very much in development and far from the standard Fanto wishes to provide after further rounds of optimisation. The qualitative results for the website did not reveal any huge flaws that would be debilitating for the company, but rather a number of smaller issues, often deemed "frustrating" by testers. These little issues were equally important to Fanto, however, as added up they can be extremely harmful. Issues with navigation, the sign-up process and the general layout of the online platform can all lower the experience for the end user. For a company that provides an online gambling experience and hopes to cultivate a high level of simplicity, it is of the utmost importance that these issues are fixed. The app, being much newer than the website, received exactly the harsher feedback that Fanto expected. In fact, one tester went as far as to describe it as "being in early beta stage". After analysing the results, Fanto planned the next steps by dividing the issues into three categories based on priority:
"mandatory", "important" and "nice to have". The development team then began implementing these changes. The "mandatory" fixes were to be applied within a week, the "important" fixes two weeks after, and the "nice to have" issues, those with very little effect, if any, on user experience, would be dealt with in the future as new, more important issues continually arise. In general, Fanto addressed all the vital issues found within six weeks of receiving the results.
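Triage of findings into "mandatory", "important" and "nice to have" buckets can be sketched as a simple thresholding step. The issue names, severity scores and thresholds below are invented for illustration; the article does not describe how Fanto actually scored its findings.

```python
# Hypothetical severity scores (1-10) attached to issues from a UX study;
# thresholds are illustrative, not Fanto's actual triage rules.
issues = [
    ("Broken sign-up validation", 9),
    ("Confusing navigation menu", 7),
    ("Inconsistent button colours", 3),
    ("Player list slow to load", 8),
]

def triage(issues, mandatory_at=8, important_at=5):
    """Bucket issues by severity into the three priority categories."""
    buckets = {"mandatory": [], "important": [], "nice to have": []}
    for name, severity in issues:
        if severity >= mandatory_at:
            buckets["mandatory"].append(name)
        elif severity >= important_at:
            buckets["important"].append(name)
        else:
            buckets["nice to have"].append(name)
    return buckets

plan = triage(issues)
print(plan["mandatory"])  # → ['Broken sign-up validation', 'Player list slow to load']
```

The value of an explicit scheme like this is that each bucket can carry its own deadline, exactly as Fanto did with its one-week and two-week targets.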
CONCLUSION

Fanto invested in UX crowdtesting to ensure it can dominate the UK marketplace by providing a simple and entertaining online platform and app for daily fantasy sports gambling. After choosing a select group of testers that reflected the target group, a number of detailed use cases were created. Fanto then executed the project and communicated with testers through Testbirds' testing platform. After acting on the diverse feedback gathered, Fanto and its investors have gained more confidence in the product and can ensure that end users enjoy a DFS experience like no other.