Part 3: Who Owns Continuous Testing?
Who Owns Continuous Testing?
BY LISA MORGAN

THE LAST OF THREE PARTS

Continuous Testing (CT) has been the missing piece of the Continuous Integration/Continuous Delivery (CI/CD) movement, but that’s changing out of necessity. While organizations say they’re “delivering value to customers” at an ever-accelerating pace, customers may disagree with the definition of “value” when software quality doesn’t keep pace. In fact, software quality may actually decrease as delivery cycles shorten.

There are good reasons why more organizations start with minimum viable products (MVPs) and enhance them over time. For one thing, the dizzying pace of business and technology innovation means that companies must get to market fast. Unlike yesteryear, there just isn’t time to strive for perfection over a long period of time without running the risk of missing the target completely. Speed has become a competitive advantage, but not at the expense of software quality.

One school of thought is, “If my users know I’m delivering 100 updates a week, they’ll be less forgiving of that which doesn’t work the first time.” Tell that to the irate customer who can’t get loyalty card benefits, play a game or complete an ecommerce purchase. Granted, the fix may be delivered swiftly, albeit perhaps not swiftly enough to prevent customer churn, because there are always other alternatives and better experiences to be had.

Interestingly, some cultural issues are contributing to the problem: namely, how testers are viewed and who’s responsible for testing what. Arguably, everyone is responsible for quality in an Agile, DevOps and CI/CD scenario, and yet the effect of an accelerated delivery cycle without an equal focus on quality is that buggy software is released faster.

“When you start to think of testing as a discipline, rather than a role, everyone realizes that testing is part of their role even if testing is not in their title,” said Sean Kenefick, VP, analyst at Gartner. “That’s a big cultural change and it’s been a problem for a lot of organizations. When you define continuous testing, you should define it as four things: test early, test fast, test often, test after.”
Making testers first-class citizens

CT isn’t going to deliver the intended results if testers are viewed and treated as second-class citizens. Historically, the view has been that smart people code and other people test, which has never been a productive mindset and is antiquated in light of Agile, DevOps and CI/CD.

“Testing is a noble profession. It has to be valued, which has gone by the wayside,” said Theresa Lanowitz, founder and head analyst at market research firm Voke. “Once Mercury Interactive went under, there was no vendor standing up for testers saying they’re a valuable part of the software engineering process.”

According to Rex Black, president of testing training and consulting firm RBCS, some organizations have tried to get away from having any sort of independent testing team by having developers automate tests. “That’s something that often sounds better in theory than it is in practice, especially if the developers who are to go and create automated tests are only trained in how to use tools and they don’t get any training in how to design tests,” said Black.

Meanwhile, testers have been told that they need to code. “The industry has said to testers, ‘If you still want to have any relevance, you have to become more technical, you have to code in Selenium,’” said Voke’s Lanowitz. “Testers are writing Selenium scripts that are difficult to write, hard to maintain, brittle and bloated, and they’re still doing most of the testing at the interface level, so there’s not a healthy mix of tests, and we’ve largely ignored the non-functional requirements of performance and security.”

Fundamentally, organizations should have a philosophy about quality and satisfying customers that not only aligns with their customers’ expectations but is also in line with the brand image they’re attempting to create or maintain. They should look to testers to lead the CT effort.

“So many organizations say we’re getting rid of our testers, or our testers aren’t playing as much of a role as they have before. Quality should be front and center. We’ve moved away from it, but I think we’re moving back to it because software has become so defect-riddled over the past several years,” said Lanowitz. “You need to have healthy respect for the people that do testing. Quality has to be built in and you have to make it everyone’s job.”
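Lanowitz’s warning about an unhealthy mix of tests can be checked mechanically. As a minimal sketch (the layer labels, test names and 50-percent threshold below are invented for illustration, not drawn from any vendor tool), a team could audit how much of its suite lives at the interface level:

```python
from collections import Counter

def audit_test_mix(tests):
    """Count tests per layer and flag a suite that leans too heavily
    on interface-level (UI) scripts. The 50% threshold is illustrative."""
    counts = Counter(layer for _name, layer in tests)
    total = sum(counts.values())
    ui_share = counts.get("ui", 0) / total if total else 0.0
    return {"counts": dict(counts),
            "ui_share": round(ui_share, 2),
            "unhealthy_mix": ui_share > 0.5}

suite = [("test_login_flow", "ui"), ("test_checkout_flow", "ui"),
         ("test_search_flow", "ui"), ("test_cart_total", "unit"),
         ("test_orders_api", "api")]
report = audit_test_mix(suite)   # 3 of 5 tests are UI scripts
```

A suite skewed like this one would prompt exactly the rebalancing toward unit, API, performance and security tests that Lanowitz describes.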
The Shift-Left dilemma

The “test early and often” mantra predates the turn of the millennium. Since that time, the shift-left movement has emerged, encouraging even more types of testing earlier based on the same old reasoning: it’s easier and cheaper to keep bugs out of the later stages of the software development lifecycle (SDLC).

More fundamentally, software quality matters. Web and mobile development stressed the need to focus on user experience (UX), which hinges on software quality. However, the focus on UX quality can concentrate too much on the UI level when other types of testing are also important. The belief now is that developers should do more than just unit testing, including API, performance and security testing. They may also be the ones writing test automation scripts if testers haven’t learned to write Selenium code, for example.

However, the shift-left hype is rosier than the reality because many developers aren’t even doing unit testing, so how can they be expected to do other types of testing? According to Diego Lo Giudice, VP and principal analyst at Forrester Research, only 53 percent of developers are doing unit testing today. Voke’s numbers are even more sobering at 67 percent.

“When defects are not remediated properly, that costs you time, money, brand issues and customers,” said Voke’s Lanowitz. “Defects are leaking from sprint to sprint and phase to phase.”

Forrester’s Lo Giudice said developers are spending about one hour per day testing, which isn’t enough. “You need to give them time,” said Lo Giudice. “If you’re doing unit testing, development time almost doubles. The only way to realize what’s going on is to start using tools that make that visible.”

Forrester recognizes three different types of testers: business testers, technical testers and developers. Business testers tend to focus on user acceptance testing and have the fewest tools available. Technical testers use high-level scripting languages or graphical tools and do the lion’s share of functional testing. They don’t necessarily write Java code to automate tests, which the developers might be doing at the functional testing level. Performance tests have been the domain of performance engineers.

“This is a joint venture among product owners, business testers, technical testers, and developers,” said Lo Giudice. “How responsive the UI needs to be is a business requirement. The product owner is incented to give feedback on the performance he’s requiring. The testers might help build a baseline or interact with a performance engineering test center to identify issues that, if not tested early on, become very costly when there is end-to-end performance testing.”

Providing a great user experience isn’t just about high performance and slick UIs. Security testing should also be table stakes, and not just in regulated industries, but there’s more to it than application security testing.

“If an organization approaches security purely from an application security point of view, they’re going to fail because that misses two other important elements, which are service security and infrastructure/network security,” said RBCS’ Black. “You have to have what security professionals call ‘defense in depth,’ which means hardening all the different layers.”

Then there are microservices applications and other loosely coupled architectures that require testing at more than one level.

“When you’re dealing with a lot of contracts — interface contracts, service levels, authentications — you have to make sure your object works in isolation with the contracts, but somebody somewhere has to have the bigger picture [of] business processes that align a lot of different services made by a lot of different teams,” said Gartner’s Kenefick. Developers are the first line of defense against bugs, in other words.
The impact of shift left on testers

Assuming more types of testing are shifting left in an organization, where does that leave testers? The division of labor is obvious in a waterfall scenario in which developers throw code over the wall and testers find bugs. With shift left, the division of labor evolves.

“Shift left means making sure that you not only have ample and powerful upstream left-hand side defect filters but also good downstream right-hand defect filters,” said RBCS’s Black.

Shifting left shouldn’t diminish the value of a tester; it should elevate it in two ways: by reducing the number of bugs that flow from a developer’s desk and by enabling testers to spend more time improving testing-related processes.

“As a tester, I might make the rest of the team more sensitive to what might need to be tested and why. I still need that skill, that mindset, that approach, the curiosity, and that gut feel of where a problem or a technical issue might be hiding, and discovering it,” said Forrester’s Lo Giudice. “We need automation and work done in parallel from the beginning, and not test as a stage later on.”
BY LISA MORGAN

Testing has always required tools to be effective, but the tools continue to evolve, with many of them becoming faster, more intelligent and easier to use. Continuous Testing (CT) necessarily recognizes the importance of testing throughout the software delivery life cycle, and given the rapid pace of CT, tests need to run in parallel with the help of automation. That does not mean, however, that all tests must be automated.

The nature of “tools” is evolving in the modern contexts of data analytics, AI and machine learning. Nearly all types of tools, testing or otherwise, include analytics now, but tool-specific analytics only provide a narrow form of value when used in a siloed fashion. CT is part of an end-to-end continuous process, so success metrics need to be viewed in that context as well. AI and machine learning are the latest wave of tool capabilities, which some vendors overstate. Those capabilities are seen as enablers of predictive defect prevention, better code coverage and test coverage, more effective testing strategies and more effective testing strategy execution.
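The point about running tests in parallel can be sketched in a few lines. Assuming a set of independent automated checks (the smoke-test names below are invented), a thread pool runs them concurrently, so wall-clock time approaches the slowest single test rather than the sum of all of them:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_test(name):
    """Stand-in for one independent automated check."""
    time.sleep(0.1)          # simulated test work
    return (name, "pass")

tests = [f"smoke_{i}" for i in range(8)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(run_test, tests))
elapsed = time.perf_counter() - start
# Eight 0.1-second tests complete in roughly 0.1 s of wall time, not 0.8 s.
```

The same principle is why CT pipelines fan test jobs out across agents or cloud device farms instead of running suites serially.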
Analytics and AI can help

Test metrics aren’t a new concept, although with the data analytics capabilities modern tools include, there’s a lot more that can be measured and optimized beyond code coverage.

“You need to understand your code coverage as well as your test coverage. You need to understand what percentage of your APIs are actually tested and whether they’re completely tested or not, because those APIs are being used in other applications as well,” said Theresa Lanowitz, founder and head analyst at market research firm Voke. “The confidence level is important.”

Rex Black, president of testing training and consulting firm RBCS, said some of his clients have been perplexed by results that should indicate success when code quality still isn’t what it should be.

“One thing that happens is sometimes [there is] a unidimensional view of coverage, and I’ve seen this with clients where they say, ‘How could we possibly be missing defects when we’re achieving 100% statement coverage?’ and I have to explain you’re testing the code that’s there, but how about the code that should be there and isn’t?” said Black.

Similarly, there may be a big-screen dashboard indicating that all automated tests have passed, so the assumption is all is good, but the dashboard is only reporting on the tests that have run, not what should be covered but hasn’t been.

Forrester’s Diego Lo Giudice referenced a “continuous testing mobius loop” in which data is generated at each step that can be used to do “smarter things,” like deciding what to test next.

“If you start using AI or machine learning, you can start to predict where the software may be more buggy or less buggy, or you can predict bugs and prevent them rather than finding the bugs and validating or verifying them,” said Lo Giudice. “On the other hand, it’s part of a broader picture that we call ‘value stream management,’ where each type of stakeholder in the whole end-to-end delivery process has a view.”

Quality is one of the views. Other views enable a product owner to understand the costs of achieving a quality level, what has been released into production, how long it took and the value it’s generating.

Sean Kenefick, VP, analyst at Gartner, said he’s working on a project that involves looking at AI and its relationship to software testing. Importantly, there are two ways to view that topic. One is testing systems that incorporate AI. The other is using AI as part of the testing effort.

“I think AIs are going to have quite a significant impact on automated testing because they’re going to allow us to solve some really thorny problems,” said Kenefick. “Some of my clients are in the video game space where, unlike an insurance package or a banking package that have right answers determined by banking regulations or GAAP, games are disconnected from the real world.”

Traditional automation tools have done a poor job of bridging the gap between fictional scenarios and the real-life expectations of humans, such as hair flapping in the wind. AI enhancement could help. Automated testing is ripe for AI enhancement from a number of perspectives, such as identifying tests that should be automated and suggesting new types of tests, but again, even with AI it may not be possible or even wise to automate all types of tests.

“We can automate functional testing at multiple levels to some extent, though there are some validation tests which may not be so easy to automate,” said Black. “At some point, we’ll have AIs that can do ethical hacker kinds of tests, but we don’t have that now, so if you want to do a penetration test, you need a human ethical hacker to do that.”

Localization testing tools are using machine learning to test translations. They’re not perfect yet, nor is Google Translate, which is available via an API, so a human is still needed in the loop. Humans also need to be involved in accessibility testing and portability testing to some extent.

Drive higher value with a test strategy

The inclusion of analytics, AI and machine learning in tools is arguably “cool,” but the outcomes they help enable, and the effectiveness of testing generally, can be improved by having an overarching test strategy.

“You need to start with what do we need to test, and if you let the test automation determine what your test coverage is going to be, that’s likely to make you sad,” said Black. “I made that mistake when I was an inexperienced test manager and it’s not a good thing. Identify all the things you need to cover and then figure out the best way to cover those things.”

There is a temptation to go to tools first and think later, when the reverse yields better results. Starting with tools results in a test strategy by default rather than a test strategy by design.

“I think many people would say [that having a testing strategy] is an old-school approach, but it’s the right approach because if you’re just going with tool A or tool B, you’re trusting your approach to that tool and that tool may not provide what you need,” said Voke’s Lanowitz. “I think you need to take a step back and decide what you’re going to do.”

“This is a very big problem. Software engineering is notoriously fad-driven and hype-driven,” said RBCS’s Black. “I’ve worked with a number of clients who were heavily into automation and if you look at what they were doing, there was no strategy. I ask questions that are pretty obvious, like, ‘What are the objectives you’re trying to accomplish? For each one of those objectives, show me how you’re measuring testing and efficiency, show me the business case for your automation. What’s your ROI?’ Without a business case, it’s not a tool, it’s a toy.”

Ultimately, the testing strategy and execution need to tie together in a way that aligns with business objectives, but more fundamentally, organizations have to cultivate a culture of quality in the first place if they want their CT efforts to be stable.

“Understand that nothing we’re doing can be 100 percent. What we’re really doing is trying to minimize our risk as much as possible and mitigate for things we weren’t expecting,” said RBCS’s Black. “Continuous delivery is a process that lives forever. As long as the product is alive, we’re testing it.”

CT-Related Tools

Diego Lo Giudice, VP and principal analyst at Forrester Research, outlined some of the testing tools needed for a CT effort, included below. The list is merely representative as opposed to exhaustive:
• Planning – JIRA
• Version control – GitHub
• CI – Jenkins
• Unit testing – JUnit, Microsoft unit test framework, NUnit, Parasoft C/C++test
• Functional testing – Micro Focus UFT, TestCraft
• API testing – Parasoft SOAtest, SoapUI
• UI testing – Applitools, Ranorex Studio
• Test suites – SmartBear Zephyr, Telerik Test Studio
• Automated testing (including automated test suites) – Appvance, IBM Rational Functional Tester, LEAPWORK, Sauce Labs, Selenium, SmartBear TestComplete, SOASTA TouchTest, Micro Focus Borland Silk Test
• CT – Tricentis Tosca
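Black’s 100-percent-statement-coverage trap is easy to reproduce. In this invented sketch, two checks execute every statement of a discount function, yet the real defect is the validation branch that was never written:

```python
def apply_discount(price, percent):
    """Apply a percentage discount. Note what's missing: nothing here
    rejects a negative result or a percent greater than 100."""
    return price - price * percent / 100

# These two checks execute 100% of the statements above...
assert apply_discount(100, 10) == 90.0
assert apply_discount(50, 0) == 50.0

# ...but the defect lives in the code that should be there and isn't:
# an oversized "discount" produces a negative price unchallenged.
suspicious = apply_discount(100, 150)
```

A coverage tool reports only on the code that exists; it says nothing about the missing guard, which is why coverage needs to be one signal among several rather than a pass/fail verdict.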
Part 2: Testing at Every Step
Continuous Testing at every step

BY LISA MORGAN

Continuous integration (CI), continuous testing (CT) and continuous delivery (CD) should go hand-in-hand, but CT is still missing from the CI/CD workflow in most organizations. As a result, software teams eventually reach an impasse when they attempt to accelerate release cycles further with CI/CD. What they need, from both a mindset and process standpoint, is a continuous, end-to-end workflow that includes CT.

While CT requires test automation in order to meet time-to-market mandates, the two are not synonymous. A common misconception is that CT means automating every test, which isn’t necessarily practical or prudent. Instead, the decision to automate tests should be viewed from a number of perspectives, including time and cost savings.
How to set up continuous testing

Like CI, CD, DevOps and Agile, the purpose of CT is to accelerate the release of quality software. To enable a continuous end-to-end workflow, one should understand how CT fits into the CI/CD pipeline and how it can be used to drive higher levels of efficiency and effectiveness.

“The key thing is prioritizing,” said Mush Honda, VP of testing at software development, testing services and consulting company KMS Technology. “If you are in a state where you don’t have a live system, it’s easier to go into a mindset of automation first. I still believe not everything can be automated in most cases. For those things that you are trying to migrate off of manual testing and add a component of automated testing with a system that’s already live or near going live, I would attack it with business priorities in mind.”

Automated testing should occur often enough to avoid system disruption and ensure that business-critical functionality is not adversely impacted. To prioritize test automation, consider the business severity of defects, manual tests that take a lot of time to set up, and whether the tests that have already been automated still make sense. Also, make a point of understanding what the definition of CT is in your organization so you can set goals accordingly.
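Those prioritization criteria can be turned into a rough scoring pass. The weights and candidate tests below are invented for illustration; the point is simply to rank automation candidates by business severity and manual setup cost, as Honda suggests:

```python
def automation_priority(candidate):
    """Score an automation candidate: defect severity weighs heaviest,
    with lengthy manual setup time as a secondary factor (the weights
    are illustrative, not prescriptive)."""
    return candidate["severity"] * 2 + candidate["manual_minutes"] / 10

candidates = [
    {"name": "loyalty card redemption", "severity": 5, "manual_minutes": 45},
    {"name": "profile avatar upload",   "severity": 1, "manual_minutes": 5},
    {"name": "checkout payment",        "severity": 5, "manual_minutes": 20},
]
ranked = sorted(candidates, key=automation_priority, reverse=True)
# Business-critical, setup-heavy tests rise to the top of the automation queue.
```

The same scoring pass, rerun periodically, also answers Honda’s third question: whether tests that were automated long ago still earn their maintenance cost.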
“You need to understand what you’re going to achieve [by] doing CT in measurable terms and how that translates to your application or software project,” said Manish Mathuria, CTO and co-founder of digital strategy and services company Infostretch. “Beyond that, it depends on having the right strategy for automation. Automation is key, so [you need] good buy-in on what layers you’re going to automate, the quality gates you’re going to put on each of these types of tests for static analysis, what you are going to stop at for unit tests, what kind of pass rates you’re going to achieve. It goes upstream from there.”

Each type of automated test should be well-planned. The rest is engineering, and the hard part may be getting everyone to buy into the continuous testing process.

“Continuous testing is designed to mature within your CI/CD process,” said Nancy Kastl, executive director of testing at digital transformation agency SPR. “Without having testing as part of the build, integrate, deploy [process], all you’re doing is deploying potentially bad code quicker.”

The CT process spans from development to deployment, including:
• Unit tests that ensure a piece of functionality works the way it is intended to work
• Integration tests that verify the pieces of code that collectively enable a piece of functionality are working as intended together
• Regression testing to ensure the new code doesn’t break what exists
• API testing to ensure that APIs meet expectations
• End-to-end tests that verify workflow
• Performance tests that ensure the code meets performance criteria
• Security testing to identify vulnerabilities
• Logging and monitoring to pinpoint errors occurring in production

Implementing CT may require adjusting internal testing processes to achieve the stated goals. For example, Lincoln Financial developers used to follow a waterfall methodology in which developers met with a business user or analyst to understand requirements. Then, the developer would write code and send it off for testing. The company now does Test-Driven Development (TDD), which reverses the order of testing and development: test scripts are written and automated based on a user story before code is written. In addition, acceptance testers have been placed in development.

“When the code passes the test, you know you’ve achieved the outcome of your user story,” said Michelle DeCarlo, senior VP of technology engineering and enterprise delivery practices at Lincoln Financial.
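The test-first sequence Lincoln Financial describes looks like this in miniature. The user story and premium function below are hypothetical; what matters is the order: the automated test exists before the code it exercises.

```python
# Hypothetical user story: "As a policyholder, I see my premium
# quoted in whole dollars." The test is written first...
def test_premium_quoted_in_whole_dollars():
    assert quote_premium(100, 1.257) == 126

# ...and the implementation is written second, to make the test pass.
def quote_premium(base, risk_factor):
    return round(base * risk_factor)

test_premium_quoted_in_whole_dollars()   # a passing test marks the story done
```

Running the test before `quote_premium` exists fails with a NameError, which is exactly the “red” step of the TDD cycle; writing the minimal implementation turns it green.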
Managing change

When code changes, the associated tests may fail. According to SPR’s Kastl, that outcome should not happen in a CT process, since developers and testers should be working together from day one.

“Communication and collaboration are really key when it comes to managing changes,” said Kastl. “As part of Agile methods, your team includes software engineers and test engineers, and the test engineers need to know equally what is being changed by the software engineers and then make the changes at the same time your software engineers are changing the application.”

To improve testing efficiency, Lincoln Financial uses tools to isolate software changes and has quality checks built into its process. The quality checks are performed with different types of resources to lessen the likelihood that a change may go unnoticed.

“We try to isolate when an asset changes [so we can] make sure that we’re testing for those changes. Quite frankly, nothing is foolproof,” said Lincoln Financial’s DeCarlo. “After we’ve released to production, we also do sampling and examine the code as it works in production.”

While it’s probably safe to say no organization has achieved a zero-defect rate in production, Lincoln Financial tries to minimize issues by performing different types of scans and by listening to customer feedback via various channels, including social media, so that feedback can be integrated into the delivery stream.

Generally speaking, it’s important to understand what these software changes impact so the relevant tests can be adjusted accordingly. If a traditional automation script fails, the defect may be traceable back to the build process. If that’s the case, one can determine what has changed and what specific code caused the failure. Nevertheless, it’s also important to have confidence in the test scripts themselves.

“If you don’t have high confidence in the scripts that are traditionally run, that sort of spirals into the question of what you should do next,” said KMS Technology’s Honda. “You don’t know whether it was a problem with the way the script was written or the data it was using, or if it was genuinely a point of failure. Being able to have high confidence in the script I created is what becomes a key component of how I know something did go wrong with the system.”
Issue tracking tools like Jira help because they provide traceability from the user story on. Without that, it’s harder to pinpoint exactly what went wrong.

Some tools now use AI to enable model-driven testing. Specifically, the AI instance analyzes application code and then automatically generates automated tests. Such tools also use other data, such as the data that resides in other tools, to understand things like what happens in the software development process, where defects have arisen, and why tests have failed. Based on that information, the AI instance can predict and determine the risks and impacts of defects module by module.

“Model-based testing is essentially about not writing tests by a human being. What you do is create the model and let the model create tests for you, so when a change happens you are changing something a lot more upstream versus changing the underlying test cases,” said KMS Technology’s Honda. “Likewise, when a test is written and automated [by the AI instance], if certain GUI widgets change or my user interaction changes, since I did not automate the test in the first place, my AI-driven program would automatically try to define automated tests based on the manual test case. Predictive QA is more resilient to change, [which is important because] brittleness is the biggest challenge for continuous testing and automation.”
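Honda’s core idea, changing the model upstream instead of every test case, can be sketched without any AI at all. Here a hypothetical login dialog is modeled as a tiny state machine, and the test cases are generated from the model rather than written by hand:

```python
# A tiny model of a login dialog: state -> {action: next_state}.
# Tests are derived from the model, so a behavior change means editing
# the model once instead of rewriting individual test cases.
MODEL = {
    "logged_out": {"login_ok": "logged_in", "login_bad": "logged_out"},
    "logged_in":  {"logout": "logged_out"},
}

def generate_tests(model):
    """Enumerate every (state, action, expected_next_state) transition."""
    return [(state, action, nxt)
            for state, actions in model.items()
            for action, nxt in actions.items()]

def system_under_test(state, action):
    """Stand-in for the real application; here it simply obeys the model."""
    return MODEL[state][action]

cases = generate_tests(MODEL)
results = [system_under_test(s, a) == expected for s, a, expected in cases]
```

The AI-driven tools the article describes layer prediction on top of this pattern (deciding which generated cases are riskiest), but the resilience to change comes from the model-first structure itself.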
How to tell if your CT effort is succeeding

The general mandate is to get to market faster with higher quality software. CT is a means of doing that. In terms of speed, tests should be running within the timeframe necessary to keep pace with the CI/CD process, which tends to mean minutes or hours versus days. In terms of quality, CT identifies defects earlier in the life cycle, which minimizes the number of issues that make it into production. Another measure of CT success is a cultural one in which developers change their definition of “done.”
Automating tests when change is the norm

Continuous testing requires automated testing to help speed the CI/CD process. The trick is to constantly expedite the delivery of code in an era when software change is not only a constant, but a constant that continues to occur at an ever-accelerating rate.

In today’s competitive business environment, customers are won and lost based on code quality and the value the software provides. Rather than packing applications with a dizzying number of features (a mere fraction of which users actually utilize), the model has shifted to continuous improvement, which requires a much better understanding of customer expectations in real time, an unprecedented level of agility, and a means of ensuring software quality practices are both time-efficient and cost-effective despite software changes.

“You used to automate everything thinking you’re going to get an overall lift,” said Michelle DeCarlo, senior VP of technology engineering and enterprise delivery practices at Lincoln Financial. “While that was true, there’s also a maintenance cost that can break you because there’s the cost of keeping things current. Now [we have] a lot more precision upfront in the cycle to identify where we should automate and where we’re going to get that return.”

Rather than simply automating more tests because that’s what seems to facilitate a shift to CT, it’s wise to have a testing strategy that prioritizes tests and distinguishes between tests that should and should not be automated based on time savings and cost efficiency.

“Before people were implementing DevOps, [they] used to say if you needed a stable application you should have one round of manual testing before you could venture into automation. Once people started implementing DevOps, testing had to happen with development,” said Vishnu Nallani Chekravarthula, VP and head of innovation at software development and quality assurance consultancy Qentelli. “One of the approaches that we have found to be successful is writing the tests before you write the code, and then writing the code to ensure that the tests pass.”

While test-driven development (TDD) isn’t a new concept, it’s a common practice among organizations that have adopted CT. Whether TDD enables CT or the other way around depends on the unique starting point of an organization. With TDD and CT, automation isn’t an afterthought; it’s something that’s top of mind from the earliest stages of a project.
Adapting to constant change While applications have always been subject to change, change is no longer an event, it’s a constant. That’s why more organizations are going to market with a minimum viable project (MVP) and improving it or enhancing it over time based on customer behavior and feedback. Since development and delivery practices have had to evolve with the times, so must testing. Specifically, testing cycles must keep pace with CI/CD without increasing the risks of software failures. In addition, testers have to be involved in and have visibility into everything from the development of a user story to production. “You’re always able to analyze and do a sort of an impact analysis from user stories [so if] we change these areas or these features are changing, [you can come up with a] list of tests that we typically no longer need, that would have to be abated to reflect the new feature set,” said Mush Honda, VP of testing at software development, testing services and consulting KMS Technology. “So, it follows that the involvement and the engagement of the tester as part of the bigger team definitely needs to be a
core component.” While it’s always been a tester’s responsibility to understand the features and functionality of the software they’re testing, testers now have to understand what’s being built earlier in the life cycle, so tests can be written before the code or in parallel with it. The earlier-stage involvement saves time because testers have insight into what’s being built, the code can better align with the user story, and it’s more apparent what should be automated to drive better returns on investment.
Be careful what you automate
A common mistake is to focus automation efforts on the UI. The problem with that is the UI tends to change more often than the back end, which makes those automated tests brittle. The frequency of UI change tends to be driven by the business, because when they see what’s built, they realize they’d prefer a UI element change, such as moving the location of a button.

“If you have a car and if the car breaks down, you don’t just look at the steering wheel and dashboard, so unless you have tests and sensors at individual parts of the car, you can’t really tell why it has broken down. The same idea applies to software testing,” said Manish Mathuria, CTO and co-founder of digital strategy and services company Infostretch. “In order to write tests that are less brittle, you have to test it from the bottom up and then as things change, you have to change tests at the individual layers.”

Using a layered approach to testing, errors can be identified and addressed where they actually reside, which may not be at the UI level. A layered approach to testing also helps shift mindsets away from overreliance on automated UI tests.

“If you’re in a situation where you have the type of application that is driven by a lot of business needs and a lot is changing, from a technical perspective, you don’t want to automate at the UI level,” said Nancy Kastl, executive director of testing at digital transformation agency SPR. “Instead you want to automate at the unit level or the API services level.”

“Think of applying for a bank loan. In the loan origination process you go through screen after screen. [As a tester,] you don’t want to involve the whole workflow because if something changes in one part, your tests are going to have to change throughout,” said Kastl.

The concept is akin to building microservices applications that use small, self-contained pieces of code versus a long string of code. Like microservices, the small, automated tests can be assembled into a string of tests, yet a change to one small test does not necessarily require changes to all other tests in the string.

“We need to think like programmers because if something changes, I’ve got one script to change and everything else fits together,” said Neil Price-Jones, president of software testing and quality assurance consultancy NVP Testing.

However, test automation can only do so much. If change is the norm because of ad hoc development practices that aren’t aligned with the business’ expectations in the first place, then test automation will never work, according to SPR’s Kastl. Fix the way you develop software first, then you’ll be able to get test automation to work.

One sign of CT success is shifting from the delivery of code to the delivery of tested code.

“You need the cultural belief that developers can’t say something is done until it’s been tested. Another key success indicator is when all your testing is completed in the same Agile sprint,” said SPR’s Kastl. “It’s not saying ‘I’m going to do some testing in the sprint based on the amount of time I have so I’m going to automate regression in the next sprint.’ You should not be a sprint behind. The way to make sure in-sprint testing is being done as part of a CT process is developers are merging their code and it’s ready to test on an hourly or daily basis, so testers can do their work.”

For Infostretch’s Mathuria, the high-level indicator of CT success is data that proves a build or release is certified in an automated way. A lower-level indicator is that decisions are not being made about software promotion at any pre-defined level, such as this much functional testing is enough or that much security testing is enough. Instead, what qualifies as “enough” is determined by the CT process an organization has established.

“Only exceptions are managed by people and not the base level workflow,” said Mathuria. “Once you achieve that then you see the right value from continuous testing.”

And don’t forget metrics, because success needs to be measured. If speed is the goal, what kind of speed improvement are you trying to achieve? Define that and work backward to figure out what’s necessary to not only meet the delivery target but also be confident that the release is of an acceptable quality level.

“You also need to think about skill sets. Are they able to adopt the tools necessary or not? Do they understand automation or not? Do they understand the continuous testing strategy or not?” said Honda. “If you want to get to continuous anything, there has to be a timeline and a goal you have to measure up against, which ultimately defines whether you’re successful, not successful or facing roadblocks.”
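The automated certification Mathuria describes, where the pipeline decides on promotion and people handle only exceptions, can be sketched as a simple quality gate. The suite names, thresholds, and result data below are invented for the illustration; a real pipeline would pull them from its CI system.

```python
# Illustrative quality gate: promote a build only when every suite
# meets its pass-rate threshold. Thresholds here are hypothetical.
THRESHOLDS = {"unit": 1.0, "api": 0.98, "security": 0.95}

def pass_rate(results):
    """Fraction of tests in a suite that passed."""
    return results["passed"] / results["total"]

def certify(build_results):
    """Return (promote, failing_suites). The gate decides automatically;
    people step in only when a suite falls below its threshold."""
    failing = [suite for suite, required in THRESHOLDS.items()
               if pass_rate(build_results[suite]) < required]
    return (not failing, failing)

build = {
    "unit":     {"passed": 412, "total": 412},
    "api":      {"passed": 97,  "total": 99},   # 97.98% — below the 98% gate
    "security": {"passed": 40,  "total": 40},
}
promote, failing = certify(build)  # promotion blocked by the "api" suite
```

The point of the sketch is that “enough” testing is encoded in the gate itself, not decided ad hoc at each release.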
Why Continuous Testing Is So Important
BY LISA MORGAN
Organizations continue to modernize their software development and delivery practices to minimize the impact of business disruption and stay competitive. Many of them have adopted continuous integration (CI) and continuous delivery (CD), but continuous testing (CT) tends to be missing. When CT is absent, software delivery speed and code quality suffer. In short, CT is the missing link required to achieve an end-to-end continuous process.
THE FIRST OF THREE PARTS

“Most companies say they are Agile or want to be Agile, so they’re doing development at the speed of whatever Agile practice they’re using, but QA [gets] in the way,” said Manish Mathuria, CTO and founder of digital strategy and services company Infostretch. “If you want to test within the sprint’s boundary, certifying it and managing the business around it, continuous testing is the only way to go.”

Part of the problem has to do with the process-related designations the software industry has chosen, according to Nancy Kastl, executive director of testing at digital transformation agency SPR.

“DevOps should have been called DevTestOps [and] CI/CD should have been called CI/CT/CD,” said Kastl. “In order to achieve that accelerated theme, you need to do continuous testing in parallel with coding. Otherwise, you’re going to be deploying poor quality software faster.”

Although many types of testing have shifted left, CT has not yet become an integral part of an end-to-end, continuous process. When CT is added to CI and CD, companies see improvements in speed and quality.
Automation is key

Test automation is necessary for CT; however, the two are not synonymous. In fact, organizations should update their automated testing strategy to accommodate the end-to-end nature of continuous processes.

“Automated testing is a part of the CI/CD pipeline,” said Vishnu Nallani Chekravarthula, VP and head of innovation at software development and quality assurance consultancy Qentelli. “All the quality check gates should be automated to ensure there is no manual intervention for code promotion between [the] different stages.”

That’s not to say that manual testing is dead. The critical question is whether manual testing or automated testing is more efficient based on the situation.

“You have to do a cost/benefit analysis [because] the automation has to pay for itself versus ongoing manual execution of the script,” said Neil Price-Jones, president of software testing and QA consultancy NVP Testing. “The best thing I heard was
someone who said he could automate a test in twice the time it would take you to write it.”

CT is not about automating everything all the time because that’s impractical.

“Automated testing is often misunderstood as CT, but the key difference is that automated testing is not in-sprint most times [and] CT is always done in-sprint,” Qentelli’s Chekravarthula said.

Deciding what to automate and what not to automate depends on what’s critical and what’s not. Mush Honda, vice president of testing at KMS Technology, considers the business priority issues first.

“From a strategy perspective, you have to consider the ROI,” said Honda. “[During a] project I worked on, we realized that one area of the system was used only two percent of the total time, so a business decision was made that only three cases in that two percent can be the descriptors, so we automated those.”

In addition to doing a cost/benefit analysis, it’s also important to consider the risks, such as the business impact if a given functionality or capability were to fail.

“The whole philosophy between CI and CD is you want to have the code coming from multiple developers that will get integrated. You want to make sure that code works itself as well as its interdependencies,” said SPR’s Kastl.
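Price-Jones’s point that automation “has to pay for itself” can be made concrete with a rough break-even calculation. All of the numbers below are hypothetical, and real estimates would also weigh risk, not just hours.

```python
import math

def break_even_runs(build_cost_hours, maintain_cost_per_run,
                    manual_cost_per_run, automated_cost_per_run):
    """Number of runs before an automated test pays for itself.
    Returns None if automating this test never pays back."""
    saving_per_run = manual_cost_per_run - (automated_cost_per_run
                                            + maintain_cost_per_run)
    if saving_per_run <= 0:
        return None  # upkeep eats the savings; keep it manual
    return math.ceil(build_cost_hours / saving_per_run)

# Hypothetical regression test: 6 hours to automate, 1 hour to run
# manually, negligible machine time, 0.25 hours of upkeep per run.
runs = break_even_runs(6, 0.25, 1.0, 0.05)  # pays back after 9 runs

# A fast manual check with heavy maintenance may never pay back.
never = break_even_runs(6, 0.5, 0.4, 0.05)  # None
```

A test that runs in every sprint crosses a nine-run break-even quickly; one executed twice a year may not be worth automating at all, which is the distinction the testing strategy is meant to capture.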
Tests that should be included

CT involves shifting many types of tests left, including integration, API, performance and security, in addition to unit tests. All of those tests also apply to microservices.

“For modern technology architectures such as microservices, automated API testing and contract testing are important to ensure that services are able to communicate with each other in a much quicker way compared to integration tests,” said Chekravarthula. “At various stages in the CI/CD pipeline, different types of tests are executed to ensure the code passes the quality check gates.”

Importantly, CT occurs throughout the entire SDLC as orchestrated by a test strategy and tools that facilitate the progression of code.

“The tests executed should ensure that the features that are part of a particular commit/release are covered,” said Chekravarthula. “Apart from the in-sprint tests, regression tests and non-functional tests should be executed to ensure that the application that goes into production does not cause failures. At each stage of the CI/CD pipeline, the quality check gates should also reach a specified pass percentage threshold for the build to qualify for promotion to the next stage.”

Since every nuance of a microservices application is decomposed into a microservice, each can be tested at the unit level and certified there, and then in integration tests created above them.

“Functional, integration, end-to-end, security and performance tests [are] incorporated in your continuous testing plan,” said Infostretch’s Mathuria. “In addition, if your microservices-based architecture is to be deployed in a public or private cloud, that brings different or unique nuances of testing for identity management, testing for security. Your static testing could be different, and your deployment testing is definitely different, so those are the new aspects that microservices-based architecture has to reflect.”

KMS Technology’s Honda breaks testing into two categories — pre-production and production. In pre-production, he emphasizes contract testing, which includes looking at all the API documentation for the various APIs, verifying that the APIs perform as advertised, and checking to see if there is any linguistic contract behavior within the documentation or otherwise that the microservices leverage. He also considers the different active endpoints of all the APIs, including their security aspects and how they perform. Then, he considers how to get the testing processes and the automated test scripts into the build and deployment processes facilitated by the tool he’s using. Other pre-production tests include how an API behaves when the data has mutated, and how APIs behave from a performance perspective alone and when interacting with other APIs. He also does considerable component integration testing.

“I call out pre-prod and prod buckets specifically instead of calling out dev, QA, staging, UAT, all of those because of the nature of continuous delivery and deployment,” said Honda. “It’s so much simpler to keep them at these two high levels.”

His test automation emphasizes business-critical workflows, including what would be chaotic or catastrophic if those functions didn’t work well in the application (which ties in with performance and load testing).

“A lot of times what happens is performance and load are kept almost in their own silo, even with microservices and things of that nature. I would encourage the inclusion of load and performance testing to be done just like you do functional validation,” said Honda.

On the production side, he stressed the need for testers to have access to data such as for monitoring and profiling.

“With the shift in the role that testing plays, testers and the team in general should get access to that information [because it] allows us to align with the true business cases that are being applied to the application when it’s live,” said Honda. “How or what areas of the system are being hit based on a particular event in the particular domain or socially? [That] also gives us the ability to get a good understanding of the user profile and what a typically useful style of architecture is. Using all of that data is really good for test scenarios and [for setting] our test strategy.”

He also recommends exploratory testing in production, such as A/B testing and canary deployments, to ensure the code is stable.

“The key thing that makes all of this testing possible is definitely test data,” said Honda. “When you test with APIs, the three key sorts of test data that I keep in mind are how do you successfully leverage stubs, where there’s almost canned responses coming back [without] more logic necessarily built into those? How do you leverage fakes, whereby you are simulating APIs and leveraging anything that’s exposed by the owner of any dependent services? Also, creating mocks and virtualization, where we need to make sure that any of the mocks are invoked in a certain manner, [which allows] you to focus on component interaction between services or microservices.”
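The stubs and mocks Honda describes can be illustrated with Python’s standard `unittest.mock`. The service function, endpoint, and response shape below are invented for the sketch; in a real system the client would call a dependent microservice over HTTP.

```python
from unittest import mock

# Hypothetical service-layer function under test. In production,
# api_client would make a real HTTP call to a dependent service.
def add_customer(name, api_client):
    response = api_client.post("/customers", json={"name": name})
    if response["status"] != 201:
        raise RuntimeError("customer service rejected the request")
    return response["body"]["id"]

# Stub the dependent service with a canned response, so the test
# exercises component interaction without a live API.
stub_client = mock.Mock()
stub_client.post.return_value = {"status": 201, "body": {"id": 42}}

customer_id = add_customer("Ada", stub_client)
assert customer_id == 42

# As a mock, the stand-in also verifies the service was invoked
# in the expected manner — Honda's point about mock invocation.
stub_client.post.assert_called_once_with("/customers", json={"name": "Ada"})
```

This is also Kastl’s API services-level testing in miniature: the customer is “added” and verified with no UI screen involved, so the test stays stable when the UI changes.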
In addition to API testing, some testing can be done via APIs, which SPR’s Kastl leverages.

“If I test a service that is there to add a new customer into a database, I can test that the new customer can be added to the service without having a UI screen to collect all the data and add that customer,” said Kastl. “The API services-level testing winds up being much more efficient, and with automation, it’s a lot more stable [because] the API services are not going to change as quickly as your UI is changing.”

Jim Scheibmeir, associate principal analyst at Gartner, underscored the need for authentication and entitlements.

“[Those are] really important when we talk about services: who can get to the data, how much they can get to, is there a threshold to usage or availability,” said Scheibmeir. “Performance is also key, and because this is a complex environment, I want to test integrations, even degrading them, to understand how my composite application works when dependencies are down, or how fast my self-healing infrastructure really is.”

Ensuring the right combination of tests

When deciding which tests to run, KMS Technology’s Honda considers three things: business priority, the data, and test execution time.

Business priority triages test cases based on the business objective. Honda recommends getting business buy-in and developer buy-in when explaining which tests will be included as part of the automation effort, since the business may have thought of something the testers didn’t consider.

Second, the data collected by monitoring tools and other capabilities of the operations team provides insight into application usage, the target, production defects, how the production defects are being classified, how critical they are from a business severity perspective, and the typical workflows that are being used.

He also pays attention to execution speed, since dependencies can result in unwanted delays.

“There should be some agreed-upon execution SLA and some agreed-upon independent factor that says our test cases are not necessarily depending on one another. It increases the concept of good scalability as your system matures,” said Honda. “One of the routine mistakes I’ve seen a lot of teams make is saying I’m going to execute six test cases, all of which are interdependent. [For example,] I can’t execute script number three unless tests one and two have been executed. [Test independence] becomes critical as your system matures and you have to execute 500 or 50,000 test cases in the same amount of time.”

As his teams write tests, they include metadata such as the importance of the test, its priority, the associated user story, the associated release, and what specific module, component, or code file the test is executed against. That way, using the metadata, it is possible to choose tests based on goals.

“The type of test and the test’s purpose both have to be incorporated in your decision-making about what tests to run,” said Honda. “You don’t write your tests thinking, ‘I want to test this.’ You write your test and incorporate a lot of metadata in it, then you can use the tests for a specific goal like putting quality into your CD process or doing a sanity test so you can promote it to the next stage. Then you can mix and match the metadata to arrive at the right suite of tests at runtime without hard-coding that suite.”

Performance and security testing should be an integral part of CT given user experience expectations and the growing need for code-related risk management.

“What goes into the right mix of CT tests are going to be those tests that are going to test that functionality you’re about ready to deploy, as well as critical business functionality that could be impacted by your change, plus some performance and security tests,” said SPR’s Kastl. “Taking a risk-based approach is what we do normally in testing, and it’s even more important when it comes to continuous testing.”

Past application issues are also an indicator of which tests should be prioritized. While test coverage is always important, it’s not just about test coverage percentages per se. Acceptable test coverage depends on risk, value, cost effectiveness and the impact on a problem-prone area.

Gartner’s Scheibmeir said another way to ensure the right mix of tests is being used is to benchmark against oneself over time, measuring such things as lead time, Net Promoter Score, or the business value of software and releases.
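Honda’s mix-and-match metadata idea can be sketched as a small test catalog queried at runtime. The field names, tags, and tests below are invented for the illustration; teams using pytest would more likely express the same idea with markers and `-m` expressions.

```python
# Illustrative test catalog carrying the kind of metadata Honda describes:
# priority, the module under test, and the purposes a test can serve.
TESTS = [
    {"name": "test_login",         "priority": "critical",
     "module": "auth",    "purpose": {"sanity", "regression"}},
    {"name": "test_checkout",      "priority": "critical",
     "module": "orders",  "purpose": {"sanity"}},
    {"name": "test_report_export", "priority": "low",
     "module": "reports", "purpose": {"regression"}},
]

def select_suite(purpose=None, modules=None):
    """Mix and match metadata to build a suite at runtime,
    without hard-coding which tests belong to it."""
    return [t["name"] for t in TESTS
            if (purpose is None or purpose in t["purpose"])
            and (modules is None or t["module"] in modules)]

sanity_suite = select_suite(purpose="sanity")
changed_area = select_suite(modules={"auth", "reports"})  # impacted by a change
```

The same catalog yields a fast sanity gate for promotion or a targeted regression run for changed modules, depending on the goal, which is exactly the point of writing metadata into the tests rather than into the suites.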
Century-Old Company Prioritizes CT

Companies in just about every industry are being disrupted by digital natives. To stay competitive, old companies must adapt, which can be a painful transformation. Moving to Agile is tough enough. Now, more businesses have adopted DevOps, continuous integration (CI) and continuous deployment (CD), albeit at different speeds.

Software testing at Lincoln Financial Group is progressive by design because software delivery speed and quality differentiate the company, so much so that Michelle DeCarlo, SVP of IT Enterprise Delivery, leads the company’s technical transformational initiatives.

“DevOps, cloud, continuous testing all feed into our larger journey, which is the digital transformation effort,” said DeCarlo. “At the core, it’s changing the way we work. So, when we talk about continuous testing, it’s how do we get faster, leaner, and eliminate friction.”

Like many mature companies, Lincoln Financial had waterfall-type practices that involved a series of hand-offs. In addition, a lot of the testing was manual. The company is now using continuous processes, including CT, to get to market faster. In fact, DeCarlo said that continuous testing has improved Lincoln Financial’s delivery speed by 30%.

“Continuous testing is one of those levers that we view as being critically important and fundamental to our strategy,” she said. “Testing used to be viewed as a cost because we were always waiting. Now it’s what differentiates us and helps us get to market faster.”

Of course, getting there wasn’t all about the CT process. In addition to changing the way people work, talent had to be upskilled and new tools were required.

“There are a few things we do differently with CT compared to how it used to work. One of those things is recognizing that the advanced engineering practices that are important to learn, like test-driven development and service virtualization, are table stakes now,” said DeCarlo. “These skills allow us to accelerate delivery cycles, and the test engineers are embedded in the delivery cycle.”

Lincoln Financial automates many types of tests, including performance, security, regression, feature and smoke testing.

“When you drop code, you need to determine in seconds whether that code is valuable. If it fails, it stops your production cycle,” said DeCarlo. “You have to think about it as one continuous flow because if you have a weak link, it halts your production and you lose hours. You cannot advance your practice unless you have automation.”

The company is also taking advantage of microservices and containers, which enable a flexible and scalable application architecture.

“The ability to decompose your code base and test at the most granular level is how firms get an advantage,” said DeCarlo. “When you test monolithic systems in totality it takes a long time, so we now decompose code and features, and test at the smallest level possible. Minimum viable product is part of our DNA now.”
API testing is critical

“The focus on service testing is huge,” said DeCarlo. “In the past, people were worried about testing the front end, which is still helpful and necessary, but the majority of your effort should be focused on the service level because that’s where you need to make sure everything works.”
Data drives insights

Lincoln Financial also makes a point of using encrypted test data that includes the scenarios it wants to test, so it can very quickly test multiple parameters and scenarios against whatever use cases apply.

“It’s critical to ensure that testers have all the data needed as opposed to waiting on other groups,” said DeCarlo. “We have figured out how to get the data that might be upstream, downstream, in the middle, or in our service architecture. Our strategy is to make data [available] on demand, which will allow us to respond at the same pace as other firms.”
Continuous testing means continuous learning

Lincoln Financial has a mantra when it comes to software: build, measure, learn. The philosophy maps to developing the software, getting the data ready, testing it, going through the continuous delivery process, and measuring it.

“You need to have performance dashboards that are available to all your stakeholders, whether it’s a developer, tester, or product owner, and then you have to make modifications and improvements because what worked yesterday might work today but likely will not work tomorrow,” said DeCarlo. “You have to embed this continuous improvement mindset and not look at it as failures, but learning.”