AI-Powered Testing Tools Buyer's Guide



AI and the evolution of test automation

Automating testing sets dev teams free to imagine more

Test automation has undergone quite an evolution in the decades since it first became possible. Yet despite the obvious benefits, the digitalization of the software development industry has created some new challenges.

It comes down to three big things, according to Kevin Parker, vice president of product at Appvance. The first is velocity and how organizations ‘can keep pace with the rate at which developers are moving fast and improving things, so that when they deliver new code, we can test it and make sure it’s good enough to go on to the next phase in whatever your life cycle is,’ he said.

The second area is coverage. Parker said it’s important to understand that enough testing is being done, and being done in the right places, to the right depth. And, he added, ‘It’s got to be the right kind of testing. If you Google test types, it comes back with several hundred kinds of testing.’

How do you know when you’ve tested enough? ‘If your experience is anything like mine,’ Parker said, ‘the first bugs that get reported when we put a new release out there are from when the user goes off the script and does something unexpected, something we didn’t test for. So how do we get ahead of that?’

And the final, and perhaps most important, area is the user interface, as this is where the rubber meets the road for customers and users of the applications. ‘The user interfaces are becoming so exciting, so revolutionary, and the amount of psychology in the design of user interfaces is breathtaking. But that presents even more challenges now for the automation engineer,’ Parker said.

Adoption and challenges

According to a report by Research Nester, the test automation market is expected to grow to more than $108 billion by 2031, up from about $17 billion in 2021. Yet as for uptake, it’s difficult to measure the extent to which organizations are successfully using automated testing.

‘I think if you tried to ask anyone, “Are you doing DevOps? Are you doing Agile?” everyone will say yes,’ said Jonathan Wright, chief technologist at Keysight, which owns the Eggplant testing software. ‘And everyone we speak to says, “Yes, we’re already doing automation.” And then you dig a little bit deeper, and they say, “Well, we’re running some Selenium, running some RPM, running some Postman scripts.” So I think, yes, they are doing something.’

Wright said most enterprises that are having success with test automation have invested heavily in it, and have established automation as its own discipline. These organizations, he said, have ‘got hundreds of people involved to keep this to a point where they can run thousands of scripts.’

But in the same breath, he noted that the conversation around test case optimization and risk-based testing still needs to be had. ‘Is over-testing a problem?’ he posited. ‘There’s a continuous view that we’re in a bit of a tech crunch at the moment. We’re expected to do more with less, and testing, as always, is one of those areas that have been put under pressure. And now, just saying I’ve got 5,000 scripts kind of means nothing. Why don’t you have 6,000 or 10,000? You have to understand that you’re not just adding a whole stack of tech debt into a regression folder that’s giving you this feelgood feeling that I’m running 5,000 scripts a day, but they’re not actually adding any value because they’re not covering new features.’

Testing at the speed of DevOps

One effect of the need to release software faster is the ever-increasing reliance on open-source software, which may or may not have been tested fully before being let out into the wild.

Arthur Hicken, chief evangelist at Parasoft, said he believes it’s a little forward thinking to assume that developers aren’t writing code anymore, that they’re simply gluing things together and standing them up. ‘That’s as forward thinking as the people who presume that AI can generate all your code and all your tests now,’ he said. ‘The interesting thing about this is that your cloud native world is relying on a massive amount of component reuse. The promises are really great, but it’s also a trust assumption that the people who built those pieces did a good job. We don’t yet have certification standards for components that help us understand what the quality of this component is.’

He suggested the industry create a bill of materials that includes testing. ‘This thing was built according to these standards, whatever they are, and tested and passed. And the more we move toward a world where lots of code is built by people assembling components, the more important it will be that those components are well built, well tested and well understood.’

Appvance’s Parker suggests doing testing as close to code delivery as possible. ‘If you remember when you went to test automation school, we were always taught that we don’t test the code, we test against the requirements,’ he said. ‘But the modern technologies that we use for test automation require us to have the code handy. Until we actually see the code, we can’t find those [selectors]. So we’ve got to find ways where we can do just that, that is, bring our test automation technology as far left in the development lifecycle as possible. It would be ideal if we had the ability to use the same source that the developers use to be able to write our tests, so that as dev finishes, test finishes, and we’re able to test immediately. And of course, if we use the same source that dev is using, then we will find that Holy Grail and be testing against requirements. So for me, that’s where we have to get to; we have to get to that place where dev and test can work in parallel.’
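
Parker’s ‘same source’ idea is easiest to picture with selectors. Below is a minimal sketch, assuming Selenium and a data-testid attribute convention agreed with developers so the test is written against the same markup the developers produce; the URL and selector name are illustrative, not a specific product’s approach.

```python
# A minimal sketch of a UI check built on selectors shared with developers.
# Assumes Selenium WebDriver and a data-testid convention; names are illustrative.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/checkout")  # hypothetical page under test
    button = driver.find_element(By.CSS_SELECTOR, "[data-testid='place-order']")
    # The test can be written as soon as the selector exists in the dev source,
    # letting dev and test finish in parallel.
    assert button.is_enabled(), "place-order button should be enabled"
finally:
    driver.quit()
```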

Training the models for testing

Code coverage and end-to-end testing, sometimes called path testing, are particularly well-suited for automation, but they’re only as good as the training and implementation. Since AI doesn’t have an imagination, it is up to the model, and whoever is feeding in that data, to cover as many paths as possible in an end-to-end test. So how would the AI discover something that the person creating the model couldn’t think of?

“If you hire a manual tester tomorrow, you say to them on day one, ‘I want you to test my application,’” Appvance’s VP of product Kevin Parker said. “‘Here’s the URL, here are 10 things you need to know, use this user ID and password to log in as an end user. And make sure that whenever you see quantity, price and cost, that quantity times price equals cost.’ You’re going to give that manual tester the basic rules.”

Parker said AI can be trained in the same way. “What you teach is that the manual tests involve three things: how to behave, what to test, and what data to use. You teach the AI, ‘That’s an annoying popup box, just dismiss it. Don’t click the admin button, don’t go into the marketing content.’ You communicate those things to the AI, which then has the ability to autonomously go to each page, identify if any of the rules you’ve trained it on apply (how to behave, how to interact, what data to enter) and then exercise them.”
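
Parker’s three ingredients map naturally onto a small rules structure. Here is a minimal sketch of how such training rules might be encoded; it is an illustrative shape, not any vendor’s training format, and every name and value in it is made up.

```python
# A minimal sketch of encoding the kinds of rules Parker describes.
# Illustrative structure only, not any particular product's format.
training_rules = {
    "behave": [
        "dismiss any popup dialog",      # "that's an annoying popup box, just dismiss it"
        "never click the admin button",
        "skip marketing content pages",
    ],
    "test": [
        # Business invariant from the example above: quantity times price equals cost.
        lambda row: abs(row["quantity"] * row["price"] - row["cost"]) < 0.01,
    ],
    "data": {
        "login": {"user_id": "end_user_01", "password": "CHANGE_ME"},  # placeholder credentials
    },
}

def check_invariants(row: dict) -> bool:
    """Apply every 'what to test' rule to data scraped from a page."""
    return all(rule(row) for rule in training_rules["test"])

print(check_invariants({"quantity": 3, "price": 2.50, "cost": 7.50}))  # True
```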

It turns out that it’s way easier to spend your time training the AI, which will yield huge numbers of generated tests, than it is to sit down and manually write those tests, he explained. And that means testing can now keep pace with development, because you’re able to generate tests at scale and at volume, Parker said.

Model trainers have the mechanism to teach the AI the ability to adapt, so when it sees something it’s not seen before, it can extrapolate an answer from the business rules it was trained on, Parker said. “We can then assume that what it’s about to do is the right thing,” he said. Yet most important, he added, is that it needs to know when to ask questions. “It needs to know when to phone home to its human and say, ‘Hey, this is something; I’m not confident I know what to do. Teach me,’” Parker said.
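
That “phone home” behavior is essentially a confidence threshold. The sketch below is an illustration of the idea only; the scoring, threshold and queue are assumptions, not a real product’s API.

```python
# A minimal sketch of confidence-gated autonomy: act when confident,
# escalate to a human trainer otherwise. Threshold and queue are illustrative.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.8
questions_for_humans: list[str] = []

@dataclass
class ProposedAction:
    description: str
    confidence: float  # 0.0 to 1.0, produced by the trained model

def execute_or_escalate(action: ProposedAction) -> None:
    if action.confidence >= CONFIDENCE_THRESHOLD:
        print(f"Executing: {action.description}")
    else:
        # "Phone home": queue the situation for a human instead of guessing.
        questions_for_humans.append(action.description)

execute_or_escalate(ProposedAction("dismiss cookie banner", 0.95))
execute_or_escalate(ProposedAction("unknown modal with legal text", 0.35))
print(questions_for_humans)  # ['unknown modal with legal text']
```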

And in that way, AI bots can be trained to autonomously interact with the application and test it independently of humans.

As Parker noted earlier, there are hundreds of types of testing tools on the market for functional testing, performance testing, UI testing, security testing, and more. And Parasoft’s Hicken pointed out the tension organizations have between using specialized, discrete tools or tools that work well together. ‘In an old school traditional environment, you might have an IT department where developers write some tests and then testers write some tests, even though the developers already wrote tests, and then the performance engineers write some tests, and it’s extremely inefficient. So having performance tools, end-to-end tools, functional tools and unit test tools that understand each other and can talk to each other certainly is going to improve not just the speed at which you can do things and the amount of effort, but also the collaboration that goes on between the teams, because now the performance team picks up a functional scenario and they’re just going to enhance it, which means the next time, the functional team gets a better test, and it’s a virtuous circle rather than a vicious one. So I think that having a good platform that does a lot of this can help you.’

Coverage: How much is enough?

Fernando Mattos, director of product marketing at test company mabl, believes that test coverage for flows that are very important should come as close to 100% as possible. But determining what those flows are is the hard part, he said. ‘We have reports within mabl that we try to make easy for our customers to understand. Here are all the different pages that I have on my application, here’s the complexity of each of those, and here are the tests that have touched on those, the elements on those pages. So at least you can see where you have gaps.’ It is common practice today for organizations to emphasize thorough testing of the critical pieces of an application, but Mattos said it comes down to balancing the time you have for testing and the quality that you’re shooting for, and the risk that a bug would introduce.
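
The kind of gap report Mattos describes amounts to pages on one axis and the tests that touch them on the other. A minimal sketch of that mapping follows; the page names and test associations are invented for illustration.

```python
# A minimal sketch of a page-versus-test gap report of the kind described above.
# Page names and test associations are illustrative.
pages_under_test = ["home", "search", "product", "cart", "checkout", "account"]

tests_by_page = {
    "home": ["test_home_loads", "test_nav_links"],
    "search": ["test_search_returns_results"],
    "checkout": ["test_checkout_happy_path", "test_checkout_invalid_card"],
}

def coverage_gaps(pages: list[str], mapping: dict[str, list[str]]) -> list[str]:
    """Return pages that no test currently touches."""
    return [page for page in pages if not mapping.get(page)]

print(coverage_gaps(pages_under_test, tests_by_page))
# ['product', 'cart', 'account'] -- the gaps to weigh against risk and available time
```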

‘If the risk is low, you don’t have time, and it’s better for your business to be introducing new features faster than necessarily having a bug go out that can be fixed relatively quickly, and maybe that’s fine,’ he said.

Parker said AI can help with coverage when it comes to testing every conceivable user experience. ‘The problem there,’ he said, ‘is this word conceivable, because it’s humans conceiving, and our imagination is limited. Whereas with AI, it’s essentially an unlimited resource to follow every potential possible path through the application. And that’s what I was saying earlier about those first bugs that get reported after a new release, when the end user goes off the script. We need to bring AI so that we can not only autonomously generate tests based on what we read in the test cases, but that we can also test things that nobody even thought about testing, so that the delivery of software is as close to being bug free as is technically possible.’

Parasoft’s Hicken holds the view that testing without coverage isn’t meaningful. ‘If I turn a tool loose and it creates a whole bunch of new tests, is it improving the quality of my testing or just the quantity? We need to have a qualitative analysis, and at the moment, coverage gives us one of the better ones. In and of itself, coverage is not a great goal, but the lack of coverage is certainly indicative of insufficient testing. So my pet peeve is that some people say, it’s not how much you test, it’s what you test. No. You need to have as broad code coverage as you can have.’
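
In Hicken’s framing, coverage is a floor rather than a goal. A minimal sketch of enforcing such a floor with coverage.py and pytest is shown below; the package name and the 80% threshold are illustrative assumptions, not a recommendation from anyone quoted here.

```python
# A minimal sketch of enforcing a code-coverage floor, assuming coverage.py and
# pytest are installed; the package name and 80% threshold are illustrative.
import coverage
import pytest

cov = coverage.Coverage(source=["myapp"])  # "myapp" is a hypothetical package
cov.start()
pytest.main(["tests"])                     # run the existing test suite in-process
cov.stop()
cov.save()

total = cov.report()                       # prints a per-file table, returns total percent
if total < 80.0:
    raise SystemExit(f"Coverage {total:.1f}% is below the agreed floor")
```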

The all-important user experience

It’s important to have someone who is very close to the customer, who understands the customer journey but not necessarily anything about writing code or creating tests, according to mabl’s Mattos. ‘Unless it’s manual testing, it tends to be technical, requiring writing code and updating test scripts. That’s why we think low code can really be powerful, because it can allow somebody who’s close to the customer but not technical: customer support, customer success. They are not typically the ones who can understand GitHub and code and how to write it and update that, or even understand what was tested. So we think low code can bridge this gap. That’s what we do.’

Where is this all going?

The use of generative AI to write tests is the evolution everyone wants to see, Mattos said. ‘We’ll get better results by combining human insights. We’re specifically working on AI technology that will allow implementing and creating test scripts, but still using human intellect to understand what is actually important for the user. What’s important for the business? What are those flows, for example, that go to my application on my website, or my mobile app, that actually generate revenue?’

‘We want to combine that with the machine,’ he continued. ‘So the human understands the customer, and the machine can replicate and create several different scenarios that traverse those. But of course, lots of companies are investing in allowing the machine to just navigate through your website and find out the different quarters, but they weren’t able to prioritize for us. We don’t believe that they’re going to be able to prioritize which ones are the most important for your company.’

Keysight’s Wright said the company is seeing value in generative AI capabilities. ‘Is it game changing? Yes. Is it going to get rid of manual testers? Absolutely not. It still requires human intelligence around requirements engineering, feeding in requirements, and then humans identifying that what it’s giving you is trustworthy and is valid. If it suggests that I should test [my application] with every single language and every single country, is it really going to find anything I might do? But in essence, it’s just boundary value testing; it’s not really anything that spectacular and revolutionary.’
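
Boundary value testing, which Wright mentions, is the sort of thing that is trivial to automate once the limits are known. A minimal pytest sketch follows; the function under test and its limits are invented for illustration.

```python
# A minimal sketch of classic boundary value testing with pytest.
# The function and its 1-100 limits are hypothetical.
import pytest

def is_valid_quantity(qty: int) -> bool:
    """Hypothetical rule: order quantities from 1 to 100 inclusive are valid."""
    return 1 <= qty <= 100

# Values just below, on, and just above each boundary.
@pytest.mark.parametrize("qty,expected", [
    (0, False), (1, True), (2, True),
    (99, True), (100, True), (101, False),
])
def test_quantity_boundaries(qty, expected):
    assert is_valid_quantity(qty) == expected
```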

Wright said organizations that have dabbled with automation over the years and have had some levels of success are now just trying to get that extra 10% to 20% of value from automation, and get wider adoption across the organization. ‘We’ve seen a shift toward not tools but how do we bring a platform together to help organizations get to that point where they can really leverage all the benefits of automation. And I think a lot of that has been driven by open testing.’

‘As easy as it should be to get your test,’ he continued, ‘you should also be able to move that into what’s referred to in some industries as an automation framework, something that’s in a standardized format for reporting purposes. That way, when you start shifting up, and shifting the quality conversation, you can look at metrics. And the shift has gone from how many tests am I running, to what are the business-oriented metrics? What’s the confidence rating? Are we going to hit the deadlines? So we’re seeing a move toward risk-based testing, and really more agility within large-scale enterprises.’

How Cox Automotive found value in automated testing

How does a quality organization run? And how does it deliver a quality product for consumers?

According to Roya Montazeri, senior director of test and quality at Cox Automotive, no one tool or approach can solve the quality problem. Cox Automotive, she said, is a specialized software company that addresses buying, selling, trading and everything else about the car life cycle, with a broad portfolio of products that includes Dealertrack, Kelley Blue Book, Autotrader, Manheim and more.

‘Whatever we create from software automation and software delivery needs to make sure that all clients are getting the best deal,’ Montazeri said. ‘They can, and our dealers can, trust our software, and at the end, the consumers can get the car they want. And this is about digitalization of the entire process.’

When Montazeri joined Cox Automotive, her area, Dealertrack, was mature about testing, with automations in place. But, she said, the focus on automation and the need to strengthen it started from two aspects: the quality of what was being delivered, and the impact of that on trust within the division. ‘Basically, when you have an increased defect rate, and when you have more [calls into] customer support, these are indications of a quality problem,’ she said. ‘That was the realization [that drove the] investment into more tools or more ability for automation.’

To improve quality, Dealertrack began to shift testing left, and invested in automating their CI/CD pipeline. ‘You can’t have a CI/CD pipeline without automation,’ she said. ‘It’s just a broken pipeline.’ And to have a fully automated pipeline, she said, training it is critical.

Another factor that led to the need for automation at Dealertrack was the complexity of how their products work. ‘Any product these days is not a standalone on its own; there is a lot of integration,’ Montazeri said. ‘So how do you test those integrations? And that led us to look at where most of our problems were: is it at the component-level testing? Or is it the complexity of the integration testing?’

That, she said, led to Dealertrack using service virtualization software from Parasoft, so they could mimic the same interactions and find the problems before they actually moved the software to production and made the integration happen.
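
Service virtualization in miniature is just a stand-in for a dependency that behaves like the real thing. The sketch below illustrates the idea using the open-source `responses` library to stub an integration endpoint; the endpoint, payload and client code are illustrative and are not Parasoft’s product or Cox Automotive’s setup.

```python
# A minimal sketch of stubbing an integration the way a virtualized service would,
# assuming the `requests` and `responses` libraries; URL and payload are illustrative.
import requests
import responses

@responses.activate
def test_vehicle_lookup_handles_partner_response():
    # Stand in for the partner service the application integrates with.
    responses.add(
        responses.GET,
        "https://partner.example.com/api/vehicles/123",
        json={"vin": "123", "price": 18500},
        status=200,
    )
    resp = requests.get("https://partner.example.com/api/vehicles/123", timeout=5)
    assert resp.status_code == 200
    assert resp.json()["price"] == 18500

test_vehicle_lookup_handles_partner_response()
```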

When they first adopted virtualization, Montazeri said, they originally thought, ‘Oh, we can basically figure out how many defects we found.’ But that wasn’t the right KPI at the time for just virtualization. ‘We needed to mature enough to say, it’s not just that we found that defect, it’s about exercising the path so we know what’s not even working. So that’s how the investment came about for us.’

Accessibility testing

One area in which test automation can deliver big value to organizations is accessibility. Accessibility is all about the user experience, and is especially important for users with disabilities. Automated end-to-end testing helps answer the question of how easy or difficult it is for users to engage with the software.

“If the software is crummy, if it’s not responding, you’re going to have a bad experience,” noted Arthur Hicken, technical evangelist at Parasoft. “But let’s say the software has passed the steps of being well-designed and well-constructed.” After that, Hicken said, come the accessibility tests, which ask: Is this really usable and well-suited for humans? And which tasks do humans use most?

There is nothing innate about test automation that can raise a flag on these issues unless the model is trained to identify and report them; for instance, a rule might say that if any task takes more than four steps to complete, it should be looked at. According to Jonathan Wright of Keysight, it’s equally important to be sure the application is usable and accessible in various regional deployments involving different language sets and cultural variations. “I had a call with a large-scale organization and they wanted to know how we could support their multiple different localization deployments, which includes help and documentation. So it’s really the ability to support global rollouts that follow the sun.”
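
A rule like “more than four steps” is easy to automate once user flows are recorded. The sketch below shows the idea; the task data and the limit are illustrative, not any tool’s format.

```python
# A minimal sketch of the kind of rule described above: flag any recorded user
# task that takes more than four steps to complete. Data structure is illustrative.
MAX_STEPS = 4

recorded_tasks = {
    "checkout": ["open cart", "enter address", "enter card", "confirm"],
    "change password": ["open profile", "open security", "request code",
                        "enter code", "enter new password", "confirm"],
}

def flag_long_tasks(tasks: dict[str, list[str]], limit: int = MAX_STEPS) -> list[str]:
    """Return the names of tasks whose step count exceeds the limit."""
    return [name for name, steps in tasks.items() if len(steps) > limit]

if __name__ == "__main__":
    for name in flag_long_tasks(recorded_tasks):
        print(f"Review '{name}': more than {MAX_STEPS} steps to complete")
```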

Wright said in large organizations, centers of enablement are being marginalized as self-service takes hold. “I’m in a large organization; what tools and technology do I need? And you know, it’s usually a people problem, not a technology problem. It’s kind of giving them the right tools to be able to help them do the job.”

For accessibility testing, mabl’s Fernando Mattos said companies often will have a different team do that type of testing. “Many times, it’s a third-party company performing that, along with legal advice. What we’re trying to do is to shift that left and allow the reusability of you having already tested the whole UI. Why recreate all those tests in a separate tool, and why have a different team do it much later after deployment?”

The impact of a poor user experience on digital businesses can involve loss of customers and revenue, as users today expect a seamless experience. “In e-commerce, in B-to-C commerce, they’re seeing hypercompetitiveness in the market and customer switching because the page takes a little too long to load,” he said. “And that talks a little bit more about what end-to-end testing is.”

Mattos added that making sure things are working properly has been seen as functional quality, but it’s important for organizations to make sure the performance of the application is fast, that it responds quickly, and that the UI shows up quickly. He added that organizations can reuse their functional test cases to check for accessibility, so if a development team is pushing new features, and one introduces a critical accessibility issue that gets caught right at the commit or pull request phase, it can get fixed right away. Mabl, and the industry as a whole, is moving to shift this testing left, rather than performing it just prior to release.

Mattos noted that there are libraries for automated accessibility testing that can be used to catch 55% to 60% of the issues, while the remaining 40% to 45% of issues have to be tested by people with the disability or experts who know how to test for it. But for that 55% to 60%, mabl pushes those checks into development and introduces accessibility testing there, instead of waiting for a third-party company or team to duplicate the test and catch an error a week later.
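
One widely used family of such libraries is axe. Here is a minimal sketch of running automated accessibility checks inside an existing Selenium functional test, assuming the axe-selenium-python package; the URL is illustrative and this is not mabl’s implementation.

```python
# A minimal sketch of adding automated accessibility checks to an existing
# Selenium test, assuming the axe-selenium-python package is installed.
from selenium import webdriver
from axe_selenium_python import Axe

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/checkout")  # page already covered by a functional test
    axe = Axe(driver)
    axe.inject()                                # inject the axe-core script into the page
    results = axe.run()                         # run the automated accessibility rules
    violations = results["violations"]
    assert len(violations) == 0, axe.report(violations)
finally:
    driver.quit()
```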

ANALYST VIEW

Take advantage of AI-augmented software testing

The artificial intelligence-augmented software-testing market continues to rapidly evolve. As applications become increasingly complex, AI-augmented testing plays a critical role in helping teams deliver high-quality applications at speed. By 2027, 80% of enterprises will have integrated AI-augmented testing tools into their software engineering toolchain, a significant increase from 10% in 2022, according to Gartner. AI-augmented software-testing tools assist humans in their testing efforts and reduce the need for human intervention. Overall, these tools streamline, accelerate and improve the test workflow.

The future of the AI-augmented testing market

Many organizations continue to rely heavily on manual testing and aging technology, but market conditions demand a shift to automation, as well as more intelligent testing that is context-aware. AI-augmented software-testing tools will amplify testing capacity and help to eliminate steps that can be performed more efficiently by intelligent technologies. Over the next few years, several trends will drive the adoption of AI-augmented software-testing tools, including the increasing complexity of applications, increased adoption of agile and DevOps, the shortage of skilled automation engineers and the need for maintainability. All of these factors will continue to drive an increasing need for AI and machine learning (ML) to increase the effectiveness of test creation, reduce the cost of maintenance and drive efficient test loops. Additionally, investment in AI-augmented testing will help software engineering leaders to delight their customers beyond their expectations and ensure production incidents are resolved quickly.

AI augmentation is the next step in the evolution of software testing, and is a crucial element of a strategy to reduce significant business continuity risks when critical applications and services are severely compromised or stop working.

How generative AI can improve software quality and testing

AI is transforming software testing by enabling improved test efficacy and faster delivery cycle times. AI-augmented software-testing tools use algorithmic approaches to enhance the productivity of testers and offer a wide range of capabilities across different areas of the test workflow.

There are currently several ways in which generative AI tools can assist software engineering leaders and their teams when it comes to software quality and testing:

• Authoring test automation code is possible across unit, application programming interface (API) and user interface (UI) tests, for both functional and nonfunctional checks and evaluation.

• Generative AI can help with general impact analysis, such as comparing different versions of user stories, code files and test results for potential risks and causes, as well as triaging flaky tests and defects.

• Test data can be generated for populating a database or driving test cases. This could be common sales data, customer relationship management (CRM) and customer contact information, inventory information, or location data with realistic addresses (see the sketch below).

• Generative AI offers testers a pairing opportunity for training, evaluating and experimenting with new methods and technologies. This will be of less value than that of human peers who actively suggest improved alternatives during pairing exercises.

• Converting existing automated test cases from one framework to another is possible, but will require more human engineering effort, and is currently best used as a pairing and learning activity rather than an autonomous one.

While testers can leverage generative AI technology to assist in their roles, they should also expect a wave of mobile testing applications that are using generative capabilities.
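
The test-data point is the easiest of these to make concrete. Below is a minimal sketch using the Faker library, which is an assumption on our part rather than something the analyst names, to generate CRM-style contact records with realistic addresses; the field names are illustrative.

```python
# A minimal sketch of generating CRM-style test data with realistic addresses,
# assuming the Faker library is installed; field names are illustrative.
from faker import Faker

fake = Faker()

def make_customer_record() -> dict:
    """Return one synthetic customer contact record for driving a test case."""
    return {
        "name": fake.name(),
        "company": fake.company(),
        "email": fake.email(),
        "phone": fake.phone_number(),
        "address": fake.address(),
    }

# Populate a small batch for a database fixture or a data-driven test.
records = [make_customer_record() for _ in range(5)]
for record in records:
    print(record["name"], "-", record["address"].replace("\n", ", "))
```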

Software engineering leaders and their teams can exploit the positive impact of AI applications that use large language models (LLMs), as long as a human touch is still involved and integration with the broad landscape of development and testing tools continues to improve. However, avoid creating prompts to feed into LLM-based systems if they have the potential to contravene intellectual property laws, or expose a system’s design or its vulnerabilities.

Software engineering leaders can maximize the value of AI by identifying areas of software testing in their organizations where AI will be most applicable and impactful. Modernize teams’ testing capabilities by establishing a community of practice to share information and lessons, and by budgeting for training.

Jim Scheibmeir is a senior director analyst at Gartner, Inc., where he provides advice and analysis for software engineering leaders with a focus on software test automation.

Automated Testing Tools Guide

• APPVANCE is the leader in generative AI for software quality. Its premier product, AIQ, is an AI-native, unified software quality platform that delivers unprecedented levels of productivity to accelerate digital transformation in the enterprise. Leveraging generative AI and machine learning, AIQ robots autonomously validate all the possible user flows to achieve complete application coverage.

• KEYSIGHT is a leader in test automation, where our AI-driven, digital twin-based solutions help innovators push the boundaries of test case design, scheduling, and execution. Whether you’re looking to secure the best experience for application users, analyze high-fidelity models of complex systems, or take proactive control of network security and performance, easy-to-use solutions including Eggplant and our broad array of network, security, traffic emulation, and application test software help you conquer the complexities of continuous integration, deployment, and test.

• Applitools is built to test all the elements that appear on a screen with just one line of code, across all devices, browsers and screen sizes. We support all major test automation frameworks and programming languages covering web, mobile, and desktop apps.

• Digital.ai Continuous Testing provides expansive test coverage across 2,000+ real mobile devices and web browsers, and seamlessly integrates with best-in-class tools throughout the DevOps/DevSecOps pipeline.

• IBM: Quality is essential, and the combination of automated testing and service virtualization from IBM Rational Test Workbench allows teams to assess their software throughout their delivery life cycle. IBM has a market-leading solution for the continuous testing of end-to-end scenarios covering mobile, cloud, cognitive, mainframe and more.

• MABL is the enterprise SaaS leader in intelligent, low-code test automation that empowers high-velocity software teams to embed automated end-to-end tests into the entire development lifecycle. Mabl’s platform for easily creating, executing, and maintaining reliable browser, API and mobile web tests helps teams quickly deliver high-quality applications with confidence. That’s why brands like Charles Schwab, JetBlue, Dollar Shave Club, Stack Overflow, and more rely on mabl to create the digital experiences their customers demand.

• PARASOFT helps organizations continuously deliver high-quality software with its AI-powered software testing platform and automated test solutions. Supporting embedded and enterprise markets, Parasoft’s proven technologies reduce the time, effort, and cost of delivering secure, reliable, and compliant software by integrating everything from deep code analysis and unit testing to UI and API testing, plus service virtualization and complete code coverage, into the delivery pipeline.

• Micro Focus enables customers to accelerate test automation with one intelligent functional testing tool for web, mobile, API and enterprise apps. Users can test both the front-end functionality and back-end service parts of an application to increase test coverage across the UI and API.

• Kobiton offers GigaFox on-premises or hosted, and solves mobile device sharing and management challenges during development, debugging, manual testing, and automated testing. A pre-installed and pre-configured Appium server provides “instant on” Appium test automation.

• Orasi is a leading provider of software testing services, utilizing test management, test automation, enterprise testing, continuous delivery, monitoring, and mobile testing technology.

• ProdPerfect is an autonomous, end-to-end (E2E) regression testing solution that continuously identifies, builds and evolves E2E test suites via data-driven, machine-led analysis of live user behavior data. It addresses critical test coverage gaps, and eliminates long test suite runtimes and costly bugs in production.

• Progress Software’s Telerik Test Studio is a test automation solution that helps teams be more efficient in functional, performance and load testing, improving test coverage and reducing the number of bugs that slip into production.

• Sauce Labs provides a cloud-based platform for automated testing of web and mobile applications. Optimized for use in CI and CD environments, and built with an emphasis on security, reliability and scalability, the platform lets users run tests written in any language or framework using Selenium or Appium.

• SmartBear offers tools for software development teams worldwide, ensuring visibility and end-to-end quality through test management, automation, API development, and application stability. Popular tools include SwaggerHub, TestComplete, BugSnag, ReadyAPI, Zephyr, and others.

• testRigor helps organizations dramatically reduce time spent on test maintenance, improve test stability, and dramatically improve the speed of test creation. This is achieved through its support of “plain English” language that allows users to describe how to find elements on the screen and what to do with those elements from the end-user’s perspective. People creating tests on its system build 2,000+ tests per year per person. On top of that, testRigor helps teams deploy its analytics library in production, which makes systems automatically produce tests reflecting the most frequently used end-to-end flows from production.

