
How to build trust in AI for software testing

The application of artificial intelligence (AI) and machine learning (ML) in software testing is both lauded and maligned, depending on who you ask. It’s an eventuality that strikes balanced notes of fear and optimism in its target users. But one thing’s for sure: the AI revolution is coming our way. And, when you thoughtfully consider the benefits of speed and efficiency, it turns out to be a good thing. So, how can we embrace AI with positivity and prepare to integrate it into our workflow, while addressing the concerns of those who are inclined to distrust it?

Speed bumps on the road to trustville

Much of the resistance toward implementing AI in software testing comes down to two factors: a rational fear for personal job security, and a healthy skepticism about the ability of AI to perform tasks contextually as well as humans. This skepticism is primarily based on limitations observed in early applications of the technology.

BY MUSH HONDA

To further promote the adoption of AI in our industry, we must assuage the fears and disarm the skeptics by setting reasonable expectations and emphasizing the benefits. Fortunately, as AI becomes more mainstream (a direct result of improvements in its abilities), a clearer picture has emerged of what AI and ML can do for software testers: one that is more realistic and less encumbered by marketing hype.

First things first: Don’t panic

Here’s the good news: the AI bots are not coming for our jobs. For as long as there have been AI and automation testing tools, there have been dystopian nightmares about humans losing their place in the world. Equally prevalent are the naysayers who scoff at such doomsday scenarios as being little more than the whims of science fiction writers.

The sooner we consider AI to be just another useful tool, the sooner we can start reaping its benefits. Just as the invention of the electric screwdriver has not eliminated the need for workers to fasten screws, AI will not eliminate the need for engineers to author, edit, schedule and monitor test scripts. But it can help them perform these tasks faster, more efficiently, and with fewer distractions.

Autonomous software testing is simply more realistic and more practical when viewed in the context of AI working in tandem with humans. People will remain central to software development, since they are the ones who define the boundaries and potential of their software. The nature of software testing dictates that the “goal posts” are always shifting, as business requirements are often unclear and constantly changing. This variable nature of the testing process demands continued human oversight.

The early standards and methodologies for software testing (including the term “quality assurance”) come from the world of manufacturing product testing. Within that context, products were well-defined and testing far more mechanistic, compared to software, whose traits are malleable and often changing. In reality, however, software testing is not suited to such uniform, robotic methods of assuring quality.

In modern software development, there are many things that can’t be known by developers. There are too many changing variables in the development of software that require a higher level of decision-making than AI can provide. And yet, while fully autonomous AI is unrealistic for the foreseeable future, AI that supports and extends human efforts at software quality is still a very worthwhile pursuit. Keeping human testers in the mix to consistently monitor, correct, and teach the AI will result in an increasingly improved software product.

The three stages of AI in software testing

AI for software testing has essentially three stages of development maturity:

• Operational Testing AI

• Process Testing AI

• Systemic Testing AI

Most AI-enabled software testing is currently performed at the operational stage. Operational testing involves creating scripts that mimic the routines human testers perform hundreds of times. Process AI is a more mature version of Operational AI, with testers using Process AI for test generation. Other uses may include test coverage analysis and recommendations, defect root cause analysis and effort estimation, and test environment optimization. Process AI can also facilitate synthetic data creation based on patterns and usage.
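
To make the operational stage concrete, here is a minimal sketch of the kind of scripted routine it automates: the same steps a human tester would repeat by hand, replayed across incrementally different inputs. The `LoginForm` class and its fields are hypothetical stand-ins for a real application under test, not any particular testing framework.

```python
class LoginForm:
    """Toy stand-in for an application screen under test."""
    def __init__(self):
        self.fields = {"user": "", "password": ""}
        self.logged_in = False

    def type(self, field, value):
        self.fields[field] = value

    def submit(self):
        # Hypothetical success condition for the sketch.
        self.logged_in = (self.fields["user"] == "admin"
                          and self.fields["password"] == "s3cret")
        return self.logged_in

def run_login_test(user, password, expect_success):
    """Replay one scripted routine and check the outcome."""
    form = LoginForm()
    form.type("user", user)
    form.type("password", password)
    return form.submit() == expect_success

# The same routine, replayed across the small variations a human
# tester would otherwise type out hundreds of times.
cases = [("admin", "s3cret", True),
         ("admin", "wrong", False),
         ("", "", False)]
results = [run_login_test(*case) for case in cases]
print(all(results))  # True when every variation behaves as expected
```

Operational AI assists at exactly this level: generating and maintaining such scripts, while the test intent still comes from a human.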

The third stage, Systemic AI, is the least tenable of the three, owing to the enormous volume of training it would require. Testers can be reasonably confident that Process AI will suggest a single feature or function test that adequately assures software quality. With Systemic AI, however, testers cannot know with high confidence that the software will meet all requirements in all situations. AI at this level would test for all conceivable requirements, even those that have not been imagined by humans. This would make the work of reviewing the autonomous AI’s assumptions and conclusions such an enormous task that it would defeat the purpose of working toward full autonomy in the first place.

Set realistic expectations

After clarifying what AI can and cannot do, it is best to define what we expect from those who use it. Setting clear goals early on will prepare your team for success. When an AI tool is introduced to a testing program, it should be presented as a software project that has the full support of management, with well-defined goals and milestones. Offering an automated platform as an optional tool for testers to explore at their leisure is a setup for failure. Without a clear directive from management and a finite timeline, it is all too easy for the project to never get off the ground. Give the project a mandate and you’ll be well on your way to successful implementation. You should aim to be clear about who is on the team, what their roles are, and how they are expected to collaborate. It also means specifying what outcomes are expected, and from whom.

Accentuate the positive

Particularly in agile development environments, where software development is a team sport, AI is a technology that benefits not only testers but everyone on the development team. Give testers a stake in the project and allow them to analyze the functionality and benefits for themselves. Having agency will build confidence in their use of the tools, and convince them that AI is a tool for augmenting their abilities and preparing them for the future.

Remind your team that as software evolves, it requires more scripts and new approaches for testing added features, additional use patterns, and platform integrations. Automated testing is not a one-time occurrence. Even with machine learning assisting in the repair of scripts, there will always be opportunities to further develop the test program in pursuit of greater test coverage and higher levels of security and quality. Even with test scripts that approach 100 percent code execution, there will be new releases, new bug fixes, and new features to test. The role of the test engineer is not going anywhere; it is just evolving.

Freedom from the mundane

It is no secret that software test engineers are often burdened with a litany of mundane tasks. To be effective, testing programs are designed to audit software functionality, performance, security, look and feel, etc., in incrementally differing variations and at volume. Writing these variations is repetitive, painstaking, and, to many, even boring. By starting with this low-hanging fruit (the mundane, resource-intensive aspects of testing), you can score some early wins and gradually convince the skeptics of the value of using AI testing tools.

Converting skeptics won’t happen overnight. If you overwhelm your team by imposing sweeping changes, you may be setting yourself up for failure. Adding AI-assisted automation to your test program greatly reduces the load of such repetitive tasks, and allows test engineers to focus on new interests and skills.

For example, one of the areas where automated tests frequently fail is in the identification of objects within a user interface (UI). AI tools can identify these objects quickly and accurately, bringing clear benefit to the test script. By focusing on such operational efficiencies, you can make a strong case for embracing AI. When test engineers spend less time performing routine debugging tasks and more time focusing on strategy and coverage, they naturally become better at their jobs. When they are better at their jobs, they will be more inclined to embrace the technology.
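
A rough sketch of that idea: when a recorded locator’s primary attribute (here, the element id) no longer matches after a UI change, fall back to scoring candidates on their remaining attributes, which is one simplified way “self-healing” locator tools approximate the element a script meant. The page model and scoring rule below are illustrative assumptions, not any specific vendor’s algorithm.

```python
def find_element(page, target):
    """Return the element that best matches the recorded attributes."""
    # Fast path: exact id match, as a conventional script would do.
    for el in page:
        if el.get("id") == target.get("id"):
            return el
    # Fallback: score every element by how many of the other recorded
    # attributes still match, and pick the closest candidate.
    def score(el):
        return sum(1 for k, v in target.items()
                   if k != "id" and el.get(k) == v)
    best = max(page, key=score)
    return best if score(best) > 0 else None

# Hypothetical page after a release renamed the login button's id.
page = [
    {"id": "btn-login-v2", "tag": "button", "text": "Log in"},
    {"id": "btn-cancel", "tag": "button", "text": "Cancel"},
]
# Locator recorded against an older build: the id has since changed.
recorded = {"id": "btn-login", "tag": "button", "text": "Log in"}
print(find_element(page, recorded)["id"])  # btn-login-v2
```

A plain scripted test would simply fail here with "element not found"; the attribute-scoring fallback is what lets the test keep running while flagging the locator for human review.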

In the end, AI is only as useful as the way in which it is applied. It is not an instantaneous solution to all our problems. We need to acknowledge what it does right, and what it does better. Then we need to let it help us be better at our jobs. With that mindset, test engineers can find a very powerful partner in AI and will no doubt be much more likely to accept it into their workflow.
