
GUEST VIEW by Shayak Sen


Shayak Sen is co-founder and CTO of AI Quality solutions provider TruEra.


AI models need automated testing

In the 1990s, when software started to become ubiquitous in the business world, quality was still a big issue. It was common for new software and upgrades to be buggy and unreliable, and rollouts were difficult.

Software testing was mostly a manual process, and the people developing the software typically also tested it. Eventually, automated testing companies emerged, performing high-volume, accurate feature and load testing. Soon after, automated software monitoring tools emerged to help ensure software quality in production. In time, automated testing and monitoring became the standard, and software quality soared, which of course helped accelerate software adoption.

AI model development is at a similar inflection point. AI and machine learning technologies are being adopted at a rapid pace, but quality varies. Often, the data scientists developing the models are also the ones manually testing them, and that can lead to blind spots. Testing is manual and slow. Monitoring is nascent and ad hoc. And AI model quality is suffering, becoming a gating factor for the successful adoption of AI. In fact, Gartner estimates that 85 percent of AI projects fail.

The stakes are getting higher. While AI was first primarily used for low-stakes decisions such as movie recommendations and delivery ETAs, AI is increasingly the basis for models that can have a big impact on people’s lives and on businesses. Consider credit scoring models that can affect a person’s ability to get a mortgage, or the Zillow home-buying model debacle that led to the closure of the company’s multi-billion-dollar line of business buying and flipping homes.

Many organizations learned too late that COVID-19 broke their models — changing market conditions left models with outdated variables that no longer made sense (for instance, basing credit decisions for a travel-related credit card on volume of travel, when all non-essential travel had halted).
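
The failure mode in these cases is data drift: the live distribution of an input feature moves away from the distribution the model was trained on. Below is a minimal sketch of an automated drift check using a two-sample Kolmogorov-Smirnov test from SciPy; the travel-volume feature, the simulated distributions, and the alert threshold are all illustrative assumptions, not details from any specific lender’s model.

import numpy as np
from scipy.stats import ks_2samp

def drift_alert(train_values, live_values, p_threshold=0.01):
    # Flag drift when the live feature distribution differs
    # significantly from the training distribution.
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < p_threshold, statistic

# Illustrative data: monthly travel volume feeding a travel-card credit model.
rng = np.random.default_rng(0)
train_travel = rng.poisson(lam=4.0, size=10_000)  # pre-pandemic behavior
live_travel = rng.poisson(lam=0.3, size=10_000)   # non-essential travel halts
drifted, stat = drift_alert(train_travel, live_travel)
print(f"drift detected: {drifted}, KS statistic: {stat:.2f}")

Run on a schedule against production traffic, a check like this can surface a broken assumption in days rather than quarters.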

Not to mention, regulators are watching.


Emulating testing approaches in software development

Companies can emulate the testing journey that software development took, but AI model development requires significantly different processes. That is because AI bugs are different. AI bugs are complex statistical data anomalies (not functional bugs), and the AI black box makes it really hard to identify and debug them.

AI model development differs from software development in three important ways:
• It involves iterative training/experimentation vs. being task- and completion-oriented;
• It’s predictive vs. functional; and
• Models are created via black-box automation vs. designed by humans.

Machine learning also presents unique technical challenges:
• Opaqueness/black-box nature
• Bias and fairness
• Overfitting and unsoundness
• Model reliability
• Drift
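
To make one of these concrete, here is a minimal sketch of an automated overfitting check: compare training and holdout accuracy and warn when the generalization gap is large. The synthetic dataset, the model choice, and the 10-point gap threshold are illustrative assumptions.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real training set.
X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
train_acc = model.score(X_train, y_train)
test_acc = model.score(X_test, y_test)

# Automated gate: flag a large train/holdout gap instead of relying on
# a data scientist to eyeball the numbers.
gap = train_acc - test_acc
print(f"train={train_acc:.2f} test={test_acc:.2f} gap={gap:.2f}")
if gap > 0.10:
    print("possible overfitting: investigate before promoting this model")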

The training data that AI and ML model development depends on can also be problematic. In the software world, you could purchase generic software testing data, and it could work across different types of applications. In the AI world, training data sets need to be specifically formulated for the industry and model type in order to work.

Taking proactive steps to ensure AI model quality

So what should companies leveraging AI models do now? Take proactive steps to work automated testing and monitoring into the AI model lifecycle.

A solid AI model quality strategy will encompass four categories:
• Real-world model performance, including conceptual soundness, stability/monitoring and reliability, and segment and global performance;
• Societal factors, including fairness and transparency, and security and privacy;
• Operational factors, such as explainability and collaboration, and documentation; and
• Data quality, including missing and bad data.
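
Wired into a CI pipeline, checks from all four categories can run automatically on every model candidate. The sketch below shows what that might look like as a pytest suite; the load_model and load_eval_data helpers, the model name, the protected-attribute column, and every threshold are hypothetical stand-ins rather than any real product’s API.

# test_model_quality.py: automated quality gates across the four categories.
import numpy as np

from my_project import load_model, load_eval_data  # hypothetical helpers

model = load_model("credit_scoring_v3")  # hypothetical model name
X, y, group = load_eval_data()           # features, labels, protected attribute

def test_global_performance():
    # Real-world performance: overall accuracy above a minimum bar.
    assert model.score(X, y) >= 0.80

def test_segment_performance():
    # Performance should hold for each segment, not just in aggregate.
    for g in np.unique(group):
        mask = group == g
        assert model.score(X[mask], y[mask]) >= 0.75

def test_fairness_demographic_parity():
    # Societal factors: positive-prediction rates across groups
    # should stay within 10 percentage points of each other.
    preds = model.predict(X)
    rates = [preds[group == g].mean() for g in np.unique(group)]
    assert max(rates) - min(rates) <= 0.10

def test_data_quality():
    # Data quality: no missing values in the evaluation features.
    assert not np.isnan(X).any()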

For AI models to become ubiquitous in the business world, as software eventually did, the industry has to dedicate time and resources to quality assurance. We are nowhere near the five nines of quality that’s expected of software, but automated testing and monitoring are putting us on the path to get there.
