
MACHINE LEARNING MEETS MEDICINE


WHILE AI AND MACHINE LEARNING ARE RELATIVELY NEW TO MANY FIELDS, the idea of computers using data to help us make decisions has been in our hospitals for decades. As a result, once big data and neural networks came around, they were readily adopted into clinical workflow—especially in the realm of radiology.

At first, these algorithms served to relieve and double-check the eyesight of radiologists, who typically spend eight to ten hours a day staring at images until so benumbed that they’re bound to miss something. Later, the algorithms were used to automate tasks even well-rested humans aren’t very good at, like measuring whether a tumor has grown, the space between discs in the spine, or the amount of plaque built up in an artery. Eventually, the technology evolved to scan an image and identify, classify, and even predict the outcome of disease. And that’s when the real problems arose.

For instance, Judy Gichoya, a multidisciplinary researcher in both informatics and interventional radiology, was part of a team that found that AI designed to read medical images like X-rays and CT scans could also incidentally predict a patient’s self-reported race just by looking at the scan, even from corrupted or cropped images. Perhaps even more concerning: Gichoya and her team could not figure out how or why the algorithm could pinpoint the person’s race.

Regardless of why, the study’s results indicate that, if these systems can discern a person’s racial background so easily and accurately, our understanding of these deep learning models, and of the data they were trained on, isn’t deep enough. “We need to better understand the consequences of deploying these systems,” says Gichoya, assistant professor in the Division of Interventional Radiology and Informatics at Emory. “The transparency is missing.”

She is now building a global network of AI researchers across disciplines (doctors, coders, scientists, etc.) who are concerned about bias in these systems and fairness in imaging. The self-described “AI Avengers” span six universities and three continents. Their goal is to build and provide diverse datasets to researchers and companies to better ensure that their systems work for everyone.

Meanwhile, her lab, the Healthcare Innovation and Translational Informatics Lab at Emory, which she co-leads with Hari Trivedi, has just released the EMory BrEast Imaging Dataset (EMBED), a racially diverse granular dataset of 3.5 million screening and diagnostic mammograms. This is one of the most diverse datasets for breast imaging ever compiled, representing 116,000 women divided equally between Black and white, in hopes of creating AI models that will better serve everyone.

“People think bias is always a bad thing,” says Gichoya. “It’s not. We just need to understand it and what it means for our patients.”
