Should there be a Hippocratic Oath covering artificial intelligence?

BY JERRY ZEIDENBERG

CHICAGO – Do we need a Hippocratic Oath for AI? That question was posed at HIMSS by Dr. Emre Sezgin, principal investigator and head of the Intelligent Futures Research Lab at Nationwide Children’s Hospital, in Columbus, Ohio, and an assistant professor of pediatrics at Ohio State University’s College of Medicine.

The traditional oath requires physicians to pledge to prescribe only beneficial treatments and to refrain from causing harm. When you delve into the possible misuses of artificial intelligence and see the trouble it could cause, you start to agree. And maybe it shouldn't only be physicians taking the pledge, but all professionals involved in applying AI to healthcare.

Dr. Sezgin was part of the ML and AI Forum. The panellists discussed voice technology in a session called: “It Speaks, It Listens, It’s Your Daily Voice Assistant”. After listening, I came away with a new understanding – and a sense of awe – of the power of voice technologies.

As Dr. Sezgin noted, “Voice is very personal, and it’s the most personal way to identify us, as individuals, outside of our DNA. [The analysis of voice can determine] our emotional state, gender, race, age and geographical background.”

When AI systems know all of this, Dr. Sezgin asked, “Will it discriminate against me, based on my voice?”

Another panellist, Dr. David Metcalf, asserted that police could potentially use this information to target citizens. (It wasn’t mentioned, but such a thought evoked images of Chinese police and authorities using AI to identify Uyghurs and other minorities.)

Dr. Metcalf is director of the Mixed Emerging Technology Integration Lab at the University of Central Florida.

He observed that by parsing the voice, AI systems can now detect Parkinson’s Disease. That’s good, if it’s used for early diagnosis and treatment. But what if it’s used by employers when hiring – will they hire a person who shows signs of developing a serious disease? What if they’re also using other voice technologies to assess employees?

Freddie Feldman, director of voice and conversational interfaces at Wolters Kluwer Health, noted that systems are now available that understand the conversations between patients and doctors, an application known as AI-powered scribes. The advertised benefit is to automatically capture the clinical encounter and to generate the chart, and in this way reduce the burden on the doctor. It’s a terrific way of helping with physician burnout – doctors have too much paperwork, after all.

But Feldman asked, will the systems also pick up what you’re saying in the waiting room? How will this information be used?

“It highlights the need for regulation in this area,” he commented.

The panellists naturally discussed ChatGPT – a topic that seemed to be on the lips of every speaker at the HIMSS conference. Feldman, for his part, was highly skeptical of the powers of ChatGPT. “Are you going to let patients talk to it? No, it can’t be trusted, we don’t know where the information comes from. It hallucinates. Ask it a question, and it gives you an answer, but we don’t know where it comes from, and we don’t [immediately] know if it’s correct.”

He described a test of ChatGPT, where it was asked: How do you instruct a person who is about to die? ChatGPT responded by suggesting which local funeral home to use. “We wanted to know what language to use with a dying person,” explained Feldman. However, he said that with further coaching and questions, ChatGPT was able to understand and provide a more appropriate answer.

Voice technologies and conversational AI, including large-language models like ChatGPT, are evolving at an accelerated rate, observed Dr. Sezgin. But issues of standards, trust and ethics aren’t keeping pace. There needs to be more attention paid to privacy. Respect for personal rights and values, moreover, should be reinforced, he said.