Objectivity with no empathy: how symptom checkers can help patients
from OSOZ World
by OSOZ Polska
Artificial Intelligence is getting better at diagnosing. Although AI still can't see and examine patients, it has access to a vast body of medical knowledge and up-to-date research data. It's learning very quickly and gaining new capabilities, such as emotional intelligence. An interview with Piotr Orzechowski, CEO of the startup Infermedica.
Artificial intelligence in healthcare is developing very rapidly, but the technology is being adopted on the market very slowly. Why is that, and what can be done about it?
There are many reasons – although last year there were many successful commercial implementations of artificial intelligence. Examples worth mentioning include IDx-DR, the first FDA-approved medical device exploiting AI for the diagnosis of diabetic retinopathy, and the Apple Watch Series 4's feature for detecting atrial fibrillation, which shows that intelligent algorithms are already becoming available to mainstream consumers.
In my experience, the slow progress is due to three main factors: insufficient clinical validation of solutions, legal aspects, and distressing experiences associated with the computerization of healthcare.

The first problem unfortunately affects the vast majority of AI suppliers, who have not yet provided solid evidence that the technology they're offering is safe to use and will deliver specific benefits that justify the investment.

The second factor has to do with the lack of clear legal liability for errors committed by AI. As in the case of autonomous cars, who is responsible in the event of a bad decision? The doctor, the patient, the provider, or maybe the virtual AI entity, whose license to practice can be revoked?

The final issue is related to the often painful experience of numerous organizations that have implemented solutions such as electronic patient records. In conversations with hospitals, especially in the United States, the first question is usually, "Can you integrate with our EHR system, and how complicated will it be?" Ironically, it seems that in many cases the IT infrastructure itself is a barrier to the implementation of new IT solutions, including AI.
There is currently a lot of hype about AI solutions in health. In which areas of medicine are they most promising?
I think that the impact of AI will be felt first in remote monitoring of cardiac patients, in imaging diagnostics as support for radiologists, and in preliminary assessment of a patient's symptoms. In the case of this last application, there are a number of solutions, called chatbots or virtual assistants, whose aim is to gather information from an interview with the patient and recommend the next step, replacing "Dr. Google". This type of solution is already being piloted by leading insurance companies, including Allianz, Bupa, and Prudential.
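To make this concrete, here is a minimal sketch of how such a chatbot can rank likely conditions from a series of yes/no answers, using a naive-Bayes update over a toy model. This is an illustration only, not Infermedica's actual engine; the conditions, symptoms, and probabilities below are invented for the example.

```python
# Minimal sketch of a sequential symptom checker: start from prior
# probabilities over conditions and update them after each yes/no answer.
# All conditions, symptoms, and probabilities are invented for illustration.

CONDITIONS = {          # prior probability of each condition
    "common cold": 0.55,
    "influenza": 0.30,
    "pneumonia": 0.15,
}

# P(symptom present | condition)
LIKELIHOOD = {
    "fever":      {"common cold": 0.20, "influenza": 0.90, "pneumonia": 0.80},
    "cough":      {"common cold": 0.60, "influenza": 0.70, "pneumonia": 0.90},
    "chest pain": {"common cold": 0.05, "influenza": 0.10, "pneumonia": 0.70},
}

def update(posterior, symptom, present):
    """Bayesian update of condition probabilities after one answer."""
    for cond in posterior:
        p = LIKELIHOOD[symptom][cond]
        posterior[cond] *= p if present else (1.0 - p)
    total = sum(posterior.values())
    return {c: v / total for c, v in posterior.items()}

def triage(answers):
    """Fold in each interview answer and return conditions ranked by probability."""
    posterior = dict(CONDITIONS)
    for symptom, present in answers.items():
        posterior = update(posterior, symptom, present)
    return sorted(posterior.items(), key=lambda kv: -kv[1])

if __name__ == "__main__":
    # Simulated interview: the user reports fever and cough, no chest pain.
    for condition, prob in triage({"fever": True, "cough": True, "chest pain": False}):
        print(f"{condition}: {prob:.2f}")
```

Real systems also choose which question to ask next – typically the one expected to reduce diagnostic uncertainty the most – and map the resulting ranking to a triage recommendation rather than a diagnosis.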
Thanks to its powerful data-analysis capabilities, AI can be used successfully for preliminary assessment of disease symptoms. In what direction will this develop? Today, we are usually dealing with systems that suggest conditions based on questions asked one after another. In the future, will other elements be included as well, such as real-time measurements of health parameters taken by wearables?
For a comprehensive and precise assessment of a patient’s health, answers to questions are not enough. After all, the patient’s clinical picture consists of a number of factors, such as treatment history, past diseases and medical procedures, test results, and also more subtle signals, such as the patient’s appearance, behavior, manner of speaking, and even the circumstances of the visit. Thus far no AI system has been able to aggregate all of these elements from various sources – from an electronic patient record to measurements taken by medical devices. This is definitely the direction in which AI has to develop in order to come close to human competence.
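As a rough illustration of what such aggregation might look like, the sketch below combines interview answers with the kinds of data a future system might pull from other sources. All field names are hypothetical; no real EHR or wearable API is referenced.

```python
# Hypothetical container for the "full clinical picture" described above,
# aggregating the patient interview with EHR, lab, and wearable data.
from dataclasses import dataclass, field

@dataclass
class ClinicalPicture:
    reported_symptoms: dict[str, bool]                                # interview
    treatment_history: list[str] = field(default_factory=list)       # from EHR
    past_procedures: list[str] = field(default_factory=list)         # from EHR
    lab_results: dict[str, float] = field(default_factory=dict)      # test results
    wearable_vitals: dict[str, float] = field(default_factory=dict)  # device data

picture = ClinicalPicture(
    reported_symptoms={"fever": True, "cough": True},
    lab_results={"CRP_mg_per_L": 42.0},
    wearable_vitals={"resting_heart_rate_bpm": 96.0},
)
```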
In systems of this type, we still use terms like “symptom evaluation” and “preliminary health assessment”, rather than “diagnosis”. When will we be able to speak of actually diagnosing patients, e.g. with the help of an application? What conditions must be met?
Due to the legal environment, as well as the intended use of the available systems, I don't think it will happen soon. "Symptom checkers" are currently based on only a portion of the information available to a doctor – they cannot see or hear the patient, assess the patient's appearance and overall health, or perform a physical examination. The doctor often knows the patient and can extract much more information from the conversation than just a subjective assessment of symptoms. Not to mention the patient's history and test results, which are essentially unavailable to an external application at this time. We won't be able to speak of a diagnosis until we close the gap between what a doctor can see, hear and feel and the senses that a phone application has.
Should “symptom checker” AI systems be enriched with elements of emotional intelligence, so that – like a doctor or nurse – they will be able to monitor and respond appropriately to patient behavior, facial expressions, etc.?
Absolutely. AI systems, especially voice applications and avatars, should be enriched with elements of emotional intelligence. A friend of mine, an experienced doctor, once told me he can assess the health of a patient admitted to the emergency room with just one look. He said he attends first to the person sitting quietly in a corner and saying very little – such a patient may no longer have the strength to communicate his pain. Moreover, the reaction to pain depends on many factors, including culture and race. Imitating emotional intelligence is not a trivial matter, and it will probably be years before we have data sets – photos, videos and behaviors – sufficient to train algorithms to do this.
The gap between the health needs of a growing, ageing population and the available medical workforce is widening steadily, and AI systems could help close it. As a society long accustomed to the traditional model of medicine, are we ready for such a revolution? How can we persuade nurses and doctors who are afraid of being replaced by technology?
It seems to me that the biggest challenge is actually to implement AI without the need for a revolution. In the fantastic interview "Making the Right Choice the Easy Choice", Roy Rosin, the chief innovation officer at the American health system Penn Medicine, talks about how to improve patients' health without changing their habits. For example, in Grand Rapids, Michigan, fluoride has been added to the tap water since 1945, because the residents had constant dental problems. Eleven years later, tooth decay in children born after that year had decreased by 60%. I think it should be like this with artificial intelligence: we should introduce it gently, without having to change the behavior and habits of patients, nurses and doctors.
Should AI systems in healthcare somehow be validated to ensure that they work in accordance with the best medical knowledge?
In contrast to classic medical devices, and even medicines, validation of AI systems presents completely new challenges. First of all, in the case of models built by machine learning methods, there is no simple pattern describing their behavior. So how many cases do we have to verify to ensure that the system is safe to use? Second, when dealing with a system that changes its model over time, how often should the validation be repeated? Or maybe only the development methodology should be subject to validation?
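The first of those questions can at least be made quantitative for a frozen model. If a system is tested on n independent cases and makes no errors, a standard binomial bound – sometimes called the "rule of three" – puts the 95% upper confidence limit on its true error rate at roughly 3/n. The sketch below is a general statistical illustration, not a regulatory requirement.

```python
# "Rule of three": with zero failures observed in n independent test cases,
# the 95% upper confidence bound on the true error rate is approximately 3/n.
# Derivation: solve (1 - p)^n = 0.05 for p, which gives p ~ 3/n for large n.

def error_rate_upper_bound(n_cases: int, confidence: float = 0.95) -> float:
    """Largest error rate still consistent with zero observed failures."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n_cases)

for n in (100, 1_000, 10_000):
    print(f"{n:>6} error-free cases -> true error rate below "
          f"{error_rate_upper_bound(n):.4%} (95% confidence)")
```

By this reasoning, demonstrating an error rate below 0.1% already requires on the order of 3,000 error-free test cases – and the count resets whenever the model changes, which is exactly the second difficulty raised above.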
Many people wonder about the security of such solutions. What guarantee does the patient have that his medical data are secure and that the analysis process itself will not be flawed, e.g. due to a cyberattack?
Privacy issues are not specific to AI. After all, our data are already stored in hospitals and clinics, at the dentist, and in banking systems. I think the reliability of the software provider and its technological facilities are crucial here. Are we convinced that a given company can protect our data and its infrastructure against an attack? Do we have to provide our personal data in order to take advantage of the solution? For example, none of our services collects any data that could identify the user, not even IP addresses. We also don’t require a login. At the end of the day, however, it’s a question of credibility – of whether our users believe that we really do what we say we do.
What development trends in artificial intelligence will become dominant in the global healthcare market in the coming years?
I believe that four areas will play a key role: remote patient monitoring and measurements of vital parameters, imaging diagnostics, patient and physician support in differential diagnosis, and the use of AI in personalized medicine for the selection of an individual treatment plan.