1. Your extensive background spans psychology, business, government, non-profits, academia, and entrepreneurship. How do you see these diverse experiences informing your perspective on the application of AI in healthcare? How can interdisciplinary thinking enhance AI's role in improving global health outcomes?
Perhaps it’s a matter of perspective. I see AI as best conceptualized as a tool, more so than a panacea, solution, or easy button. When I first started out in clinical practice, I was very fearful of misdiagnosing someone as having a psychiatric condition rather than an endocrine disorder that presented with psychiatric symptoms. As a former math/computer science undergrad, I wrote a very, very basic program that would probabilistically “consider” the symptom presentation and suggest various non-psychiatric diagnoses to consider for referral to a specialist and/or lab tests that could help point in a better diagnostic direction. It was more algorithmic than heuristic, and it was a tool, not an easy button.
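To make the idea concrete, here is a minimal sketch of the kind of naive probabilistic symptom-matching program described above. The condition names, symptom profiles, and suggested work-ups are purely illustrative assumptions, not clinical guidance and not the author's original code:

```python
# Hypothetical sketch of a naive probabilistic "differential diagnosis aid":
# it scores a few non-psychiatric conditions by what fraction of their known
# symptom profile the patient reports, then suggests a lab work-up.
# All condition/symptom/test data below is illustrative only.

# Each candidate condition maps to (symptom profile, suggested work-up).
CONDITIONS = {
    "hypothyroidism": (
        {"fatigue", "depressed mood", "weight gain", "cold intolerance"},
        "TSH/T4 panel",
    ),
    "hyperthyroidism": (
        {"anxiety", "weight loss", "palpitations", "tremor"},
        "TSH/T4 panel",
    ),
    "cushing's syndrome": (
        {"depressed mood", "weight gain", "easy bruising"},
        "cortisol test",
    ),
}

def rank_differentials(reported):
    """Rank candidate conditions by the fraction of their profile matched."""
    reported = set(reported)
    scored = []
    for name, (profile, workup) in CONDITIONS.items():
        score = len(profile & reported) / len(profile)
        if score > 0:  # drop conditions with no overlapping symptoms
            scored.append((score, name, workup))
    return sorted(scored, reverse=True)

for score, name, workup in rank_differentials(
    ["depressed mood", "weight gain", "fatigue"]
):
    print(f"{name}: match {score:.0%} -> consider {workup}")
```

As in the original program, the output is only a prompt for the clinician to consider a referral or lab test; it decides nothing on its own.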
As the question concerns the very broad “healthcare” ecosystem, and knowing that prediction is difficult, especially about the future (à la Niels Bohr), what I write today stands a more than fair chance of being seen as naïve (at best) or wrong (at worst), so all of it should be taken with a grain of salt. Having said that, I think AI will be a helpful accelerator in drug discovery. There is nascent work afoot as I write this in the area of treating individuals with schizophrenia.
There are a number of chatbots already being used, but they are not without their bugs. GPT-2, and other natural-language algorithms like it, are known to embed deeply racist, sexist, and homophobic ideas. More than one chatbot has been led disastrously astray this way, the most recent being a South Korean chatbot called Lee Luda that had the persona of a 20-year-old university student. After quickly gaining popularity and interacting with more and more users, it began using slurs to describe the queer and disabled communities.
2. As the Founding Director of the Center for Global Initiatives and having traveled to many countries, you have a unique understanding of global health needs. In what ways can AI be leveraged to address these diverse health issues? Can you provide an example of a health challenge that could be better addressed through AI integration?
The first thing that comes to mind is orphan drug development and neglected tropical diseases, along with medication development, refinement, and distribution based on ethnobotany. I worked on a project in Benin a few years ago on the latter, and we found a need for translation among native tongues, Latin genus/species classifications, and French, just to create a taxonomy of the plants that held promising medicinal properties. Then there is looking at toxicity, side effect profiles, potency, dosing, clinical efficacy, patient variation, genetic impact, etc. Most, if not all, of those could be augmented and accelerated by using AI.
As for the former, ditto. Years ago, I did research and wrote about the orphan drug development problem and neglected tropical diseases, and organizations like the World Health Organization and the Gates Foundation focus on and fund initiatives in these areas. It seems perfect for tech and AI to be marshaled to help there as well.
3. You have significant experience with start-ups and serve as an advisory board member and investor. How can emerging healthcare companies most effectively leverage AI in their services or products? What advice would you give to a health-focused startup looking to utilize AI?
I think we’ll see funding via National Science Foundation grants from the government for innovative startups’ proposals; however, my caution/advice would be that founders should not underestimate the complexity of mental illness, the diversity of symptom presentation, the concomitant complexities of treatment, or the difficulty of securing payment adequate to cover development costs, let alone reach profitability. I have seen a number of companies, and worked with a few, that imagined it would be easy and simple to throw a budding and sexy technology at the problems of mental illness and have it work at a significant, lasting, and scalable level; pass muster in randomized controlled trials; and be safe for human-subject testing, all within a highly regulated environment that is more sophisticated than an Airbnb, Uber, or TikTok.
It is incumbent on founders who want to be successful in this space to bring together the diverse parties and experts who can inform developers of the myriad challenges of not only care provision, evidence-based treatment guidelines (predicated on big data[bases]), proper assessment and diagnosis, and applicable treatment approaches (based on availability, cost, applicability to a patient’s demographics, culture, medical history, genetics, lifestyle, social determinants, and whatnot), but also measuring and tracking clinical outcomes and follow-up vis-à-vis recidivism, readmissions, and relapse rates, and then constantly using the new findings to improve said outcomes, iterate, and develop their tools to perform even better. That all sounds like a job AI can help with, but not do. At least not yet (see Bohr above).
4. As a licensed clinical psychologist, how do you see AI transforming the field of psychology and mental health? Can AI play a part in not only treatment but also in the early detection and prevention of mental health issues?
You bet, and it is happening now. Perhaps two good treatment examples are IESO Digital Health, which uses AI to analyze the language used in its therapy sessions through natural-language processing, where machines process transcripts, with the goal of giving therapists better insight into their work to ensure the delivery of high standards of care and to help trainees improve; and Lyssn.io, which provides clinics and universities with a technology designed to improve quality control and training. I have also met with folks from the mental health outcomes companies Mirah and Clinicom, the latter being a bit more impressive with their tech approach. Nabla has tools to automate clinical charting, and their API seems open.
We need to be very, very cautious not to “Minority Report” early detection – see Challenges for Artificial Intelligence in Recognizing Mental Disorders, for example. Remember, the clunky software I developed when starting my clinical practice was only a PROBABILISTIC tool to aid (just) me in my differential psychodiagnostic process, not do it for me.
As for prevention, that is more so a job for social, cultural, philosophical, interpersonal, genetic, parental, and environmental influences, not an AI. But that is just my opinion, albeit biased.
5. Hot Take Question - AI vs. Human Touch: You've dedicated your life to various facets of healthcare, all revolving around human well-being. Some argue that AI might desensitize healthcare, taking away the 'human touch' that's essential in patient care. What is your 'hot take' on this? Is there a middle ground where AI and human touch coexist in harmony?
Oh yes, for sure. Pundits are already Chicken Littl’ing it vis-à-vis art, literature, jobs, you name it, so certainly it would be the same in healthcare in general, and mental healthcare in particular, as there is a line of thought that a key part of the healing process in psychotherapy (not psychopharmacology) is the therapeutic relationship. Of course, you can have a relationship of sorts with your Alexa, or maybe even your OS (see Her), and I hold a deep love for my motorcycle, for example. As I wrote for LinkedIn:
“… (in) 1955 to be precise, John McCarthy coined the term “Artificial Intelligence,” and four years later, Arthur Samuel described what we think of as machine learning. In a clinical realm, ELIZA was the world’s first psychotherapist chatbot in 1964, capable of passing the Turing Test. ELIZA was lightyears beyond what I wrote. But even with this decades-long existence of AI and ML, and even clinical uses, it was not until 2016 that the term machine learning first made its appearance in the (arguably) top two American medical journals, The New England Journal of Medicine and the Journal of the American Medical Association.”
Also, we need to be cognizant of side effects:
“I’d like to think AI is agnostic, bias-free and data-centric, but with AI, just as with humans, you are what you eat. In one of my podcast episodes with Heather Dewey-Hagborg, we discussed the programmed-in biases of coders, based on the availability of the data/images they built from, and the subsequent impact on facial recognition. Topol noted Cathy O’Neil’s finding in her book Weapons of Math Destruction that ‘many of these models encoded human prejudice, misunderstanding, and bias.’ Uh-oh.”
As for middle ground, AI is a great tool and step-saver. A human improved with AI tools makes for a dynamic duo. We still need to better incorporate philosophy and ethics into our algos, too. It’s great to have an auto-pilot, but we still need to be able to grab the yoke when sh*t hits the fan.
For more from me on this topic:
Can AI Really Make Healthcare More Human and not be creepy?
Big Data and Predictive Analytics Potential in Healthcare: Examples, Applications and Ideas
Augmenting Medicine with AI: Hassan Tetteh, MD, on Innovation in Healthcare
Jack Hidary on Quantum Information Sciences, Grand Challenges and Moonshots
How to Protect Yourself from Fad Science
Technology Trends in Healthcare and Medicine: Will 2019 Be Different?
Commoditization, Retailization and Something (Much) Worse in Medicine and Healthcare
Fits and Starts: Predicting the (Very) Near Future of Technology and Behavioral Healthcare
Why I think 2018 will (Finally) be the Tipping Point for Medicine and Technology
Healthcare Innovation: Are there really Medical Unicorns?
Can (or Should) We Guarantee Medical Outcomes?
A Cure for What Ails Healthcare's Benchmarking Ills?
Why Global Health Matters
Can A Blockchain Approach Cure Healthcare Security's Ills?
Why Medicine is Poised for a (Big) Change
Is This the Future of Medicine? (Part 5)
Bringing Evidence into Practice, In a Big Way (Part 4)
Can Big Data Make Medicine Better? (Part 3)
Building Better Healthcare (Part 2)
Is Technology the Cure for Medicine’s Ills? (Part 1)
Access to Healthcare is a US Problem, Too