Worcester Medicine
Volume 92 • Number 4
Published by Worcester District Medical Society
Winter 2023
WDMS.ORG
A.I. In Medicine
Contents

on the cover
Cover design by Parul Sarwal, MD ©M.Wallflower/sparklestroke via Canva.com

Editorial 4
Parul Sarwal, MD

The Doctor Will See You Now… and 24x7 5
Larry Garber, MD

Society Snippet: 17th Annual Louis A. Cottle Medical Education Conference 7

Artificial Intelligence in Mental Healthcare: A Story of Hope and Hazard 7
Rajendra Aldis, MD, MSCS and Nicholas Carson, MD, FRCPC

AI in Healthcare: Balancing Innovation with Regulation 9
Vinit Gilvaz, MD and Zeba Hashmath, MD

WDMS Member Survey: Healthcare Professionals' Opinions on Artificial Intelligence in Medicine 10

Do Patients Want Artificial Intelligence or Human Intelligence? 11
Larry Garber, MD

Can AI be a Good Doctor? What measuring a computer's medical ability teaches us about human doctors 12
Andrew Beam, PhD

Digital Inclusion of Elderly People: Designing a purposeful serious game interface with memorable music 14
Sunny Choi, PhD

Pixels, Patterns, and Patients: Radiology Residency in the AI Revolution 15
Michael J. Purcaro, MD/PhD

Artificial Intelligence in Health Professions Education 17
Sonia N. Chimienti, MD, FIDSA and Anna K. Morin, PharmD, RPh

Cartoon 18
Christopher Baker, MD

The Quest for Childhood Injury Prevention Embodied in Safety Quest 19
Michael Hirsch, MD

Artificial Intelligence in Nursing 20
Cynthia Delmas, RN

Is it Time to Rethink How We Teach the Art of the Clinical Interview? A Medical Student Posits the Use of AI to Drill Doctoring and Clinical Skills 22
Natasha Bitar, BS

As I See It 23

Legal Consult: Financial Transparency Comes Close to Home 24
Peter J. Martin, Esq. and Aastha Sharma, Esq.

Book Review: The Masters of Medicine by Andrew Lam, MD (2023) 26
Peter T. Zacharia, MD

In Memoriam 27
Christopher H. Linden, MD; Edward L. Amaral, MD
published by
Worcester District Medical Society
321 Main Street, Worcester, MA 01608
wdms.org | mwright@wdms.org | 508-753-1579

wdms editorial board
Lisa Beittel, MBA; Sonia Chimienti, MD; Anthony L. Esposito, MD; Larry Garber, MD; Rebecca Kowaloff, DO; Susan Krantz, MD; Julianne Lauring, MD; Anna Morin, PharmD; Nancy Morris, PhD, NP; Thoru Pederson, PhD; Joel Popkin, MD; Alwyn Rapose, MD; Parul Sarwal, MD; Akil Sherif, MD (SVH, Cardiology Fellow); Robert Sorrenti, MD; Martha Wright, MBA; Peter Zacharia, MD; Alex Newbury, MD (UMass, Radiology Resident); Arunava Saha, MD (SVH, Medicine Resident); Olivia Buckle, Student Representative

wdms officers
President: Giles Whalen, MD
Vice President: Alwyn Rapose, MD
Secretary: Michelle Hadley, DO
Treasurer: B. Dale Magee, MD

wdms administration
Martha Wright, MBA, Executive Director
Melissa Boucher, Administrative Assistant

produced by
Parul Sarwal, MD, Editor-in-Chief
Sloane Perron, Editor
Robert Howard, Designer
Guiding Channels, Printer

thank you to
The Reliant Medical Group, UMass Memorial Health, Music Worcester, Physicians Insurance, AdCare Hospital of Worcester

advertising
Inquiries to Martha Wright: mwright@wdms.org | 508-753-1579

Worcester Medicine does not hold itself responsible for statements made by any contributor. Statements or opinions expressed in Worcester Medicine reflect the views of the author(s) and not the official policy of the Worcester District Medical Society unless so stated. Although all advertising material is expected to conform to ethical standards, acceptance does not imply endorsement by Worcester Medicine unless stated. Material printed in Worcester Medicine is covered by copyright. No copyright is claimed to any work of the U.S. government. No part of this publication may be reproduced or transmitted in any form without written permission. For information on subscriptions, permissions, reprints and other services, contact the Worcester District Medical Society.
Editorial
Parul Sarwal, MD, Editor-in-Chief
In the grand tapestry of medical progress, artificial intelligence (AI) is emerging as one of the most revolutionary fibers. Though it has gained mainstream momentum in medicine only recently, the ball started rolling in the 1960s, when the Dendral program pioneered AI by interpreting mass spectrometry data. The subsequent decade saw MYCIN demonstrate early diagnostic capabilities with rule-based systems, influencing antibiotic treatment decisions. The 1980s brought the advent of computer-aided diagnosis (CAD) systems for medical imaging. Moving into the 21st century, IBM's Watson showcased AI's potential in complex decision-making, especially in oncology.

I remember plugging in our PlayStation to Stanford's Folding@home project as a teenager; its premise was to harness the collective power of volunteers' computer processors to simulate protein folding for disease research. While not explicitly AI, it was my earliest encounter with distributed computing, a way to leverage technology to analyze complex biological data. Not long after, the 2010s marked a pivotal shift with the rise of deep learning, which significantly improved image recognition for medical diagnoses. This historical trajectory reveals AI's evolution from rudimentary rule-based systems to sophisticated, data-driven applications in healthcare.

As the chatter on the topic gets louder, physicians, educators, and researchers share their experiences with AI in this issue of Worcester Medicine. We surveyed WDMS members to gauge their perceptions of AI. Our results, which are included in this issue, echoed the obvious: AI is streamlining administrative tasks for the doctor, allowing more time for patient care. Guideline-based algorithms infused with the power of AI are enhancing medical decision-making and, thus, patient outcomes. Telemedicine, potentiated by AI, has transcended geographical boundaries.
It connects patients with specialists and provides healthcare to remote corners of the world, democratizing access to quality care. In this issue, Dr. Lawrence Garber, Medical Director for Informatics at Reliant Medical Group, talks about some of the exciting ways AI-enabled tools are making life easier for both doctors and patients. With AI, the patient is no longer a passive recipient of care but an active participant in their journey to well-being. The confluence of digital health and personal genomics allows for customized
treatment plans, tailored to each patient's unique genetic makeup, medical history, and lifestyle.

Beyond diagnosis and treatment, AI is bolstering the once-arduous task of medical research. It analyzes vast datasets, discerning patterns and anomalies that the human eye might miss, accelerating drug discovery and uncovering potential cures for otherwise enigmatic diseases.

As with any other technology, AI comes with its challenges. In this issue, Drs. Vinit Gilvaz and Zeba Hashmath discuss how ethical concerns and data security inform regulation in the age of AI. According to surveyed WDMS members, adopting AI in medicine may lead to reduced patient trust and a sense of depersonalization. Dr. Sonia Chimienti, senior associate dean of medical education at Dartmouth's Geisel School of Medicine, reinforces the importance of communication skills and teamwork to maintain the human touch while incorporating AI in clinical education and training. After all, AI is not a replacement for the empathetic doctor… or is it? Dr. Andrew Beam, founding deputy editor of NEJM AI, navigates this nuanced question.

In the AI-driven medical future, collaboration among clinicians, researchers, administrators, and technology developers will be the linchpin. This revolution transcends disciplines and requires the knitting of knowledge and innovation. The future of medicine is not just about AI; it's about the harmony of human and artificial intelligence, an interweaving of hope, healing, and endless possibilities. +
The Doctor Will See You Now… and 24x7 Larry Garber, MD
Artificial intelligence (AI) is enabling many advancements in healthcare, ranging from improved quality of care and patient safety to improved access to care. AI can act like a supervisor looking over our shoulder as we practice, making sure that we don't make mistakes. It can also make us work more efficiently by taking care of menial tasks like typing notes or summarizing a patient record for an insurance company. It is improving our experience as clinicians as well as the experience of our patients. But now AI is also making it practical to go one step further: ensuring that our patients are receiving good care every day, all day long, regardless of where they are.

As practicing physicians, we know that even with a much larger support staff we still can't keep track of all our patients every day. So how do we feel confident that our diabetic patients are keeping their sugars under control, our congestive heart failure patients are watching their salt and fluid intake, and our frail elderly patients aren't teetering on the edge of needing to go to the ED? The answer is through remote patient monitoring and digital therapeutics, both of which are now made practical by AI.

Remote patient monitoring (RPM) has been around for over a decade. It allows wireless devices such as weight scales, blood pressure monitors, and wearable fitness trackers to record measurements and automatically send them to our electronic health records (EHRs). If there are abnormal readings, we get alerts. There are two problems with this. The first is that historically there have been too many alerts, and that's where AI comes to the rescue. AI systems are able to look not only at specific values or trends but also at patterns, to see whether a collection of data points truly indicates the need for an intervention or is simply an acceptable fluctuation. And the beauty of AI is its use of machine learning, which allows it to improve its algorithms over time.
Better algorithms mean fewer alerts and a higher positive predictive value that the alerts will be actionable.

The second problem with RPM is that historically it has required patient engagement. The patient would have to wear something or remember to step on a scale or take a blood pressure measurement. They'd also have to remember to charge the batteries. But if you're trying to continually monitor, for instance, a frail elderly patient's health, a wearable device won't be monitoring their health while it's charging. In fact, when that frail elderly patient wakes up, feels crummy, and decides to stay in bed, they're not going to put on their activity monitor, check their weight, or check their blood pressure on the day those readings are most important! Did they not measure their weight because they forgot, or is it because they
are unconscious on the floor? That's where AI has made ambient patient monitoring possible. Ambient monitoring is like your smoke detector: it continuously and reliably monitors your safety without you having to engage with it or even think about it. There are now wireless radar devices that can privately monitor the activity of patients in their homes, their sleep patterns, and even their heart rate and respiratory rate without requiring the patients to do anything! Slower walking can indicate an impending decline in health [1] and an increased risk of falls [2]. Restless sleep can also indicate impending clinical decline [3]. And, with the use of AI, emergency events can be predicted on average 10-14 days in advance using ambient monitors, allowing plenty of time for interventions to be effective in preventing the need for an ED visit [4].

AI is also helping make digital therapeutics (DTx) more effective. DTx are apps for your smartphone, tablet, or laptop that help patients with chronic conditions take care of themselves. For example, a diabetes DTx app can see that a patient's sugar is high and, using AI, give them advice regarding their diet or insulin dosing. Behavioral health DTx apps are helping patients control their anxiety and depression without having to contact their healthcare provider. With the shortage of healthcare providers, DTx is giving patients access to care that was otherwise impossible [5]. And generative AI is making it possible to have verbal conversations with voice assistants like Siri or Alexa that can have therapeutic benefits as well [6].

We all want our patients to maximize their health, make good healthcare decisions, and receive good care, whether they are in our office or, as is true the majority of the time, when they are not.
While we don't need to be continually reminded that each of our patients is doing well each day, AI-enabled tools like remote patient monitoring and digital therapeutics will give us the peace of mind of knowing that they are receiving good care all the time. +

Dr. Garber is an internist, the Medical Director for Informatics, and the Associate Medical Director of Research at Reliant Medical Group.

References
1. Onder G, Penninx BW, Ferrucci L, Fried LP, Guralnik JM, Pahor M. Measures of physical performance and risk for progressive and catastrophic disability: results from the Women's Health and Aging Study. J Gerontol A Biol Sci Med Sci. 2005 Jan;60(1):74-9. doi: 10.1093/gerona/60.1.74. PMID: 15741286.
2. Adam CE, Fitzpatrick AL, Leary CS, Hajat A, Ilango SD, Park C, Phelan EA, Semmens EO. Change in gait speed and fall risk among community-dwelling older adults with and without mild cognitive impairment: a retrospective cohort analysis. BMC Geriatr. 2023 May 25;23(1):328. doi: 10.1186/s12877-023-03890-6. PMID: 37231344; PMCID: PMC10214622.
3. Schütz N, Saner H, Botros A, Pais B, Santschi V, Buluschek P, Gatica-Perez D, Urwyler P, Müri RM, Nef T. Contactless Sleep Monitoring for Early Detection of Health Deteriorations in Community-Dwelling Older Adults: Exploratory Study. JMIR Mhealth Uhealth. 2021 Jun 11;9(6):e24666. doi: 10.2196/24666. PMID: 34114966; PMCID: PMC8235297.
4. Rantz M, Phillips LJ, Galambos C, Lane K, Alexander GL, Despins L, Koopman RJ, Skubic M, Hicks L, Miller S, Craver A, Harris BH, Deroche CB. Randomized Trial of Intelligent Sensor System for Early Illness Alerts in Senior Housing. J Am Med Dir Assoc. 2017 Oct 1;18(10):860-870. doi: 10.1016/j.jamda.2017.05.012. Epub 2017 Jul 12. PMID: 28711423; PMCID: PMC5679074.
5. Youn SJ, Jaso B, Eyllon M, Sah P, Hoyler G, Barnes JB, Jarama K, Murillo L, O'Dea H, Orth L, Pennine M, Rogers E, Welch G, Nordberg SS. Leveraging Implementation Science to Integrate Digital Mental Health Interventions as part of Routine Care in a Practice Research Network. Adm Policy Ment Health. 2023 Aug 24. doi: 10.1007/s10488-023-01292-9. Epub ahead of print. PMID: 37615809.
Society Snippet
The 17th Annual Louis A. Cottle Medical Education Conference, exploring "The Age of Artificial Intelligence in Medicine," took place on October 18, 2023, at the Beechwood Hotel, Worcester. Organized by the Medical Education Committee of the Worcester District Medical Society, the event featured a dinner and presentations by keynote speaker Dr. Andrew Beam and panelists Dr. Larry Garber, Dr. Neil Marya, and Ricardo Poza, discussing the evolving landscape of AI in medicine.

Artificial Intelligence in Mental Healthcare: A Story of Hope and Hazard
Rajendra Aldis, MD, MSCS and Nicholas Carson, MD, FRCPC

The term artificial intelligence (AI) was coined at a computer science conference at Dartmouth College in 1956. Its use in healthcare began soon after, starting with rule-based systems that followed hard-coded instructions and then evolving to more advanced models that learn independently from data. Though the rate of adoption of AI in mental healthcare has been slower than in other medical specialties, its presence is growing.

AI has the potential to address several challenges faced by mental healthcare. Providers today have at their fingertips an overwhelming amount of data. Electronic health records contain tens of thousands of data points, and providers must consider an increasing number of them when making clinical decisions. However, there are limits to how many variables humans can take into consideration at once, and when that capacity is exceeded, we experience information overload. When compounded by stress, fatigue, and competing demands on attention, information overload can lead to errors [1].

One of the most critical decisions providers make is determining suicide risk. Suicide was among the top nine leading causes of death for people ages 10-64 and remains the second leading cause of death for people ages 10-14 and 25-34 [2]. There are dozens of factors that must be considered when assessing suicide risk, and making accurate assessments is challenging [3]. Healthcare providers struggle to identify patients at risk: 54% of people who die by suicide were seen by a healthcare provider in the month before their death, but clinicians are typically no better than chance at estimating suicide risk [4]. Several AI models have been developed that can sort through large amounts of data to identify patients at risk for suicide. One model, which calculated a risk score using a combination of electronic health record data and patient report, performed better than clinician assessment alone [5].
With rigorous testing via randomized controlled trials, such models might eventually prove to be a cost-effective way to bring at-risk patients to the attention of busy providers [6]. In addition to augmenting clinical decision making, AI can also help advance our understanding of mental disorders. The mental health diagnostic categories used currently are heterogeneous; two patients with different sets of symptoms can receive the same diagnosis. There is also considerable overlap between mental disorders such that the same set of symptoms can result in different
diagnoses [7]. Diagnostic heterogeneity and overlap complicate using diagnostic categories for clinical care and research. Artificial intelligence, particularly a form called unsupervised learning that can discern new patterns in data, can be used to identify subpopulations within diagnostic categories. This approach has been used to identify subgroups of mental health disorders such as depression and PTSD [8,9].

Another challenge in mental healthcare is determining which treatment will work for a particular patient. Patients frequently endure several trials of medications before finding one that is effective. If there were a way to personalize treatment recommendations, patients could be spared a lengthy trial-and-error process. AI models have been developed that predict responses to a range of mental health treatments, including antidepressants and transcranial magnetic stimulation [8,10].

As with any technology, there are risks associated with using AI in healthcare. AI algorithms can be biased and perpetuate healthcare disparities. In one striking example, a large health insurance company used an AI model to identify patients with greater illness severity in order to provide them with more support. However, the model incorrectly underestimated the illness severity of Black patients and therefore misjudged the care management support they deserved [11].

Another limitation of AI is that some models make decisions in a "black box" fashion, unable to provide the reasoning used to reach a decision. This lack of transparency can lead to distrust among providers and patients. A further weakness of AI models is that their accuracy can degrade over time, a phenomenon known as model drift. For example, the dramatic impact that COVID had on healthcare caused some models created before COVID to perform poorly [12].
To detect and correct drift, models require monitoring and sometimes retraining. These maintenance needs may increase the cost of AI compared to traditional methods. Also, AI models can struggle when faced with data they have not seen before, so a model created in one healthcare setting may not perform as well in another. This weakness, known as a lack of generalizability, can limit the widespread use of an AI model.

Healthcare systems can take several steps to mitigate AI risks. Healthcare systems that use AI should have governance in place to ensure that AI is safe, effective, and beneficial [13]. Governance should be performed by multidisciplinary teams composed of providers, patient representatives, bioethicists, and AI experts. These teams can assess AI models for bias, explainability, and generalizability. Healthcare systems should also plan for how to transparently communicate with providers and patients about how AI is being used to deliver care. One
can imagine a situation where an AI tool delivers a result (e.g., a risk score for suicidal behavior) that differs from the provider's judgment. This mismatch might confuse patients and providers and hinder treatment decisions.

The presence of AI in mental health is likely to continue to grow. Providers will encounter AI more frequently, and it will become increasingly necessary to understand its risks and benefits. Healthcare training programs should incorporate AI competency into curricula, much as students learn to critically evaluate biomedical literature [14]. With this knowledge, providers can critically assess the AI models they encounter and have informed conversations with their patients about how AI is impacting the care they receive.

AI in mental healthcare today is at a tipping point, where tremendous potential is tempered by significant risk. Research has led to the development of accurate tools, but more work is needed to better understand their real-world impact on providers, healthcare systems, and patients. Such work is underway, and there is reason to be optimistic about the benefits AI may bring to mental healthcare. +

Rajendra Aldis, MD, MSCS
Associate Medical Director of Research Informatics, Cambridge Health Alliance
Physician Informaticist, Health Equity Research Lab
Instructor in Psychiatry, Harvard Medical School
Email: raaldis@challiance.org

Nicholas Carson, MD, FRCPC
Division Chief, Child and Adolescent Psychiatry, Cambridge Health Alliance
Research Scientist, Health Equity Research Lab
Assistant Professor in Psychiatry, Harvard Medical School
Email: ncarson@challiance.org

Disclosures: Dr. Carson receives research funding from Harvard University, NIMH, SAMHSA, and NCCIH.

References
1. Nijor, S., Rallis, G., Lad, N., & Gokcen, E. (2022). Patient Safety Issues From Information Overload in Electronic Medical Records. Journal of Patient Safety, 18(6), e999–e1003. https://doi.org/10.1097/PTS.0000000000001002
2. CDC (2023). Facts About Suicide. CDC. https://www.cdc.gov/suicide/facts/index.html
3. Franklin, J. C., Ribeiro, J. D., Fox, K. R., Bentley, K. H., Kleiman, E. M., Huang, X., Musacchio, K. M., Jaroszewski, A. C., Chang, B. P., & Nock, M. K. (2017). Risk factors for suicidal thoughts and behaviors: A meta-analysis of 50 years of research. Psychological Bulletin, 143(2), 187–232. https://doi.org/10.1037/bul0000084
4. Ahmedani, B. K., Simon, G. E., Stewart, C., Beck, A., Waitzfelder, B. E., Rossom, R., Lynch, F., Owen-Smith, A., Hunkeler, E. M., Whiteside, U., Operskalski, B. H., Coffey, M. J., & Solberg, L. I. (2014). Health care contacts in the year before suicide death. Journal of General Internal Medicine, 29(6), 870–877. https://doi.org/10.1007/s11606-014-2767-3
5. Nock, M. K., Millner, A. J., Ross, E. L., Kennedy, C. J., Al-Suwaidi, M., Barak-Corren, Y., Castro, V. M., Castro-Ramirez, F., Lauricella, T., Murman, N., Petukhova, M., Bird, S. A., Reis, B., Smoller, J. W., & Kessler, R. C. (2022). Prediction of Suicide Attempts Using Clinician Assessment, Patient Self-report, and Electronic Health Records. JAMA Network Open, 5(1), e2144373. https://doi.org/10.1001/jamanetworkopen.2021.44373
AI in Healthcare: Balancing Innovation with Regulation Vinit Gilvaz, MD Zeba Hashmath, MD
Art created using Midjourney

Over the past few decades, computers have revolutionized the practice of medicine. They allow easy access to patient records, streamline provider communication, track prescriptions, and have enhanced our radiodiagnostic capabilities. One of the greatest impacts of computers on healthcare has come from the use of electronic medical records (EMRs) to document and store patient information. They have taken us away from clipboards and paper charts and given us access to troves of patient data, all in one place.

However, the widespread implementation of EMRs did not emerge from a technological breakthrough but rather a legislative move. The Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009 incentivized healthcare centers nationwide to adopt EMRs by rewarding compliance and penalizing non-adherence. As a result, EMR adoption in the United States soared, exceeding 95% in most regions by 2015. Although it is arguable whether this legislative policy achieved its specific objective of decreasing healthcare spending, its influence on medical practice across the country is undeniable.

Several parallel advancements in medicine, fueled by research in multiomics and advanced imaging, have translated to more testing on the clinical front, allowing nuanced data to be collected on individual patients. These advancements, coupled with the rise of smart sensors (activity trackers, continuous glucose monitors, long-term cardiac monitors, etc.), have resulted in an explosion of patient data. Currently, we amass several exabytes (billions of gigabytes) of healthcare data annually, a figure projected to grow exponentially over the coming years. While intended to enhance patient care, this vast pool of data often leaves healthcare professionals feeling overburdened and overwhelmed. As Dr.
Atul Gawande rightly put it, "The volume and complexity of what we know has exceeded our individual ability to deliver its benefits correctly, safely, or reliably. Knowledge has both saved us and burdened us."

Herein lies the potential of artificial intelligence (AI). AI-based tools can utilize these large data sets to identify patterns and generate clinically meaningful insights. AI is an umbrella term for computer algorithms capable of mimicking human intellect. It includes approaches such as machine learning (ML) and deep learning. ML models, as the name implies, learn from examples (they are trained) rather than being guided by predefined rules. They can improve and adapt in light of new information; quite like humans, they "learn" as they "experience" new data. Deep learning is a subset of ML that employs artificial neural networks (ANNs), multi-layered computational structures inspired by the neural connections in the brain. Most of the
recent breakthroughs in AI have been largely attributed to deep learning [1]. The more data an ML or deep learning model is trained on, the better it performs, making it the ideal tool for processing the abundance of patient data we accumulate daily.

From precisely detecting sepsis patients at risk of clinical deterioration [2] to forecasting 30-day readmissions in heart failure patients [3], there are countless examples of AI models accurately predicting patient outcomes. Advancements in computer vision have enabled automatic detection of pathology on radiographic images, often more accurately than trained radiologists [1]. Lately, much attention has focused on "foundation models," AI models trained on vast amounts of unlabeled data. Large language models (LLMs), such as OpenAI's ChatGPT and Google's PaLM, are prime examples. They can provide detailed answers to complex medical questions and enable natural interactions with computers. Recent iterations of these models have shown promise in answering USMLE (United States Medical Licensing Examination)-style questions with high accuracy and are currently being evaluated for their potential as clinical decision support systems [4].

Needless to say, the possibilities of AI in medicine are endless. However, despite all the progress made thus far, we have yet to see widespread implementation of AI tools in healthcare. There are several reasons for this, one being the technical challenges associated with AI models. Poor interpretability of large-scale models and the risk of "hallucination" with some of the newer LLMs are among the many technical hurdles that must be overcome. Another significant barrier to the broad adoption of AI in healthcare is the inability of regulation to match the pace of innovation. As with EMR adoption, we need a combination of congressional guidance and enforceable regulation to ensure safe and effective adoption across the country.
This is easier said than done, especially when trying to keep pace with a fast-moving field like AI. Big Tech has always been known to "move fast and break things" and aims to disrupt the industries it ventures into. Our regulatory bodies, on the other hand, are known for their slow and measured approach. This has delayed widespread commercial implementation and deprived patients of the potential benefits of AI.

In the United States, the Food and Drug Administration (FDA) oversees the safety and effectiveness of drugs, medical devices, and food products. In 2021, the FDA issued the "AI/ML-based Software as a Medical Device (SaMD) Action Plan," supporting the development of methods to evaluate AI algorithms in healthcare. These, however, only
serve as guidance documents and do not establish legally enforceable responsibilities for AI companies. The European Union (EU), on the other hand, has enacted actionable regulations through the European Medical Device Regulation (EU MDR), which aims to enhance the scrutiny of AI tools in healthcare [5]. These include measures such as categorizing AI tools by risk level, enforcing stricter clinical evaluation standards, and requiring post-marketing surveillance. Compliance with these rules results in a "certificate of conformity" that is required to place medical devices on the market anywhere in the EU. Similar regulatory advancements are necessary in the US to enhance the safety and performance of medical AI devices. Once regulations are updated, there will be a need for more prospective trials involving AI tools; as of now, the majority of studies applying AI in healthcare use retrospective data for both training and testing models.

In conclusion, just like any other facet of healthcare, patient safety is of utmost importance. For over a century, strong regulatory oversight in the United States has ensured the safe introduction of medical products and services. It is thus crucial that we maintain this rigorous tradition and ensure that our regulatory framework keeps up with the fast-paced world of AI. +

Vinit Gilvaz, MD, Rheumatologist at Lifespan Rheumatology, Providence, RI
Zeba Hashmath, MD, Chief Cardiology Fellow at East Carolina University, Greenville, NC

References
1. Topol, E. J. (2019). High-performance medicine: the convergence of human and artificial intelligence. Nature Medicine, 25(1), 44-56.
2. Henry, K. E., Adams, R., Parent, C., Soleimani, H., Sridharan, A., Johnson, L., ... & Saria, S. (2022). Factors driving provider adoption of the TREWS machine learning-based early warning system and its effects on sepsis treatment timing. Nature Medicine, 28(7), 1447-1454.
3. Johnson, A. E., Brewer, L.
C., Echols, M. R., Mazimba, S., Shah, R. U., & Breathett, K. (2022). Utilizing artificial intelligence to enhance health equity among patients with heart failure. Heart Failure Clinics, 18(2), 259-273. 4. Singhal, K., Azizi, S., Tu, T., Mahdavi, S. S., Wei, J., Chung, H. W., ... & Natarajan, V. (2023). Large language models encode clinical knowledge. Nature, 620(7972), 172-180. 5. Niemiec, E. (2022). Will the EU Medical Device Regulation help to improve the safety and performance of medical AI devices?. Digital Health, 8, 20552076221089079.
Infographic by Parul Sarwal, MD ©M.Wallflower/sparklestroke via Canva.com
Winter 2023
A.I. In Medicine
Do Patients Want Artificial Intelligence or Human Intelligence? Larry Garber, MD
One thing that the COVID-19 pandemic taught us is that good healthcare does not mean a patient has to deal with traffic, parking, and waiting room issues. In fact, for some types of problems, patients have preferred video visits from their home over an in-person office visit [1]. Convenience and cost are major factors in that decision. But how far removed from direct contact with their healthcare provider are patients willing to go for the sake of convenience and cost? Are they willing to let artificial intelligence (AI) guide their medical care? The answer is a resounding “Yes!” Patients are accepting, and in some cases preferring, the medical guidance that they can receive from AI.

Reliant Medical Group first tested the waters of AI-assisted patient care with the development and implementation of a symptom checker in 2021. Unlike other symptom checkers available on the internet or offered by some electronic health record (EHR) vendors, this system leverages the patient’s known medical history to reduce the number of questions needed to give guidance on over 800 conditions that could be causing their symptoms. It provides education on what each condition is, how to undertake self-care if appropriate, and when to seek medical care if the condition gets worse. Patients love it: half of them use it after hours and on weekends when the offices aren’t open, and a quarter of them get enough self-care information that they do not need input from Reliant’s providers or staff. This frees up Reliant’s providers and staff to help serve other patients who have more acute needs.

If patients do want to schedule a video visit, they can do that directly from the app. All the symptom history they already entered will be available to the clinician within the EHR, so they don’t need to give their history again. The symptom checker has also potentially saved several lives, referring some patients to the ED who were originally planning to undertake self-care, resulting in admissions for serious conditions.

In the past year, a form of AI known as “Generative AI” has hit the headlines. One version, GPT-4 by OpenAI, has been able to surpass humans on many standardized tests [2] (Figure 1) and specifically outperform most physicians on the United States Medical Licensing Exam [3]. Even more remarkable is that GPT-4 scored higher on communication skills such as empathy and professionalism than the average physician [4,5]. This has indeed been the experience as EHRs such as Cerner and Epic have implemented GPT-4-generated draft responses to online patient portal questions. As these messages appear in the In Baskets of physicians and nurses, the EHR displays an optional draft response.

Figure 1. OpenAI’s test results on standardized exams (Source: https://openai.com/research/gpt-4)
“…GPT-4 scored higher on communication skills such as empathy and professionalism than the average physician.”
This response can be edited and sent back to the patient. Early studies show that these responses are longer and felt by patients to be more empathetic compared to a prior baseline. This has led to physician concerns that, in subsequent visits, patients will expect the physician to speak longer and appear more empathetic!

Patients are starting to benefit from AI in other ways as well. For instance, AI is making real-time ambient transcription and summarization of office visit dialogue a reality. A microphone recording the conversation in the exam room allows the physician to focus their attention on the patient and keep good eye contact without having to worry about writing their notes. AI is also translating physician communications to patients, making them more understandable by lowering the reading grade level and translating into other languages. AI is similarly creating timely after-visit summaries and discharge summaries that are more patient-friendly. And, of course, the use of AI in EHR clinical decision support is helping patients receive higher-quality and safer care.

So yes, patients do want artificial intelligence to assist in their care. Sometimes they even prefer it over human intelligence. But will AI replace physicians in caring for patients? That answer is “No”. However, as Dr. Karim Lakhani, a professor at Harvard Business School, pointed out, “AI is not going to replace humans, but humans with AI are going to replace humans without AI.” +
Can AI be a Good Doctor?: What measuring a computer’s medical ability teaches us about human doctors Andrew Beam, PhD
As artificial intelligence (AI) advances, there is increasing interest in whether AI systems can take on roles traditionally filled by human experts, including doctors. AI systems are now being developed that can read medical images, analyze health records, diagnose diseases, and even communicate empathetically with patients. In some applications, these AI systems are reaching or even surpassing human-level performance. It is no exaggeration to say that the pace of progress in medical AI over the last decade has been staggering.
“What has become clear over the last five years is that AI has an impressively good grasp of medical knowledge.”
Dr. Garber is an internist, the Medical Director for Informatics, and the Associate Medical Director of Research at Reliant Medical Group.

References
1. Predmore ZS, Roth E, Breslau J, Fischer SH, Uscher-Pines L. Assessment of Patient Preferences for Telehealth in Post–COVID-19 Pandemic Health Care. JAMA Netw Open. 2021;4(12):e2136405. doi:10.1001/jamanetworkopen.2021.36405
2. OpenAI. GPT-4. https://openai.com/research/gpt-4
3. Nori H, King N, McKinney S, Carignan D, Horvitz E. Capabilities of GPT-4 on Medical Challenge Problems. 2023. https://browse.arxiv.org/pdf/2303.13375.pdf
4. Brin D, Sorin V, Vaid A, Soroush A, Glicksberg BS, Charney AW, Nadkarni G, Klang E. Comparing ChatGPT and GPT-4 performance in USMLE soft skill assessments. Sci Rep. 2023 Oct 1;13(1):16492. doi:10.1038/s41598-023-43436-9. PMID: 37779171; PMCID: PMC10543445.
5. Ayers JW, Poliak A, Dredze M, et al. Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum. JAMA Intern Med. 2023;183(6):589–596. doi:10.1001/jamainternmed.2023.1838
This raises an inevitable question: can AI ever really be a good doctor? This question, in turn, raises another for human doctors that we rarely acknowledge outright: it is very challenging to reliably determine whether any given person is a “good doctor”. It is often difficult to determine which doctors are good even after decades of practice. Some doctors excel at diagnosis but have a problematic bedside manner. Some ace their boards but struggle to be a team player. Some are wonderful colleagues but may struggle with complex cases. Are any of these kinds of doctors better or worse than the others? The answer from the field of health services research is, “It depends on what you’re measuring”. There is a long list of metrics that attempt to answer some form of this question, including test scores, RVUs, clinical
outcomes, error reduction, case volume, and patient satisfaction; these remain imperfect yardsticks and, again, each answers a very specific question. After all, can we truly distill the body of work of a physician into a simple “good” or “bad” designation? If not, can we hope to one day apply the same standard to AI?

What has become clear over the last five years is that AI has an impressively good grasp of medical knowledge. Recent AI based on so-called large language models (LLMs: think of models like ChatGPT or Google’s Bard) have not gone to medical school, nor have they even been fed a diet of information specific to medicine. Instead, they have consumed the vast stores of text data available on the internet. Consequently, they are able to converse fluently on nearly any topic, and medicine just happens to be one of a potentially infinite number of things that these silicon polymaths will happily discuss. Any expert medical knowledge they possess is a totally accidental byproduct of having read the internet in its entirety.

So how do we know what, if anything, they know about medicine? An obvious answer is to have the AI take the same battery of evaluations that we make physicians-in-training take to prove they have mastered the fundamentals, such as the United States Medical Licensing Exam (USMLE) and specialty board certifications. After all, mastery of the material on these exams is a necessary, but not sufficient, condition to practice medicine in the United States. Progress by AI on these evaluations over the last year has been staggering, and AI systems are now scoring better than the best humans on many of these tests. For instance, AI systems developed by both Google and OpenAI are estimated to score better than 99% of USMLE Step 1 test takers [1,2].
Outside of text, there has been a flurry of progress on diagnostic medical imaging tasks, with AI systems now able to reliably read studies like chest x-rays, pathology slides, and retinal photographs with surprising accuracy. In some narrowly scoped and well-controlled evaluations, many of these AI imaging systems have been found to perform as well as or better than their human counterparts. However, success in a lab setting does not necessarily translate to the real world, and there have been numerous challenges [3] getting these systems to work outside the lab. Though there is still much improvement to be made even within these narrow applications, the pace of progress for these systems shows no signs of slowing, as of yet.

However, as with humans, being a good doctor is more than just book smarts or diagnostic acumen. Being a good doctor requires qualities like empathy, social intelligence, and communication skills. Surprisingly, recent studies have shown that AI systems can exhibit some of these qualities as well. For example, a study [4] published in JAMA Internal Medicine showed that AI could generate empathetic notes to patients after medical visits. In comparing AI and physician responses to patient questions posted online, the AI responses were preferred, rated as higher quality, and judged as more empathetic. Thus, it seems that AI is also mastering the “soft skills” of what it means to be a good doctor.

Does this mean we should trust AI to be a good doctor? Not yet. Just as a doctor who writes empathetic notes but misdiagnoses patients is not necessarily a good doctor, an AI that communicates well but lacks other important
skills would be equally flawed. So, to return to the initial question: can AI be a good doctor? Unfortunately, we find there are no easy answers, only parallels. If the field of healthcare services research has taught us anything, it’s that this is an incredibly complex question that doesn’t offer crisp answers, only trade-offs. The path to determining whether AI can be a “good doctor” will likely mirror the path that allows us to recognize great human doctors: accumulating a body of evidence over time using a broad range of metrics. The most we can say right now is that AI seems to be working pretty well in some very limited domains, while being upfront about the caveats that must accompany those claims. Whether carbon- or silicon-based, exactly what constitutes a “good doctor” remains elusive. +

References
1. Nori H, et al. “Capabilities of GPT-4 on Medical Challenge Problems.” Microsoft (2023). https://www.microsoft.com/en-us/research/publication/capabilities-of-gpt-4-on-medical-challenge-problems/
2. “A responsible path to generative AI in healthcare.” Google Cloud (2023). https://cloud.google.com/blog/topics/healthcare-life-sciences/sharing-google-med-palm-2-medical-large-language-model
3. Murdock J. “Accuracy, but Failed to Deliver in Real World Tests.” Newsweek (2020). https://www.newsweek.com/google-artificial-intelligence-deep-learning-health-research-thailand-clinics-diabetic-retinopathy-1500692
4. Ayers JW, et al. (2023). Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Internal Medicine.
Andrew Beam, PhD, is a founding deputy editor of NEJM AI and a co-host of the NEJM Group podcast AI Grand Rounds. He is an assistant professor in the Department of Epidemiology at the Harvard T.H. Chan School of Public Health, with a secondary appointment in the Department of Biomedical Informatics at Harvard Medical School. His lab develops new AI methods by combining large-scale deep learning models with techniques from causal inference to improve safety and robustness for medical applications. He has a particular interest in using AI to improve neonatal and perinatal outcomes.
Digital Inclusion of Elderly People: Designing a purposeful serious game interface with memorable music Sunny Choi, PhD
A plethora of digital tools is available today with the common aim of helping the elderly accomplish specific tasks or activities. However, merely providing access to digital technology does not correlate with improved quality of life, as digital interface design presumes certain physical gestures, or an understanding of what is being visually communicated, on the part of end users. Digitally affluent elderly people are collectively referred to as ‘silver surfers,’ a privileged group with particular resources, motivation, and skills. Those who require assistance with digital interfaces, however, form the majority of the elderly population [1]. Numerous studies point to a lack of motivation among the elderly as a major reason for avoiding digital tools. Yet, despite the rapidly growing number of elderly people worldwide, these users are involuntarily positioned in our society such that they cannot adopt digital tools to their full benefit, because digital interfaces and conventions implicitly assume users’ understanding and behavior. Getting older places the majority of elderly people in a disadvantageous position in our fast-paced, digitally driven society.

While digital games have become popular over the last decades for a wider audience, their usability has been criticized as insufficient for the elderly population. Younger people today often do not require explicit teaching about digital devices, as they are implicitly expected to already know shared gestural actions and perceptions, such as ‘swipe to view.’ Elderly citizens, on the other hand, experience barriers to using digital tools, such as access, knowledge, design, and trust [2]. Reaching digital equity in our society, especially for the elderly population who did not grow up with digital devices, is a complex challenge.
But there exists an opportunity to apply what are referred to as serious games across the various stages of cognitive ability in older people. Serious games are defined as digital games serving serious purposes like education, training, advertising, research, and health [3]. Their objective spans beyond entertainment, fulfilling a purpose that exceeds the self-contained aim of the game itself [4]. While a surge of different roles for artificial intelligence technologies exists to address the care-driven needs of older adults, how can digital interfaces be
“While digital games have become popular over the last decades for a wider audience, their usability has been criticized as insufficient for the elderly population.”

purposely designed to promote digital inclusivity that arises from one’s individually motivated capability? What if technology served older people in a way that promoted a sense of empowerment, rather than positioning them as simply recipients of care?

My research for planned postdoctoral studies situates artifacts as central to the research process, specifically positioning the role of music, for non-entertainment purposes, in promoting digital inclusion for older adults. As an interdisciplinary project that combines educational science and arts-based research, it is designed to investigate interactions with a digital interface that may motivate an individual’s agency, moving from recognizing non-materialistic value toward achieving a sense of functioning. As decline in cognitive functioning is a common and gradual life event, even as part of the normal aging process, the potential impact of familiar forms of expression may activate different types of memories. A recent study [5] that demonstrated cognitive improvement from music stimulation through piano instruction also inspires a deeper examination of the correlation between digital stimulation and interaction. I specifically use music memorable to the elderly participants to evoke certain emotions that drive motivation, stimulating and enhancing interactions with a serious game-based digital interface. A shared cultural commonality, such as autobiographical memory of memorable
music, may promote anticipation and expectation, two key factors driving motivation that have been described as fundamental to the musical experience in understanding the effects of music on emotion [6], toward increasing engagement with a digital interface. Data from one’s behavior with the digital interface, triggered by memorable music, may inform purposeful design goals for serious games that could help enhance older adults’ cognitive abilities. A potential role for artificial intelligence may involve adding a novel dimension to its existing problem-solving repertoire by adapting to varying states of cognitive function, for monitoring purposes, based on an individual’s interaction with a musical digital interface.

In summary, achieving digital inclusion of elderly people must look beyond increasing access to technology. It must actively question the meaning of social inclusion, grounded in rigorous digital interface design research that evokes purpose for the elderly’s fundamental wellbeing as coexisting citizens. +

Sunny Choi, PhD, is the vice president of product and operations at Clefer. She has worked on building education software applications for over 5 years. Email: choi.sunnys@gmail.com

References
1. Olsson, T., & Viscovi, D. (2020). Who actually becomes a silver surfer? Prerequisites for digital inclusion. Javnost-The Public, 27(3), 230–246.
2. World Economic Forum. How can we ensure digital inclusion for older adults? (2022, December 12). https://www.weforum.org
3. Wiemeyer, J., & Kliem, A. (2012). Serious games in prevention and rehabilitation—a new panacea for elderly people? European Review of Aging and Physical Activity, 9(1), 41–50.
4. Mitgutsch, K., & Alvarado, N. (2012, May). Purposeful by design? A serious game design assessment framework. In Proceedings of the International Conference on the Foundations of Digital Games (pp. 121–128).
5. Worschech, F., James, C. E., Jünemann, K., Sinke, C., Krüger, T. H. C., Scholz, D. S., Kliegel, M., Marie, D., & Altenmüller, E. (2023). Fine motor control improves in older adults after 1 year of piano lessons: Analysis of individual development and its coupling with cognition and brain structure. The European Journal of Neuroscience, 57(12), 2040–2061. https://doi.org/10.1111/ejn.16031
Click here for additional reference(s).
Pixels, Patterns, and Patients: Radiology Residency in the AI Revolution Michael J. Purcaro, MD/PhD
As the Industrial Revolution once replaced the rhythmic trots of horses with the rhythmic hums of machines, artificial intelligence (AI) is replacing the manual intricacies of medicine with algorithms that promise to reshape our understanding of health and disease. Cutting across disciplines and industries, AI is not merely an evolution; it’s a revolution, changing the very foundation upon which systems operate. Medicine, always one of the first consumers of new technology, is itself on the precipice of the revolution brought by AI. Radiology, perhaps more than any other field rooted in technology and innovation, is at the epicenter of this seismic shift.

The potential of AI to enhance diagnosis, treatment, and overall patient care is immense. But like any powerful tool, its true value can only be harnessed when understood in depth. As the adoption of deep learning tools in diagnostic imaging surges, the subtleties and potential errors of AI underscore the need for radiologists who excel not only in diagnostic acumen but also in liaising with computer scientists and software engineers. It becomes crucial, then, for radiology residents—future stalwarts of the discipline—to delve deep into the intricacies, challenges, and promises of AI.

Integrating AI into the radiology resident education curriculum is an exciting but challenging new endeavor. A study led by Emory University in early 2023 revealed that 83% of surveyed radiology residents across 21 U.S. residency programs desired the inclusion of AI and machine learning education in their curriculum; less than 20%, though, had actually received any formal AI education or research experience (1). At the University of Massachusetts, our residency program has found several ways to integrate AI into our training.
We have a wide variety of conferences with AI radiology subject matter experts, as well as didactic sessions, online and in-person AI conferences, and journal clubs to help navigate the dilemmas and intricacies of AI. These sessions serve as dedicated spaces for exploring not only the mechanics of machine learning but also the ethical and professional conundrums that AI introduces to the field. The discussions facilitated by this forum enable residents to build a multidimensional understanding of AI, integrating technical knowledge with ethics.

Going beyond theory, the radiology department has integrated AI in practice. Multiple AI tools are being trialed by the attending radiologists. One tool in particular, Aidoc, has been integrated into the clinical process for multiple disciplines. Aidoc (AI-doc) is a sophisticated deep-learning convolutional neural network tool (2), currently used predominantly for annotating acute pathologies, including pulmonary embolisms and intracranial hemorrhages. Having processed tens of thousands of studies, the tool has become remarkably precise at pulmonary embolism detection. If Aidoc identifies a potential embolism not mentioned in a radiology report, the system immediately flags the discrepancy for a thorough review. Aidoc’s capability to scan any CT study encompassing parts of the lungs has led to the serendipitous discovery of multiple pulmonary embolisms—incidents that would typically fly under the radar in conventional reviews. Senior residents, equipped with access to Aidoc and its suite of algorithms, witness firsthand the algorithm’s remarkable efficacy and its nuanced
inaccuracies. Aidoc’s value transcends merely flagging pathologies. It serves as a springboard for intellectual exploration and dialogue, urging users to reflect upon both its errors and its unexpected revelations. The unearthing of such incidental findings, while revolutionary, ushers in a host of challenges and inquiries, especially regarding their clinical relevance and ensuing management. For instance, the algorithm might occasionally, albeit mistakenly, detect a pulmonary embolism in a pulmonary vein. While such inaccuracies are becoming rarer as the algorithm evolves, they underscore the vital insight that the algorithm is a complement to, not a substitute for, a radiologist’s expertise. Therefore, its outcomes must always be met with discernment and critical thinking.

Large language models (LLMs) represent a fusion of technology and linguistics, designed to grasp and generate human-like text based on patterns in vast amounts of data. These models have rapidly entered the public consciousness as LLMs like ChatGPT and Bard come into daily use for many people. Likewise, LLMs will become more integrated into medicine (3) and, particularly, radiology. LLMs are becoming useful tools for residents, aiding in developing differential diagnoses. By seamlessly analyzing provided clinical information, they generate comprehensive lists of potential diagnoses. This not only facilitates quicker and more informed decision-making but also nurtures analytical and critical thinking skills among residents. Additionally, early pioneers have anecdotally begun using LLMs to automate portions of the dictated report, generating, for instance, automatic summary impressions and saving radiologists time. LLMs promise to help merge traditional knowledge with the prowess of modern technology.

The integration of AI into radiology isn’t just inevitable; it’s transformative.
It promises not just enhanced time efficiency and streamlined workflows; it also carves a path for the emergence of adept radiologists who can harness AI’s full potential. The confluence of AI and radiology heralds a synergy that pushes the boundaries of what’s possible, setting new standards for top-tier healthcare delivery. Radiology residents, poised to be the vanguards of this discipline, must delve deep into the intricacies, challenges, and vast horizons of AI. This ensures that this groundbreaking technology is directed with discernment, commitment to ethical practices, and a relentless pursuit of exceptional patient care. +

Michael Purcaro, MD/PhD, MS, is a computer scientist by training and currently in the second year of his radiology residency at UMass Med. Email: michael.purcaro@umassmed.edu

References:
1. Salastekar NV, et al. “Artificial Intelligence/Machine Learning Education in Radiology: Multi-institutional Survey of Radiology Residents in the United States.” Acad Radiol. 2023.
2. Ojeda P, et al. “The utility of deep learning: evaluation of a convolutional neural network for detection of intracranial bleeds on non-contrast head computed tomography studies.” Medical Imaging 2019: Image Processing, 2019.
3. Lee P, et al. “The AI Revolution in Medicine: GPT-4 and Beyond.” Pearson, 2023.
Artificial Intelligence in Health Professions Education Anna K. Morin, PharmD, RPh
Every industry, including healthcare, is looking to better understand the capabilities of artificial intelligence (AI) technologies and how to apply them to optimize outcomes. AI utilizes computer systems and machines (e.g., medical devices or robots) to model human problem-solving and decision-making behavior, incorporating mathematical algorithms that acquire knowledge through exposure [1,2]. Within the context of healthcare delivery, AI technologies have been applied to research and drug discovery, to enhance diagnostic capabilities, to personalize treatment plans, to better understand disease progression, and to provide patients with advice and support [1,2]. AI technologies can also be used to support teaching and learning in health professions education.

The use of AI in health professions education is not new. Examples of its applications are outlined below [1,2]:

• Personalized learning and tutoring – Platforms are available, and in development, that can analyze a student’s knowledge level and learning style(s) to provide individualized learning pathways and enable adaptive learning and customized tutoring.

• Simulation – The most common way that AI is currently used in education is through simulation technology, allowing for the practice of various procedures and treatments, or the review of more realistic patient cases and patient responses, in a safe and controlled environment. Simulation can help students develop skills and gain confidence before working with actual patients.

• “Flipped-classroom” learning – Moving away from traditional lectures and memorization to a “flipped classroom” approach provides more flexible learning options and delivery of content. Students view lectures and gather information using resources and AI tools available to them prior to coming to class. They then come together in the classroom to participate in complex simulations or games developed by AI.
This educational model allows for interaction with professors, peers, and students in other health professions to evaluate data, think critically, employ problem-solving skills, and develop care plans without placing patients at risk.

• Assessment and feedback – AI platforms can provide competency-based assessment and immediate feedback for students.

• Curriculum review – AI’s ability to predict future trends can be used to identify areas for improvement in curricula, ensuring that health professions education keeps up with advances in technology and best practices.

• Enhancement of interprofessional practice and education – AI requires collaboration between health professionals, data scientists, computer engineers, informatics specialists, and others. The previously mentioned simulation platforms can be leveraged to engage students and educators from various disciplines in a team-focused, patient-centered care approach.

• Administrative workload – Administrative burdens and repetitive tasks (e.g., grading, documentation) associated with delivery of educational programs could be automated by AI, allowing educators to focus more directly on their students.
“AI should be used to supplement and not to replace the work of health professionals and educators.”

While AI has opened new doors for innovation in health professions education, it is not without risks. In their article on AI in medical education, Cooper and Rodman refer to AI as “A 21st-Century Pandora’s Box”, pointing out that modern technology has both disrupted and enhanced medical education and practice in the past [3]. Concerns associated with AI in both medical education and practice include “hallucinations” (a term the authors use for AI making up missing information that is then presented as fact), ethical concerns around student and patient privacy, risk of biases based on incomplete data or the sources used, and a lack of transparency in identifying the data sources an algorithm used to arrive at a particular conclusion. AI includes “chatbots” (e.g., ChatGPT) that incorporate language models and algorithms to produce text mimicking human thought. Concerns have been raised about maintaining academic standards if students use AI to complete assignments [1,2,3]. While some may consider using these tools “cheating,” it is important for students to engage with and understand the capabilities and limitations of this technology.

Health professions programs face the challenge of incorporating both the use of AI and training for its use into their curricula [3]. Students need to know how to utilize and evaluate AI in their academics and practices. At the same time, educators need to be trained to appropriately prepare students for the use of AI. As a result, some institutions are offering courses or certificate programs that explore AI and machine
Artificial Intelligence in Health Professions Education Continued
learning (ML) and provide a fundamental understanding of how to evaluate and utilize AI tools. AI should be used to supplement, not replace, the work of health professionals and educators. An understanding of the applications, opportunities, and potential of AI to transform education is critical to providing health professionals with a well-rounded education. It is important to note that, as with the internet, the general public has access to AI platforms such as ChatGPT. As a result, health professionals need to understand the capabilities and limitations of AI tools so they may play an active role in interpreting and managing the consequences of easily accessible, and potentially biased or inaccurate, medical information. +
Anna K. Morin, PharmD, RPh
Associate Provost, Worcester/Manchester
Professor of Pharmacy Practice
Massachusetts College of Pharmacy and Health Sciences
References
1. Lomis K, Jeffries P, Palatta A, et al. Artificial intelligence for health professions education. NAM Perspectives. Discussion Paper, National Academy of Medicine, Washington, DC. https://doi.org/10.31478/202109a
2. Shankar PR. Artificial intelligence in health professions education. Arch Med Health Sci. 2022;10:256-61.
3. Cooper A, Rodman A. AI and medical education – a 21st-century Pandora’s box. N Engl J Med. 2023;389(5):385-387.
Cartoon by Christopher Baker, MD, UMass radiologist and contributing cartoonist to Cartoonstock.com
The Quest for Childhood Injury Prevention: Embodied in Safety Quest
Michael Hirsh, MD
The Safety Quest mobile classroom vehicle.
When I joined the Injury Prevention Program at UMass Memorial Children’s Medical Center in 2000 as the Pediatric Trauma Medical Director, I found 650 pediatric trauma patients were being admitted each year. Fully 90 percent of these injuries were preventable, as I learned from my mentor, Dr. Barbara Barlow, founder of the Injury Free Coalition for Kids.
To make a dent in childhood injury prevention in Central MA (injury was, incidentally, the #1 killer of children under the age of 19), we tried many avenues. Bike rodeos, where we did helmet fittings. Car seat safety checkpoints to ensure that car seats were used, and used properly (50 percent are not). Home safety kits with room-by-room information on eliminating household hazards. Promoting the use of ski helmets at our local
ski venues. Working with the Juvenile Court system, we created a program for first-time driving offenders to spend a day in the Trauma Center to help them better understand the consequences of their reckless choices (TEEN RIDE: Reality Informed Drivers Education). This reduced recidivism among these driving offenders from 35 percent to 7 percent. And of course, we established the Greater Worcester Gun Buy Back Program, Goods for Guns, which has taken over 4,000 guns off the streets of Worcester and 25 collaborating communities. Central Massachusetts now boasts the lowest penetrating trauma rate of any county in the Commonwealth.
Our flagship program, Mobile Safety Street (MSS), was rolled out in 2008. It visited thousands of students in the Worcester Public Schools and other Central MA elementary schools. We had a brilliant injury prevention educator, Allison Rook Burr, who worked with the school systems to integrate the MSS program into their health curricula, so it was not an extra burden on the schools. It demonstrated 40 indoor and 40 outdoor safety behaviors through hands-on experience. We published papers verifying that students who participated in the MSS program had a 50 percent better understanding of the injury prevention information and retained that information 25 percent better at six months post-experience. Unfortunately, the vehicle was severely damaged in storage during the winter of 2017, and the funding needed to refurbish it could not be found. We wanted to revive it, but the situation looked dire.
All of these injury prevention efforts, complemented by Massachusetts legislators who helped pass strong laws on helmets, ATVs, texting while driving, junior operator licensing, and drunk driving, helped us realize a reduction in trauma admissions to just over 200 per year. Injury prevention was winning. But MSS was dead in the water.
The lifesaver came when the Office of Philanthropy at UMass Memorial Health learned that the Fundación MAPFRE had read about the MSS program and wanted to explore a partnership to restart it. But times had changed and Fundación MAPFRE had great experience in their home
The Quest for Childhood Injury Prevention: Embodied in Safety Quest Continued
country of Spain using virtual reality instead of the hands-on model we had previously developed. After a couple of years of planning and development, we brought the new Safety Quest program to 5th grade classrooms in September 2023. As of October 1, over 900 students have already participated in the innovative new program. The UMMH Injury Prevention team, the UMMH Office of Philanthropy, Fundación MAPFRE representatives, and CTP Boston met regularly to develop the curriculum, map out the RV, and work through the thousands of details it takes to create a program like this one. We landed on four different games:
1. Escape Artist: With young experts guiding students on a fire education journey, students are quizzed on safe behaviors in the event of a home fire and are then put to the test in this virtual experience, asked to show what they would do to escape a fire.
2. Street Smarts: Young people learn the importance of paying attention to their surroundings when traveling the streets. On touchscreen tablets, students adopt an animated character who makes their way around a neighborhood on foot and by bicycle. They gain points by avoiding obstacles and making safe choices.
3. Home Hazard Hunt: This gesture-based game is the signature experience of Safety Quest. Students spend time in the house and outside in the playground and pool area, identifying potential hazards. They earn points for all the dangers they find.
4. Look Both Ways Charades: In this game, students act out or describe words revolving around water, home, fire, and road safety. It takes place outside the vehicle and is designed to spark conversation and keep the students engaged.
We are planning to test a cohort of students before and after their visit to Safety Quest, and again six months later, to establish the efficacy of virtual reality in this type of education.
“I wanted to reach out and thank you for bringing Safety Quest to our students,” said School Resource Officer Tracy Flagg of Winchendon, MA. “I received great feedback from staff and students. It was fun to watch the students participate.” +
Michael Hirsh, MD, Director of the Injury Prevention Program at UMMHC and Medical Director for the Worcester Division of Public Health
Artificial Intelligence in Nursing
Cynthia Delmas, RN
Artificial Intelligence (AI) combines electronic data collection and robust datasets to give machines the capability to process information. In this article I’ll discuss how AI can facilitate health care management (patient monitoring, wound care), specific nursing care (post-anesthesia care, clinical decision support, medication management), communication with the health care team (electronic medical record (EMR) data, timely communication, patient handoffs), and support for nursing scheduling and management.
AI: Facilitating health care management
There are several wearable devices that can monitor ECGs, heart rate, blood pressure, and oxygen saturation and detect arrhythmias. The Abbott Confirm Rx Insertable Cardiac Monitor, for example, can detect, recognize, and analyze abnormal heart rhythms, leading to quicker recognition of the need for patients to access care for further evaluation and treatment [1]. Nurses play an important role in facilitating the use of these devices and in teaching patients how to utilize the data to better care for themselves. AI can help with wound care by capturing specialized images, analyzing them, and noting characteristics such as tissue type, color, depth, and size. Automated analysis allows for standardized, objective measurement, letting nurses track a patient’s wound healing accurately. One AI-powered tool, the V.A.C. VeraFlo Therapy System, identifies early signs of infection and delayed healing, tracks the healing process, and gives real-time feedback to the nurse [2]. It combines negative pressure wound therapy with AI-powered wound assessment, allowing the nurse to make informed decisions and tailor treatment plans accordingly.
AI: Enhancing decision-making and timeliness of nursing care
AI can improve post-anesthesia care.
AI-powered monitoring systems with advanced algorithms continually assess and analyze vital signs, ensuring early detection of complications and changes in the patient’s condition. The EarlySense sensor, placed underneath a patient’s mattress, monitors respiratory rate, heart rate, and movement [3]. The system continually analyzes data and notifies the nurse of any significant changes or signs of distress. AI can use predictive analysis to identify risk factors and effectively estimate the probability of post-operative infections. This information allows the nurse to respond proactively and monitor for high-risk situations, improving patient safety and outcomes. AI can also aid clinical decision-making, as algorithms can analyze patient data and make evidence-based recommendations to nurses in real time. The data comes from medical records, lab results, imaging studies, pathology reports, genetic profiles, and the medical literature, and the results generate tailored treatment options for nurses to consider. These recommendations do not replace nurse decision-making and judgment, but they can decrease the time required to put nursing plans into action [4]. AI algorithms can improve medication management by reconciling medication lists and flagging inconsistencies, duplicate or conflicting drug orders, and medication errors. The algorithm alerts the nurse to issues
that require attention. AI-powered robots, such as ‘intelligent medication carts,’ can verify medications, reducing medication errors. AI can also anticipate patient medication needs and use predictive analytics to manage inventory effectively.
AI: Improving communication
AI can reduce charting time for nurses by automating data entry with information from a patient’s medical forms, diagnostic imaging results, and laboratory reports. AI can populate information into a patient’s chart and retrieve patient information with a spoken command, reducing errors and the time a nurse spends charting. This technology does not replace the nurse’s contributions and allows for individualization, as the nurse can add data that doesn’t populate automatically [5]. AI also analyzes the messages a nurse receives and can prioritize them by identifying keywords and assessing content. A message containing the phrase ‘critical condition’ can be marked urgent and reach the nurse more quickly. Algorithms can also be designed to highlight critical lab values and prioritize sending them to the nurse. AI can facilitate patient handoffs. Instead of the nurse compiling information to relay, AI-powered tools can analyze patient data and compile a summary of information for the nurse to share. For example, the iShift platform at the University of California San Diego uses natural language processing to compile concise handoff reports, creating an efficient process and reducing the chance of miscommunication. This automated summary provides the foundation for the communication, which the nurse can customize, adding any nuances specific to the patient as appropriate [5].
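The keyword-based message triage described above can be sketched in a few lines. This is a deliberately naive illustration, not any vendor's actual algorithm: the keywords and weights below are invented assumptions, and a real system would use natural language processing rather than simple substring matching.

```python
# Illustrative sketch of keyword-based message triage for nurses.
# The keyword list and weights are hypothetical, chosen for the example only.
URGENT_KEYWORDS = {
    "critical condition": 10,
    "stat": 8,
    "abnormal lab": 6,
    "family question": 1,
}

def priority_score(message: str) -> int:
    """Score a message by summing the weights of any urgent keywords it contains."""
    text = message.lower()
    return sum(w for kw, w in URGENT_KEYWORDS.items() if kw in text)

def triage(messages: list[str]) -> list[str]:
    """Return messages ordered from highest to lowest priority."""
    return sorted(messages, key=priority_score, reverse=True)

inbox = [
    "Family question about visiting hours",
    "Patient in room 12 in critical condition",
    "Abnormal lab result for bed 7",
]
print(triage(inbox)[0])  # the 'critical condition' message surfaces first
```

A production system would layer context assessment on top of keyword spotting, since substring matching alone can misfire (e.g., "stat" inside "state"), which is exactly why nursing judgment remains in the loop.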
AI and Nursing Scheduling and Management
AI technology can create staffing schedules that are balanced, account for staff paid time off, and ensure each nurse works shifts in congruence with their contracted hours. AI can generate fair and efficient schedules by analyzing historical data such as nurse availability and preferences, patient census, seniority, and skill sets, meeting both the organization’s needs and the staff’s preferences [5]. By analyzing historical data and using predictive modeling, AI can forecast staffing needs and coverage gaps and identify nurses with matching skill sets while complying with contractual requirements and labor regulations. The Kronos Workforce Advisor tool in use at the University of Pittsburgh Medical Center has reduced administrative burden and assured fair shift distribution, leading to improved employee satisfaction and patient care [5]. AI can streamline nursing processes and facilitate nursing judgment and clinical decision-making. There are many aspects of AI that can be useful to nurses as they work to improve patient safety and outcomes while improving staff satisfaction. Although expensive, AI can benefit nursing practice. A nurse is an educated, trained professional with a vast array of knowledge and skills. While AI can help to augment and enhance the work of a nurse, it cannot replace the nurse. AI is based on algorithms and ordered rules and cannot perform nursing skills such as inserting IVs or other catheters. AI lacks consciousness and ethical judgment, has no emotional intelligence, can misinterpret context, and lacks the human touch and empathy. We must also consider ethical concerns and the risk that biased algorithms will lead to biased outcomes. AI in healthcare can help us improve efficiency, safety, and patient outcomes, and ultimately allow the nurse more time to practice nursing.
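In its simplest form, forecasting staffing needs from historical data works like the toy sketch below. The census figures, the 1:4 nurse-to-patient ratio, and the moving-average approach are all illustrative assumptions; commercial tools such as those cited above use far richer predictive models.

```python
# Toy illustration of forecasting nurse staffing from historical patient census.
# The ratio, window, and census numbers are hypothetical assumptions.
import math

def forecast_nurses(daily_census: list[int], ratio: int = 4, window: int = 7) -> int:
    """Estimate nurses needed from a moving average of recent daily census."""
    recent = daily_census[-window:]          # look at the most recent days
    expected = sum(recent) / len(recent)     # expected patient load
    return math.ceil(expected / ratio)       # round up: partial shifts still need a nurse

census = [28, 30, 27, 32, 31, 29, 30]        # last seven days of patients
print(forecast_nurses(census))               # mean ~29.6 patients -> 8 nurses at 1:4
```

Even this crude average shows the principle: the forecast is only as good as the historical data behind it, which is why the article's caution about biased inputs applies to scheduling as much as to clinical decisions.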
+ Cynthia Delmas, RN
Founder, The New Bedford Health Initiative
DNP Student, University of Massachusetts Medical School
References
1. About the Confirm Rx Insertable Cardiac Monitor. (n.d.). www.cardiovascular.abbott. https://www.cardiovascular.abbott/us/en/hcp/products/cardiac-rhythm-management/insertable-cardiac-monitors/confirm-rx/about.html
2. V.A.C. VeraFlo Therapy. (n.d.). www.acelity.com. https://www.acelity.com/healthcare-professionals/history-of-innovation/vac-veraflo-therapy
3. EarlySense launches streamlined sensor for nursing homes, plans wellness-focused home sensor. (2016, September 27). MobiHealthNews. https://www.mobihealthnews.com/content/earlysense-launches-streamlined-sensor-nursing-homes-plans-wellness-focused-home-sensor
4. Clancy, T. R. (2020). Artificial intelligence and nursing: the future is now. JONA: The Journal of Nursing Administration, 50(3), 125-127.
5. Seibert, K., Domhoff, D., Bruch, D., Schulte-Althoff, M., Fürstenau, D., Biessmann, F., & Wolf-Ostermann, K. (2021). Application scenarios for artificial intelligence in nursing care: rapid review. Journal of Medical Internet Research, 23(11), e26522.
Is it Time to Rethink How We Teach the Art of the Clinical Interview? A Medical Student Posits the Use of AI to Drill Doctoring and Clinical Skills
Natasha Bitar, BS
Two days remain before my OSCE exam, and I have not yet had the chance to practice with a peer. I find myself staring at my computer screen, attempting to figure out how to prepare for an evaluation based on my ability to effectively communicate with a human, collect a comprehensive medical history, perform a targeted physical examination, and develop a diagnostic plan. Then I notice the ChatGPT tab on my computer. After a quick Google search to find sample standardized patient prompts, I fed ChatGPT specific instructions on how to conduct an interview with me, including my intention to add a rubric later so that ChatGPT could assess me and offer feedback on what I asked and forgot to ask. To make the experience more like the actual exam, I used a voice-to-text dictation tool to communicate with the chatbot. During the interview, I realized how similar the chatbot’s responses were to those of the standardized patients (SPs) I’ve worked with in the past. I also noticed that empathetic statements were met with appreciation. If you ask a bot, a form of artificial intelligence or AI, whether it has feelings, you will receive an answer that terrifies those who fear a world takeover by robots in human roles and comforts those who seek an objective truth: it is free from human emotions, prior experiences, and biases. While AI could never capture how our interactions make it feel, as a human SP does in their feedback, I cannot help but wonder how much more efficient and cost-effective these mandatory exams would be if standardized patients were replaced with AI. While we cannot rely on AI as an evaluator of how human we seem to patients, it does make for a competitive medical student: researchers at MGH found that ChatGPT was able to pass USMLE Steps 1, 2, and 3 without “studying.” I would not be surprised to learn how many preceptors writing residency letters for students are now using this technology in lieu of pre-written templates that are perhaps just as depersonalized.
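The setup I describe above amounts to little more than a structured prompt. As a hedged sketch, here is roughly what that prompt assembly looks like; the case details and rubric items are invented for illustration, and the message format follows the common chat-API convention rather than any specific vendor's interface.

```python
# Hypothetical sketch of the standardized-patient prompt: a system message that
# casts the chatbot as the patient, plus the rubric supplied for later feedback.
# The case and rubric items below are invented examples, not real exam content.
def build_sp_messages(case: str, rubric: list[str]) -> list[dict]:
    """Assemble chat messages that set up an SP role-play with a grading rubric."""
    system = (
        "You are a standardized patient for a medical student's practice "
        f"interview. Stay in character. Your case: {case}. Answer only what "
        "the student asks. When the student says 'END INTERVIEW', grade them "
        "against the rubric and list any questions they forgot to ask."
    )
    rubric_note = "Rubric: " + "; ".join(rubric)
    return [
        {"role": "system", "content": system},
        {"role": "system", "content": rubric_note},
    ]

messages = build_sp_messages(
    case="45-year-old with two days of epigastric pain after meals",
    rubric=["elicits pain history (OPQRST)", "asks medication and social history"],
)
print(messages[0]["role"])
```

The messages list would then be passed to whichever chat model is available; pairing it with voice dictation, as I did, makes the drill feel surprisingly close to the real encounter.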
Beyond empathy and the ability to offer an emotional response, how does ‘humanity’ factor into the conduct of SP interviews? Inefficiency and inconsistency are two complaints frequently cited by students. If you asked a sample of medical students about SP exams, I would bet on at least half having had the experience of receiving post-interview feedback claiming specific questions were not asked, only to look down at their own notes and see answers to those very questions. Imposter syndrome often plagues students, stemming from the feeling that we don’t know enough to make clinical diagnoses and plans. Continuously prompting students during exams to reflect on whether we asked the right questions or collected enough data for an appropriate diagnostic plan fuels the already common medical student habits of self-doubt and second-guessing. While it is arguable whether self-doubt or overconfidence is the worse actor in the realm of patient safety, the perpetuation of self-doubt surely adds unnecessary confusion to students’ professional development. Though SPs are used at most medical schools in the US, the USMLE’s discontinuation of the Step 2 Clinical Skills exam, which had a high pass rate and was costly to students, raises the question of whether performing for
an SP—who is performing an illness script, most likely without lived experience of that illness—paints an accurate picture of clinical performance. As students, we are expected to possess a vast amount of clinical knowledge and the ability to ask the right questions during patient interviews for efficient data collection. This skill is becoming more critical as providers across multiple specialties increasingly struggle with short appointment times and double bookings for patients who require more attention. When I think about my ideal doctor, I want someone who is equally proficient at comforting me and at thoroughly evaluating my health concerns. In today’s reality, this means effective data collection and dissemination within a much tighter timeframe than we are given to speak with patients during exams. AI serves as a phenomenally accurate data collection and synthesis tool and could therefore provide specific, timely feedback that allows learners to identify areas for improvement in clinical data gathering and consolidation, skills currently evaluated through SP OSCEs and clerkship feedback. There are, of course, things machines cannot replace. Standardized patients are trained to provide the human touch, offering feedback on organizational skills, flow, and empathy. The sticking point for some students is that following such feedback is not always conducive to performing thorough, efficient interviews in the 15-to-30-minute time slots allotted in the real outpatient setting. A warning I have heard repeatedly through my first three years of medical school is ‘Do not lose your empathy.’ I am still puzzling out how to balance my desire to avoid compassion fatigue with my desire to sharpen my clinical acumen.
I posit that using AI to drill diagnostic reasoning through real-time interview feedback can allow for earlier introduction to clinical settings with a stronger foundation, and more time learning with real patients as opposed to trained actors performing illness scripts. +
Natasha Bitar, BS, is a third-year medical student at UMass Chan Medical School. Email: Natasha.bitar@umassmed.edu
As I See It
Sonia N. Chimienti, MD, FIDSA
As a medical educator, it’s been tough to avoid conversations about the potential impact of generative artificial intelligence (Gen-AI) and machine learning (ML) on healthcare education and delivery. A PubMed search of “generative artificial intelligence medical education” this past month revealed that annual publications on this topic have jumped exponentially in the past few years, from fewer than 50 prior to 2018 to 416 in 2022 and 519 so far in 2023. I offer this “As I See It” perspective with the knowledge that many of you may be far further ahead in this area than I am. For those who are latecomers to the conversation, like me, I hope that I can convince you that the time has come to jump onto the train or be left at the station. We need knowledge-informed input, as educational leaders and as care providers, to ensure that Gen-AI and ML facilitate learning and the ethical and equitable care of patients.
Students are already ahead of us in this content area. Last spring, I was chatting with a premedical student with a background in computer science, lamenting the inefficiencies of my clinical environment. I expressed my longing for technology that would help me create treatment plans without repeating costly testing, summarize my conversations with patients, and collate in milliseconds the medical data that took me hours to collect from the EMR. The student politely let me rant, and then shared that this technology already exists. And sure enough, during an AI summit a few weeks ago, a leader from a health system outlined the technology that is currently in place and under development for their providers. This
health system has created a “Responsible Use of AI” program and a “Machine Learning Review Board” to review AI tools, assess the “risk for harm,” and establish “review processes for vendor-acquired AI.” It is using AI-enhanced virtual assistants to help direct calls from patients and enhance personalized care, and it utilizes data analytics, natural language processing, and ML to analyze patient outcomes. While this was the only healthcare organization presenting at the summit, I suspect that healthcare organizations worldwide are utilizing similar systems and approaches to optimize and improve care. The challenge is that health professions students may not be learning about this technology (including its opportunities and limitations) during their educational programs and may not be well prepared to contribute to initiatives and conversations ensuring ethics and the avoidance of potential bias. To be clear: I am not suggesting that we endorse computer science or technology education at the expense of learning the science, communication skills, and teamwork that are fundamental to the education of a healthcare provider. Indeed, it is possible to incorporate Gen-AI and ML into our education programs in ways that facilitate learning and prepare students for a changing healthcare environment without sacrificing the critical knowledge and skills they must learn; these tools may actually facilitate learning for students at a time when the volume of information to be learned and the competing demands on their time contribute to stress. A recent IAMSE (International Association of Medical Science Educators) series* brought together thought leaders from around the country, highlighting the opportunities and obligations of medical educators to learn, embrace, and incorporate AI in our teaching.
The IAMSE Fall 2023 Webcast Audio Seminar Series, “Brains, Bots & Beyond: Exploring AI’s Impact on Medical Education,” included five sessions addressing the opportunities and challenges we must address as medical educators. Each session in the series was extraordinary: exciting, invigorating, expertly delivered, and just a bit overwhelming. Such sessions can serve as a guidepost for healthcare education, helping us with the innovation and reform required to prepare our students to be the “humans in the loop” who ensure equity, ethics, and holistic analyses. By incorporating these tools into health sciences education, we can prepare students for their roles as providers, researchers, and leaders who will keep patients at the center of these developments. We need to prepare our students not only to be aware of what is possible but to contribute as informed advocates for our patients, who are entrusting us with this responsibility. This is not going to be easy: curricula need to be adapted, and we need to support our already busy educators in doing so in an interdisciplinary and collaborative fashion. Fortunately, the IAMSE sessions provided a compelling overview to help us understand why this is important and what we need to do. These are exciting times, not only for healthcare education but also for clinical care. With our engagement as educators, we can position our future healthcare providers to help inform the implementation of
As I See It Continued
AI in ways that increase health equity and ensure that the application of these technologies supports the patient-provider connection, enhances our interprofessional engagement and relationships, removes bias, and ensures that we are serving all populations of patients with ethical advocacy. +
Sonia Nagy Chimienti, MD, FIDSA
Senior Associate Dean for Medical Education, Dartmouth’s Geisel School of Medicine
*IAMSE Fall 2023 Webcast Audio Seminar Series
1. “An Introduction to Artificial Intelligence and Machine Learning with Applications in Healthcare,” presented by H. Valafar, University of South Carolina
2. “Artificial Intelligence: Preparing for the Next Paradigm Shift in Medical Education,” presented by C. James and E. Otles, University of Michigan
3. “Transforming Healthcare Together: Empowering Health Professionals to Address Bias in the Rapidly Evolving AI-Driven Landscape,” presented by S. Bessias, Duke University School of Medicine, and M.P. Cary Jr, Duke School of Nursing
4. “AI Tools for Medical Educators,” presented by V. Capaldi, D. Kurzweil, and E. Steinbach, Uniformed Services University of the Health Sciences
5. “ChatGPT and Other AI Tools for Medicine and Medical Education,” presented by B. Hersh, Oregon Health & Science University
LEGAL CONSULT: Financial Transparency Comes Close to Home
Peter J. Martin, Esq. and Aastha Sharma, Esq.
We live in an era rife with reports of financial skullduggery—from the Panama Papers through Bernie Madoff and now the FTX revelations—where money laundering, fraud, and other forms of criminal behavior have led to much outrage and some regulation. For the most part, the response has been to impose greater levels of transparency on the “malefactors of great wealth.” For small and medium-sized enterprises, including medical and other health care practices, these efforts have been of some interest but of little practical import. Not anymore. The new Corporate Transparency Act (CTA) and its accompanying regulations will impose ownership disclosure obligations on even the smallest practices beginning January 1, 2024. Owners of these practices can be forgiven if they greet this news with less than overwhelming enthusiasm, but as a practical matter the new requirements are not onerous. However, the penalties for non-compliance are severe, and there are nuances that may make compliance difficult in some circumstances. Basically, the CTA requires most privately owned corporations, LLCs, and other entities to file a Beneficial Ownership Information, or “BOI,” Report with the Financial Crimes Enforcement Network (“FinCEN”) of the U.S. Department of the Treasury that includes certain information about the entity’s “beneficial owners.” Beneficial owners include persons who hold at least 25% of the ownership interests in the organization or who otherwise exercise “substantial control” over the entity. The information to be provided about each beneficial owner is the individual’s name, date of birth, residential or business address, and an identification number from a source such as a driver’s license. There are a number of exemptions which, if applicable, will relieve the entity from having to file the BOI Report.
However, most of the exemptions are for entities that file reports with the Securities and Exchange Commission (SEC) or other regulatory authorities such as publicly traded companies, banks, credit unions, money services
businesses, securities brokers and dealers, tax-exempt entities, insurance companies, state-licensed insurance producers, pooled investment vehicles, public utilities, and accounting firms. The exemptions that might be most applicable to Massachusetts health care provider entities are the following:
• An entity that is tax-exempt;
• An entity that operates exclusively to provide assistance to tax-exempt entities;
• An entity that (a) employs more than 20 full-time employees in the U.S., (b) has an operating presence at a physical office within the U.S., and (c) reported more than $5 million in gross receipts or sales from U.S. sources on its prior year federal tax return; or
• An entity that is a wholly-owned or controlled subsidiary of an exempt entity.
Existing organizations have until January 1, 2025 to file their first report. New organizations must file a report no later than 30 days after formation; however, a current FinCEN proposal, if implemented, would extend the submission deadline for new entities from 30 to 90 days. All organizations must report any changes in beneficial ownership no later than 30 days after such a change. Organizations that fail to file such reports, or that file false ownership information, are subject to a civil monetary penalty of $500 per day that the violation goes unremedied, plus a possible fine of up to $10,000 and imprisonment for up to two years. One reassuring aspect of the CTA is that BOI Report information will not be available for public view; it will be maintained by FinCEN in a secure, nonpublic database. FinCEN can disclose the reported information only upon request from certain governmental agencies and regulators, or from financial institutions that obtain the consent of the reporting company for customer due-diligence purposes. Apart from the added burden of making these new BOI Reports, the new law poses some compliance
A.I. In Medicine
issues. The first is whether any of the exemptions apply. Another is understanding who exercises “substantial control” in a medical practice and whose personal information thus must be reported. “Substantial control” under the CTA regulations can refer to the control exercised by anyone from a senior officer, person having authority to remove a senior officer or board of directors, important decision maker or person having “any other form of substantial control” over the company. The regulations further define a person exercising substantial control as one who “has substantial influence over important decisions” of the practice. Does this include the office manager? The regulations also state an individual may exercise “substantial control” indirectly through “arrangements or financial or business relationships, whether formal or informal . . . or any other contract, arrangement, understanding, relationship, or otherwise.” Does this mean accountants? Billing companies? Practices seeking guidance on such questions may have recourse to some FinCEN resource materials and clarificatory Q&A which are available on the FinCEN website - https://www.fincen.gov/boi. Apparently, the public policy rationale for this new law was that larger organizations already are subject to a variety of reporting obligations, such as to the SEC or as is required of banks, insurance companies, accounting firms and public utilities. 
The “Sense of Congress” accompanying the CTA noted, apparently with some alarm, that “(1) more than 2,000,000 corporations and limited liability companies are being formed under the laws of the States each year; (2) most or all States do not require information about the beneficial owners of the corporations, limited liability companies, or other similar entities formed under the law of the State.” The CTA is intended to remedy this perceived gap in information by pulling in much smaller organizations than those currently subject to beneficial owner disclosure obligations.

The government’s seemingly unending thirst for previously private information may not be slaked by this statute, which includes a provision whereby the Secretary of the Treasury may make administrative or legislative recommendations to Congress to apply the law to previously exempted organizations if the Secretary finds such entities have “been involved in significant abuse relating to money laundering, the financing of terrorism, proliferation finance, serious tax fraud, or any other financial crime.” Physicians and others who have labored long and hard to establish and maintain medical and other practices providing health care to their communities may not appreciate the notion that the federal government seems to think the owners of these practices are liable to behave like money launderers, tax cheats, terrorists, and financiers of weapons of mass destruction. But such is the state of our not entirely happy Union, and world. +

Peter J. Martin, Esquire, is a partner in the Worcester office of Bowditch & Dewey, LLP; his practice concentrates on health care and nonprofit law. Aastha Sharma, Esquire, an Associate at Bowditch, focuses her practice on corporate finance, mergers and acquisitions, joint ventures, venture capital, and early-stage investment transactions.
Winter 2023
A.I. In Medicine
BOOK REVIEW: The Masters of Medicine by Andrew Lam, MD (published 2023)
Peter T. Zacharia, MD
Andrew Lam is an ophthalmologist and retina specialist practicing in Western Massachusetts who studied history as an undergraduate at Yale. His passion for and knowledge of history are evident in his latest work, his fifth book; two of his other publications are historical novels. In this survey of medical history, Dr. Lam chronicles humanity’s battles against the leading causes of shortened lifespans and human suffering. He indicts seven of mankind’s medical challenges, devoting a section each to heart disease, diabetes, bacterial infection, viral infection, cancer, trauma, and childbirth.

In each section the collaboration of the physician and the historian inhabiting the same mind is obvious, as Dr. Lam skillfully opens with an anecdote involving a notable individual from history pitted in a struggle against a health challenge. He briefly discusses the essential science and, where appropriate, the medical physiology and anatomy necessary to understand the disease process or medical challenge; in the case of heart disease, he enhances the explanation with diagrams. His storytelling maintains our interest with the tales behind each medical discovery or invention, tales which display the competitive and sometimes adversarial nature of the great individuals who advanced health science. Dr. Lam emphasizes the often-serendipitous nature of medical discoveries that might not have been realized without a fortuitous combination of conditions, as in Alexander Fleming’s discovery of penicillin, the assistant’s error that allowed Pasteur to develop a vaccine against cholera in chickens, and the sequelae of a deadly World War II bombing that inspired the development of nitrogen mustard as the first cancer-treating chemotherapeutic agent. Dr. Lam also writes of the determination of individuals who persisted in their work, despite headwinds of rejection and skepticism, to solve medical dilemmas. Examples are Werner Forssmann, who used chicanery to gain access to the materials that allowed him to catheterize his own heart in developing cardiac catheterization; Frederick Banting, who continued research to isolate insulin despite lukewarm support from his supervisor; and Katalin Karikó, whose decades-long work on mRNA in the trenches of the medical research establishment ultimately resulted in the breakthrough culminating in the mRNA vaccine for COVID-19.

The stories revealing the humanity underlying the work to resolve each of these medical challenges make this a tremendously enjoyable book, which I recommend to lay readers without a medical background as well as to healthcare professionals, who will gain from learning how our various medical fields evolved. +

Peter Zacharia is an ophthalmologist in private practice in Worcester who has served on the editorial board of Worcester Medicine for several years.
In Memoriam
Christopher H. Linden, MD September 30, 1952 to August 26, 2023 MMS Join Date: July 28, 1985
On Saturday, August 26, 2023, Dr. Christopher H. Linden died peacefully at home in Shrewsbury, MA. He leaves his beloved wife, Jeanne (Kristo) Linden; his children, Meredith and Martha Linden, Rebecca (Linden) Huard and her husband Travis, Erik Comes and his wife Danielle, and Kelley (Comes) Gallivan and her husband Timothy; his grandchildren Roseanna and Violet Huard, Paisley and Ryder Gallivan, and Lola Comes; and his sister Rachel Linden of Danvers. Born in Lynn, MA, he was the son of the late Robert A. Linden and Doris B. (Bartol) Linden. He was raised in Danvers, MA, where he graduated second in the Class of 1970. Recently, Chris spearheaded reconnecting with a group of his “unruly” high school friends to joyfully “raise hell” a few more times. After high school, Chris attended and graduated with a B.A. from Amherst College in 1975, where he made more lifelong friends. A proud alumnus, he was recently involved in the planning of his upcoming 50th reunion. Pursuing the call to medicine, Chris attended the University of Massachusetts Medical School in Worcester, MA, receiving his Doctor of Medicine degree in 1979 as part of the first full graduating class of UMass. From there, Chris completed his emergency medicine training in Hershey, PA, and his medical toxicology training in Denver, CO. While in Colorado, Chris was also commissioned as a Captain in the Medical Service Corps of the United States Army Reserves. Returning to the East Coast, Chris continued to grow in knowledge and reputation, especially in the field of toxicology. With over a hundred credited publications, Dr. Linden was a national medical expert, resource, and asset, serving in numerous professional organizations and committees and on editorial boards. Locally, Dr. Linden was also known for his talent and skills in emergency medicine, working in the UMass Trauma Center and in the Milford-Whitinsville Regional Hospital system since 1989.
He retired in 2020 from Milford Urgent Care and remained close to his colleagues and friends from Milford thereafter. Chris was never one to brag about his professional successes, choosing instead to focus on the legacy and family he built with his wife Jeanne. Chris and Jeanne began their adventurous journey through life together with their five kids. Together, Chris and Jeanne renovated a 4-story Victorian house that became the heart of the family and hosted countless gatherings with family and friends. Their door was always open to family and friends who needed medical advice, a meal, or just a place to hang out. If you knew Chris outside of the hospital, you may have known him as a stonemason, an electrician, a carpenter, a gardener, a daytrader, a daredevil skier, a Harley rider, the neighborhood field medic, or as the guy telling stories that made everyone laugh. He was all these things and more to those who loved him. He will be missed but will live on in our hearts and memories. + Steven Bird, MD
Edward L. Amaral, MD May 26, 1935 to October 13, 2023 WDMS/MMS Join Date: May 28, 1966
A noted Worcester surgeon, Edward Amaral, age 88, passed away on October 13, 2023. Ed was born in Salem, Mass., and was educated at Boston College, graduating in 1957. He attended medical school at Georgetown and completed a surgical residency at R.I. Hospital in Providence, RI, and at St. Vincent Hospital in Worcester. Following his residency, in 1966, he joined the US Air Force, serving as Chief, Surgical Service at Barksdale AFB in Shreveport, Louisiana. After his military service, he returned to Worcester and assumed the surgical practice of Dr. Charles Brown on Lincoln St. After years of private practice, he joined the Fallon Group in 1992. In all, he practiced general surgery for 40 years in Worcester, where he lived. Ed was appointed Assistant Professor of Surgery at UMass Medical School and served as President of the WDMS in 1996-1997. He was a member of the American College of Surgeons and the Massachusetts Medical Society. At MMS, he served as Chair of the Arts, History and Culture Network, an endeavor to which he contributed his own stained-glass art. He was honored by the WDMS in 2019 with the President’s Award. Dr. Amaral pioneered the use of laparoscopic surgery at Hahnemann and Memorial Hospitals in September 1991. He was devoted to community charitable works. As a member of the Rotary Club of Worcester, he went on a mission to Ukraine, organizing a fund for the children of Chernobyl, and he devoted himself to other charitable works in the Worcester community. Ed was a fun-loving, affable colleague who contributed professionally and socially to his beloved Worcester community. We will miss him. +

Sidney Kadish, MD
Resources to Help Your Patients Learn About Medicare Options
Introducing HealthShare360 Medicare Advisors, UMass Memorial Health’s Complimentary Medicare Resource Provider
Medicare open enrollment is October 15 to December 7, and we understand selecting Medicare plans can be confusing. To help your patients compare and select the right Medicare plan, HealthShare360*, our trusted Medicare advisor, will:
• Assist with Medicare-related questions.
• Explain the Original/Traditional Medicare, Medicare Supplemental (“gap”), and Medicare Advantage plans accepted by UMass Memorial Health, including Mass Advantage**.
• Review decision-making timelines.
Your patients can also find helpful information by visiting Medicare.gov.
Visit medicareanswers360.com/ummhealth or call 508-978-0345 (TTY 711). Experts are available Monday to Friday, 8 am to 6 pm; Saturday and Sunday, 10 am to 2 pm.
* HealthShare360 Inc. is a licensed and certified health insurance agency that works with Medicare enrollees to explain Medicare Advantage, Medicare Supplement Insurance, and Prescription Drug Plan options with a Medicare contract. Enrollment in any Medicare plan depends upon contract renewal. We do not offer every plan available in your area. Currently, we represent 2 organizations which offer 6 products in your area. Please contact Medicare.gov, 1-800-MEDICARE, or your local State Health Insurance Program (SHIP) to get information on all of your options. For accommodations of persons with special needs at meetings, call 508-978-0349 (TTY 711). MULTI-PLAN_HS360_UMASSAD23_C
** Mass Advantage is affiliated with UMass Memorial Health and was designed with the help of UMass Memorial physicians.