Is it Time to Rethink How We Teach the Art of the Clinical Interview? A Medical Student Posits the Use of AI to Drill Doctoring and Clinical Skills
Two days remain before my OSCE exam, and I have not yet had the chance to practice with a peer. I find myself staring at my computer screen, trying to figure out how to prepare for an evaluation of my ability to communicate effectively with a human, collect a comprehensive medical history, perform a targeted physical examination, and develop a diagnostic plan.
Then I noticed the ChatGPT tab open on my computer. After a quick Google search for sample standardized patient prompts, I gave ChatGPT specific instructions on how to conduct an interview with me, noting that I would add a rubric afterward so it could assess me and offer feedback on what I asked and what I forgot to ask. To make the experience more like the actual exam, I used a voice-to-text dictation tool to communicate with the chatbot.
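For readers curious what a setup like this might look like under the hood, here is a minimal sketch using the OpenAI Python SDK. The model name, patient script, and rubric below are illustrative placeholders rather than the prompts I actually used; my own session simply ran in the ChatGPT web interface with voice dictation.

```python
# Minimal sketch of an AI "standardized patient" session.
# All prompt text and the model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SP_INSTRUCTIONS = (
    "You are a standardized patient presenting with two days of abdominal pain. "
    "Answer only the questions the student asks, stay in character, and do not "
    "volunteer the diagnosis."
)

RUBRIC = (
    "Using the transcript above, list which history elements the student asked "
    "about, which they missed, and one comment on their empathy."
)

messages = [{"role": "system", "content": SP_INSTRUCTIONS}]

def ask_patient(student_question: str) -> str:
    """Send one interview question and return the simulated patient's reply."""
    messages.append({"role": "user", "content": student_question})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    return answer

# Interview loop: dictated questions would be fed in here, one at a time.
print(ask_patient("What brings you in today?"))

# After the interview, append the rubric and ask for structured feedback.
messages.append({"role": "user", "content": RUBRIC})
feedback = client.chat.completions.create(model="gpt-4o", messages=messages)
print(feedback.choices[0].message.content)
```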
During the interview, I realized how similar the chatbot’s responses were to those of the standardized patients (SPs) I’ve worked with in the past. I also noticed that my empathetic statements were met with appreciation.
If you ask a chatbot, a form of artificial intelligence (AI), whether it has feelings, you will receive an answer that terrifies those who fear robots taking over human roles and comforts those who seek an objective truth: it is free from human emotions, prior experiences, and biases. While AI could never tell us how our interactions made it feel, the way a human SP does in their feedback, I cannot help but wonder how much more efficient and cost-effective these mandatory exams would be if standardized patients were replaced with AI.
While we cannot rely on AI as an evaluator of how human we seem to patients, it does make for a competitive medical student: researchers at MGH found that ChatGPT was able to pass USMLE Steps 1, 2, and 3 without “studying.” I would not be surprised to learn how many preceptors writing residency letters for students are now using this technology in lieu of pre-written templates that are perhaps just as depersonalized.
Beyond empathy and the ability to offer an emotional response, how does ‘humanity’ factor into the conduct of SP interviews? Inefficiency and inconsistency are two complaints frequently cited by students. If you asked a sample of medical students about SP exams, I would bet on at least half having had the experience of receiving post-interview feedback claiming specific questions were not asked, only to look down at their own notes and see answers to those very questions. Imposter syndrome often plagues students, stemming from the feeling that we do not know enough to make clinical diagnoses and plans. Continuously prompting students during exams to reflect on whether we asked the right questions or collected enough data for an appropriate diagnostic plan fuels the already common medical student habits of self-doubt and second-guessing. While it is arguable whether self-doubt or overconfidence is the worse actor in the realm of patient safety, the perpetuation of self-doubt surely adds unnecessary confusion to students’ professional development.
Though SPs are used at most medical schools in the US, the USMLE’s discontinuation of the Step 2 Clinical Skills exam, which had a high pass rate and was costly to students, raises the question of whether performing for an SP (who is acting out an illness script, most likely without lived experience of that illness) paints an accurate picture of clinical performance.
As students, we are expected to possess a vast amount of clinical knowledge and the ability to ask the right questions during patient interviews for efficient data collection. This skill is becoming more critical, as providers across multiple specialties are increasingly struggling with short appointment times and double bookings for patients who require more attention.
When I think about my ideal doctor, I want someone who is as proficient at comforting me as at thoroughly evaluating my health concerns. In today’s reality, this means effective data collection and dissemination within a much tighter timeframe than we are given to speak with patients during exams. AI is a remarkably capable data collection and synthesis tool, and it could therefore provide specific, timely feedback that allows learners to identify areas for improvement in clinical data gathering and consolidation, skills that are currently evaluated through SP OSCEs and clerkship feedback.
As alluded to previously, there are, of course, things machines cannot replace. Standardized patients are trained to provide the human touch, offering feedback on organizational skills, flow, and empathy. The sticking point for some students is that following such feedback is not always conducive to conducting thorough, efficient interviews in the 15-to-30-minute time slots allotted in real outpatient settings.
A warning I have heard repeatedly through my first three years of medical school is ‘Do not lose your empathy.’ I am still puzzling out how to balance my desire to avoid compassion fatigue with my desire to sharpen my clinical acumen. I posit that using AI to drill diagnostic reasoning through real-time interview feedback could allow students to enter clinical settings earlier, with a stronger foundation and more time spent learning from real patients rather than from trained actors performing illness scripts.
Natasha Bitar, BS, is a third-year medical student at UMass Chan Medical School. Email: Natasha.bitar@umassmed.edu