Harvard Medicine magazine, Autumn 2024



a sense of place

Despite some obvious differences, zebrafish have a surprising amount in common with humans, including about 70 percent of their genes and similarities in the structure and processes of liver cells. Those and other traits have made them an increasingly popular model organism over the past couple of decades, including in the HMS lab led by Wolfram Goessling, the Robert H. Ebert Professor of Medicine at Massachusetts General Hospital. Goessling and colleagues use zebrafish to investigate liver disease, which causes about two million deaths worldwide each year.

JOHN SOARES

INSIDE AI: As artificial intelligence has taken off, so too has the need for GPUs (graphics processing units), a type of computer chip often used to train neural networks. This close-up photograph of a GPU is known as a die shot. The colors have been added by the artist.

SPECIAL REPORT: EMERGING TECHNOLOGIES

12 Powerful Predictions by Stephanie Dutchen

AI is helping clinicians prepare for the health consequences of climate change.

14 Below the Fold by Molly McDonough

What’s next for scientists studying the protein-folding problem?

22 The Next Generation of Medicine by Elizabeth Gehrman

Generative AI is sparking a revolution in medical education.

30 Can AI Make Medicine More Human? by Adam Rodman

The history of tools used to support clinical decision-making offers clues to the future of medicine in the age of AI.

38 Neural Network by Charles Schmidt

Twenty years after meeting at HMS, alumni are leading an effort to apply brain-computer interfaces to medicine.

BOOKSHELF

44 Untangling Health Care’s Twisted Roots by Suzanne Koven

Elizabeth Comen discusses the legacy of medicine’s male-dominated culture.

DEPARTMENTS

4 Commentary

A letter from the dean

5 Discovery

Research at Harvard Medical School

11 Noteworthy

News from Harvard Medical School

48 Five Questions by Ekaterina Pesheva

Marinka Zitnik on the intersection of machine learning and biomedicine

49 Roots by Catherine Caruso

Sierra Washington on her path from HMS to Mozambique

50 Student Life by Bobbie Collins and Lisa McEvoy

Members of the class of 2024 on their time at HMS and the next steps in their training

52 Rounds

Alumni on the people and experiences that shaped their careers


Providing Thoughtful Leadership on AI in Medicine

IN 1996, THIS MAGAZINE DEVOTED AN ISSUE to an emerging technology that seemed poised to reshape science, medicine, and even society at large.

“The World Wide Web may seem a curiosity to some, an object of hype to many, even a danger to others,” wrote Robert Greenes, MD ’66 PhD ’70. Jerome Kassirer, then the editor in chief of the New England Journal of Medicine, wrote, “Although a health care delivery system that depends, even partly, on online communications holds considerable promise, the problems it poses are enormous…. For the benefit of our patients, physicians should be at the forefront of these changes, not dragged along by progress.”

It is not hard to see the parallels to today’s debates over generative artificial intelligence, perhaps the technology with the greatest potential to change how science and medicine are conducted since the internet. As with the internet, the use of generative AI in science and medicine has both tremendous potential and serious risks. HMS is already leading the way in applying a host of AI technologies creatively and thoughtfully, and we are well positioned to do the same with generative AI. Implemented responsibly, tools powered by generative AI can improve the care of patients, speed the development of therapeutics, and help us gain a deeper understanding of scientific mysteries. The key is to ensure that human experts — physicians and scientists — are at the center of these efforts.

I am impressed by the work being done by faculty in our Department of Biomedical Informatics and clinical faculty at our affiliated hospitals to understand the strengths and limitations of AI in medicine. They are providing invaluable insights into how to empower physicians and proving that, although it is a relatively new technology, generative AI can be evaluated using long-standing techniques, as with any other intervention.

I am also encouraged by the response we received to our call for proposals for funding to investigate the use of generative AI in education, research, and administration. Earlier this year, we awarded funding to thirty-three teams of faculty, clinicians, students, and staff. Many of these projects focus on generative AI in medical education, including developing chatbots to simulate patient interactions and provide personalized feedback to our students.

Understanding and evaluating information and consultation powered by large language models is going to become a vital skill for the physicians of tomorrow. To that end, we have built AI into the medical curriculum and will continue to adapt it. We’ve designed a course on AI in medicine for incoming students in our Health Sciences and Technology program, and students in our Pathways track are using large language models as tutors. This fall also marks the launch of a new PhD track in AI in medicine that recently welcomed its first class to campus.

As Greenes and Kassirer predicted almost thirty years ago, the internet is now ubiquitous in science and medicine. I believe the same will soon be true of generative AI. The leadership of today’s HMS faculty and students is crucial to ensuring that happens safely and effectively.

Editor Amos Esty

Associate Editor Molly McDonough

Design Director Paul DiMattia

Copyeditor April Poole

Designer Maya Rucinski-Szwec

Contributors

Catherine Caruso, Bobbie Collins, Stephanie Dutchen, Elizabeth Gehrman, Suzanne Koven, Lisa McEvoy, Ekaterina Pesheva, Adam Rodman, Charles Schmidt

Dean of Harvard Medical School

George Q. Daley, MD ’91

Executive Dean for Administration Lisa Muto

Chief Communications Officer Laura DeCoste

Harvard Medical Alumni Association

Louise Aronson, MD ’92, president

Chasity Jennings-Nuñez, MD ’95, vice president

Scott Aaronson, MD ’80; Amir Ameri, MD ’19; Joanna Choi, MD ’09; Kalon Ho, MD ’87; Elbert Huang, MD ’96; Timothy Jenkins, MD ’92; Kristy Rialon, MD ’08; Michelle Rivera, MD ’92; Ben Robbins, MD ’16; Marc Sabatine, MD ’95; Kirstin Woody Scott, MD ’20; Ann Taylor, MD ’83; Laura Torres, MD ’88; Nancy Wei, MD ’06; Charmaine Smith Wright, MD ’03; Douglas Zipes, MD ’64

Chair of Alumni Relations

A. W. Karchmer, MD ’64

Harvard Medicine magazine is published two times a year, with online editions appearing monthly.

PUBLISHERS: Harvard Medical Alumni Association and Harvard Medical School

© The President and Fellows of Harvard College

EMAIL: harvardmedicine@hms.harvard.edu

WEB: magazine.hms.harvard.edu

ISSN 2152-9957 | Printed in the U.S.A.

RANDY GLASS

Microbes and Mood

GAS RELEASED BY GUT BACTERIA stimulates other gut bacteria to produce allopregnanolone, a hormone involved in pregnancy and in an FDA-approved treatment for postpartum depression, according to new research led by HMS scientists. The work shows how gut bacteria can produce new hormones from steroids in bile and, in doing so, function like an endocrine organ. Because allopregnanolone is linked to postpartum depression and other mood and psychiatric disorders, the study provides new evidence that doctors could one day treat or prevent certain kinds of mental health conditions by manipulating the gut microbiome.

McCurry MD et al., Cell, May 2024

CARDIOLOGY

Unintended consequences of new heart risk calculator

IN 2023, THE AMERICAN HEART ASSOCIATION unveiled an updated cardiovascular disease risk calculator that is better calibrated and more precise than the previous version. But a study led by HMS researchers found that the new calculator, called PREVENT, may have unintended consequences if current treatment guidelines for cholesterol and blood pressure therapy remain unchanged.

The researchers used both the new calculator and its predecessor, which was released in 2013, to gauge risk and outcomes under each tool for nearly 7,700 participants ages thirty to seventy-nine. Extrapolating from the findings within this group, they concluded that the calculator would reclassify about half of the U.S. population into lower risk categories, while reclassifying less than 0.5 percent of the population into higher risk categories.

Based on this analysis, they estimated that the new calculator would render nearly 16 million people newly ineligible for preventive therapies under current treatment thresholds. The change would occur mostly among men ages fifty to sixty-nine and would affect more Black adults than white adults. The resulting decrease in access to statin and blood pressure therapies could lead to 107,000 additional heart attacks and strokes over ten years.

The analysis also predicted that reduced use of cholesterol-lowering statins, which have been linked to diabetes risk, could cut the number of new-onset diabetes cases by nearly 58,000 over the same period.

Like its predecessor, the new tool includes standard cardiovascular measures such as cholesterol and high blood pressure. But it also incorporates new variables like kidney function and offers the option to include blood sugar, urine protein, and neighborhood zip code. It excludes race in recognition of the notion that race is a social, rather than biological, construct. The American Heart Association and the American College of Cardiology have not yet officially endorsed the new calculator, but some clinicians are already using it to guide patient care.

The researchers emphasize that the unveiling of the new risk tool presents an important opportunity to reconsider current treatment thresholds to better individualize therapy and improve clinical decisions.

Hit or Miss

Does AI help or hurt human radiologists’ performance? It may depend on the doctor, according to a study from researchers at HMS, MIT, and Stanford University. The researchers examined how AI tools affected the performance of 140 radiologists on fifteen chest X-ray diagnostic tasks, using advanced computational methods to capture the magnitude of change in performance when using AI and when not using it. The effect of AI assistance varied: Some radiologists performed better when using AI, but others performed worse. The results signal the need to better understand how humans and AI interact and to design personalized approaches that boost human performance rather than hurt it.

Yu F et al., Nature Medicine, March 2024

ONCOLOGY

Preventing relapse after CAR T-cell therapy

SCIENTISTS AT HMS and the Dana-Farber Cancer Institute have developed a booster treatment that could improve the effectiveness of existing CAR T-cell therapies for cancer by addressing their major shortcoming: a high rate of relapse.

CAR T cells are genetically enhanced versions of a patient’s own cancer-fighting T cells that are modified to produce a tumor-fighting surface structure called a chimeric antigen receptor, or CAR. This receptor can latch onto a specific marker, or antigen, on the patient’s tumor cells, triggering an immune attack on the cancer.

CAR T-cell therapies have revolutionized the treatment of certain blood cancers, including B-cell leukemias and lymphomas and multiple myeloma. However, the CAR T cells leave a small number of tumor cells behind, and patients often relapse as CAR T cells in their bloodstream disappear.

The team created the CAR-Enhancer (CAR-E) therapeutic platform to spur CAR T cells to be more active and persist longer in the body, enabling them to remain in battle mode until all tumor cells are eliminated. The platform also prompts CAR T cells to develop a memory of the cancer cells so they can spring back into action if the cancer recurs. While much research to address relapse has focused on reengineering the CAR T cells themselves, this platform instead enhances CAR T cells from the outside. It delivers a fused molecule consisting of a weakened form of the immune signaling protein interleukin-2 (IL-2) and the antigen the CAR is designed to bind to. In tests, the platform extended CAR T cells’ lives and prompted them to form memory.

The weakened form of IL-2 still has a strong effect on T cells but is less toxic — and it leaves normal T cells alone while stimulating CAR T cells. This targeting is accomplished by fusing IL-2 to B-cell maturation antigen (BCMA), the antigen that multiple myeloma CAR T cells are designed to bind.

In experiments in animal models and human-derived cancer cell lines, the researchers found that CAR-E worked together with CAR T cells to eradicate all tumor cells. The next step is to test the platform in human clinical trials.

Rakhshandehroo T et al., Nature Biotechnology, July 2024

Examining Outcomes

HOW WELL A NEWLY MINTED DOCTOR performs on their medical board exams appears to be linked with patient survival, according to a new study led by researchers at HMS and the American Board of Internal Medicine. Analyzing patient outcomes among nearly 7,000 newly trained hospitalist physicians, researchers found a link between residents’ scores on certification exams and patients’ risk of dying or being readmitted to the hospital within seven days. The findings offer reassurance that certification exams, which aim to demonstrate physicians’ competence, are able to capture critical knowledge and clinical judgment skills.

Exploratory Science

IT’S WELL KNOWN THAT THYROID HORMONE regulates metabolism and other physiological processes. Now, HMS scientists have gained new insights into the effects of the hormone on the brain. The research, conducted in mice, shows that thyroid hormone rewires brain circuits, spurring animals to engage in exploratory behavior. The findings help elucidate how low levels of the hormone could lead to depressive states marked by a reduced desire to explore, whereas too much could precipitate manic states characterized by an extreme desire for exploration.

Hochbaum DR et al., Cell, August 2024

REPRODUCTIVE HEALTH

Cervix-on-a-chip accelerates research

SCIENTISTS HAVE DEVELOPED A CERVIX-ON-A-CHIP, a lab model that replicates the structure and function of the human cervix, with the goal of facilitating research on women’s reproductive health.

The team, led by researchers at HMS, Boston Children’s Hospital, the Wyss Institute for Biologically Inspired Engineering at Harvard University, and the University of California, Davis, used microfluidics to build a model of the cervix that captures the complex interactions between different cervical cell types. The scientists plan to use the model in conjunction with a vagina-on-a-chip model they created previously to develop and test treatments for bacterial vaginosis and other diseases that affect the female reproductive tract.

Left untreated, bacterial vaginosis can cause severe discomfort as well as complications, such as increased risk of sexually transmitted infections, higher rates of miscarriage and preterm birth, pelvic inflammatory disease, and infertility. However, current treatment is limited to antibiotics, which often fail to kill the invading bacteria, leading to recurrence in more than 60 percent of women.

Organs-on-chips consist of miniature tissues grown on microfluidic chips that mimic the structure and function of human organs. Such models enable testing that can’t be done on human organs or in other lab settings and may reduce the need for animal models.

The new model consists of two cell types found in the human cervix: epithelial cells, which make up the tissue that lines the organ, and fibroblasts, cells that produce key structural and connective proteins. The researchers combined layers of these cell types to reproduce the cervical wall, which they differentiated into the upper and lower cervical canals. They also replicated the mucus production of the cervix.

To test their model, the researchers exposed it to healthy and unhealthy cervical bacteria. They found that healthy bacteria increased the thickness and improved the quality of the mucus layer and left the epithelial layer intact. By contrast, unhealthy bacteria compromised the function of the epithelial layer and increased production of proteins associated with pathogenic processes, including inflammation. Along with the vagina-on-a-chip, the cervix-on-a-chip may eventually allow researchers to evaluate potential treatments for bacterial vaginosis and identify new treatments for other diseases that affect the female reproductive tract.

Izadifar Z et al., Nature Communications, May 2024

INFECTIOUS DISEASE

Geographic origin shapes TB strain risk

FOR SOME FORMS OF TUBERCULOSIS, the chances that an exposed person will get infected depend on whether the individual and the bacteria share a hometown, according to a new study comparing how different strains move through mixed populations in cosmopolitan cities.

Results of the research, led by HMS scientists, provide the first hard evidence for long-standing observations that pathogen, place, and human host collide in a distinctive interplay that influences infection risk and fuels differences in susceptibility to infection.

In the current analysis, believed to be the first controlled comparison of the infectivity of TB strains in populations of mixed geographic origins, the researchers custom built a study cohort by combining case files from patients with TB in three cities: New York City, Amsterdam, and Hamburg. The analysis showed that close household contacts of people diagnosed with a strain of TB from a geographically restricted lineage had a 14 percent lower rate of infection and a 45 percent lower rate of developing active TB disease compared with those exposed to a strain belonging to a widespread lineage.

The study also showed that strains with narrow geographic ranges are much more likely to infect people with roots in the bacteria’s native geographic region than people from outside the region.

The researchers found that the odds of infection dropped by 38 percent when a contact was exposed to a geographically restricted pathogen from a region that didn’t match the person’s background, compared with exposure to a restricted microbe from a region that did match the person’s home country.
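As a back-of-the-envelope illustration (the contact counts below are invented, not the study’s data), a 38 percent drop in the odds of infection corresponds to an odds ratio of roughly 0.62:

```python
# Illustrative only: hypothetical contact-tracing counts, not the study's data.
def odds_ratio(cases_a, total_a, cases_b, total_b):
    """Odds of infection in group A relative to group B."""
    odds_a = cases_a / (total_a - cases_a)
    odds_b = cases_b / (total_b - cases_b)
    return odds_a / odds_b

# Group A: contacts whose background does NOT match the strain's home region
# Group B: contacts whose background DOES match the strain's home region
or_mismatch = odds_ratio(21, 100, 30, 100)
print(f"odds ratio: {or_mismatch:.2f}")  # 0.62, i.e., odds reduced by 38 percent
```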

This pathogen-host affinity provides evidence that TB strains may evolve with their human hosts, adapting to be more infectious to specific populations. The findings may also help inform new prevention and treatment approaches for tuberculosis, a wily pathogen that, each year, sickens more than 10 million people and causes more than a million deaths worldwide.

Gröschel MI et al., Nature Microbiology, August 2024

AWARDS

Researcher wins 2024 Lasker Award

JOEL HABENER, PROFESSOR OF MEDICINE at HMS and director of the Laboratory of Molecular Endocrinology at Massachusetts General Hospital, has won the 2024 Lasker-DeBakey Clinical Medicine Research Award for his discovery of glucagon-like peptide-1 (GLP-1), a molecule that has become the basis for therapies that have transformed the treatment of obesity.

Lasker awards are among the world’s most prestigious biomedical and clinical research awards.

Habener shares the prize with biochemist Svetlana Mojsov, of Rockefeller University, and Danish scientist Lotte Bjerre Knudsen, of Novo Nordisk.

The understanding of the complex hormonal interplay underlying the regulation of blood sugar stems from the work of many scientists. However, the independent discoveries made by Habener, Mojsov, and Knudsen converged to enable the design of disease-altering therapies for type 2 diabetes, which affects nearly 400 million people worldwide, and obesity, estimated to affect about one billion globally.

In the 1970s, Habener was captivated by the role of the hormone glucagon in blood sugar regulation and its interactions with other hormones involved in glucose production and breakdown. When Habener cloned the gene for glucagon, he discovered that it encodes not only glucagon but another molecule, GLP-1. Habener further defined the biology of the molecule and its functions. Subsequent work by Habener and others demonstrated that GLP-1 is released into the blood from gut cells in response to food intake, where it then acts to enhance the release of insulin from the beta cells of the pancreas. These findings suggested that augmenting the activity of GLP-1 could be an important therapeutic strategy.

Independently, Mojsov developed innovative research methods and reagents that provided scientists with the means to draw unambiguous conclusions about essential aspects of GLP-1 biology. Crucially, she identified and purified the physiologically active form of GLP-1.

In the 1990s, Knudsen, the head of GLP-1 therapeutics at Novo Nordisk, and her team transformed these insights into treatments to fight diabetes and obesity. Notably, they modified the drugs in a way that allowed them to linger in the body for longer, extending their therapeutic effects from a few hours to a week. Through their discoveries and dedicated efforts, Habener, Mojsov, and Knudsen have introduced a new era of weight management, with the potential to dramatically improve the health and well-being of hundreds of millions, the foundation said in its award citation.

This work has opened up a field of research to better define the mechanisms of additional health benefits that have begun to emerge from GLP-1 therapy, such as improvement in heart function, chronic kidney disorders, and fatty liver disease.

AWARDS

HMS scientist receives Nobel Prize

GARY RUVKUN, PHD ’82, AN HMS PROFESSOR of genetics and an investigator at Massachusetts General Hospital, has received the 2024 Nobel Prize in Physiology or Medicine for the discovery of microRNAs, a class of tiny RNA molecules that regulate the activities of target genes in plants and animals, including humans. Ruvkun shares the prize with Victor Ambros, of the University of Massachusetts Chan Medical School.

Ruvkun’s and Ambros’s discoveries have sparked a revolution in RNA medicine. As potent regulators of gene activity and of the expression of proteins made by these genes, microRNAs have profound implications for disease and health. The scientists’ work revealed that microRNAs are pivotal regulators of normal development and physiology of animals and plants as well as key players in an array of human diseases, including coronary heart disease, neurodegenerative conditions, and many forms of cancer.

Ruvkun and Ambros first collaborated in the 1980s as postdoctoral researchers in the lab of Robert Horvitz at MIT, where they studied gene mutations in the roundworm Caenorhabditis elegans. They focused on two genes: one called lin-14, which encodes proteins key to the worms’ early development, and another, lin-4, that inhibits lin-14 once the earlier developmental stage is complete.

In the early 1990s, Ruvkun showed that certain mutations in a non-protein-coding portion of the lin-14 messenger RNA (mRNA) allow it to ignore the “stop” signal from lin-4 and keep working. Ambros and his team discovered that lin-4 does not halt lin-14 by encoding a protein, as originally expected, but rather by encoding a tiny RNA molecule composed of about 22 nucleotides — much shorter than most other RNAs — that could bind to complementary sequences in lin-14’s messenger RNA to regulate its activity.

While the discovery did not generate great attention immediately, its implications grew clearer as the ubiquity of microRNAs across the animal kingdom emerged. In 2000, Ruvkun’s research team discovered that a second microRNA, let-7, is present in humans, fruit flies, chickens, frogs, zebrafish, mollusks, and sea urchins. In 2001, Ambros and colleagues discovered nearly 100 additional candidate microRNAs in flies, humans, and worms. More recent studies have revealed that the human genome contains about 1,000 microRNAs that could collectively control the majority of our protein-producing genes.

Gary Ruvkun

noteworthy

New chair for health care policy

NICOLE MAESTAS has been named the new chair of the Department of Health Care Policy in the Blavatnik Institute at HMS, effective November 1. Maestas (fig. 1), the Margaret T. Morris Professor of Health Care Policy, has been a faculty member in the department since 2015. She will succeed Barbara McNeil, MD ’66 PhD ’72 (fig. 2), the Ridley Watts Professor and Chair of Health Care Policy, who founded the department and has served as chair for thirty-six years.

Maestas, whose own research focuses on the economics of disability insurance, labor markets, health care systems, and population aging, highlighted the importance and promise of health care policy research at this moment. “With more data and more powerful analytical tools than we’ve ever had access to before, there’s so much we can learn and do to improve health policy at the local, state, and national levels,” she said.

She added that she is looking forward to building on the strengths of the faculty in the department and to finding ways to increase the impact of the research being done. “We need both basic research and translational activities to make sure that policymakers at every level have access to the insights and information they need to make policy work the best it can be,” she said.

Maestas regularly collaborates with researchers across many of Harvard’s schools and is excited to find new ways to coordinate and integrate the work of the department with what she described as the tremendous health policy community across Harvard and its affiliated hospitals.

Maestas also recognized the work of McNeil to establish and lead the department. “Barbara built a tremendous department from the ground up,” Maestas said. “It’s the honor of a lifetime to be chosen to succeed her.”

McNeil joined HMS in 1974 and was named full professor in 1983. A professor of radiology, she has conducted groundbreaking research on cost-effectiveness and decision analysis in imaging procedures. The Department of Health Care Policy has had a rich history of innovation during her tenure as chair. It was the first department of its kind in a medical school, and it has pioneered an approach to interdisciplinary research that draws on the strengths of clinicians, social scientists, statisticians, and bioinformaticists to understand the impacts of policy on health and to offer guidance on building better policies.

An extraordinary Match Day

FOR MEDICAL STUDENTS GRADUATING IN 2024, Match Day (fig. 3) held extra significance. “Match Day is exciting even under ordinary circumstances,” Dean George Q. Daley, MD ’91, told the crowd in the TMEC Atrium on March 15. “But your circumstances, Class of 2024, have been anything but ordinary. Indeed, they have been extraordinary.”

“All of you commenced your medical studies amidst the shock of the first year of the COVID pandemic, which makes this moment of completion very special,” Daley said.

This year, 176 HMS students matched in clinical training, internships, or residency programs, with ninety of those placing at HMS-affiliated programs. Three students matched in oral and maxillofacial surgery programs, and three will pursue nonclinical training.

The most popular specialty was internal medicine, selected by fifty-six students; eight of those students matched in residencies with primary care in the formal title. In other primary care–related fields, one HMS student matched in family medicine, one in medicine/pediatrics, seven in pediatrics, and thirteen in obstetrics/gynecology. Most of the graduates — 81 percent — will train in Massachusetts, Pennsylvania, metropolitan New York, or California. The rest will spread out across the country.

“I’m really excited for the chance to serve my patients and to make a difference in improving access to anesthesia around the world,” said Adetomiwa Owoseni, who matched in anesthesiology at Northwestern University. “That’s what it’s really all about.”

Welcoming the Class of 2028

ON AUGUST 5, the 165 members of the HMS Class of 2028 began their training with a ceremony held on the Quad, where they received the white coats that symbolize their entry into the profession of medicine. Family, friends, faculty, advisors, and staff joined them to celebrate their achievements and wish them well as they settled into their new surroundings.

Bernard Chang, MMSc ’05, HMS dean for medical education, told the students that he and his fellow deans were welcoming the students not only to the start of medical school but also to their new home. Home, Chang said, is more than a place to reside during their training. It is a place where they will be cared for and, he added, “a place of mutual respect.”

The members of the class come to Boston from thirty-six states and seven countries outside the United States. Fifteen percent already hold an advanced degree.

The White Coat Ceremony helped kick off the first week, which included a full schedule of classes and activities introducing students to their faculty, advisors, and classmates. Throughout the week, they met their first patients as well as members of surrounding Boston communities. Faculty mentored them on developing essential skills, such as cultural humility and digital professionalism. Dean for Students Fidencio Saldaña and faculty advisors emphasized the resources available to support not only students’ professional success but also their overall well-being.

To close the White Coat Ceremony, Chang had one more salutation for the first-year class. “Welcome to the profession,” he said.

fig. 1
fig. 2
fig. 3
AI is helping clinicians understand and prepare for the health consequences of climate change

Powerful Predictions

IN 2023, WITH A “TRIPLEDEMIC” OF COVID-19, RSV, AND FLU looming and wildfires threatening to irritate more people’s lungs with particulate matter, hospital leaders across the country braced for a surge of patients with respiratory illnesses. They wondered when cases would peak and whether they would have enough beds to accommodate those in need.

John Brownstein, an HMS professor of pediatrics and chief innovation officer at Boston Children’s Hospital, and colleagues wanted to do better than wonder. They took all the relevant data they could gather — environmental, behavioral, infectious disease — and fed them into a computer model they developed that included a machine-learning algorithm. The result: a detailed forecast for when to expect young patients in the region to flood in with airway issues.

“We could predict to the day when the highest-level capacity needs would be,” and when demand would ebb, says Brownstein, who is also senior vice president of the hospital.
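To make the idea concrete, here is a purely illustrative sketch of such a forecasting pipeline. The signals and model below are invented for demonstration — synthetic data and scikit-learn’s gradient-boosted regressor — not the Boston Children’s team’s actual inputs or algorithm:

```python
# Illustrative only: synthetic environmental and infectious-disease signals,
# not the Boston Children's Hospital model or data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_days = 365

# Hypothetical daily inputs a hospital system might gather
pm25 = rng.gamma(shape=2.0, scale=8.0, size=n_days)   # particulate matter (µg/m³)
flu = rng.beta(2, 8, size=n_days)                     # flu test positivity
rsv = rng.beta(2, 10, size=n_days)                    # RSV test positivity

# Synthetic "ground truth": admissions rise with all three signals
admissions = 0.4 * pm25 + 60 * flu + 80 * rsv + rng.normal(0, 2, size=n_days)

X = np.column_stack([pm25, flu, rsv])
X_train, X_test, y_train, y_test = train_test_split(X, admissions, random_state=0)

# Fit the forecaster and predict demand for held-out days
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
forecast = model.predict(X_test)

print(f"held-out R^2: {model.score(X_test, y_test):.2f}")
print(f"peak forecast demand: {forecast.max():.0f} admissions")
```

A real system would also need lagged features and careful time-aware validation, since tomorrow’s admissions depend on past, not same-day, exposures.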

Forecasting health care needs

Machine learning and other forms of artificial intelligence have begun to play a role in protecting well-being on our warming planet by augmenting climate models, deepening understanding of how climate change affects human health, and improving health care systems’ ability to respond effectively. It makes sense: Climate science involves crunching huge amounts of data, and AI excels at interpreting and making predictions from vast, disparate, and incomplete information.

“By helping us pull together huge amounts of noisy and imperfect data with numerous variables, AI can play a substantial role in uncovering and projecting the health impacts of climate change,” says Brownstein.

Generative AI also offers unique opportunities in climate research to extrapolate from heterogeneous data sources, says Francesca Dominici, the Clarence James Gamble Professor of Biostatistics, Population, and Data Science at the Harvard T. H. Chan School of Public Health and director of the Harvard Data Science Initiative.

Some researchers are exploiting AI’s strengths to improve models of climate change and the extreme weather events it drives. The AI model GraphCast by Google DeepMind now delivers more accurate hurricane track predictions and ten-day weather forecasts than traditional models based on mathematical equations of atmospheric and hydrologic physics, which run on supercomputers. Microsoft’s AI model Aurora can calculate global air pollution patterns an unprecedented five days ahead, empowering clinicians and patients to prepare for health consequences. However, it’s harder to validate predictions that extend decades into the future. To rein in potentially outlandish results, scientists are exploring hybrid climate models that incorporate AI components into grounded, physics-based ones.

Other researchers are applying AI to identify and predict climate-related impacts on health. Rather than asking questions piecemeal, such as how heat affects stroke risk, AI can unearth relationships between multiple diseases and environmental factors simultaneously. AI tools helped Brownstein and colleagues reveal in 2018 that rising local temperatures contribute to antibiotic resistance. Other AI tools have facilitated his group’s work by using unconventional data sources such as social media posts to track infectious disease spread in real time.

Efforts in the field include identifying the populations whose health is most at risk from particular aspects of climate change. The results can inform prevention and preparedness. “Very sophisticated algorithms can be trained on massive amounts of data from electronic health records, insurance claims, doctors’ notes, and research on climate stressors to tell you who is more likely to show up at the hospital for what disease a day, a week, or a month after a heat wave,” says Dominici.
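The kind of risk model Dominici describes can be sketched, in heavily simplified form, with invented data. Everything below is an assumption for illustration — the synthetic records, the features (age, heat intensity, a chronic-illness flag), and the plain logistic regression — not the variables or models any real system uses:

```python
import math
import random

# Toy risk model (invented data; not the systems described in the article):
# learn from past records who tends to visit the hospital after a heat wave.
random.seed(0)

def make_record():
    """Generate one synthetic (features, outcome) pair."""
    age = random.uniform(20, 90)
    heat = random.uniform(28, 45)        # peak temperature in Celsius (assumed)
    chronic = random.random() < 0.3      # chronic illness flag (assumed)
    # Assumed ground truth: older, sicker people in hotter waves visit more.
    risk = 0.03 * (age - 50) + 0.15 * (heat - 35) + (1.5 if chronic else 0.0)
    visited = random.random() < 1 / (1 + math.exp(-risk))
    return [1.0, age / 90, heat / 45, float(chronic)], float(visited)

data = [make_record() for _ in range(2000)]

# Fit a logistic regression by plain gradient descent.
w = [0.0] * 4
for _ in range(300):
    grad = [0.0] * 4
    for x, y in data:
        p = 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
        for i in range(4):
            grad[i] += (p - y) * x[i]
    w = [wi - 0.1 * g / len(data) for wi, g in zip(w, grad)]

def predict(x):
    """Predicted probability of a post-heat-wave hospital visit."""
    return 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))

# A frail elderly resident in an intense heat wave should rank above a
# healthy young adult in a mild one.
high = predict([1.0, 85 / 90, 44 / 45, 1.0])
low = predict([1.0, 25 / 90, 29 / 45, 0.0])
print(high, low)
```

Real systems draw on far richer records and far more careful validation; the point is only that past outcomes can be used to rank who is most at risk before the next heat wave arrives.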

Scientists are still in the early stages of exploring AI’s potential to illuminate the connections between climate and health. The authors of a review published in 2024 in PLOS Climate, including two HMS faculty members at Beth Israel Deaconess Medical Center, found only seven English-language studies that used machine learning to predict the health outcomes of climate-driven events.

AI could enrich health care systems’ climate resilience by, for instance, making data more accessible. Satchit Balsari, an HMS associate professor of emergency medicine at Beth Israel Deaconess, co-launched Climateverse in 2023 to integrate and annotate siloed information on climate and health in Southeast Asia. An AI chatbot helps researchers interact with the data and gain insights, such as which communities need the most help withstanding extreme weather events.

Another avenue looks to AI for ideas on decarbonizing health care and other sectors, says Dominici — for example, dynamically optimizing electrical grids and identifying which efforts to lower carbon dioxide emissions work best. Similarly, AI could help clinicians and policymakers analyze which health care interventions work best to protect against climate threats. When a heat wave looms, she says, models could synthesize outcomes from across the country to gauge whether leaders in a specific location should issue a heat warning, open more cooling centers, or send air conditioners to elderly residents.

A mix of sun and clouds

There’s some irony in asking AI how to reduce emissions, since the technologies themselves consume significant electricity, which can contribute to climate change. HMS community members working on environmental sustainability are considering the energy required to run AI systems. While AI models designed to replace traditional ones can save electricity by running faster and on less power-hungry computers, the overall surge in AI use may outweigh any energy gains.

Such considerations factor into larger calls for responsible use of AI as the field hurtles forward. Models can produce unreliable or biased results in climate-related work just as they can when proposing a medical diagnosis or treatment. Leaders at HMS and beyond are advocating for openness and caution in climate AI to ensure that predictions are as accurate as possible, that outputs reflect the populations they’re being applied to, and that people don’t place unearned trust in algorithms.

“This balance of harnessing the good and mitigating the bad of AI is really important for us to embody at Harvard and in medicine when we’re dealing with human lives,” says Dominici.

Stephanie Dutchen is editorial director in the HMS Office of Communications and External Relations.

Oxyhemoglobin, the hemoglobin molecule in its oxygenated state, as illustrated by Irving Geis.

The protein-folding problem was one of biology’s greatest challenges. What’s next for the scientists studying it after the advances made with AI?

It started with a slab of sperm whale meat.

It was the early 1950s, and two British molecular biologists were trying to achieve a feat that had thus far eluded scientists: creating a three-dimensional model of a protein molecule.

Like other biologists at the time, John Kendrew and Max Perutz knew that proteins are essential building blocks of life, responsible for catalyzing every kind of reaction in the body. They suspected that the structure of a protein would offer clues about its particular function. But determining a protein’s shape was no small task. It involved a time-consuming technique called X-ray crystallography: growing crystals out of protein from tissue samples and then bombarding those crystals with X-rays to measure the angles at which rays bounce off individual atoms to piece together the molecular structure.

Kendrew had set out to model myoglobin, a protein that stores oxygen in muscles. After failing to spur sufficient crystal growth from penguin, tortoise, and porpoise meats, he finally found success using a nearby lab’s stash of whale flesh. (Sperm whales, promoted as an alternative source of meat in the U.K. during World War II, need abundant myoglobin to store oxygen during deep sea dives.) Aided by a squad of workers performing calculations and a computer weighing no less than six tons, Kendrew and colleagues finally unveiled the very first three-dimensional model of a protein molecule in 1958. The whole effort had taken more than two decades.

Fast-forward to today, and Nazim Bouatta, an HMS senior research fellow in systems biology and systems pharmacology, pulls up a webpage, enters a sequence of letters into an input form, hits a play button, and scrolls down. Almost instantaneously, a squiggly blue model of a molecule appears. This protein molecule has never been modeled in a lab using techniques like the ones Kendrew employed. Instead, it’s being pieced together using an artificial intelligence model called AlphaFold2. What took Kendrew decades now takes seconds.

An illustration of collagen by Irving Geis depicts the protein as thick and rope-like, reflecting its function of strengthening tendons and muscles in animals.

The system Bouatta is displaying — an AI model that predicts the structure of protein molecules — may just be the most-hyped AI technology in science. Since AlphaFold2 debuted in 2020, journalists have touted it as “the most important achievement in AI, ever,” and “biology’s holy grail.” “One of the biggest problems in biology has finally been solved,” proclaimed a Scientific American headline.

As the initial fanfare dies down, it’s worth taking a look back at that very big biology problem. How much of the mystery did AI solve, and what does that mean for the scientists who have spent their careers trying to crack it? As they turn to the questions that the AI models haven’t answered, researchers like Bouatta are provoking bigger conversations about the roles of computers and humans in the discovery process — including how AI could transform the definition of science itself.

The protein-folding problem

To understand the mystery that AI is purported to have solved, it helps to dwell on that myoglobin model for a minute. Although Kendrew and Perutz went on to win a Nobel Prize for the work, Kendrew himself seemed underwhelmed with the model (shown above), admitting that “it could not be recommended on aesthetic grounds.” Unlike the DNA double helix, first modeled in the same U.K. lab just a few years earlier, the myoglobin molecule was not elegant and symmetrical. Colleagues compared it to “abdominal viscera” and “worms”; writing of its appearance in a museum years later, a Guardian reporter dubbed it the “turd of the century.”

Jokes aside, scientists soon realized that the first few protein models would not offer a blueprint scientists could use to model the structure of all proteins. “One of the sad aspects of this discovery is we cannot see anything from the pattern,” said physicist Richard Feynman during a 1964 lecture. “We do not understand why it works the way it does. Of course, that is the next problem to be attacked.”

What Kendrew and Perutz’s work did do was lay out the experimental techniques researchers could use to piece together other proteins — which is exactly what they did, protein by painstaking protein, over the ensuing decades. Advances in imaging technologies helped speed the process along, but even today, it typically takes weeks to months to map out a single protein in a lab. Around 200,000 proteins have been modeled this way. “It’s a lot,” Bouatta says, “but if you put it in perspective with respect to the number of proteins that exist in nature, it’s a drop in the bucket.”

Helpful insights emerged over time. Scientists learned that proteins start out in cells as long chains of amino acids. In the 1960s, Christian Anfinsen, PhD ’43, proved that the sequence of these amino acids is like a recipe for how the protein will shift into its ultimate shape. “It starts as a floppy kind of spaghetti floating in water,” Bouatta says. “Then it starts folding.”

Thanks to advances in DNA sequencing, figuring out the makeup of those initial amino acid sequences has become a cinch. “You can just swab bacteria off the sidewalk and throw it into a DNA sequencer to get all the protein-coding regions of its genome,” says Nicholas Polizzi, an assistant professor of biological chemistry and molecular pharmacology at HMS. “So we have billions of protein amino acid sequences from natural organisms in DNA databases.” In other words, it’s easy to find out how a protein begins. But figuring out exactly how it will fold into its final shape is much tougher.

That’s because the transition from floating spaghetti to folded protein is mind-bogglingly complex. The way a protein’s atoms interact with one another is shaped by complex physical forces, the surrounding environment, and the number of amino acids in the protein. Even a small protein made up of, say, one hundred amino acids could theoretically fold into a dizzying range of shapes. If the process happened through random sampling, it could take 10^52 years for a protein to find its most stable, lowest-energy state — longer than the age of the universe. Yet somehow, in real cells, many proteins fold within a thousandth of a second.
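The flavor of that estimate (often called Levinthal’s paradox) can be reproduced with a back-of-the-envelope calculation. The numbers below are illustrative assumptions, so they give a smaller figure than the article’s, but the conclusion — vastly longer than the age of the universe — is the same:

```python
# Levinthal-style estimate with toy numbers (assumptions, not the article's):
# suppose each of 100 amino acids can adopt ~3 conformations, and the chain
# samples 10^13 conformations per second.
conformations = 3 ** 100            # possible overall shapes
rate = 1e13                         # conformations tried per second (assumed)
seconds_per_year = 3.15e7

years = conformations / rate / seconds_per_year
print(f"{years:.1e} years")         # on the order of 10^27 years

age_of_universe = 1.38e10           # years
print(years / age_of_universe)      # astronomically many universe lifetimes
```

Changing the assumed conformations per residue or the sampling rate shifts the exponent, which is why published estimates vary, but random search is always hopeless — the point of the paradox.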

Form and Function

Proteins are critical to the functioning of living organisms, driving all of the activity within cells. “It’s really proteins that do most of the work of living and dying,” says Nicholas Polizzi. “And the shape of a protein is really highly correlated with its function.” Collagen, for example, which strengthens skin and connective tissues, looks almost like a twisted rope. Antibodies are shaped like a Y, with two arms outstretched to catch and neutralize invading germs. And in his work designing synthetic proteins in the lab, Polizzi has created a protein that resembles the jaws of an alligator and chomps down on smaller molecules to catch and bind to them.

A clear picture of how each protein folds is also key to understanding and treating disease. Many diseases, like Alzheimer’s, Parkinson’s, and cystic fibrosis, result from glitches in the protein-folding process. Other conditions are caused by malfunctioning proteins; for example, in certain cancers, proteins that regulate cell growth can become mutated. Nearly all drugs bind to proteins, such as by fitting into a pocket somewhere in the protein molecule to activate or deactivate it.

The original model of the myoglobin molecule, constructed in plasticine by John Kendrew.

The so-called protein-folding problem thus ties together three related questions. What are the physical forces that transform a string of amino acids into a folded protein? How does it happen so quickly? And is there a way to predict how proteins will fold using their amino acid sequence alone?

Cracking the code

For a few decades, a group of researchers has been working to tackle this problem using computers. “At first, the paradigm was very physics-based,” says Mohammed AlQuraishi, an HMS fellow in therapeutic science and an assistant professor of systems biology at Columbia University. “You start with a protein sequence, you understand the types of energies involved in protein folding, and you try to emulate that on a computer.”

AlQuraishi was one of the first scientists to test if another approach could work: Instead of baking physics into the computations, could an AI model just comb through data on amino acid sequences and the corresponding molecules modeled in labs and learn how to fold proteins on its own? That’s the general premise of deep learning. A model like ChatGPT, for example, isn’t explicitly taught vocabulary or grammar. Instead, it learns to recognize patterns by being fed huge quantities of text.

In 2018, AlQuraishi, then a fellow in the HMS Laboratory of Systems Pharmacology, released an AI model using this approach that could make pretty good estimates of protein structures about six times faster than other existing methods. Another deep learning model developed in the lab of Debora Marks, a professor of systems biology in the Blavatnik Institute at HMS, came out several months later. At the time, AlQuraishi recalls, colleagues in the field were skeptical that this approach could ever be accurate enough. “The consensus was, ‘this is kind of cute, but ultimately you need physics to make this really work, right?’ ” he says.

The skeptics didn’t know it then, but a turning point was near. It arrived in 2020. Computational biologists had convened for a biennial competition, called the Critical Assessment of Structure Prediction, or CASP — akin to the Olympics of protein structure prediction. The contest works like this: Organizers release amino acid sequences for a number of proteins whose folded structures have already been modeled in labs using experimental methods but haven’t yet been made publicly available. Contestants then use computer models they’ve developed to predict how those proteins will fold based only on the sequences. Whoever’s model comes closest to the actual folded proteins wins.
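The scoring step — comparing a predicted structure against the experimentally solved one — can be sketched with a toy example. The coordinates below are invented, and CASP’s real metrics (such as GDT_TS) are more elaborate and superimpose the structures first; root-mean-square deviation just conveys the idea:

```python
import math

# Invented coordinates for three atoms of a solved structure and a prediction.
solved = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (3.0, 0.5, 0.0)]
predicted = [(0.1, 0.0, 0.0), (1.4, 0.1, 0.0), (3.2, 0.4, 0.1)]

def rmsd(a, b):
    """Root-mean-square deviation between two equal-length coordinate sets."""
    sq = sum((xa - xb) ** 2 + (ya - yb) ** 2 + (za - zb) ** 2
             for (xa, ya, za), (xb, yb, zb) in zip(a, b))
    return math.sqrt(sq / len(a))

print(round(rmsd(solved, predicted), 3))  # → 0.173
```

The lower the deviation across all the target proteins, the closer a contestant’s model comes to the lab-determined structures.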

The day before the awards, AlQuraishi sat at his computer — the event was virtual due to COVID-19 — and combed through results as soon as they were posted online. “It was pretty clear something profound had happened,” he recalls. An AI model called AlphaFold2, developed by researchers at Google DeepMind, had predicted protein structures with a level of accuracy no model had even come close to before. It would sweep the competition.

AlphaFold-generated model of Q8I3H7, a protein that may protect the malaria parasite against immune system attack. Blue areas are predicted with high confidence; yellow and orange areas are predicted with lower confidence.

Like the previous model AlQuraishi developed, AlphaFold2 eschews a physics-based model in favor of deep learning. But another important quirk is built into its architecture. When fed an amino acid sequence, the model scans many other similar sequences found in nature, detecting changes in some sections of the sequences that coincide with changes in others. These patterns suggest that the changes evolved together and are thus likely to be close to one another in space when the protein folds. Adding this step worked remarkably well. Within months, the number of predicted protein structures available to scientists shot up from 200,000 to 200 million. Importantly, AlphaFold2 could assign a confidence score to its predictions so that scientists knew how much to trust each one. Although not all of the predictions had a high confidence score, for the first time, a significant proportion of protein structures predicted by machine learning were as accurate as those developed in labs — accurate enough to have actual applications in biology.
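That covariation signal can be illustrated in heavily simplified form. This is not AlphaFold2’s actual architecture, and the tiny alignment below is invented; the sketch just scores pairs of alignment columns by how strongly they vary together, using mutual information:

```python
import math
from collections import Counter

# Toy multiple sequence alignment (invented): columns that change together
# across related sequences hint that those positions touch in the fold.
msa = [
    "ACDEG",
    "ACDEG",
    "TCDKG",
    "TCDKG",
    "ACDEG",
    "TCDKG",
]

def mutual_information(col_i, col_j):
    """Mutual information (in bits) between two alignment columns."""
    n = len(msa)
    pi = Counter(seq[col_i] for seq in msa)
    pj = Counter(seq[col_j] for seq in msa)
    pij = Counter((seq[col_i], seq[col_j]) for seq in msa)
    mi = 0.0
    for (a, b), count in pij.items():
        p_ab = count / n
        mi += p_ab * math.log2(p_ab / ((pi[a] / n) * (pj[b] / n)))
    return mi

# Positions 0 and 3 vary in lockstep (A..E vs T..K), so they score high;
# position 1 never varies, so it carries no covariation signal.
print(mutual_information(0, 3))  # 1.0
print(mutual_information(0, 1))  # 0.0
```

AlphaFold2 learns far richer versions of such signals from real alignments, but the intuition is the same: correlated columns suggest residues that evolved together and likely sit near each other in space.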

Democratizing AI

The computers had become as good as the humans, and they were a heck of a lot faster. Cue the existential crises. “From the perspective of a scientist who wants to see progress, this was enormous,” AlQuraishi says. “But it was bittersweet to think, suddenly, it’s done.” He pondered the fate of his own research program, which he had mapped out years into the future to solve a problem that was now “eviscerated overnight.” Colleagues wondered if AI would spell the end of methods like crystallography and of structural biology itself. But the initial shock gave way to curiosity. “What’s happened is a realization that now we go to bigger problems,” AlQuraishi says. “If we can get individual proteins solved, what about complexes? What about proteins interacting with other molecules?”

AlphaFold2 also had important limits. DeepMind released the source code needed for researchers to model other proteins, which was very useful for predicting a protein’s typical structure, but the AI model wasn’t built to predict mutations or to explain how proteins might interact with one another or with drug molecules. And DeepMind didn’t release enough of the code for researchers to retrain the model with different data or to innovate by adding new functions.

Nazim Bouatta

AlQuraishi and Bouatta knew they were at a crossroads. For researchers outside of Google to have access to the AI needed to tackle ever-bigger problems, they’d need to create something new: a fully open-source model inspired by AlphaFold2. They decided to make it happen. Collaborating with students — led by Columbia master’s student Gustaf Ahdritz, now a PhD candidate at Harvard — they released that model, called OpenFold, six months later.

OpenFold was even faster than AlphaFold2 and just as accurate. And it could be retrained, which soon led to further advances. By combining OpenFold with a language model similar to ChatGPT, for example, Meta AI was able to release the structures of more than 600 million little-known proteins, such as those from viruses that live deep in the ocean. And Bouatta is now working with research fellow Elena Rivas in Harvard’s Department of Molecular and Cellular Biology to use OpenFold to predict three-dimensional structures of RNA, another central challenge in biology.

Of course, researchers at DeepMind have continued plowing ahead. Earlier this year they released an improved model called AlphaFold3. Published in Nature, the new model made big strides in predicting how proteins could interact with one another and with potential drug molecules. But this time, DeepMind released far less of the code than it shared for AlphaFold2.

Polizzi, who uses both AI models like AlphaFold and experimental methods to explore how small molecules bind to proteins, was among a group of researchers who wrote a letter to Nature criticizing the journal’s decision to publish AlphaFold3 without the accompanying code that would let scientists evaluate and build upon the work.

“We were pretty disappointed with the way the paper was published, which skimped on some important aspects of peer review,” Polizzi says, adding that AlphaFold was trained on open-source protein structure databases, so keeping aspects of it private felt contrary to the ethos of the larger structural biology community. “We just hope this isn’t a slippery slope that leads to more published AI models being withheld from the public domain, where they can be vetted and used.”

DeepMind eventually conceded, announcing that it will release AlphaFold3’s code at some point this year. But by then, Polizzi hopes, AlQuraishi, Bouatta, and colleagues will have come out with a similar model that replicates the results anyway.

Indeed, the next version of OpenFold is now in the works. “We now know that being able to use this kind of tool is not something you can take for granted,” AlQuraishi says. “The open-source component is critical and is only going to become more so moving forward.”

Problem solved?

So, in retrospect, does AlQuraishi think the initial fanfare over these types of models was warranted?

“The problem with AI and science is there is a tremendous amount of hype,” he says. “I certainly do think this is the best example of what AI can do in science. But it’s not like biology is over. There are many things it did not solve. It’s a sliver of a much bigger problem we’ve only begun to tackle.”

Plus, it’s not even safe to say the protein-folding problem is solved. One important box is ticked off: Researchers can now predict protein structures from amino acid sequences. But two other key parts of the problem — understanding the underlying physics and how proteins fold so quickly — remain unsolved.

“Structure is not the final story; we want to understand much more,” says Bouatta. He points out that a few years post-AlphaFold2, there hasn’t been as much progress in treating diseases as some might have anticipated. For drug discovery, he adds, a deeper understanding of protein dynamics is essential. That includes studying how proteins behave in their natural cellular environments and why certain proteins fold incorrectly — a factor linked to many neurodegenerative diseases. “This is why we should still care about the physics,” Bouatta says. “It will radically improve our ability to think about diseases and strategies for dealing with them.”

Mohammed AlQuraishi

Another alluring question is whether AI models like AlphaFold and OpenFold have actually figured out something about the physics of protein folding that humans haven’t. Similar questions are being asked in other fields. Linguists are exploring how the inner workings of large language models like ChatGPT can reveal new insights about how humans acquire and process language. And computer vision AI models trained to recognize faces or objects have helped vision researchers understand how our own brains categorize what we see. “I’m optimistic about where that could go,” says AlQuraishi, “but it’s a tall order. And it will certainly require open-source models.”

Bouatta and AlQuraishi have already started peeking into the inner workings of OpenFold, examining the shapes of the structures it builds during its training to figure out how it learns. What they’ve found is a bit bizarre. For instance, instead of following steps of folding as they happen in nature, the model starts by creating one-dimensional views of folded proteins, followed by two- and three-dimensional views. The model doesn’t necessarily need to know the basic physics to do its job of piecing together the protein structures. But Bouatta argues that the more that physical knowledge is infused back into these deep learning systems, the better they’ll work — and “the better we’ll learn how they are learning.”

AlQuraishi agrees that humans do know fundamental facts about the physics of folding that the models miss. But he also has been surprised, over and over again, by how successful deep learning can be at making predictions with very limited prior knowledge of things humans have worked so tirelessly to understand.

In fact, it’s been slightly disheartening. It gives him a nagging feeling about AI that he just can’t shake. If we can predict how proteins fold without understanding how they do it, “are we even legitimately doing science anymore, or is it something different?” AlQuraishi muses. “In the past, what drove us was understanding. With this wave of AI, we’re maybe losing some of that. We’re able to get the practical benefits, but we’re not necessarily gaining intellectual benefits.” He stays optimistic by considering the practical benefits he expects in his lifetime. Like gaining a full picture of the molecules inside cells at the atomic level. Knowing not just how single proteins behave but also how constellations of them join forces to function. Creating drugs that are like tiny machines that travel into cells to make tweaks. Building new proteins and new cells with specific functions. “These will become possible because of the advances we’re making today,” he says.

Previous generations of scientists have seen similar seismic shifts. The quantum revolution in the early 1900s gave scientists a predictive tool that was remarkably accurate. It paved the way for nuclear energy, computing, and nuclear weapons.

“But it left us with some questions about reality that … we haven’t really resolved,” AlQuraishi says.

“I think we’re seeing a transformation of science that is probably going to end up being as profound as the quantum revolution,” he says, “not just in terms of the scientific models and theories, but how we do science itself.”

McDonough is the associate editor of Harvard Medicine magazine.

Artist Irving Geis’s illustration of bovine trypsin, an enzyme that breaks down other proteins.

The Next Generation of Medicine

The emergence of generative AI is sparking a revolution in medical education

WITHIN A FEW WEEKS of its public launch in November 2022, ChatGPT was already beginning to feel ubiquitous, and Bernard Chang, MMSc ’05, was thinking about what that meant for the future of medical education. “Maybe once every few decades a true revolution occurs in the way we teach medical students and what we expect them to be able to do when they become doctors,” says Chang, HMS dean for medical education. “This is one of those times.”

By 2023, studies found that the initial public version of ChatGPT could perform at a passing level on the U.S. Medical Licensing Exam. A more powerful version of ChatGPT, released in March 2023, exceeded the performance of medical students, residents, and even practicing physicians on some tests of medical knowledge and clinical reasoning, and today there are a number of large language models that match ChatGPT’s abilities. So how will this affect today’s medical students — and the institutions educating them?

Chang says that the last such revolution in medical education occurred in the mid-1990s, when the internet became widely accessible. “Initially we just played games on it,” he says. “But it soon became indispensable, and that’s what’s happening with generative AI now. Within a few years it’s going to be built into everything.”

HMS is getting a jump on this shift by building generative AI (also called genAI) into the curriculum today. “The time is right to respond to this call,” Chang says. “We didn’t hold back and wait to see what other schools are doing, both because as an institution we wanted to be at the forefront of this and because it’s the right thing to do for our students.”

Incorporating AI

Among the changes incorporated this fall is a one-month introductory course on AI in health care for all incoming students on the Health Sciences and Technology (HST) track. “I don’t know of any other med school doing that,” says Chang. “Certainly not in the first month.” The course examines the latest uses for AI in medicine, critically evaluates its limitations in clinical decision-making, and crucially, he adds, “grounds students in the idea that medicine is going to be different going forward. In this day and age, if they want to be a physician-scientist or a physician-engineer, which is the goal of the HST curriculum, they won’t just need to be a good listener and a good medical interviewer and a good bedside doctor. They’ll also need good data skills, AI skills, and machine-learning skills.” About thirty students each year enroll in the HST track, and many of them will get a master’s degree or PhD in addition to their MD.

A PhD track that starts this semester, AI in Medicine (AIM), is taking AI-integrated education even further. “Bioinformatics students were increasingly saying they were excited about AI and asking if we could offer a PhD in it,” says Isaac Kohane, the Marion V. Nelson Professor of Biomedical Informatics and chair of the Department of Biomedical Informatics in the Blavatnik Institute at HMS. “We didn’t know how much demand there would be, but we ended up with more than 400 applications for the seven spots we’re offering.”

“As with any big technological eruption,” Kohane says, “for a few years there will be a huge gap in the workforce. So we want to train researchers who know a lot about medicine and understand real problems in health care that can be addressed by AI.”

Also to that end, HMS has opened a third avenue for medical students and faculty who are interested in the technology: the Dean’s Innovation Awards for the Use of Artificial Intelligence in Education, Research, and Administration, which were announced last year and offer grants of up to $100,000 for each project selected (see sidebar on page 27). “These grants really show HMS is leading the way in trying to integrate these amazing new tools into the way we work and learn,” says Arya Rao, an MD-PhD student and a co-recipient of an award to study AI for clinical training. “I’m grateful to have this experience to take forward into my medical career.”

Bernard Chang

Hospitals affiliated with HMS are also incorporating AI into their clinical workflows. Brigham and Women’s Hospital, for example, is testing the use of an ambient documentation tool that takes clinical notes so that doctors can spend more of their time interacting with patients. As these kinds of tools are implemented, Chang says, they will allow students to focus on talking to patients “instead of constantly turning away to look at a screen. It will also help them shift sooner to higher levels of learning and more advanced topics and things we want our doctors to do, like listen.”

“GenAI is often viewed as taking the humanity out of communication,” says Taralyn Tan, the assistant dean for educational scholarship and innovation within the Office for Graduate Education. “But I actually see it as being a mechanism to reincorporate a human dimension to clinical practice by taking the burden of many administrative tasks off of doctors.”

Rao agrees. “The real beauty of medicine, the reason to be in it, is the bonds you’re able to make with patients,” she says. “If you look at the amount of time doctors spend digging through medical records and writing notes, it’s hours and hours a day. AI can free up some of that time so we can devote it to what we’re really here for, which is helping people.”

Richard Schwartzstein, chair of the Learning Environment Steering Committee and the Ellen and Melvin Gordon Distinguished Professor of Medical Education, sees the value in corralling record-keeping and other such duties, but he warns that taken too far, AI use may lead to deficits in a student’s preparedness. “We need to put it in the context of real-world bedside medicine and how you work as a physician by emphasizing reasoning and critical thinking,” Schwartzstein says. “What does the bedside clinician use it for well? What does the clinician have to be wary of? What does the clinician still need to be good at to use AI appropriately?”


Schwartzstein points out, for example, that AI can help doctors track down pathogens from places around the world that a patient may have been exposed to but that the physician is unfamiliar with. “I can do that now just with the internet,” he says, “but AI can do a broader and faster search. One of the drawbacks, though, is that it doesn’t tell you what sources it’s looking at, so you can’t be sure if the information comes from a journal you trust.”

Double-checking AI’s results is key, he says, as is being able to match the options it provides with a patient’s actual symptoms and history. “AI isn’t good at problem-solving, which is one of the toughest parts of medicine,” Schwartzstein notes. A study from researchers at HMS and Beth Israel Deaconess Medical Center found that although ChatGPT was accurate when making diagnoses, it made more errors in reasoning — tasks like considering why certain questions should be asked rather than just what to ask — than its more experienced human counterparts, doing better than residents but not attending physicians.

Schwartzstein says another area where students may be susceptible to overusing AI is in analyzing lab data. “Interpreting tests and working in inductive mode helps them learn critical thinking,” he says. “The majority of malpractice cases arising from possible diagnostic error are not weird cases. They’re basic cases that people make mistakes on — thinking errors. So while using AI for a case like that would be great for a nurse practitioner in an under-resourced area without the backstop of a physician nearby, it would be problematic for a physician to not have that training and competence in thinking skills.”

Once doctors have some years in practice behind them, though, “having a consistent AI agent overseeing our actions and catching errors would be a huge win,” Kohane contends. “Sometimes rookie errors are made by experienced physicians because they’re tired or not feeling well, so having our work checked by AI might significantly improve mortality and morbidity in hospitals.”

Advancing Innovation in Medical Education

In March 2024, HMS announced 33 recipients of the Dean’s Innovation Awards for the Use of Artificial Intelligence in Education, Research, and Administration, which fund exploration of generative AI across those three domains. Below is a sample of the projects related to medical education.

The future patient persona: An interactive, large language model–augmented Harvard clinical training companion

Arya Rao, Marc Succi, and Susan Farrell

Providing opportunities for students to practice their clinical skills on standardized patients is an important part of medical school, says MD-PhD student Arya Rao. When the “visit” is over, students are graded by both the actor portraying a patient and their professor on their clinical reasoning, communication skills, and more. But the expense and time this takes can limit these opportunities. So Rao, Marc Succi, an HMS assistant professor of radiology at Mass General, and Susan Farrell, associate dean for assessment and evaluation and director of the comprehensive clinical skills OSCE exam, are developing customized large language models that can serve as standardized patients. They are reinforcing these models, which they call SP-LLMs, with material specific to the HMS curriculum. Students will be able to interact with the models using both text and voice, gathering patient histories, obtaining diagnostic information, and initiating clinical management, all while practicing their communication skills.

“One nice feature is that when the visit is over,” says Rao, “the SP-LLM also provides the student with feedback on the encounter, acting as both patient and preceptor. Since the tool is available anytime, anywhere, students can get a lot more practical experience before they start seeing real patients.”

Development of a generative artificial intelligence grading and learning tool

Greg Kuling, Jay Vasilev, Samantha Pullman, Randy King, Barbara Cockrill, Richard Schwartzstein, and Henrike Besche

HMS’s Pathways curriculum track emphasizes independent study and case-based collaborative classwork. Richard Schwartzstein, chair of the Learning Environment Steering Committee, and colleagues have developed a system that enables bulk auto-grading of short-answer questions to summarize students’ strengths and weaknesses, identify conceptual challenges, and suggest tailored teaching strategies. It takes Schwartzstein, who chaired the steering committee that developed the Pathways curriculum in 2015, about eight hours to grade responses to a single open-ended question for all 170 students in a class, not including providing feedback. “I can’t possibly do that with homework,” he says, “but it would be really helpful to them if AI could.” Streamlining the process, he adds, will allow students to do more exercises and hence “get more practice at figuring out whether they’re correctly applying the principles they’ve learned to case studies.”

Harnessing generative AI to create learner-centered and evidence-based course syllabi

Taralyn Tan and Krisztina Fischer

Taralyn Tan, the assistant dean for educational scholarship and innovation within the Office for Graduate Education, and Krisztina Fischer, an HMS assistant professor of radiology, part-time, at Brigham and Women’s, are studying the use of AI in Tan’s Teaching 100 course to develop and pilot a tool that uses generative AI to create syllabi, with the goal of having it adopted by other HMS faculty. In the course, Tan’s students first try to create learner-centered, evidence-based syllabi components on their own, and then they work with AI to do the same thing.

“The class has a very meta dual purpose,” Tan says, “because the students are experiencing it both in their own teaching and from a learner’s perspective.” Tan also allows her students to use AI in the classroom outside of this capstone assignment. “The most common response I get when I ask about this is that they didn’t know how to use it,” she says. “So that speaks to the need for basic competencies for engaging our learners with it.”

“We want to train researchers who know a lot about medicine and understand real problems in health care that can be addressed by AI.”
Isaac Kohane
“HMS is leading the way in trying to integrate these amazing new tools.”

Practical applications

But isn’t AI, too, famously prone to error? ChatGPT’s “hallucinations” — such as providing a detailed but very wrong answer by glossing over the obvious error in a prompt like “What is the world record for crossing the English Channel entirely on foot?” — are the stuff of memes. This problem is expected to improve over time, says Kohane, but even today, he notes, “AI makes different kinds of errors than the ones humans make, so it can be a good partnership.” Not only is the underlying technology improving, but it also massively expands the data pools physicians can draw on to arrive at diagnoses. For instance, a machine-learning model trained on close to one million electrocardiograms was able to perform as well as or better than cardiologists in diagnosing thirty-eight types of conditions. “Imagine what that could be in the hands of primary care doctors,” Kohane says. Such gargantuan datasets can be made even more comprehensive when they’re supplemented by electronic health records (EHRs) and input from patient wearables, Kohane points out. “GenAI doesn’t have to draw only from trials and medical journals,” he says. “If real-life data is gathered with consent and transparency, that extra information can help physicians see things they might not see otherwise.”

That type of data is already being used in a pilot program for internal medicine students at Brigham and Women’s.

“When they’re on the wards,” says Chang, “students can only learn from patients who happen to be in the hospital at that time. But this tool has access both to curriculum objectives and patient EHRs, so it can compare what the student actually encounters with our learning objectives.” Within a few years, Chang believes, such use cases will be commonplace. “Before going into rotations, students will access an app on their phones that will say, ‘Good morning, I suggest you see these three patients,’ because those patients represent gaps in the students’ knowledge.”

The problem of bias in AI training data is also well documented. And as Schwartzstein and colleagues point out in a paper published in the journal CHEST, not only is AI itself prone to reproducing the biases inherent in the human-generated materials it learns from, but at least one study has shown that the loop can circle back, passing AI biases on to humans.

Gargantuan datasets can be made even more comprehensive when they’re supplemented by electronic health records and input from patient wearables.

At the same time, there is evidence that feedback can work in the other direction as well. A recent study from Brigham and Women’s shows that including more detail in AI-training datasets can reduce observed disparities, and ongoing research by a Mass General pediatrician is training AI to recognize bias in faculty evaluations of students.

“There are a lot of biases no matter where the information is coming from,” says Tan, “so we have to keep an attentive eye on that. But AI can be a useful tool in our tool kit for promoting equity in education if we can leverage it in synergistic ways — putting in specific articles, citations, tools we know are effective, for example, and asking it to draw from the resources that reflect the latest in the field while remaining aware of these issues.”

Part of the solution, then, is being aware of the data used to create AI tools. Chang mentions HMS “tutorbots,” which are trained on homegrown curricula. “We’re using ChatGPT as the engine,” he says, “but constraining it using the language and the course information we’ve given it. If we didn’t, what would be special about coming to HMS?”

Given all the changes happening, what will be special about an HMS degree when it comes time for this year’s cohort to move on?

If the students in the AIM PhD program graduated today, “they would be immediately approached with top job offers in all the competitive hospitals and universities,” Kohane says. “I would estimate that 60 percent of the graduates will go into industry. But when they get out in five years or so, they’ll find plenty of green fields in academia and research, too.”

The reason for that lies, in part, in the adaptability of students trained in these technologies, says Tan. “It’s hard to predict how far this will go,” she says. “But tomorrow’s most successful physicians and researchers will be the ones who can harness genAI for innovation and strategic planning. The people who come up with solutions will be the ones who are using these tools.”

Elizabeth Gehrman is a writer in Boston.

Arya Rao
In The Doctor, an 1891 painting by Luke Fildes, a physician attends to a sick child.

The history of tools used to support clinical decisionmaking offers clues to the future of medicine in the age of generative AI

GROWING UP as a nerdy child in the 1980s and ’90s, no television show shaped my worldview more than Star Trek: The Next Generation. While exploring strange new worlds, the bridge crew of the starship Enterprise frequently came into conflict, which they generally resolved with diplomacy, reason, and a fundamental acceptance of human (or alien) dignity. The role of technology, especially the computer, was to enhance — but not replace — this humanity. This was equally true for Dr. Crusher, the medical officer on the Enterprise. For her, the computer was a powerful diagnostic assistant, always available at her command — similar to how I might use a stethoscope in my job as a doctor today. And as with my stethoscope, there was no tension between doctor and computer — it fundamentally augmented her abilities.

It is an understatement to say that this is not the experience of most doctors today. Rather, the computer has become a scapegoat for all that ails the practice of medicine. From the patient perspective, visits have been reduced to sitting in an exam room as the doctor stares at a computer screen, occasionally glancing over the top of the monitor while tapping on the keyboard. And for doctors, multiple studies have shown that we spend most of our day either writing or reading notes on the computer rather than interacting with the patients described in those notes, time that increasingly bleeds into home life — something that we in medicine euphemistically call “pajama time.”

The computer has become a scapegoat for all that ails the practice of medicine.

Since the fall of 2022, when the technology company OpenAI released ChatGPT to the world, the discussions of how artificial intelligence will reshape medicine have sometimes sounded like something out of Star Trek. In fact, such claims are far from new. Attempts to use technology to enhance doctors’ clinical reasoning skills — or sometimes to replace doctors altogether — are as old as modern medicine itself. As a physician and a historian of medical epistemology, I have both studied and experienced firsthand the promises and shortcomings of these technologies.

If you had asked me two years ago whether it was likely that AI would play a significant role in clinical reasoning any time in the near future, I would have said no. In fact, just a few months before the release of ChatGPT, I published my first book, an intellectual overview of medical history, and in the final chapter I discussed machine learning and new AI technologies. Although I was optimistic about the potential of these technologies, I felt that proponents of AI drastically underestimated the complexity of diagnosis and the challenges that AI systems faced.

But I was intrigued enough that over the past two years I have added AI researcher to my list of professional identities. What I’ve found has changed my outlook on the promise of AI to improve clinical reasoning and, more broadly, on the potential role of technology in medicine. Even as I remain anxious about how AI will be implemented in clinical settings, I am also surprisingly hopeful about a future in which it will augment the abilities of physicians and allow us to focus on the uniquely human aspects of medicine.

From leeches to punch cards

My optimism stems in part from my study of past attempts to use technology to enhance the reasoning of physicians. These attempts can be traced back to post-Revolutionary France, where the physician Pierre-Charles-Alexandre Louis pioneered the idea of analyzing structured data to glean insights beyond the reach of any individual clinician, starting with the study of leeches. Bloodletting, especially via leeches, was in vogue in 1820s Paris. Under the influence of superstar physician François-Joseph-Victor Broussais, known as “le vampire de la médecine,” patients would have dozens of the worms placed on their bodies, draining up to 80 percent of their blood volume. Soldiers receiving the treatment were said to appear to be wearing chain mail from all the shimmering leeches covering their bodies. No one really doubted that leeching worked, but for unknown reasons, Louis — now generally considered the first proto-epidemiologist — decided to test the efficacy of leeching by analyzing discrete data points from multiple patients to look for trends, which he called the “numerical method.” He compared records of patients with pneumonia who were bled early in their treatment to those who were bled later and found that the patients who received “life-saving” bloodletting early in their course died at a far higher rate. Although Louis’s findings did not make an immediate impact, he was the first to realize that large amounts of organized information could, if analyzed properly, provide insights that no human ever could.
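At its core, Louis’s numerical method was a comparison of outcome proportions across groups of patients. A minimal sketch of the idea in Python, using invented counts rather than Louis’s actual figures:

```python
# Louis's "numerical method": compare death rates between patient groups.
# The counts below are hypothetical, for illustration only.
def mortality_rate(deaths, total):
    """Fraction of patients in a group who died."""
    return deaths / total

early = mortality_rate(18, 41)  # patients bled early in their illness
late = mortality_rate(9, 36)    # patients bled later in their course

print(f"bled early: {early:.0%}, bled late: {late:.0%}")
# A higher death rate in the early-bleeding group is evidence
# against the treatment, no matter how confident its proponents are.
```

The insight is not the arithmetic, which is trivial, but the decision to aggregate records across many patients before drawing a conclusion.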

Over the following decades, Louis’s ideas became mainstream, with routine collection of vital signs and causes of death, as well as a proliferation of diagnostic tests. By the early twentieth century, reformers dreamed of using machines to automate the collection of all this data. The earliest reference I’ve found to the concept of what we would now call a medical artificial intelligence comes from the writer George Bernard Shaw, who in 1918 imagined a diagnostic machine similar to the counting machines that had changed finance.

“In the clinics and hospitals of the near future,” Shaw wrote, “we may quite reasonably expect that the doctors will delegate all the preliminary work of diagnosis to machine operators as they now leave the taking of a temperature to a nurse, and with much more confidence in the accuracy of the report than they could place in the guesses of a member of the Royal College of Physicians.… From such sources of error machinery is free.”

In this lithograph (left), French physician François-Joseph-Victor Broussais instructs a nurse to continue bleeding a patient. “But I no longer have a drop of blood in my veins,” reads the text below the image.

Pierre-Charles-Alexandre Louis (above) conducted research to analyze the use of bleeding to treat pneumonia.

The routine collection of data to take better care of patients permeated the medical zeitgeist, including in medical education. At HMS in the early twentieth century, Walter Cannon and Richard Cabot introduced a new didactic structure, called the clinicopathologic conference — commonly referred to as a CPC. Adapted from legal education, this new method of teaching presented learners with a “mystery case” with all the information already collected and collated. Each case followed the reasoning of experts as they developed a final diagnosis. The implication was clear — medicine was fundamentally a data-sorting task. If enough information was available and organized properly, a diagnosis would inevitably follow. CPCs soon spread around the world, and they are still published in the New England Journal of Medicine to this day.

The technological and mathematical advances fueled by World War II moved solutions of the sort Shaw imagined from fiction to reality. Inspired by attempts of the U.S. Army to screen draftees during the war effort, Cornell psychiatrist Keeve Brodman developed a systematized list of symptoms, now called the review of systems, which most people have dutifully filled out for their doctor or dentist. Brodman was also the first to describe the concept of an algorithmic doctor, in 1946, the year electronic computers were invented.

In the 1950s, after systematically going through hundreds of CPCs published in the New England Journal of Medicine, Robert Ledley and Lee Lusted — arguably the founders of medical AI — described a process by which the new fields of epidemiology and biostatistics would soon replace traditional clinical reasoning. Patients would give their history on punch cards, and a computer system would print out a list of diagnoses in order of probability. As information from additional tests was included, the probabilities would change, until eventually the computer reached a final diagnosis.
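The process Ledley and Lusted envisioned, in which a ranked list of diagnoses is revised as each new test result arrives, is essentially sequential Bayesian updating. A minimal sketch, with hypothetical diagnoses, priors, and likelihoods chosen purely for illustration:

```python
# Sequential updating of diagnosis probabilities, in the spirit of
# Ledley and Lusted's punch-card scheme. All numbers are hypothetical.
def update(priors, likelihoods):
    """Multiply each prior by the likelihood of the new finding,
    then renormalize so the probabilities sum to 1."""
    posterior = {d: priors[d] * likelihoods.get(d, 0.01) for d in priors}
    total = sum(posterior.values())
    return {d: p / total for d, p in posterior.items()}

priors = {"appendicitis": 0.3, "gastroenteritis": 0.5, "cholecystitis": 0.2}
# New finding arrives; likelihoods are P(finding | diagnosis):
finding = {"appendicitis": 0.8, "gastroenteritis": 0.2, "cholecystitis": 0.1}

posterior = update(priors, finding)
# Appendicitis now tops the list; each further test result would
# feed the posterior back in as the next round's prior.
```

Each punch card of answers would, in effect, trigger one such update, narrowing the list until a final diagnosis emerged.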

There was an incredible sense of optimism that diagnosis would soon be done entirely by computers, and, indeed, a steady drip of innovations seemed to support this outlook. There were drug-interaction checkers, expert systems that could pick antibiotics to treat almost any infection, and decision-support tools that could diagnose appendicitis on submarines. Then there’s my personal favorite: INTERNIST-I, an AI system designed to solve CPCs better than any individual human could by mimicking the diagnostic powers of a single person, Jack Myers, the chair of medicine at the University of Pittsburgh at the time.

To some extent, many of these systems actually worked in the real world, such as AAPHelp, a computer system that assisted in the diagnosis of appendicitis. In a multicenter randomized controlled trial of AAPHelp, negative laparotomies fell by almost half, error rates decreased from 0.9 percent to 0.2 percent, and mortality fell by 22 percent. But even when they worked, these systems were only effective in limited domains, such as for patients with acute abdominal pain in the case of AAPHelp. The larger dream of a generalist AI doctor fell by the wayside, replaced by the more limited goal of providing clinical decision support, and the field of AI in general fell into what is commonly referred to as an AI winter. In fact, a review of diagnostic AI systems from 2012 shows minimal gain in performance over the systems of the late 1980s.

It was no wonder that they couldn’t make the statistical leap to understand that bloodletting was killing their patients.

Understanding the clinical mind

Even as progress stalled on medical AI, the field of clinical reasoning saw an important shift with the adoption of a new framing — cognitive psychology, which, like computer science, had emerged from the cybernetics movement of the 1950s. Inspired by the work of psychologists Daniel Kahneman and Amos Tversky, a new generation of researchers sought to gain a better understanding of the human mind, in some instances in order to build AI systems and in others to improve the training of doctors. They learned that medical diagnosis is subject to many of the same psychological principles as other domains of human decision-making. Much of what appeared to be interrogation in the style of Sherlock Holmes was actually what we would today call System 1 thinking — fast, efficient heuristics, constrained by the same biases and failures of other System 1 processes. For example, studies in the emergency department have shown that doctors are less likely to test for a blood clot in the lungs if a history of heart failure is mentioned, a phenomenon called anchoring bias, in which early information is valued more than later pieces.

Humans, it seemed, did not think like computers, and doctors were no different. The calculation of probabilities was only a small part of how doctors formed a diagnosis. It was no wonder that the contemporaries of Pierre-Charles-Alexandre Louis couldn’t make the statistical leap to understand that bloodletting was killing their patients. Their brains simply didn’t function that way. These insights helped explain, at least in part, the failures to create an AI that could perform clinical reasoning — the early systems were based on a faulty understanding of how doctors reason. If AI systems were to be trained to think like doctors, mathematical probabilities were clearly not the way forward.

When I came of age as a physician in the early 2010s, this cognitivist understanding of clinical reasoning was dominant. One of the most important precepts of our field was that clinical decision-making was driven by the quirks of how human brains store and access information. It was almost in the realm of science fiction to think that a computer could model that messy reality. So, if you wanted to improve clinical reasoning and diagnosis — and who wouldn’t, given the incredible burden that missed or delayed diagnosis has on our patients — the focus was on educating human doctors.

But at the margins of the field, AI continued to be part of the discussion. Advances in computer power, among other factors, helped researchers exit the AI winter, and new types of machine-learning algorithms were increasingly used in medicine. These were algorithms with very specific uses, such as predicting whether a patient in the emergency department might have sepsis. And while they had significant limitations — and often dramatically underperformed when implemented — their potential spurred the U.S. Food and Drug Administration to create a regulatory pathway for such technologies. By the late 2010s, we in the diagnosis world again spoke of AI as a technology that might be able to help physicians and patients, though the timing was always projected to be at some point comfortably in the future.

From language, reasoning?

While I was likely more aware of these technologies than most physicians in the field of clinical reasoning, I’m rather embarrassed to say that large language models (LLMs), machine-learning algorithms that take human text inputs to create realistic-sounding text outputs, were not on my radar at all for having an impact on clinical reasoning. I had used GPT-3, the model preceding ChatGPT, almost a year before ChatGPT was released publicly, but while it had an eerie ability to produce poignant and sometimes absurdist poetry, there did not seem to be anything inherent in language and the association between words that would allow such a system to assist in diagnosis.

When OpenAI released ChatGPT to the public in the fall of 2022, I tested it with clinical cases to assess the model’s clinical reasoning. And while it was creative and showed what seemed to be flashes of insight, it frequently made up information (referred to as hallucinations) and its context window (the amount of text it could process in any single prompt) was too small to be useful to a doctor.

Then, in March 2023, OpenAI released an updated version of ChatGPT based on a more powerful model, GPT-4. I immediately stress-tested it with a complex clinical case — and was taken aback. There were no obvious hallucinations, and the model’s reasoning and diagnoses seemed to mimic those of an expert physician.

Punch cards were widely used to process data in the mid-twentieth century.

“With just basic prompting and my exact thoughts from when I was managing the patient, the model provided a comprehensive differential diagnosis.”

Next, I took a case of my own where I initially made an incorrect diagnosis, which was later corrected. I took my summary of the case, removed all identifying information, and gave it to ChatGPT, instructing it to provide a list of potential diagnoses. As I read the model’s response, I realized I was looking at something new in the history of diagnostic AI. With just basic prompting and my exact thoughts from when I was managing the patient, the model provided a comprehensive differential diagnosis. The first diagnosis the model suggested was the correct, actual diagnosis that I had been wrong about. The second was what I had thought the patient had. What if I had had this tool six months earlier when I missed the diagnosis? How would it have changed the care of my patient?

Though I was initially impressed by ChatGPT’s apparent clinical reasoning abilities, I wasn’t going to be swayed by a single anecdote. I adapted Ledley and Lusted’s chosen methodology of using CPCs and ran an experiment, eventually publishing the results in JAMA. The study found that GPT-4, with no additional training on medical resources, had an emergent ability to make helpful differential diagnoses equivalent to or better than any system that had been developed previously — and far better than even the best human physicians.

A blizzard of similar studies confirmed these results. The more advanced LLMs had what appeared to be an understanding of the connections between diseases and how their likelihood varied depending on test results and other factors, a level of understanding superior to that of humans, despite not having access to the real world or localized epidemiologic information. The models performed well not just on CPCs but also on real cases taken directly from medical records. Patients started using these tools as well, in some cases to correct missed diagnoses. The models could present reasoning better than humans, make more accurate diagnoses, and even, in an impressive study by researchers at Google, collect a clinical history directly from a patient in a true blinded Turing test.

“My research on AI models, as well as my work as a physician and historian, leave me optimistic about what is to come.”
In addition to his clinical practice and research, Adam Rodman (standing) teaches residents at Beth Israel Deaconess, such as (from left) Andrew Bueno, Christopher Dietrich, and Aliya Dincer.
JOHN SOARES

How is this possible? How can an AI algorithm that basically uses a huge corpus of text to predict the next word in a sentence possibly outpace humans in diagnostic ability? The answer likely lies in the fact that the way these models make text predictions is remarkably similar to our understanding of how physicians make decisions, an understanding that grew out of the cognitive psychology movement.

Doctors store diagnostic information in semantic groupings that we call scripts. When we see a patient with acute shortness of breath, it activates diagnoses such as a pulmonary embolism or a massive heart attack. A patient with chronic progressive shortness of breath brings up a completely different list of diagnoses, such as heart failure or interstitial lung disease. As doctors gain experience, we refine these scripts. This is remarkably similar to the statistical associations between the strings of tokens that undergird LLMs. Although LLMs lack real-world experience, they also encode far more knowledge than I will ever have. And perhaps most importantly, they never have to admit five patients in a row at 2 a.m. while fueled only by a pot of coffee.
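The analogy between illness scripts and token associations can be made concrete: both amount to ranking candidates by learned association strength. A toy sketch, with invented diagnoses and weights standing in for what a clinician (or a model) learns from experience:

```python
# Toy "illness script" activation: a presenting feature maps to candidate
# diagnoses ranked by association strength, loosely analogous to how an
# LLM ranks next tokens. All weights are invented, for illustration only.
SCRIPTS = {
    "acute shortness of breath": {
        "pulmonary embolism": 0.35,
        "myocardial infarction": 0.30,
        "pneumothorax": 0.15,
    },
    "chronic progressive shortness of breath": {
        "heart failure": 0.40,
        "interstitial lung disease": 0.25,
        "COPD": 0.20,
    },
}

def activate(presentation):
    """Return candidate diagnoses for a presentation, strongest first."""
    scripts = SCRIPTS.get(presentation, {})
    return sorted(scripts, key=scripts.get, reverse=True)

print(activate("acute shortness of breath"))
# Different presentations activate entirely different differential lists,
# just as different prompts elicit different token distributions.
```

Experience refines the weights; in an LLM, training on a vast corpus plays the analogous role.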

This gets us to another reason these technologies have so much promise. As Louis’s study of leeches showed, and as an abundance of research has confirmed, human clinicians — myself included — are deeply flawed. The cognitive load of modern clinical environments is far too much for any human being to realistically manage. The full text of the charts of all my patients when I come on service with my residents is longer than Melville’s Moby Dick. Doctors are interrupted every few minutes by pages or secure chat messages. We are subject to the same cognitive biases that affect all humans, such as anchoring bias and confirmation bias, where we give greater weight to evidence that supports what we already think. To adapt George Bernard Shaw’s phrase, from such errors, LLMs are free.

Or are they? There are doctors and researchers I greatly respect who think I’m deluding myself. Aren’t these models just “stochastic parrots” cosplaying a doctor for my benefit? And what about the persistent concerns about racial, gender, and ethnic biases encoded in their pretraining data, concerns that have been validated by multiple rigorous studies?

Then there’s perhaps the most serious concern: hallucinations, incorrect statements that are an artifact of the way models make their text predictions. There are methods to mitigate hallucinations, and we’ve already seen improvements, but there is no reason to think this problem will ever be completely solved.

A strange new world

Despite these real and serious drawbacks, my research on AI models, as well as my work as a physician and historian, leave me optimistic about what is to come. I can’t help but hope that AI moves us at least somewhat closer to the Star Trek dreams of my childhood — a system that can make humans more human and help doctors take better care of our patients.

In the short term, this would most likely consist of a second opinion consult service. Just as data suggest that human second opinions improve clinical care, an AI second opinion placed as an order in the electronic health record could do the same. This work is still in its infancy, and anyone who says differently has something to sell. Important questions remain. At which point in the diagnostic process is an opinion most helpful? How can outputs be monitored to protect patient safety? Which patients would benefit the most? But building such a system right now requires only careful study — we already have the technology. Looking ahead a few years, I can see the possibility of having hundreds or even thousands of AI agents deployed across the health care infrastructure, reading our notes, giving us feedback, recommending personalized education, and even listening in on our encounters with patients, prodding us to ask better questions or consider a diagnosis we might have forgotten.

Never before have I been so hopeful about a future where technology truly helps me be a better human being.

It sounds like science fiction, but this future already exists. I’ve used it. I recently took care of a pregnant patient with abdominal pain so severe that she landed in the hospital. The specialists were stumped, especially because her pregnancy meant that many of our advanced tests could not be used. Her lab values were worsening, despite efforts to dig deep into the medical literature and consultations with multiple specialists. My patient was despondent. Finally, while pacing around her room late one evening, I asked her if she minded if I consulted an AI. She enthusiastically agreed, and I opened the ChatGPT app. Being careful not to disclose any protected health information, she and I had a conversation with the model, going over our questions. The AI gave a thorough list of everything that I could be missing, and then the patient and I went over each possibility to talk about what fit her case and what did not.

In the end, my initial hypothesis was right. My patient improved and was discharged home. ChatGPT, of course, was also right. But what stood out to me about the encounter was how using the technology affected how my patient and I felt about her care. It increased our confidence and engaged us more fully as people. In other words, it made our encounter more human.

Large language models are not a panacea for medicine. In fact, I’m pessimistic about how some of these technologies are currently being rolled out. And these models will not fix larger problems, such as the cost of care or access for vulnerable populations.

But never before have I been so hopeful about a future where technology truly helps me be a better human being instead of trying to convert me into a data entry clerk whose primary job is to collect information. That’s a strange new world that I can’t wait to explore.

Adam Rodman is a general internal medicine physician, educator, researcher, and writer at Beth Israel Deaconess Medical Center and an assistant professor at HMS. He lives in Roxbury, Massachusetts, with his wife and two sons.

Neural Network

Cortex in Red, a depiction of the cerebral cortex by artist Greg Dunn.
Twenty years after meeting at HMS, two alumni are at the leading edge of efforts to use minimally invasive brain-computer interfaces to improve human health

CRAIG MERMEL, MD ’12 PHD ’10, was nervously waiting for his HMS entrance interview when a newspaper headline about the Mars Rover caught his eye. It was January 2004, and the spacecraft had just capped a seven-month voyage by landing on the red planet. The news sparked a conversation between Mermel and another jittery applicant — Ben Rapoport, MD ’13 — who was sitting nearby in the Dean’s lounge at Gordon Hall. “I remember thinking it was nice to have a distraction,” Mermel says. “Then we each got called into our interviews, and I thought, ‘Probably won’t see him again.’ ”

Mermel thought wrong. Twenty years after they were accepted to the Harvard-MIT MD-PhD program, the longtime collaborators are leading efforts to create brain-computer interfaces (BCIs) that connect neurons to the digital world. Modern electrode arrays can tune in to the activity of “hundreds or thousands of neurons at a time,” says Rapoport, now a neurosurgeon at Mount Sinai Hospital in New York City. By decoding how intentional signaling in the brain gives rise to corresponding physical movements, scientists can design systems that allow people to operate computers with their thoughts alone. No longer confined to the realm of science fiction, BCIs are increasingly being tested in human studies. In 2021, Rapoport launched Precision Neuroscience, a BCI start-up with aspirations to restore lost functioning in patients with paralysis, and Mermel joined the company in 2022. BCIs have transformative potential for treating movement disorders. As of this writing, the company’s technology has been tested in eighteen patients, and Rapoport anticipates an early version of the technology could reach commercial markets in 2025.

BCI background

The term “brain computer interface” was coined in 1973 by Jacques Vidal, a computer scientist at the University of California, Los Angeles, who proposed that electrical signals in the brain might one day be used to control prosthetic devices. Researchers have since gone a long way toward realizing Vidal’s early vision. In 2003, a team at Duke University showed that monkeys implanted with microelectrode arrays could consciously control robotic arms. A year later, a young man named Matt Nagle became the first paralyzed person to benefit from BCI technology. A star athlete in his teens, Nagle had been injured during a knife attack that severed his spinal cord. But using a BCI, he could control a computer cursor and move a prosthetic hand.

The type of implant inserted into Nagle’s brain “marked a turning point for neuroscience and what became the BCI field,” Rapoport says. Called the Utah array, it was studded with up to one hundred needlelike electrodes that penetrate the brain’s outer layer. Electrodes used previously in neuroscience varied from one lab to the next. There wasn’t any standardized process for manufacturing the devices, “nor was there a standardized way of processing the signals,” Rapoport says. The development of Utah arrays changed that. Introduced in 1989, the devices were uniformly fabricated using the same methods used to produce microchips. Utah arrays have since been implanted in dozens of people and were adopted by BrainGate, a broad-based, multi-institutional effort that grew out of research at Brown University in the late 1990s and now includes researchers from HMS and Massachusetts General Hospital.

Converging advances in implant design and high-performance computing increasingly allow scientists to “translate the brain’s intrinsic, electrical language to something that’s intelligible by a human or a machine,” Rapoport says. That’s crucial to achieving the technology’s full potential, but it also introduces tricky ethical challenges, including “concerns over brain privacy,” says Gabriel Lázaro-Muñoz, an HMS assistant professor of psychiatry at Massachusetts General Hospital and a member of the Center for Bioethics at HMS. Mermel agrees. Neural data belong to patients, he says, and should be used only in accordance with their consent and for their direct benefit. “This is an issue we are taking very seriously at Precision,” Mermel says.

Applied science

Rapoport and Mermel’s work in this field grew out of their time as students in the Harvard-MIT Program in Health Sciences and Technology (HST). The program has educational tracks leading to an MD from HMS and a PhD in medical engineering and medical physics from Harvard or MIT. Its mission is “not just to train good doctors but also to teach students to think about medicine as a place where there are broader problems to solve,” says Matthew Frosch, MD ’87 PhD ’87, the associate director of admissions for the HST program. Incoming students are anchored in rigorous science, Frosch says, “often with deep quantitative mathematics skills.”

For Rapoport and Mermel, HST was an optimal fit. A math and biochemistry major during his undergraduate years, Mermel had a long-standing interest in the quantitative side of medicine. For his PhD research at Harvard, he developed computational tools for finding new drug targets in complex genomic datasets. Rapoport attended MIT for his doctorate, where he began his work on implantable electronics for brain chips — the subject that still motivates him today. The pair also honed their business skills during graduate school by creating a company that they subsequently sold to Apple. Called Simbionics, it was built around an analytics platform that used wearable sensor data to monitor cardiovascular fitness.

Ben Rapoport

Mermel wound up taking a job at Apple after finishing his residency in pathology at Mass General. Rapoport, meanwhile, completed his residency and fellowships in neurosurgery at New York-Presbyterian/Weill Cornell Medical Center. He continued with his surgical career but was also recruited in 2016 to join Elon Musk and other scientists in cofounding Neuralink, a BCI start-up.

Neuralink developed an entirely new type of implant that contains more than a thousand electrodes distributed over polymer threads so thin they have to be stitched into the brain by a specialized robot. More electrodes mean more readings from individual cells, so compared to the Utah array, Neuralink’s implant affords a higher-bandwidth connection to the neuronal chatter. However, all penetrating electrodes share certain drawbacks. Scar tissue can form around them, diminishing signal quality over time, and moreover, “all penetrating electrodes damage the brain,” Rapoport says. “If you add more electrodes to scale bandwidth, then the damage increases. And if you’re doing that with a device that’s supposed to help vulnerable people — then there’s a bit of a paradox there.”

Above left, as part of the process of training a device developed by Precision Neuroscience, a patient moves his hand while sensors capture his brain signals. Above, the Layer 7 Cortical Interface, an implant designed to sit on the surface of the brain.

That paradox ultimately drove Rapoport to leave Neuralink in 2018 so he could pursue a different approach. The prevailing belief at the time was that penetrating electrodes were needed to achieve optimal signal strength. But Rapoport was encouraged by mounting evidence that high-quality readings could be obtained from the brain less invasively. Indeed, several other BCI companies have gone so far as to develop sensor-equipped helmets that measure neuronal signals from the surface of the scalp. Rapoport explains that conscious thought, movement, sensation, vision, and memory are all computed in the brain’s outermost layer, a thin mantle of tissue called the cortex. Whether electrodes read from within the neocortex or directly on top of it, “the distances are still very small,” he says. “And the surface activity is what we’re interested in.”

Rapoport soon teamed with Michael Mager, an investor and Harvard alum, to form Precision Neuroscience. And he asked Mermel, who was leading a team working on AI-assisted medical imaging at Google, to join as a board member. Mermel had little experience in either BCIs or neuroscience at the time. “My background was all in applications of data and machine learning to different types of health domains, and the practical matters of building products and getting them out into the world,” he says. But Mermel was intrigued by the prospect of using BCI technology to access very detailed information on cognitive processes such as movement and sensory processing. Treatments for neurological diseases had long been hampered by the lack of technology to extract high-quality data about the brain, he says, and “this is something that I saw companies like Precision helping to solve.” Mermel soon joined the company as president and chief product officer.

Meanwhile, the company settled on its core technology: a flexible implant with 1,024 electrodes that sits directly on the surface of the cortex. Called the Layer 7 Cortical Interface, it was designed to be implanted through a one-millimeter “micro-slit” cut surgically into the skull. Brain signals go to a decoder equipped with software “that’s sort of like Google Translate for brain activity,” Mermel says. The decoder listens in as the brain thinks about moving or speaking and then converts cognitive intentions into digital commands.

Practical technology

Precision Neuroscience launched officially in 2021 after raising $12 million in start-up funding. The company began testing in humans two years later — first at West Virginia University’s Rockefeller Neuroscience Institute and then at the Icahn School of Medicine at Mount Sinai and the University of Pennsylvania’s Perelman School of Medicine. During studies so far, Precision Neuroscience has teamed with neurosurgeons who agree to evaluate the technology while operating on patients for unrelated procedures. It ordinarily takes longer for BCI companies to enter the clinic. But the Precision implant is reversible — it can be removed without causing damage — and so the company faces a shorter path to regulatory clearance.

Iahn Cajigas, MD ’09, a neurosurgeon at Penn who has known Rapoport and Mermel since his own time as an HST student, has tested it in five patients so far. Surgical patients in his operating room engage with the technology while they’re awake. In one case, a man undergoing deep-brain stimulation for Parkinson’s disease was implanted with the device and then asked to perform hand exercises while wearing a sensor-equipped glove. Researchers working with Cajigas matched the patient’s brain signals to his movements, thereby “ground-truthing the data so that what we predict matches with reality,” Cajigas says. The implant is removed when the surgery is complete.

Iahn Cajigas
JAY WATSON

In the near term, Precision plans to bring to market a wired version of its device — one in which the cortical interface and back-end electronics are directly connected — as a diagnostic tool for neurological conditions, including epilepsy. The ability to record brain activity at high spatial and temporal resolution “will allow us to redefine our understanding of how the brain coordinates complex behaviors,” Rapoport says. Then, within the next two years, the company plans to submit a wireless system for regulatory approval, geared this time to patients with paralysis. The implant sits between the skull and scalp, where it amplifies and records neuronal signals. The signals are transmitted to software that controls a computer cursor or a prosthetic limb.

“Right now, if you’re paralyzed, it’s very difficult to operate a computer at the speeds that are necessary to hold a typical desk job,” Mermel says. “We want to restore a person’s ability to do that so they can be citizens of the digital world.”

All implantable devices need to undergo cybersecurity risk assessments as part of the FDA-approval process. But Mermel says the company plans to go further by taking additional steps to ensure that neural data aren’t used in ways that haven’t been explicitly authorized by patients. “Core to this issue is building data security into the device from the ground up,” he says. “That being said, we are not reading the person’s inner thoughts in a way that they can’t control or wouldn’t have agency over.”

The BCI industry is rapidly growing, with investments predicted to increase from about $1.7 billion in 2022 to $6.2 billion by 2030, according to recent projections from the World Economic Forum. Research is focused in multiple domains, even gaming. Musk has famously stated that high-bandwidth connections to the brain may one day allow people to mesh with artificial intelligence. Rapoport downplays these kinds of aspirations. “I don’t want to stray too much into the science fiction world of neural interfaces as elective procedures for able-bodied people,” he says. “What we’re working on is medical technology for the foreseeable future.”

Charles Schmidt is a writer based in Maine.

Untangling Health Care’s Twisted Roots

Elizabeth Comen on how medicine’s male-dominated culture has shaped the care women receive

IN THE MID-1990s, UP TO 40 PERCENT of menopausal women in America were prescribed hormone replacement therapy (HRT), many of them indefinitely. Doctors urged even asymptomatic women to take HRT, telling them it would decrease their risk of cardiovascular disease and dementia, prevent osteoporosis, and improve their overall sense of well-being. Then, in 2002, the Women’s Health Initiative study showed that prolonged use of HRT increases women’s risk of heart disease, stroke, and breast cancer. Prescriptions for HRT plummeted. Though the study has widely been considered flawed, today under 5 percent of menopausal American women take HRT. Of those who do take it, most do so for less than five years, even if they feel it benefits them and wish to continue taking it.

The rise and fall of HRT might be viewed as a simple case of medical knowledge advancing — we did better when we knew better, to paraphrase Maya Angelou. But Elizabeth Comen, MD ’04, a breast oncologist at NYU Langone Health, sees a darker, parallel story in which sexism and misogyny have negatively affected the medical care women receive, the extent to which women’s autonomy regarding their own bodies is respected, and the quality of research on women’s health.

Elizabeth Comen

In All in Her Head: The Truth and Lies Early Medicine Taught Us About Women’s Bodies and Why It Matters Today, Comen exhaustively explores how male-dominated culture has informed women’s health care from Hippocrates to the present day. She’s especially interested in the ways women’s health concerns have so often been misunderstood, ignored, or dismissed as anxiety. Comen, who shares stories from her own life and medical practice in her book, acknowledges that she’s been on the receiving end of sexism in medicine as a patient and as a trainee and that at times she’s perpetuated it unintentionally as a doctor.

You were a history of science concentrator as an undergraduate at Harvard. Were the seeds of this book planted back then?

Medicine doesn’t exist in a vacuum. I have had a long-standing interest in how science is reflected in the humanities, back and forth, that porous membrane. When I was in college, I worked in a lab at Dana-Farber studying the estrogen receptor and wrote my undergraduate thesis in part on the history of breast cancer treatment and the estrogen receptor. But I never imagined, even if I went into the field of breast cancer, that I would be a women’s health advocate. The idea of women’s health was really just gynecology in my mind. I was a product of the reductionist thinking that I’m trying to fight against now — that we’re not just “our boobs and our tubes” but we’re head-to-toe different. Our biology and our presentation of disease are extraordinarily different from men’s.

The scope of this book is enormous. You cover every organ system and millennia of history. Clearly you were bringing your humanities background to this effort. How did you research and write it?

In bite-size pieces. I think there had to be some bravery to take on the history because I’m not a professor of the history of science. I thought that if I divided it up, like we do in medical school, with each chapter devoted to a single organ system or function, I could make the research more manageable.

I started with nineteenth-century Boston physician Horatio Storer. I’d read about how Storer treated a woman who was married to an older man and who presumably had a higher libido than her husband. Storer diagnosed the woman with nymphomania and recommended that she be committed to an asylum if his barbaric treatments didn’t work. Not surprisingly, he condemned his own wife to an asylum for “catamenial mania” (menstruation-induced insanity). Yet Storer believed himself to be an early and passionate advocate for women’s health and founded Boston’s first gynecological society.

I thought this was a fascinating example of how the culture of the times intersected with the medical care of women. I collected more compelling historical stories, those of my own patients, as well as interviews with experts. I thought about which diseases or syndromes or conundrums — because some of them didn’t have names — women present with and asked who laid the groundwork and the blueprint for what we think today. Then the book really wrote itself.

You write that modern medicine has grown on the “twisted roots” that were established in the nineteenth and early twentieth centuries and that we’re still reaping the fruits of beliefs about women that we now think of as ridiculous, such as that menstrual blood is toxic. How are these roots manifested now?

A clear example is how we continue to dismiss women’s pain. You do a skin biopsy in a dermatologist’s office and they numb you up before. Yet many women get IUDs placed or have an endometrial biopsy without proper pain management. There’s a long-standing idea that because we endure childbirth we’re meant to endure pain. And this idea, by the way, pervades what women physicians are expected to do in academic settings. We’re the peacekeepers. We volunteer for things. We engage in unpaid labor in and out of the home. We endure.

You write in your book about the long-standing medical view of women as “not men,” of female bodies as defective or incomplete versions of male bodies, as “other.” Do you think this othering, this dehumanization, contributes to inadequate pain management as it does for people of color?

There has never been, in the history of Western medicine, a belief system that extolled women’s bodies, women’s intelligence, as being as powerful as men’s. This framework still infiltrates all of medicine.

The issue of control comes up over and over in your book. You write of women sent to asylums for reading novels and how women were more likely to receive lobotomies than men. You also describe many instances, including in the story of Horatio Storer, where a woman whose behavior is troubling or inconvenient to men is pathologized and then locked up or subjected to unnecessary and cruel procedures. Is one of your themes that the pathologizing of women has been a means of controlling them?

Absolutely. I think we see that even today in how women’s activity is limited. We aren’t encouraged to become strong and participate in sports the way men are. Doctors learn very little in medical school about strength training for women or the type of exercise that women should do — about what is actually critical for their mobility, their longevity, and their bone health. In the nineteenth century women were told they’d get an ugly “bicycle face” if they rode bicycles. We still hear about activities that don’t “look feminine,” such as weight lifting. The medical profession has long been involved in keeping women smaller, promoting the idea that we’re not strong enough or powerful enough to engage in all aspects of the world professionally and personally.

You write that when you were fourteen your injured knee wasn’t repaired because the surgeon you saw advised you to wait and see how active you would be.

Yes, and for me that was devastating. It had huge psychological consequences. I was a dancer, I played volleyball and tennis, and

that instability in my knee limited me tremendously. And when I did finally have my knee reconstructed, I didn’t have rehabilitation specific to my body. My physical therapist specialized in treating male ice hockey players. I didn’t have the kind of physical therapy we would provide for a woman with an ACL injury today.

You write quite charitably about how some of these doctors in the nineteenth century and beyond were products of their times and some of them clearly meant well — though some clearly did not. But on the other hand, some of the theories and practices they espoused were so absurd it’s hard to understand how intelligent and educated men believed in them. How do you account for that?

I tried to lean in with compassion and empathy in the book. For all my advocacy, for all my desire to be an equitable, compassionate doctor, I know I’ve fallen prey to many of these stereotypes that are woven into my own behavior, that I’m still unpacking. Researching the history of women’s health, it’s enraging and saddening to read and imagine what women must have gone through physically and psychologically. My hope is that the anger translates into constructive change moving forward. I’m an oncologist and optimistic by nature. My compass is focused on the future.

Speaking of the future, despite the fact that there are now more women than men enrolled in medical school and more women than men doctors under thirty-five in the U.S., the culture of medicine remains “male.” How can we promote gender equity for both patients and health professionals going forward?

I think what you’re bringing up isn’t so much a gender war, it’s a spiritual war: not valuing what we traditionally call “feminine” attributes, these ineffable qualities that are so important to what it means to care for somebody, that both men and women can engage in, but that we have not historically valued in medicine — compassion, empathy, listening. We have data showing that women demonstrate these qualities better because we’ve been culturally ingrained to listen, to show compassion. But it is not always something that’s valued in our academic medical system or by payers. There aren’t more insurance dollars for holding a dying patient’s hand. There may be money for developing a new drug or a surgical procedure or an infusion, but if you listen longer, is that considered productive? You actually lose money for listening longer. What have we done to denigrate empathy from a productivity standpoint? And in the process the North Star of caring for the patient gets lost.

I think the problem we have today is that we have this objective, quantifiable system of medicine that has fragmented the patient. And so no matter how many women we have in medicine, it’s not really about that. It’s about having a diversity of approaches to medicine that equally and equitably have a place in leadership, which we don’t see today. We have many women entering medicine but just as many women leaving medicine because of burnout, because they don’t feel valued, or because they’re entering fields like primary care or palliative care or pediatrics, and those fields have trouble getting remunerated. Continuity-of-care fields that might be really attractive for certain types of personalities ultimately aren’t valued by the system of medicine.

Finally, as we speak, a woman is running for president. You point out that historically women were thought to be too “hormonal” to have such positions of leadership (as if, you point out, men don’t also have hormones). Do you see in these attacks echoes of sexism in the history of medicine?

Of course! If you look throughout history, what was valued and focused on in the health of women was predominantly the idea that we are vessels, that we are valued for our childbearing capacity, whether it was being excluded from NIH trials because you couldn’t have women of childbearing age, or you couldn’t run a marathon because your uterus would fall out, or whatever it may be as it related to your primary god-given function on earth to reproduce. And when you see the criticism of a potential female president, whether it be for her race or because she hasn’t given birth to children, it’s fascinating and at times tragic to see how a woman is valued (or devalued) by society. We have not escaped the caged limits of our past even when it comes to potentially electing a woman president.

The problem is that so many things women face are unique to their biology — we are not small men. For example, 80 percent of autoimmune disease cases occur in women. Those presentations were not as common in the men who were treating them. It’s human nature to be interested in that which could affect you or your peers. Look at frozen shoulder, which disproportionately affects women. The historic treatment for that was benign neglect. Imagine men not being able to use their arms to throw a ball or work on a farm. We would never have treated that with “there, there you’ll be fine, just suffer for two years and it will get better” when we know there are more targeted, direct, and humane treatments for that.

It’s human nature that when something is outside our own experience, we say it can’t be something that I don’t know or I haven’t figured out — it must be something wrong with you. When you’re faced with something you don’t know, it takes bravery to be curious. The insidious incuriosity about women’s health by a male-dominated medical profession is in part why we are where we are today. We need to remain curious, thoughtful, and empathetic to have a more equitable future.

Suzanne Koven is an associate professor of medicine and global health and social medicine at HMS and writer-in-residence at Massachusetts General Hospital, where she also practiced internal medicine for more than 30 years. She is the inaugural recipient of the Valerie Winchester Family Endowed Chair in Primary Care Medicine at Mass General. Her memoir, The Mirror Box, will be published in 2026.


A conversation with Marinka Zitnik, assistant professor of biomedical informatics in the Blavatnik Institute at HMS

You work at the intersection of machine learning and biomedicine. What ignited your interest in this field?

As a student I was deeply interested in mathematics, but I also had a strong desire to become a doctor and help people. Early in college, I designed an algorithm to predict candidate genes in a species of slime mold that activate antibacterial pathways that trap and kill bacteria. That was my first direct experience of the impact that computation can have on biomedicine. Since then, I have sought opportunities to work in this emerging area at the interface of the two fields.

How do you stay creative?

I generally start the day with a cup of coffee and a quick scan of the latest research papers, both from machine-learning conferences and areas of biology for which we have projects in the lab. This keeps me up to date and sparks new ideas for my own research. I also organize international workshops and conferences that bring together scientists from diverse disciplines. I encourage my research group to explore topics outside of our immediate area of expertise because the potential for groundbreaking discoveries and novel approaches is greatest at the borders where one field meets another.

What is your greatest hope and your greatest fear with the rise of artificial intelligence in science and medicine?

My greatest hope is that we develop AI systems that could eventually make major discoveries — the type worthy of a Nobel Prize. I hope this will not take humans out of the discovery process; instead, human creativity and expertise could be augmented by the capacity of AI models to analyze large datasets and execute repetitive tasks. My

hope is to leverage increasingly powerful models to develop better medicines to cure and manage diseases, particularly those that have very few or no treatments.

My greatest fear is that we’re developing AI models that address low-hanging fruit and focus too narrowly on diseases for which we already have a lot of data and knowledge. This could worsen health inequalities, in the sense that we are generating more and larger datasets for diseases that are already much better understood than others. We need to maintain focus on challenges that could benefit all patients, including those with less-researched diseases where AI-ready datasets are scarce.

Is there a feature of human intelligence that AI will never achieve?

Human intelligence is characterized by qualities like empathy, moral compass, intuition, and emotional understanding rooted in our experiences interacting with one another in the real world. Recent advances have shown that AI systems can potentially mimic certain aspects of these qualities. Yet subjective, very nuanced aspects of human intelligence, like grasping and appreciating creativity or moral reasoning, might remain beyond AI’s reach.

What do we get wrong about AI?

One common misconception is that AI will completely replace humans in various fields. AI systems are not drop-in replacements for human ingenuity and creativity. Another misunderstanding is that AI is infallible. AI systems are only as good as the data and algorithms that underpin them. They can be biased, they can make errors, and they require careful oversight and continuous improvement. Another point of confusion is that AI lacks transparency, that it’s a black box that cannot be trusted or understood. Although that’s true for some models, there are ongoing efforts to design AI with insights into how the models make decisions in a way that is easier for humans to understand, trust, and verify.

—Ekaterina Pesheva

Training for Tomorrow

WHEN SIERRA WASHINGTON, MD ’05, STARTED HER CLINICAL ROTATION in obstetrics and gynecology during medical school, it was like a puzzle piece clicking into place. She had found a specialty that would allow her to combine a newfound desire to care for pregnant women with a preexisting interest in global health.

“I loved the highs of delivering babies and being part of the most important moments in a woman’s life,” Washington says.

In OB-GYN she also saw opportunities to provide compassionate, equity-focused care to people in underserved and underinsured communities. She’d arrived at HMS with an interest in global health that was bolstered by her coursework. She found a mentor in the late Paul Farmer, MD ’90 PhD ’90; a social medicine course he co-taught with Jim Yong Kim, MD ’91 PhD ’93, and Kenneth Fox was especially life-changing.

“That class helped me see the social and structural impacts that society can impose on people’s health,” she says. She adds that HMS allowed her to embrace her idealism and become a global-minded, equity-focused physician. Indeed, Washington paused her training to earn a master’s degree in public health. Since then, she has focused her career on “building health services for the bottom rung,” she says, “particularly for pregnant women in Africa.”

An elective rotation in Cameroon during medical school showed Washington the difference she could make as an OB-GYN, as she treated childbirth complications she had only read about in textbooks. “I remember feeling like, ‘What the . . . ?’ It’s the year 2000, so why are women still dying in childbirth from complications that are actually pretty simple to fix with antibiotics, blood, and a skilled surgeon?” she says.

The experience led Washington to focus her career in Africa, in countries such as Zambia, Kenya, Rwanda, and Tanzania. In addition to obstetric care, she began teaching and providing abortion services — a decision spurred by seeing entire inpatient wards full of young women suffering complications, and sometimes dying, from unsafe abortions.

Over time, Washington realized that while it was important to deliver high-quality medical care in places in need, it was even more important to offer high-quality training to doctors living and working in those places. To that end, Washington is now the director of the Center for Global Health Equity at Stony Brook University. The center collaborates with the Hospital Central de Maputo and Universidade Eduardo Mondlane in Mozambique to train medical residents in OB-GYN, surgery, and emergency medicine.

Washington, who lives in Mozambique, hopes her work there will transform residency education, in turn improving the quality of medical care. “I know that creating doctors who have the best training possible is going to have a generational impact,” she says.

Washington adds that even as her work has evolved, her motivation is still captured by the same three concepts, equity, social justice, and positionality: “As a woman with a platinum education from a rich nation, I feel a moral and ethical obligation to use my position to fight for people with less.”

Student Life

AS A HIGH SCHOOL STUDENT, Deborah Plana loved the quantitative sciences. Then her biology teacher talked about new drugs being tested to treat genetic diseases and the many questions that still weren’t answered in the life sciences, and Plana’s interests expanded.

“I love physics and statistics. I love working on research that has an important technical piece. But just as much, I love to do research that could meaningfully improve patients’ lives,” says Plana, who graduated this year from the Harvard-MIT Health Sciences and Technology program with an MD to join her PhD.

Personal experience with cancer in her family cemented Plana’s determination to pursue a career that combines medicine and research. Plana followed these dual interests when she came to HMS as an MD-PhD student. In 2022, she earned her PhD in systems, synthetic, and quantitative biology from the Harvard Graduate School of Arts and Sciences (now the Griffin GSAS). Her thesis, “Clinical Trial Data Science to Advance Precision Oncology,” looked at ways to enhance analysis of clinical trial results. In collaboration with researchers from Dana-Farber Cancer Institute, she digitized the information presented in graphs published in scientific studies and back-calculated the patient events that likely led to those results.

She showed that researchers can reanalyze existing trial data to assess drug synergy in animal models and human data, predict the likelihood of trial success from small sample sizes, and model long-term benefits of new therapies using short-term trial results. She hopes the work provides a new tool scientists can use to deal with data challenges presented by relatively small patient populations, patient stratification, pediatric populations, and patients with rare diseases.

Part of Plana’s focus on oncology stemmed from her work with PhD advisors Peter Sorger, the Otto Krayer Professor of Systems Pharmacology in the Blavatnik Institute at HMS, and Adam Palmer, at the time a postdoctoral fellow in systems pharmacology. The team worked closely with oncologists on data coming from clinical and preclinical trials to see whether they could improve how treatments are assigned to patients and understand variation in treatment response.

Plana hopes her work convinces others that it is worthwhile to share the raw data generated in clinical studies and that there are interesting insights to be gained from additional analysis — something she says many trial participants support.

Now a resident in a combined anesthesia research track at Massachusetts General Hospital, she looks forward to facilitating such exchanges. “I want to use systems biology to identify personalized interventions for patients that are based on the same tools and principles cancer doctors use but that anesthesiologists and critical care doctors can apply in the short time frames they work within,” she says.

THIS SPRING, the Navy had only one residency spot in dermatology for graduating medical students, and Mitchell Winkie was selected to fill it. Despite a heightened sense of competition in the matching process, Winkie says, he found support in his team — his wife, his classmates and advisors at HMS, and his mentors in the Navy.

Winkie, who graduated this year with a joint MD-MBA, didn’t yet have medicine in mind as a career when he entered the Naval Academy for college. But once there, he was moved by the patients he saw being treated for traumatic brain injuries at Walter Reed National Military Medical Center and the physicians dedicated to rehabilitating them.

“I found that powerful and something I could see myself doing for a lifetime,” Winkie says.

Military medicine united his desires to serve the country, contribute to a team, support U.S. troops and their families, and apply a long-standing love of science. Being exposed to different working environments in the Navy by spending time on an aircraft carrier, in a submarine, and with flight squadrons gave him a feel for the day-to-day life of service members and the requirements of their jobs, which he believes will help him serve them better as a physician.

Once at HMS, he became interested in dermatology because the skin is easy to work with, yet the field allows for the practice of nuanced, complex medicine on a wide range of conditions and causes. Gene therapies and machine learning are advancing the diagnosis and treatment of dermatological conditions, and Winkie found mentors at HMS who have incorporated these new technologies into their work to improve health care delivery.

“There are interventions that can scale very quickly,” he says. “A lot of good can be done.”

Winkie and his classmates were just getting to know each other during their first year when they were sent home in March 2020, at the beginning of the COVID pandemic.

The pandemic opened his eyes to the complexity of supply chain, capacity, and operational health care issues, which pushed him to pursue an MBA at Harvard alongside his medical degree. He hopes to help make the military health system structure more efficient to produce better health care outcomes at lower costs.

“Similar to the civilian side, military health care is becoming exponentially more expensive,” he says. “I’m driven to find ways to improve the system and ultimately improve care for our service members, so both our troops and we as their physicians can pursue our missions.”

—Bobbie Collins

DETAILS, UPDATES, AND OBSERVATIONS FROM ALUMNI

Who or what has made you the physician or scientist you are today?

Constantine Psimopoulos, MSc ’23

The late and great Paul Farmer, MD ’90 PhD ’90. I only had the privilege of meeting him once before his passing, but he left an indelible mark on me as a bioethicist. With sacrifices and a preferential option for the poor, he changed the world with his virtue ethics and principles of accompaniment, expert mercy, and the belief that no life is worth less than others.

Bliss Chang, MD ’20

I believe we are each the ultimate expression of the cumulative life experiences we hold. I think what has helped me derive the most out of my life experiences has been setting aside time each weekend to actively reflect on those experiences and taking the best parts and integrating them into who I am. I also keep in mind the not-so-great experiences and use those to understand who I do not want to be. I continue to seek novel experiences, because whether good or bad, they shape me for the better.

Benjamin Rix Brooks, MD ’70

Surgeon and HMS professor Judah Folkman, MD ’57, director of Introduction to the Clinic at Boston City Hospital, would call us after midnight, present sets of vital signs — body temperature, pulse rate, respiration rate, blood pressure — and require us to give differential diagnoses and potential treatment plans. It was a wonderful exercise to catch us vulnerable and tired late at night after studying, which sensitized us to the need to recalibrate our thinking when presented with details that necessitate action. A great preparation for internship and residency.

Rachel Hitt, MD ’99

My father, Barry Levine, MD ’65. He always remembered that the patient was human while providing excellent medical care. And my grandfather, Carl Haas, who was also a doctor and who seemed to have delivered every child in his small Maine town.

Jose Giron, MD ’75

Many physicians, including Joseph Rossi; Samuel Latt, MD ’64 PhD ’71; Shalom Hirschman; Burt Meyers; and Gary Wormser. I have tried in my professional career to live up to their standards.

Edward Walkley, MD ’70

T. Berry Brazelton, a wonderful physician and mentor during residency. He taught compassion. He taught how to question. Also, clinical rotations at HMS and residency at Boston Children’s Hospital. Great exposure to wonderful teachers, cases, and role models. Also, exposure to some dinosaurs stuck in the past, negative role models.

Kenneth Chin, MD ’74

CT, MRI, and interventional radiology had just started to poke their heads through the topsoil when I started. I’m glad to have participated in their growth and evolution.

Tsontcho Ianchulev, MD ’99

Grit, perseverance, and creativity.

Stephen Grund, MD ’91

Primary care physician and HMS professor John Stoeckle, MD ’47. I was fortunate to do my three years of outpatient clinic in his pod. I appreciated his passion for primary care, his compassion for patients, and his dedication to teaching. I tried to emulate his approach to patient care within the field of hematology/oncology, where technology, molecular biology, and the latest scientific advances often overshadow the importance and primacy of the doctor-patient relationship. Because of him I appreciated the value of team-based care.

Pablo LaPuerta, MD ’89

My father was the greatest physician that I have ever known, and he made me the person I am today. He combined experience, judgment, and total commitment. As a lung doctor he practiced what is now called “permissive hypercapnia” years before publications on it were available. He saw that the overuse of ventilators was dangerous, and he figured out how to use them more safely. As doctors we should know how guidelines are made, and we should be prepared to adjust our practices when needed.

Johnson Lightfoote, MD ’76

I recall Dan Federman, MD ’53, endocrinologist and former HMS dean for medical education, with affection and respect. He admitted me to Stanford’s residency and served HMS with aplomb and wisdom as an inspiring gentleman.

Richard J. Hannah, MD ’66

I had eight important mentors from age sixteen through age thirty-two — from high school through the completion of formal internal medicine training and my military service obligation — including two teachers from my four years at HMS. From age thirty-two to age seventy-two — including forty years of an active internal medicine practice with office, home, and acute hospital care — there were many, many physicians of impact. A large number were positive role models.

Burt Singerman, MD ’73

Robert Ebert, former dean of HMS, was a major role model for me. He appointed me as the student representative to the minority admissions committee. He asked me to be the first HMS student to rotate in psychiatry at Cambridge Hospital. He told me about two programs where I would have HMS mentors. At Johns Hopkins, he called the chairman of psychiatry and a famous psychotherapist, both HMS alumni, who became my mentors. He was a great dean and a great man.

Jorge Casas-Ganem, MD ’98

My mentor in medical school was Henry Mankin at Massachusetts General Hospital. He was one of the grandfathers of the field of orthopedic oncology and sarcoma surgery. He helped arrange my year abroad learning limb salvage techniques in Spain and Italy.

Herbert Dan Adams, MD ’65

A long concatenation of success and failure, which slowly generated wisdom.

Thanks to all who shared their thoughts. We welcome responses to our future Rounds questions at alumni.hms.harvard.edu/rounds. Submissions will appear in print, online, or both in the coming months.

In Memoriam

1940s

1943

Robert C. Jones, MD

January 4, 2021

Robert J. McKay Jr., MD November 23, 2012

1944

Robert Z. Klein, MD

December 5, 2011

1947

James F. Dickson III, MD August 12, 2023

Frank Egloff, MD February 9, 2020

H. William Fister, MD October 16, 2014

Harold L. Godwin, MD May 10, 2024

Robert B. Zufall, MD March 5, 2024

1948

James C. Fahl, MD April 21, 2024

Samuel H. Hay, MD February 6, 2024

Howard H. Hiatt, MD March 2, 2024

1949

Donald R. Chadwick, MD August 3, 2014

1950s

1950

Edward H. Caul, MD August 29, 2023

Edward M. Mahoney, MD August 17, 2023

1951

Aaron D. Weiner, MD March 28, 2015

Frank M. Wiygul Jr., MD July 25, 2023

1952

Garth R. Drewry, MD October 18, 2023

Armand A. Lefemine, MD June 22, 2023

Linus C. Pauling Jr., MD June 10, 2023

1953

Walter L. Barker, MD November 25, 2020

Ned Feder, MD March 26, 2024

Alan L. Kaitz, MD December 10, 2023

Barbara M. Orski, MD September 10, 2023

Richard L. Sidman, MD March 6, 2024

Robert L. Timmons, MD August 9, 2020

1954

Phillip M. Allen, MD February 15, 2023

Ramon M. Greenberg, MD February 7, 2024

Ernst J. Meyer, MD May 2, 2024

Lillian Pothier, MD October 8, 2023

Don F. Thomas, MD November 20, 2023

1955

David J. Becker, MD May 1, 2024

Alan F. Carpenter, MD May 21, 2024

Aram V. Chobanian, MD August 31, 2023

Roman W. DeSanctis, MD July 8, 2024

Martin P. Feldman, MD November 18, 2020

E. Rodman Heine, MD July 29, 2023

Karl E. Humiston, MD August 14, 2024

Lawrence C. Thum, MD October 20, 2023

Arthur B. Yull, MD April 16, 2024

1956

Chester A. Alper, MD July 20, 2024

Charles F. Barbarisi, MD March 19, 2024

Spencer Gordon Jr., MD June 2, 2024

Richard S. O’Hara, MD December 30, 2023

Sanford I. Roth, MD February 22, 2024

Stefan C. Schatzki, MD January 20, 2024

1957

Arthur I. Grayzel, MD August 14, 2024

Robert B. Layzer, MD June 12, 2024

James J. McCusker, MD June 5, 2018

1958

J. Claude Bennett, MD August 11, 2024

Hugh S. Harris Jr., MD January 25, 2024

Rodman D. Starke, MD December 5, 2023

Robert H. Tichell, MD March 27, 2024

1959

Robert S. Adelstein, MD May 7, 2024

Lynn L. Ault, MD March 2, 2024

Thomas A. Carey, MD October 4, 2023

Rudolph S. Cumberbatch, MD May 23, 2024

George D. Raymond, MD October 6, 2023

John B. Rodgers, MD November 10, 2023

1960s

1960

Richard T. Burtis, MD October 18, 2023

Robert W. Colman, MD March 10, 2023

Thomas W. Hansen, MD April 30, 2016

Hugh G. Hare, MD January 9, 2022

Herbert B. Hechtman, MD March 16, 2024

David Korn, MD March 10, 2024

Laurence J. McCarthy, MD September 6, 2023

Mark G. Perlroth, MD September 17, 2023

Ervin Philipps, MD June 28, 2024

Lawrence M. Roth, MD September 4, 2023

Stephen B. Shohet, MD March 1, 2024

Neal H. Steigbigel, MD January 13, 2024

Martha L. Warnock (Martha E. Lawall), MD February 13, 2022

John S. Weltner, MD November 25, 2023

1961

Howard E. Rotner, MD January 3, 2024

Richard W. Sagebiel, MD February 6, 2023

Richard J. Steckel, MD June 20, 2024

Lubert Stryer, MD April 8, 2024

1962

Ernest W. Franklin III, MD September 2, 2023

Joel D. Mack, MD August 8, 2024

Peter McPhedran, MD April 7, 2024

1963

William J. Casarella, MD February 2, 2024

Roger W. Turkington, MD July 9, 2023

Norman L. Wilson Jr., MD March 19, 2024

1964

Donald W. Mitchell, MD November 1, 2023

Georges Peter, MD January 11, 2024

John M. Tudor Jr., MD December 19, 2023

1965

Don Carl Bienfang, MD December 9, 2023

Dorothy A. Black (Dorothy A. Hufnage), MD January 1, 2023

Bruce A. Feldman, MD April 21, 2022

Ralph W. Stoll, MD September 23, 2023

S. Anthony Wolfe, MD December 26, 2023

1966

William M. Carleton, MD November 11, 2023

Mahlon R. DeLong, MD May 17, 2024

Gilman D. Grave, MD April 7, 2023

Robert D. Pipkin, MD January 24, 2024

Anne L. Rassiga, MD March 26, 2024

Samuel Strober, MD February 11, 2022

Phillip H. Taylor, MD December 22, 2023

William C. Wood, MD August 18, 2024

1967

Charles R. Hayes, MD July 9, 2021

Nicholas A. Mastroianni Jr., MD October 13, 2021

1968

Barbara H. Harley (Mara Hurwitz), MD February 4, 2024

Stephen G. Pauker, MD February 16, 2024

1969

Thomas P. Hyde, MD November 27, 2023

Sharon B. Murphy (Sharon L. Boehm), MD November 4, 2023

1970s

1970

Thomas B. Mackenzie, MD September 15, 2023

Herbert C. Morse III, MD September 11, 2023

1972

Andrew B. Larkin, MD August 11, 2023

1973

Jeffrey A. Kelman, MD February 8, 2024

Steven Varga-Golovcsenko, MD October 12, 2023

Robert M. Williams, MD PhD July 18, 2024

1974

Michel Jean-Baptiste, MD May 11, 2024

Lewis C. Lipson, MD December 26, 2023

1976

Paul Maurice Heath, MD June 5, 2024

1978

Elizabeth D. Mellins, MD March 24, 2024

1979

Joanna K. Zawadzki, MD February 10, 2024

1980s

1982

Vicki L. Heller, MD March 29, 2024

Lawrence Ma, MD June 6, 2022

This listing of deceased alumni includes those whose notices of death were received between August 30, 2023, and September 10, 2024.

1985

William O. Fletcher Jr., MD June 4, 2024

1988

David M. Frim, MD PhD August 22, 2023

1990s

1992

John C. Lualdi, MD October 28, 2023

1993

Peter M. Li, MD May 5, 2024

2000s

2000

Shannon M. Heitritter, MD April 2024

2003

Mark D. Price, MD August 18, 2024

2005

Larissa J. Lee, MD June 1, 2021

2008

Cullen M. Taniguchi, MD PhD November 14, 2023

2010s

2015

Shekinah Nefreteri Elmore, MD July 24, 2024

PRESIDENT’S REPORT

Spring 2024 Meeting

THE SPRING 2024 HMS ALUMNI COUNCIL MEETING covered many of the issues facing universities and health care systems nationwide, including campus protests, hospital mergers, student debt, and the effect on admissions of the 2023 ruling by the U.S. Supreme Court that effectively ended affirmative action.

Dean George Q. Daley, MD ’91, spoke at length about the challenges that Harvard and HMS leadership face regarding the crisis in the Middle East, emphasizing the need to recognize all suffering and advocate for nonviolent resolutions. Regarding public discussion of what universities should say and do at such times, he said that where there are issues directly relevant to the school’s mission, the leadership reserves the right to have a voice but that the schools do not have a foreign policy. He also noted that Harvard leadership has created task forces to combat antisemitism and anti-Muslim, anti-Arab, and anti-Palestinian bias; met repeatedly with student groups; and held more than forty forums to facilitate respectful discussion.

Bernard Chang, MMSc ’05, HMS dean for medical education, reported that medical students were not involved in the encampments at Harvard College, but HMS students and faculty have participated in peaceful, nondisruptive protests and gatherings on the Longwood campus. He shared his intent to create curricula that both acknowledge tensions and help community members gain or hone skills in civil, intentional discourse.

We also discussed seismic changes to health care and medical education with Dean Chang. Reflecting national trends, Mass General Brigham is consolidating its clinical departments, a move that could affect both student learning and the teaching burden on clinical faculty. To help prepare students for the new world of health care, HMS has developed a program for students focused on management practices and clinical service delivery and convened the Task Force on Teaching Faculty Experience, which will focus on promotion, compensation, and recognition of teaching.

Associate Dean Andrea Reid, MD ’88, associate dean for student and multicultural affairs in the Program in Medical Education and director of the Office of Recruitment and Multicultural Affairs, detailed the impact of the Supreme Court decision on admissions. The new policies have affected the school’s ability to recruit a diverse class, but on a positive note, the office supports a robust array of programs that offer information and inclusion to all applicants.

And what have we on the Council been up to? The Awards Working Group selected A.W. Karchmer, MD ’64 — a beloved faculty member and tireless volunteer and fundraiser — to receive the Distinguished Service Award. The Slate Committee successfully executed a transition from our traditional contested Alumni Council elections to approval by acclamation, a process that is better aligned with the approach of other schools. The change was passed by alumni vote at Reunion in early June. The Engagement Working Group has reenvisioned reunion for younger graduates, improved the swag(!), and developed a plan for virtual reunions between the five-year in-person reunions.

Although it may be hard to imagine that Alumni Council meetings could be interesting, much less fun, they are both, and for the same reason that HMS itself is both interesting and fun: the people — thoughtful, good-natured Council members; members of the school leadership, who contend with complexity; and the alumni office staff, who work behind the scenes. Together, these fantastic humans make my twenty-four hours in Boston and fifteen hours in transit feel entirely worthwhile — and that’s no small feat!

Louise Aronson, MD ’92, is a professor of medicine at the University of California, San Francisco.

Alumni Council Welcomes Seven New Members

Harvard Medical School MD graduates have elected seven new members to the Alumni Council, including both candidates from the Second and Ninth Pentad races, which ended in ties. Representing the Second Pentad (classes of 2014–2018) are Amir H. Ameri, MD ’19 (Class of 2018), and Ben Robbins, MD ’16 (Class of 2014). Charmaine Smith Wright, MD ’03, will represent the Fifth Pentad (classes 1999–2003). Scott T. Aaronson, MD ’81, and Ann E. Taylor, MD ’83, will represent the Ninth Pentad (classes of 1979–1983). Joanna “Mimi” Choi, MD ’09, has been appointed as the council’s vice president. Marc S. Sabatine, MD ’95 (Class of 1994), will serve as a councilor-at-large representing all classes. Learn more about the new representatives at alumni.hms.harvard.edu/election

Nominate a Deserving Alum

If you know someone who has shown outstanding dedication to the School, consider nominating them for the 2025 Distinguished Service Award for HMS Alumni. This award, established in 2019, recognizes MD alumni who have demonstrated loyalty, service, and commitment to HMS through volunteering, community building, acting as an ambassador for the School, or otherwise supporting HMS and its mission. Submit your nomination by Dec. 31 at alumni.hms.harvard.edu/nomination

Thank You, Alumni Donors

Alumni giving plays an important role in helping Harvard Medical School pursue its mission of alleviating suffering and improving health and well-being for all. The philanthropic support from 2,118 MD alumni during fiscal year 2024 has been instrumental in propelling HMS toward a new era of possibility in science and medicine. MD alumni can view the Honor Roll of Donors — a list of those who made donations between July 1, 2023, and June 30, 2024 — broken down by class at alumni.hms.harvard.edu/honor-roll

Reunion Report Submissions Are Open

The MD classes ending in “0” and “5” will be celebrating their Reunions in 2025. Now is the time to share updates on your life and provide contact information for your class’s Reunion Report. Events will be held June 12–14 for the 20th–60th classes and June 13–14 for the 5th–15th classes. Committees may plan additional class-specific activities that take place outside of these dates. This year, the Classes of 2010, 2015, and 2020 will pilot a digital Reunion Report platform that allows them to reconnect, share, and explore updates throughout the year from their classmates in a private space. Find instructions on how to fill out your Reunion Report at alumni.hms.harvard.edu/reunion

WHAT WILL BE YOUR LEGACY?

Consider investing in longer, healthier lives

Imagine leaving a legacy that advances medical breakthroughs and transforms lives.

A gift through your will or trust is a simple way to make a lasting impact at Harvard Medical School.

The 1970s saw the first female graduates of the MD-PhD program, forging a path for women in medicine and science.

“Supporting the next generation of physician-scientists at HMS is now a key piece of my legacy. My own experience as a student at HMS, coupled with my lifetime career, informed my bequest gift. I hope to help pave the way for students who come after me and who also hope to make transformative change in medicine.”

• Fulfill your financial and estate planning goals

• Reduce or eliminate estate tax

• Improve health and well-being for all


The Doctor

In this 1891 painting by Luke Fildes, a doctor attends to a sick child on a house call. The care and attention exhibited by the doctor have made the work a symbol of the human side of medicine and the timeless importance of the physician-patient relationship. Although technology has sometimes contributed to fraying that bond, the emergence of AI, some HMS educators and researchers argue, offers an opportunity to repair that relationship and restore humanity to medicine.

