Changing Practice in the Hospital Setting: A Tale of Two Teams
By Brian Milman, MD, and Joshua Gentges, DO, MPH, on behalf of the SAEM Research Committee
Let’s start with a case…
You presented an article during journal club last week that is a game-changer. It showed definitively that intravenous (IV) contrast for computed tomography (CT) is safe in every patient. (No such article exists, by the way, although the evidence for this view is strong; see Dr. Farkas' EmCrit article.) You are excited about this and explain to your attending that Mrs. Anderson in room seven can safely receive IV contrast for the pulmonary embolism (PE) study she needs. Your attending laughs and says, “There’s no way radiology will do that study. Her creatinine is 1.9!”
Crestfallen, you cancel her CT and admit her to the floor so she can be “rehydrated.”
Translating Research Into Practice: A Complex Problem
There are countless examples like this vignette, in which clinical policy and practice do not reflect the current state of the literature. If you have listened to Ken Milne, the skeptical host of the Skeptics' Guide to Emergency Medicine (SGEM), you have likely heard that it takes more than 10 years for high-quality evidence to make it from publication to the bedside. The research on this lag is itself underdeveloped, but translation from research into clinical practice can take as long as 17 years (Morris, 2013). One might expect that increased engagement of learners and clinicians with high-quality free open access medical education (FOAM) would shrink the lag between paper and practice. We don’t know whether it has, because the studies that evaluate the success of blogs, podcasts, and social media measure short-term knowledge translation rather than patient-oriented outcomes.
One goal of the SGEM podcast is to decrease the knowledge translation gap from over 10 years to less than one year. Many other emergency medicine (EM) blogs, podcasts, and social media accounts have similar goals, but we’re not sure that knowledge acquisition is the rate-limiting step in turning research advances into clinical practice. Like most things, the problem is complex. In the example of pulmonary embolism, we know that using a Wells score, the PERC (Pulmonary Embolism Rule-Out Criteria) rule, and a D-dimer can exclude PE with high confidence, yet this approach languishes in the face of the CT scanner. Westafer et al. (2020) investigated some of the reasons why and found that risk tolerance, need for diagnostic certainty, subject knowledge, confidence in gestalt, time pressure, and lack of institutional resources were among the reasons practicing physicians ordered CT imaging when it may not have been indicated. We think the biggest reason is that the work itself (educating staff, building a plan, getting buy-in from administration, and shepherding policy through hospital committees) is difficult and time-consuming. We’ve worked on projects to translate research into practice many times, from acetaminophen overdose protocols to removing Xopenex (levalbuterol) from formulary, and have seen both successes and failures. For the rest of this article we will focus on two initiatives at our shop, one that didn’t work and one that did, and offer tips for translating evidence into practice.
The One That Didn’t Work and Why
Imagine you’re an emergency physician who has been passionate about hospital crowding since internship. You know that crowding is multifactorial and that your department diligently implements best intradepartmental practices: patients are assigned to rooms immediately, triage happens in the rooms, physicians meet ambulance patients on arrival, and inpatient beds are ordered early in the workup. The department remains overrun nonetheless, with boarded patients occupying most of your beds, so new patients go to the waiting room, the hallway, or languish on ambulance cots. Those places are dangerous for patients (Kelen, 2021), so you decide to join a hospital committee working on hospital-wide initiatives to reduce crowding. The committee includes the CEO, the CFO, every major administrator in the hospital, and you. You are sure you know what to do: change hospital policy so that the house is kept at 90% of capacity or less rather than full all the time. Unfortunately, you don’t engage stakeholders, learn about barriers, or consider individual and group priorities, and the initiative evaporates into a complicated full-capacity plan that is difficult to implement. The hospital remains crowded, which harms patients, providers, and the bottom line.
The One That Worked and How
The second scenario involves the creation of a vancomycin usage reduction initiative using nasal MRSA (methicillin-resistant Staphylococcus aureus) swabs as a decision point. This was another hospital-wide initiative, requiring buy-in from the burn center, critical care, surgery, the ED, pharmacy, and administrators worried about reductions in profitable services. This time, you engage stakeholders early, find out what is possible, and create measurable goals that align with the values of the group. The initiative begins with provider education, is evaluated by both process measures (MRSA swabs obtained) and outcome measures (vancomycin use-days per 1,000 patients), and is shown to be budget neutral or positive. Within a year the team cuts vancomycin use at the hospital in half, with some evidence of decreased length of stay in certain patient categories.
Planning Makes All the Difference
So, what’s the difference? It’s having a plan. Policy change happens when opportunity meets preparation. This means not just knowing what best practices are but also understanding the dynamics of the organization and the personalities involved, which allows an initiative to be timed to the needs of the organization as circumstances change. For example, a worrisome increase in vancomycin-resistant pathogens across service lines created an opportunity for change without resistance from physicians used to a certain practice pattern. You must strike while the iron is hot, as windows of opportunity like this do not last. Inertia is a powerful force, and the desire not to change, or to let someone else be the agent of change, is hard to resist. When you are developing the specifics of an intervention, it’s wise to be SMART in program design:
Specific. Your goal should be concrete and narrow.
Measurable. You should be able to measure both process (what you did) and outcome (what happened to patients).
Achievable. Does the organization have the resources (human, technical, financial) to pull off the intervention? Do key stakeholders agree? Can resistance be overcome?
Relevant. Is there an institutional need for the intervention? Will the project get resource priority?
Time-sensitive. Is it the right time for this change? How long will it take? Do the results over time add value, improve processes, or save lives?
The SMART framework traces its roots to Peter Drucker’s 1954 work on management by objectives and has stood the test of time, but the point is to have a systematic framework for reaching your goal. Most readers of this article understand this intuitively; it’s hard to succeed in medical school without a plan. Without one, you may not reach your goals at all, or you will take longer and spend more to get there. The same is true for translating research into practice, which is one reason it takes a decade for game-changing research to become lifesaving practice. All of this leads to the conclusion that emergency physicians should be involved in changing practice at their hospitals. We are used to solving complicated problems, resolving interpersonal differences, and managing a team. So, join a hospital committee. Talk to the CEO. Design a plan for your own research-to-practice priorities. You might save some lives.
About The Authors
Dr. Milman is an assistant professor of emergency medicine and associate residency program director at the University of Oklahoma School of Community Medicine in Tulsa, Oklahoma.
Dr. Gentges is an associate professor and the research director for the University of Oklahoma's Department of Emergency Medicine.