DO ANTIDEPRESSANTS WORK?
A REVIEW FOCUSING ON PUBLICATION BIAS, THE PLACEBO EFFECT AND TRIAL DESIGNS
By Deniz Eracar and Sophia Carino
“Do antidepressants really work?”
It might seem like we already have a definitive answer, yet researchers and pharmaceutical companies keep returning to this question in the United States, which has the highest rate of antidepressant use of any nation in the world. Despite the extensive research devoted to developing new and “improved” antidepressants, the plethora of peer-reviewed articles about them, and the fact that these medications are used by a large percentage of the population, there is still an ongoing debate about their efficacy. Much of the literature suggests that antidepressants are effective and life-changing, but their long-run harms might outweigh their benefits, and their short-term benefits may be more modest than we think.
To understand the basis of these arguments and the skepticism surrounding the efficacy of antidepressant treatments, it is necessary to look back at the advances in antidepressant research over the past decades and at the published articles supporting them, which is the approach Turner et al. took in their 2008 meta-analysis. One argument that might partially explain the conflicting views on these drugs is publication bias (the “file drawer effect”): the tendency to selectively publish studies showing positive results over those showing neutral or negative results. In line with this idea, Turner et al. searched the US Food and Drug Administration (FDA) registry and results database, in which drug companies must register every trial they plan to use to support an application for marketing approval or a change in labeling, for antidepressant trials registered between 1987 and 2004. Their meta-analysis revealed that whether a study was published, and how its results were reported if it was, depended heavily on its outcome. Of the 74 studies the group analyzed, 38 were deemed positive by the FDA, and 37 of those 38 were published. The remaining 36 studies were categorized as negative or questionable; only 3 of them were published as not positive, while the rest were either left unpublished (22 studies) or published in a way that framed the results as positive, conflicting with the FDA’s conclusion. Overall, this made the “positive” studies roughly 12 times more likely to be published than the negative or inconclusive ones, implying a strong bias toward publishing positive results. Besides biasing the evidence on which we base our medical decisions, this “cherry-picking” of study outcomes can make those decisions suboptimal and create a public view of antidepressants that misrepresents the actual results: the “negative” and “neutral” studies that never make it out of the drawer may paint a very different picture of antidepressants than the selectively positive published literature.
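To make the arithmetic behind that roughly twelvefold figure concrete, here is a minimal sketch in Python that uses only the counts quoted above from Turner et al.; the variable names are ours, chosen purely for illustration.

# Counts quoted above from Turner et al. (2008)
positive_total = 38              # trials the FDA judged positive
positive_published = 37          # of those, published
other_total = 36                 # trials judged negative or questionable
other_published_as_such = 3      # published with conclusions matching the FDA's

rate_positive = positive_published / positive_total    # ~0.97
rate_other = other_published_as_such / other_total     # ~0.08

print(f"FDA-positive trials published: {rate_positive:.0%}")
print(f"Negative or questionable trials published as such: {rate_other:.0%}")
print(f"Ratio: {rate_positive / rate_other:.1f}x")      # ~11.7, i.e. roughly 12 times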
Apart from whether the results of a study are published, or how they are framed, how we choose participants for a study also matters when we make claims about efficacy. For example, a meta-analysis by Kirsch et al. suggests that the efficacy of antidepressants might depend on the severity of the depression itself. To reach this conclusion, the researchers obtained the full datasets from all clinical trials submitted to the FDA in support of the licensing of four new-generation antidepressants and used meta-analytic techniques to examine how initial severity affected patients’ improvement scores. This revealed that drug-placebo differences increased as a function of initial severity: there was almost no difference for patients with moderate depression, a relatively small difference for patients with very severe depression, and a difference reaching “clinical significance” (by conventional criteria) only for patients at the upper end of the severely depressed range.
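As a rough illustration of the kind of severity analysis Kirsch et al. describe, and not their actual data or code, the sketch below fits a line to hypothetical per-trial summaries, regressing each trial’s drug-placebo difference in improvement on its mean baseline severity, and asks where that line crosses a commonly cited 3-point threshold on the Hamilton depression scale.

import numpy as np

# Hypothetical per-trial summaries (NOT Kirsch et al.'s data): each trial's mean
# baseline Hamilton Depression Rating Scale score and its drug-placebo
# difference in mean improvement.
baseline_severity = np.array([23.0, 24.5, 25.0, 26.5, 27.0, 28.5, 29.0])
drug_placebo_diff = np.array([1.0, 1.3, 1.6, 2.1, 2.4, 3.1, 3.4])

# Simple linear meta-regression: difference as a function of initial severity.
slope, intercept = np.polyfit(baseline_severity, drug_placebo_diff, 1)

# A commonly cited threshold for a clinically significant drug-placebo difference.
threshold = 3.0
crossing = (threshold - intercept) / slope

print(f"Difference grows ~{slope:.2f} points per point of baseline severity")
print(f"It exceeds {threshold} points only above a baseline score of ~{crossing:.1f}")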
Lastly, we have to consider the placebo effect, and more specifically whether an increase in the placebo effect over the years may have contributed to our perception of the increased efficacy of antidepressant drugs. In their research report, Khan et al. note that more than fifteen years earlier, the high failure rate of antidepressant clinical trials had been attributed to the growing magnitude of the placebo response. That finding, reported by Walsh et al., paved the way for meta-analytic reviews of antidepressant trials, psychotropic trials, and patient-level data for major depressive disorder, all of which support the conclusion that the placebo response continued to grow between 1987 and 2013. To contribute to this debate, Khan et al. examined the FDA’s reviews of 85 clinical trials of antidepressants approved between 1987 and 2013, covering a total of 23,109 patients. By grouping the trials into pre-2000 and post-2000 ones and controlling for possible confounding variables such as changes in trial design, the research team calculated and compared the magnitudes of the placebo and antidepressant responses, the antidepressant-placebo differences, and the effect sizes and success rates of the trials conducted over this period. Their analysis confirmed their hypothesis that the magnitude of the placebo response has continued to increase (the percentage symptom reduction was 29.8% in pre-2000 trials versus 36.2% in trials after 2000), replicating the original findings of Walsh et al. and showing that the pattern of rising placebo response they noted has continued.
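The era comparison at the heart of that analysis is, at bottom, simple arithmetic; a minimal sketch with hypothetical per-trial placebo responses (not the FDA review data Khan et al. used, and without their adjustments for trial design) might look like this.

# Hypothetical percentage symptom reduction in the placebo arm of each trial,
# grouped by era; illustrative values only.
placebo_response = {
    "pre-2000":  [25.0, 28.0, 30.5, 31.0, 33.0],
    "post-2000": [32.0, 34.5, 36.0, 37.5, 40.0],
}

for era, values in placebo_response.items():
    mean = sum(values) / len(values)
    print(f"{era}: mean placebo symptom reduction = {mean:.1f}%")

# With these made-up values the post-2000 mean is higher, mirroring the
# 29.8% vs. 36.2% pattern reported from the real trials.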
Overall, the meta-analytic approaches taken by different research groups, each focusing on a different aspect of antidepressant efficacy, show that the question “Do antidepressants work?” is more multifaceted than it seems. Publication bias suggests that antidepressants may not be as effective as the published research portrays them to be, and other factors, such as the placebo effect and the patients for whom clinical trials are designed, should also be taken into consideration.
Carroll, A. E. "Do Antidepressants Work?" The New York Times, March 12, 2018. https://www.nytimes.com/2018/03/12/upshot/do-antidepressants-work.html.
Khan, Arif, Kaysee Fahl Mar, Jim Faucett, Shirin Khan Schilling, and Walter A. Brown. "Has the rising placebo response impacted antidepressant clinical trial outcome? Data from the US Food and Drug Administration 1987-2013." World Psychiatry 16, no. 2 (2017): 181-192.
Turner, Erick H., Annette M. Matthews, Eftihia Linardatos, Robert A. Tell, and Robert Rosenthal. "Selective publication of antidepressant trials and its influence on apparent efficacy." New England Journal of Medicine 358, no. 3 (2008): 252-260.
Kirsch, Irving, Brett J. Deacon, Tania B. Huedo-Medina, Alan Scoboria, Thomas J. Moore, and Blair T. Johnson. "Initial severity and antidepressant benefits: a meta-analysis of data submitted to the Food and Drug Administration." PLoS Med 5, no. 2 (2008): e45.
A HISTORY OF ZOONOTIC DISEASES: DON'T BLAME THE BAT
By Sanjana Rao and Ashley Chen
It’s almost certain that you know, or know of, someone who has been infected by a zoonotic disease. Zoonotic diseases have taken the lives of upwards of 33 million people since 1981 and are estimated to infect a billion people every year. But what are zoonotic diseases? Zoonoses include any disease caused by a pathogen that originated in animals and ‘spilled over’ into humans. SARS, malaria, HIV, rabies, and most recently COVID-19 all fall under this category, as do most of the pandemics of the last century.
The lethality and virulence of zoonotic diseases have mostly been attributed to two causes: natural selection within the animal host, and the novelty of animal-borne pathogens. Our immune systems and gastrointestinal tracts fight off or excrete most of the pathogens that try to interrupt our daily lives. When the immune system encounters a pathogen it has never seen before, however, especially one that can successfully replicate within a human host, it must ‘catch up’ with the pathogen and quickly mount a response against it. This novelty helps explain how zoonotic diseases spread and evolve into pandemics so rapidly, but it does not explain why they are so hard to treat. That may come down to natural selection within the animal itself as well as within the human host: the host and the pathogen are theorized to be locked in an evolutionary arms race of sorts, each adapting to the responses of the other. Because the host is often at a disadvantage, especially when the pathogen is a virus (and can thus adapt more quickly), such a pandemic is often hard to eradicate. The critical point comes when the animal pathogen gains the ability to replicate or reproduce in the first human host (patient zero). This is termed the spillover point.
While there are multiple theories, there has yet to be a conclusive account of the origin of COVID-19. However, we do know how SARS, its closely related coronavirus cousin, came to be. SARS, or severe acute respiratory syndrome, was first seen in China in 2002 and quickly spread to other countries via travelers to Vietnam, Singapore, and Hong Kong. It was highly virulent, spreading by airborne transmission among doctors treating affected patients, between neighboring hotel rooms, and even within housing estates, and it caused pneumonia-like symptoms. It had a short incubation time and a high fatality rate, both of which may have contributed to the ease (relative to COVID-19) with which it was eradicated despite the lack of effective vaccines. In mid-2003, researchers found traces of a similar coronavirus in palm civets sold at a market in Guangdong, where the first patient was from; however, these traces were found in only a few civets (most civets had no antibodies to the virus), making it unlikely that they were the source of the disease. Researchers then surveyed 45 species, ultimately finding traces of coronavirus precursors in only