DO ANTIDEPRESSANTS WORK? A REVIEW FOCUSING ON PUBLICATION BIAS, THE PLACEBO EFFECT AND TRIAL DESIGNS By
Deniz Eracar and Sophia Carino
“Do antidepressants really work?” It might seem that we have a definitive answer to this question, yet it is still addressed repeatedly by researchers and pharmaceutical companies across the United States, as the country continues to have the highest rate of antidepressant use of any nation in the world. Despite extensive research on developing new and “improved” antidepressants, the plethora of peer-reviewed articles on these medications, and the fact that they are used by a large percentage of the population, there is still an ongoing debate about their efficacy. While much of the literature suggests that antidepressants are effective and life-changing, their harms might outweigh their benefits in the long run, or their short-term benefits may be more modest than we think. To understand the basis of these arguments and the skepticism surrounding the efficacy of antidepressant treatments, it is necessary to look back at the advancements in antidepressant research over the past decades and the published research articles supporting those advancements, which is the approach that Turner et al. took in their 2008 meta-analysis. One argument that might partially explain the conflicting views on these drugs is publication bias (the “file drawer effect”): the tendency to selectively publish studies showing positive results over those showing neutral or negative results. In line with this idea, Turner et al. searched the US Food and Drug Administration (FDA) registry and results database, in which drug companies are required to register all trials they plan to use to support an application for marketing approval or a change in labeling, for antidepressant trials registered between 1987 and 2004. Their meta-analysis revealed that whether the studies
were published, and how their results were reported if they were published, depended heavily on their outcomes. For example, of the 74 studies the research group included in its analysis, 38 were deemed positive by the FDA, and 37 of those 38 were published. The remaining 36 studies were categorized as negative or questionable; only 3 of these were published as not positive, while the other 33 were either unpublished (22 studies) or published (11 studies) in a way that framed the results as positive, conflicting with the FDA’s conclusion. Overall, this makes the “positive” studies about 12 times more likely than the negative or inconclusive studies to be published in a way that agreed with the FDA’s judgment (37/38, or roughly 97%, versus 3/36, or roughly 8%), and it implies a strong publication bias toward positive results. In addition to biasing the evidence on which we base our medical decisions, this “cherry-picking” of study outcomes may also make those decisions non-optimal and create