Michele Avissar-Whiting – Editor in Chief, Research Square

By Tom Gilson (Associate Editor, Against the Grain) <gilsont@cofc.edu> and Katina Strauch (Editor, Against the Grain) <kstrauch@comcast.net>

ATG: Michele, you joined Research Square after finishing your postdoctoral work on cancer epigenetics at Brown University. Given your background and training, what was it that you found compelling about working for Research Square?

MA-W: I joined Research Square when it was still only known as American Journal Experts, or “AJE,” which was — and still is — a company dedicated to easing barriers to publication for researchers whose mother tongue is not English. I found that mission really appealing and also liked that the company was engaged in a number of other pursuits related to scholarly publishing, such as exploring mechanisms for journal-independent peer review. As a researcher, the part of my work that I found most satisfying was connecting my findings with the rest of the literature, tying together a narrative through prose and visuals. I invested a lot of time and effort into creating effective visuals for my own publications, and so I eagerly got involved with the burgeoning Figure Formatting division soon after joining the company. Later, I would help to start up the Video Abstract service and fully entrench myself in the world of science communication, which became a great passion for me.

ATG: Research Square bills itself as “a multidisciplinary preprint and author services platform.” How do your author services and tools work to complement the preprints that are submitted? Can you give us an idea what kinds of support a prospective author can expect after they submit a preprint? Are these services also available to authors who do not submit preprints?

MA-W: I like to think of our platform as a new category in publishing: the intersection of manuscript preparation, preprinting, and post-publication assessment. Our most developed and most popular services on the platform are AI-based digital language assessment and automated editing tools, which are available to authors as soon as they upload their manuscript to our platform. Authors get to see their scores before and after editing and have the option of downloading the edited file. We also offer professional assessments for methodological and data reporting, where our in-house experts check applicable sections of a paper for items related to reproducibility and transparency. Authors are given detailed feedback about which items are missing, and those who successfully pass these assessments can earn badges for their preprints to publicly signify their papers’ adherence to these standards. All of this can occur before the preprint is posted. Or, if the author would rather not wait, a revision can be posted once they make the necessary changes. The hope is to empower authors to put their best foot forward while sharing research on their own terms. Right now, our services are tightly coupled with preprint posting, but this won’t always be the case. Keep watching us. Exciting developments are coming to our platform in 2022!

ATG: In a Scholarly Kitchen article dated June 3, 2021, you championed preprinting *as part* of the journal publication process. Can you elaborate? What role does preprinting currently play? What role should it play? What do you predict for the future of preprints?

MA-W: More and more, we are seeing journals and publishers not only adopt permissive policies with respect to preprints, but fully embrace them as part of the publication process. Elsevier has SSRN, Wiley has Authorea, and Springer Nature — of course — has partnered with Research Square to offer a streamlined preprint deposition service that is integrated into the submission process. Publishers are increasingly acknowledging the importance of rapid dissemination for speeding up the pace of discovery. Reputable publishing houses understand that supporting preprints does not undermine the most important functions of the journal, which are to provide validation, endorsement, and curation. In fact, decoupling dissemination from peer review means that the emphasis on “time-to-first-decision” can be somewhat relieved, giving editors and reviewers the space to do the important work of assessing the manuscript. Having the preprint publicly available also means there are many more eyes on it, which can only stand to make an editor’s job easier. Many of the people who find the preprint organically — researchers who are intimately familiar with the topic — are the ones best positioned to critically evaluate it. These people can catch problems the reviewers or editor may easily miss and leave a comment or send an email to the author, editor, or preprint server. In short, scholarly publishing has been among the slowest industries to evolve in the digital age, and preprints are leveraging the benefits of the Internet in a way that has been long overdue. The future will see preprints go fully mainstream as both funders and major publishers lean in further. What we could see in a preprint-first world is the decommodification of the scientific article. This has the potential to change — for the better — the problematic incentive structures in academia and align them around rigor and transparency as opposed to volume or deeply skewed notions of “impact.”

ATG: As you know, some people have expressed concern that preprints can be mistakenly cited as accepted research, possibly leading to misinformation and false results. Are the preprints on the Research Square platform subjected to any kind of quality control that might help diminish the possibility of this occurring?

MA-W: We place a huge amount of emphasis on screening here. We have a dedicated team of screeners who follow detailed protocols to ensure that pseudoscience or potentially harmful or unethical research does not get shared on our platform. It isn’t always an easy call though. Particularly during the pandemic, we’ve had to weigh the importance of disseminating a certain finding against its potential to create controversy or panic. We have a process for escalation and consultation specifically to deal with submissions like these. One approach that I started to take in 2021 was to work with authors to draft Editorial Notes to accompany reasonable but potentially controversial or alarming preprints. The goal is to explain the findings in plain language and acknowledge the study’s limitations upfront. This approach seems to have worked remarkably well to deflect misinterpretation or misuse of the preprint. Through that process, I can also get reassurance for myself about an author’s motives — it speaks volumes when a researcher is happy to discuss the caveats of their findings and acknowledge the potential for controversy. That said, there certainly have been a few instances where a preprint that seemed reasonable at the point of submission was later found to be horribly flawed, or in the worst cases, fraudulent. There isn’t much that a preprint server or a journal can do to completely avoid these situations, but I like to think that having the preprints openly available allowed these problems to be discovered and addressed much faster than they would have been under a traditional publishing system. We know from experience how long it can take for work to be retracted from a journal, even in the face of overwhelming evidence. In contrast, we have withdrawn preprints within days of being faced with incontrovertible evidence of issues such as data manipulation or fabrication.

ATG: Given your emphasis on screening, we wonder who are the people responsible for making sure that nothing unethical or harmful gets on your platform? What are their qualifications? And who makes the final determinations as to whether a preprint gets posted or not? Can you tell us what your rejection rate is?

MA-W: Our screening is completed by a small team of full-time employees who either have advanced degrees in an academic discipline or years of experience performing similar quality-control screening for journals. Screeners undergo a training regimen, follow specific protocols, and have multiple outlets to discuss and seek advisement on specific issues as they arise. In addition, we have regular meetings to ensure that our standards for screening remain calibrated and to ensure that everyone is aware of new controversies or stories that may make their way into our sphere. Our screeners may reject obviously problematic or out-of-scope submissions outright or send them for consultation with me if they are not certain of the best action. Sometimes, a request for consultation leads to further internal discussion regarding whether we should proceed with a submission. For direct submissions, our rejection rate averages 31%, most of which are scope-related rejections. For submissions coming in via a journal (and therefore after a QC process by the journal), the rate of rejection is much lower — around 4%.

ATG: From your experience, are there categories of preprints that are especially susceptible to possible misinformation and false results? Does Research Square monitor those categories on its platform in any way?

MA-W: It’s hard not to point to COVID-19 here, but at some point it becomes a game of probabilities. More research has been published on this virus in the span of two years than was published on all other viruses collectively in the last century. And the speed with which the papers were being generated, shared, and reviewed was unlike anything we’ve experienced before. So, naturally, these papers are more subject to mistakes and false information. Among the torrents of COVID-19 research, I think the research around potential treatments turned out to be uniquely problematic. A big part of the problem was randomized controlled trials being published without their accompanying patient-level data, hiding egregious analytical errors and even intentional manipulations. Most journals do not require data deposition let alone conduct reviews of those data, and that would be far outside the scope of preprint screening for even the most judicious server. However, after having to withdraw a few problematic RCTs, we did start to consider data availability statements and the inclusion of data for studies asserting treatment benefits. I even added a line to our editorial policies advising that under certain circumstances, a preprint may be rejected for failing to include openly accessible data.

ATG: Preprints have been in the news a great deal of late — sometimes it is positive, and other times, negative. Is there any particular factor(s) that is influencing this newfound attention? Has Research Square kept track of these various ups and downs in the news coverage? Have you detected any useful trends?

MA-W: The driving factor is simply that preprint servers are increasingly becoming the first point of entry for new research, and journalists have eagerly picked them up despite the disclaimers that warn against reporting on them as established findings. It’s not reasonable to expect science writers to hold off on reporting on a preprint, particularly during a time of such great volatility as we’ve been living through these last two years. In my view, however, it can be — and has been — done responsibly. We provided guidance for the media on our platform early on in the pandemic. What it boils down to is this: Reporters should provide a link to the source, they should clearly indicate that it is a preprint and has not yet undergone peer review, and they should always seek consultation from an unbiased subject-matter expert to ensure that they’ve appropriately represented the topic. Honestly, I think that is a critical step for the coverage of any scientific article; journal articles are not immune to misinterpretation or misrepresentation. In fact, I think preprints have been unfairly maligned simply for being first on the scene. We can’t rerun the simulation to learn how the pandemic would have played out without this mode of rapid sharing, but my guess is that its absence would have been a significant net negative in terms of death and suffering.

ATG: In a recent interview, you said that “we’re witnessing the very beginning of a fundamental shift in the way that research is published.” When all is said and done, where do you see this fundamental shift taking us? What will scholarly publishing look like when this shift runs its course?

MA-W: To expand on some of what I said above, if support for preprints — particularly by funding organizations — expands, we are likely to see scholarly publishing transform radically. Given the level of acceptance publishers have already shown — with greater than 80% of journals accepting preprinted articles — it’s not difficult to imagine this practice becoming the norm. The idea of holding back dissemination will start to seem antiquated. eLife is probably the best exemplar of this shift among life science publishers: Last year they signaled their unequivocal support for the post-publication peer review model by requiring preprint deposition for all eLife submissions. I expect that we will see more journals move in this direction in the near term. Longer term, my hope is that this naturally leads to greater parsimony in the form of overlay journals and other models that allow assessment and endorsement (by journals, societies, and other organizations) to occur in place on preprint servers.

ATG: Michele, thank you so much for making time in your schedule to talk to us. We’ve really enjoyed it.