Learning Belongs

Column Editor: Dr. Sven Fund (Managing Director, Fullstopp GmbH, Maximiliankorso 66, D-13465 Berlin, Germany; Phone: +49 (0) 172 511 4899) <sven.fund@fullstopp.com> www.fullstopp.com

Over the past 20 years or so, academic publishing has come a long way. It has developed from a profession of mainly elegant, cosmopolitan, white men with a rather slow pace of innovation into a digital, highly consolidated, and increasingly profitable business. The fast-paced digitization of products and processes has removed much of the romanticism attached to scholarly publishing and replaced it with growing efficiency and transparency. Despite all that, however, one of the core processes of the industry remains surprisingly non-digital. This element has so far managed to resist the impetus of the commercially minded CEOs of large international publishing conglomerates. Indeed, it seems that in order to be considered professional, it had to be non-transparent and essentially hidden from the public eye. Most surprisingly, almost all players involved (research funders, scholarly communications officers, and publishers alike) have apparently not cared much about it, nor could they even agree on its status quo, in an ecosystem where there is little agreement on anything.

I am talking about peer review, the central mechanism of quality assurance. Performed by researchers anonymously, it is one of the few remaining myths in what is otherwise by now a fairly rational industry. It is also a pretty irrational one: reviewers invest countless hours in reading, improving, or rejecting their colleagues’ articles and books. And they do not act alone: in many cases, two or even three reviews are needed to make sure an article meets the standards of a journal or a series. Independent research has put numbers on this effort: in their 2021 paper “A billion-dollar donation: estimating the cost of researchers’ time spent on peer review,” Aczel et al. estimate that the labor time that went into peer review in 2020 was worth 2.5 billion U.S. dollars annually.
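As a rough sketch of how such an estimate is constructed (review volume times effort times the value of researchers’ time), consider the following back-of-envelope calculation. All figures here are illustrative assumptions of my own, not the numbers used by Aczel et al.:

# Back-of-envelope sketch of a peer-review cost estimate.
# All figures are illustrative assumptions, not those of Aczel et al. (2021).

reviews_per_year = 15_000_000  # assumed number of review reports written annually
hours_per_review = 6           # assumed average reviewer effort per report
hourly_value_usd = 30          # assumed average value of a researcher's working hour

total_hours = reviews_per_year * hours_per_review  # 90 million hours
labor_value = total_hours * hourly_value_usd       # 2.7 billion USD

print(f"{total_hours/1e6:.0f} million hours, ${labor_value/1e9:.1f}bn per year")

Even with placeholder inputs of this kind, the order of magnitude lands in the billions, which makes the cited figure plausible.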

As critics of the scholarly publishing system are often keen to point out, the societal costs of peer review are usually borne by the institutions the researchers work for, not by the publishers who benefit from it financially by turning the intellectual work into a commercial product. It comes as a surprise, therefore, that even under the new realities of open science, which have removed much of the financial opacity of scholarly publishing, the subsidy paid in the form of peer review has rarely been a subject of debate.

Since the beginning of digitization, publishers have invested significantly in peer review systems. This helped, first of all, their own organizations to deal with ever-increasing numbers of articles submitted for publication. If in doubt about whom these systems are made for, just ask a researcher how customer-centric any of them are…

Peer Review Fatigue

Despite the deployment of such systems, the central interests of researchers in peer review remain unaddressed. Indeed, the scholarly publishing ecosystem relies on a huge volunteer effort by hundreds of thousands of researchers every year. With academic publication output growing faster than the number of researchers, and in an increasingly competitive research environment that still centers on publication volume and quality, academics face a tough choice: should they continue to provide largely invisible peer review for their profession, or take the time to write their own articles, which will be visible to their colleagues?

As if this were not already enough cause for concern, the seismic shift in publishing from established (paywalled) journals to open access publications by fast-growing new market entrants poses a major challenge to both editorial boards and the commercial interests of publishing houses: reviewer preferences need to change as quickly as publication preferences. That is unlikely to happen. OA publishing volume is driven directly by the ease of funding (through transformative agreements) and by the direct benefit to each individual author, whose work gets viewed far more than paywalled content; reviewing offers no comparable pull. The stability of behavioral patterns in the researcher community makes change unlikely unless obvious incentives are provided.

The severity of the issue is demonstrated by the various attempts to automate peer review. I would argue that while automation addresses the issue from the commercial perspective of publishers, it leaves the reviewer completely out of the picture. And while automation and technology will be useful for certain elements of peer review, I very much doubt that algorithms will be able to resolve the many challenges involved.

“Funders, together with universities and other higher education institutions, will need to play a pivotal role in finding a sustainable approach to peer review.”

Homo Economicus and Homo Academicus

There is no doubt that the system of peer review is on the brink of collapse. Certain claims by some stakeholders are helping to deepen the crisis. One of them is that peer review is only high quality and independent if it comes for free. This concept is obviously being promoted by those who have an interest in not paying for what they get, as it increases their profit.

The much more central issue, that of making peer reviewing part of the academic record and hence giving it a non-monetary value, does not seem to be a key consideration for publishers.

Psychology teaches us that human beings feel rewarded by different stimuli. In research, recognition of one’s work is a core value. Researchers working within this incentive system can be described as representing the “homo academicus”: they strive first and foremost to advance their careers within their discipline, which is why they spend long hours in labs and libraries, at scholarly conferences, or at their desks.

At the same time, most researchers would agree that their drive for maximum visibility is not purely altruistic. If they are successful, their research translates into economic success, both as funding for projects and research groups and for themselves personally. Climbing the career ladder, securing a position at a top university, and increasing their earnings all depend on how their work is perceived by their colleagues.

Quality assurance through peer review will remain at the heart of the publishing process, and publishers need to take steps to keep this vital element intact. In their own interest, they need to increase the mobility of reviewers and abandon their traditional notion of “owning” them. In return, they can trust that transparency will give them sufficient access to researchers willing to review.

Researchers, who in many countries are well trained by their institutions and by publishing houses, need to be provided with an infrastructure in which they can decide what to review, and under which conditions.
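As a minimal sketch of what such an infrastructure could record for each reviewer (a purely hypothetical data model; field names and values are illustrative, not drawn from any existing system):

# Hypothetical reviewer profile for a review-matching infrastructure.
# All field names and values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ReviewerProfile:
    name: str
    expertise: list[str]           # self-declared, specific topics
    max_reviews_per_year: int      # workload cap set by the reviewer
    output_types: list[str]        # e.g., journal articles, book chapters
    conditions: dict[str, bool]    # terms under which the reviewer will work

profile = ReviewerProfile(
    name="Dr. Example",
    expertise=["battery electrolytes", "solid-state ionics"],
    max_reviews_per_year=6,
    output_types=["journal article"],
    conditions={"open_identity": False, "credit_on_record": True},
)

# A matching service would forward only those requests that satisfy
# the expertise, workload, and condition constraints declared above.

The point of such a record is that the reviewer, not the publisher, sets the parameters, and that declared expertise can be far more specific than keywords harvested from old publications.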


Agora of Peer Review

How can this agora of quality assurance be created? A few companies are trying to tackle the issues surrounding peer review. Most of them strive to create visibility for reviewing and to include it in researchers’ CVs and on institutional websites. Some, like Reviewer Credits, are experimenting with economic rewards alongside incentives aimed at the homo academicus: reviewer credits can be spent on discounts for language editing, on book vouchers, or on other services a researcher needs when publishing. These experiments address the central challenges peer review is facing.

Technology not only enables publishers to reward researchers acting as reviewers far better than at any time in the past; publishers themselves would also benefit from such an approach. Even more important, it seems to me, is the gain in efficiency that potential reviewers would enjoy. They would no longer be spammed with requests to review content, many of which are inappropriate because they fall outside their specialty. They could calibrate their expertise in a database far more precisely than through a few keywords taken from their own articles, published (and written!) a long time ago. And they could define their own set of parameters for the reviews they are willing to take on.

The self-serving assumption mentioned above, that for reasons of integrity peer review has to be free of charge for publishers, has proven to be a very effective barrier to any timely change in the way peer review is carried out. No doubt all technical innovation will remain piecemeal unless the ecosystem responds to researchers’ demands at a higher level.

In my view, only the academic system, and not publishing, can make the move needed to change attitudes about, and hence structures in, peer review. Funders have been instrumental in making open access the dominant publication model in terms of article volume and innovation. They have done so through strong mandates requiring funding recipients to publish open access, while at the same time simplifying compliance with the rules, in the case of OA by concluding transformative deals with publishers.

Funders, together with universities and other higher education institutions, will need to play a pivotal role in finding a sustainable approach to peer review. Alongside research and scholarly output, peer review as a practice must become part of the academic record. It is dangerously shortsighted to look mainly at article output when making academic appointments and to leave review activity out of account, as doing so undermines the other 50% of academic publishing, which has so far been invisible.

A pragmatic approach is needed to introduce a completely new category into the academic record. A first step could be to make the absolute number of peer reviews per year transparent. Providers like Reviewer Credits are already able to supply additional information on the quality of peer review through the Reviewer Contribution Index. At the same time, where economic rewards are introduced, they need to be made transparent to everyone involved in assessing a researcher’s review activity. More work undoubtedly needs to be done in this field, but the tools are already available to make this key process for safeguarding integrity in publishing a far more productive and rewarding one.
