Transparency in Peer Review

A Conversation with Amy Brand (Director of MIT Press) and Jessica Polka (Executive Director of ASAPbio) by Catherine Cocks (Editor-in-Chief at Michigan State University Press)

Feeding the Elephant: A Forum for Scholarly Communications


When I first heard of the Peer Review Transparency project in 2016, it made no sense to me. How could you be transparent about a confidential process? The answer is that the goal is to make transparent how the peer review was conducted. (Though some do advocate dispensing with anonymous reviews; we’ll talk about that in a later post.) Did the reviewer know who the author was or not? How many reviewers were there, and did they read each other’s reviews? Who managed the review process, an editor employed by the publisher or by some other organization? What was reviewed—a proposal, an article, a monograph, a dataset? Are the reviews themselves kept confidential, or are they published (with or without the reviewers’ names) alongside the work reviewed? These questions represent just a few of the many permutations of peer review.

The impetus for opening up the black box of peer review comes from a variety of sources. The rise of open-access publications and publishers, along with the author-pays model, has made issues of credentialing more visible and urgent than in the era of print-only, user-pays publishing. And we live in a moment when expertise is being challenged on many fronts. As a report for the Peer Review Transparency project puts it, “Scholarly communication, in the end, is the accredited review and promulgation of scholarship; the value a peer review process confers on a work and its author(s) is increasingly important in a time that truth claims are being challenged and corroded, and facts are open to contestation” (14). [“Transparency in Standards and Practices of Peer Review: Report of a Stakeholders’ Workshop and Recommendations for Action,” May 2018]

Also contributing to calls for transparency are the shortcomings of the existing peer review system in a world of proliferating journals and highly profitable commercial publishers. This problem is more acute for STEM researchers than for those in the arts, humanities, and social sciences, so it’s not surprising that scientists have led the way in criticizing the peer review process for being exploitative and time-consuming. Even more important, it’s not clear that peer review is doing its job—vetting the accuracy and quality of scholarship in a fair and consistent way. A concern shared across the disciplines is whether peer review reproduces the existing gender, racial, and other inequities within the academy.

The Peer Review Transparency project is just one of several initiatives trying to develop a system for summarizing the kind of peer review performed and making it visible to readers of the published work. In the European Union, HIRMEOS has developed a questionnaire for this purpose, and ASAPbio, an organization promoting transparency in scholarly communications in the life sciences, is pursuing similar work. To get a quick primer on these projects, I talked with Jessica Polka, the executive director of ASAPbio, and Amy Brand, the director of MIT Press and one of the co-founders of the Peer Review Transparency project.


People within a discipline might know what to expect from journals within their field, but that inside knowledge is not necessarily available to researchers doing interdisciplinary research.


Catherine Cocks: Tell me a little bit about the ASAPbio and Peer Review Transparency project initiatives.

Jessica Polka: ASAPbio, together with other collaborators, is participating in an effort in the sciences to track peer review processes across publishers and journals. Together, we launched Transpose, a searchable online database of journal policies on peer review. The Transpose team also conducted a landscape survey of the most highly cited journals across varied disciplines to determine their peer review policies. In many cases, these policies were not stated anywhere easily accessible, or were not articulated at all. People within a discipline might know what to expect from journals within their field, but that inside knowledge is not necessarily available to researchers doing interdisciplinary research.


We need a taxonomy to signal to readers what kind of peer review happened at which stage of a project.


Also, peer review happens at different stages of a project (preregistration, pre-publication, and post-publication), and it may or may not involve interactions among peer reviewers. We need a taxonomy to signal to readers what kind of peer review happened at which stage of a project.
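To make the idea of such a taxonomy concrete, here is a minimal sketch in Python of how these signals might be encoded as structured, machine-readable metadata. The field names and vocabulary are invented for illustration only; they are not the actual taxonomy of ASAPbio, Transpose, or the Peer Review Transparency project.

```python
from dataclasses import dataclass
from enum import Enum

class Stage(Enum):
    PREREGISTRATION = "preregistration"
    PRE_PUBLICATION = "pre-publication"
    POST_PUBLICATION = "post-publication"

class IdentityModel(Enum):
    DOUBLE_ANONYMOUS = "double-anonymous"  # neither author nor reviewer identified
    SINGLE_ANONYMOUS = "single-anonymous"  # reviewer knows the author's identity
    OPEN = "open"                          # both parties identified

@dataclass
class PeerReviewRecord:
    """Illustrative, machine-readable summary of how a work was reviewed."""
    stage: Stage                 # when in the project's life the review happened
    identity_model: IdentityModel
    reviewer_count: int
    reviewers_interacted: bool   # did reviewers see each other's reviews?
    reports_published: bool      # are the reviews posted alongside the work?

# Example: a pre-publication review by two reviewers, double-anonymous,
# with no interaction among reviewers and confidential reports.
record = PeerReviewRecord(
    stage=Stage.PRE_PUBLICATION,
    identity_model=IdentityModel.DOUBLE_ANONYMOUS,
    reviewer_count=2,
    reviewers_interacted=False,
    reports_published=False,
)
print(record)
```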

Amy Brand: The Peer Review Transparency project began with an initiative launched by Amherst College Press, Lever Press, and MIT Press to study peer review practices across a range of scholarly publishers and to investigate the possibility of creating an easily readable visual signal indicating the form and substance of the review. The partnership received a grant from the Open Society Foundations to convene a workshop in early 2018 to discuss creating a taxonomy of types of peer review that could be applied across both books and journals. We came up with a system of icons inspired by those used to summarize Creative Commons licenses. [See “Transparency,” appendix 2.]

JP: Several organizations are interested in these areas, and we would like to encourage further conversation to harmonize all the efforts. For example, HIRMEOS in Europe, working with the Directory of Open Access Books, developed a questionnaire on peer review types. There is also a JATS4R working group dedicated to developing XML tags for peer review materials. Crossref [a service that makes research outputs easy to find, cite, link, and assess] has also developed metadata to describe peer review materials.

AB: One of the ongoing discussions concerns certification. How can we be sure that participating publishers are in fact doing the kind of peer review they say they are? Predatory publishers that harvest author fees and publish without peer review are a serious problem. But I prefer not to have an enforcement process; other groups, such as HIRMEOS, are developing certification systems. 

CC: Has anyone started implementing the PRT [Peer Review Transparency] standards?

AB: Coming out of the 2018 workshop, we still have a lot of work to do to develop the taxonomy and spread the word. However, we are seeing an increase in transparency about peer review practices. MIT Press has put its policies on its website. Publishers are talking with Crossref about this effort as well; Crossmark [a service that tracks the current status of a work, such as updated or retracted] now captures richer metadata, including information about peer review. It can even include the reviews themselves if they were generated by an open review process.

CC: Are scholars aware of these efforts?

AB: So far, we’re mainly discussing them with our acquisitions editors, not scholars.

JP: In the life sciences (the fields ASAPbio serves) there is a great deal of interest among scholars. The Transpose database contains records for nearly 3,000 journals and can be searched according to various peer review types (open reports, open participation, etc.). It also allows users to perform a side-by-side comparison of up to three journals. We hope that this will enable authors to better identify journals with the policies they’d like to support. As a result of these efforts, journal editors are being prompted to codify their practices.
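As a rough illustration of the kind of search and comparison described above, here is a hedged Python sketch that runs over a handful of made-up journal records. The field names, values, and journal names are invented; they do not reflect Transpose’s actual schema or contents.

```python
# Made-up journal policy records; fields and values are invented for
# illustration and do not mirror Transpose's actual schema or data.
journals = [
    {"name": "Journal A", "open_reports": True, "open_participation": False},
    {"name": "Journal B", "open_reports": False, "open_participation": False},
    {"name": "Journal C", "open_reports": True, "open_participation": True},
]

def search(records, **criteria):
    """Return the records whose policy fields match every criterion."""
    return [r for r in records
            if all(r.get(key) == value for key, value in criteria.items())]

def compare(records, names):
    """Print a side-by-side comparison of up to three named journals."""
    chosen = [r for r in records if r["name"] in names][:3]
    print("policy".ljust(20), " | ".join(r["name"] for r in chosen))
    for policy in ("open_reports", "open_participation"):
        row = " | ".join(str(r[policy]).ljust(9) for r in chosen)
        print(policy.ljust(20), row)

# Find journals that publish reviewer reports, then compare two of them.
print([r["name"] for r in search(journals, open_reports=True)])
compare(journals, ["Journal A", "Journal C"])
```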

For studies of the impact of greater transparency, including publishing peer reviews alongside the work evaluated, see ASAPbio’s peer review FAQ.


...peer review is a precious and scarce resource.


CC: Have you encountered any criticism or resistance to increasing transparency in peer review?

AB: Not as such, but some confuse the call for transparency with a call for open review, or they think there will be some kind of enforcement of specific forms of peer review. That isn’t what the Peer Review Transparency project aims to do. No one rejects the idea of tracking the type of review conducted. What we need to communicate to scholars is that peer review is a precious and scarce resource. We need to use it to greatest effect. Many scholars complain that peer review as it’s practiced now is not fair or impartial; that it often involves cronyism; that it’s time-intensive; and that the results are variable.

CC: Being open about how peer review was conducted won’t necessarily reduce cronyism. As an acquisitions editor, I could still be asking the same small group of people to review for me, whatever form the review took.

AB: No, but open review would. Or look at the latest iteration of Plan S [a European plan to make all publications resulting from publicly funded research open access by 2021], which suggests tracking the number of potential reviewers an editor reached out to before signing up the people who did the review. 

JP: It’s true that metadata showing what type of peer review was done won’t necessarily fix all the problems with the process, but it would make a verifiable claim about what the process was. Frankly, the bar is so low, any additional transparency would help.


Resources

In addition to the links provided above, ASAPbio provides links to six recent studies of peer review, and we list some other helpful resources here.


Questions for Feeding the Elephant Readers

Does the journal you edit, or do journals that you contribute to, offer a clear statement of how the peer review process works? On the book side, do the publishers you work with provide clear explanations of their peer review process on their website? I have to admit that MSU Press doesn’t—yet. We’re working on developing such a statement for both our books and journals.

What concerns do you have about peer review? How might the scholarly community, including scholars and publishers, address those concerns?


In our next Feeding the Elephant post, we’ll look at peer review, diversity, and equity. Stay tuned!

Thank you for this excellent post. Who hasn't been frustrated at times by the opaque nature of most peer review processes! I've often wondered whether there wasn't some space for H-Net networks to play a role in improving the transparency of the review process within their respective fields, perhaps by soliciting data on time to publication, time to rejection, etc., from journal editors and storing this info on a dedicated webpage. I'll note that the Political Science Rumors website, although itself an often toxic space, has a self-reporting feature for the review processes in the field (https://www.poliscirumors.com/journals.php). It's an interesting idea, although one that might have problems with skewed data. 

Thanks, David! I like your idea of enlisting H-Net (and maybe other professional organizations) to collect or host data on peer review processes. I encourage anyone interested in this issue to check out the report of the Peer Review Transparency project. The Committee on Publication Ethics (https://publicationethics.org/) also offers guidance for journal editors on making their journal's peer review policies and practices more transparent.

Thank you very much for your reply, David! I've added the Political Science Rumors website to the Peer Review Resources page as a reference tool and a resource that we can draw further conversation from.