Is speedy delivery always worthwhile?

Margaret DeLacy Discussion

Friends:

On July 8, 2020, the Scholarly Kitchen blog posted a guest column by Joseph DeBruin, the head of product management at ResearchGate, entitled "The Covid Infodemic and the Future of the Communication of Science."

DeBruin gives several examples of the way the fast dissemination of scientific communications about COVID-19 was both innovative and potentially dangerous, and comments that:

"speed and uncertainty in science are two sides of the same coin . . .  the trends we see in Covid are indicative of a lasting shift rather than a temporary blip. Nonetheless, there are big steps forward we can take towards mitigating the risk associated with limited validation while maintaining speed, especially with the help of modern technologies."

You can read the full post at

https://scholarlykitchen.sspnet.org/2020/07/08/guest-post-the-covid-infodemic-and-the-future-of-the-communication-of-science/

1 Reply

Contributed by Sandra Ham, acting as subscriber.

I comment here on the Scholarly Kitchen article by Joseph DeBruin, Head of Product Management at ResearchGate, and on the online comments posted there on July 8-9 (https://scholarlykitchen.sspnet.org/2020/07/08/guest-post-the-covid-infodemic-and-the-future-of-the-communication-of-science/). The commenters offer opinions about what peer review in science can and cannot do that are representative of the views of some scientists and many laypeople. I find that these opinions reflect unrealistically optimistic misunderstandings of peer review.

DeBruin writes about the communication of science by scientists and the media. He notes the need to balance peer review, which mitigates the risk arising from uncertainty in science, against speed of publishing, particularly in the context of preprints about COVID-19. Misunderstandings may occur when the media and the lay public take up early or raw scientific research that was written for other scientists in a relatively small community. Lacking the proper context for the research, they sometimes misinterpret it, and they sometimes consume low-quality research without knowing it. Although the article itself is about scientific communication and not really about what peer review can and cannot do, some commentators took up the topic of peer review, and I do as well.

Three commentators at the bottom of the article write about the process and value of peer review as a necessary step that purportedly ensures that published, peer-reviewed articles communicate truth. Lanfear points out a fundamental difference in the uncertainty of preprints versus peer-reviewed journal articles, calling for disclaimers on preprints so that scientists can avoid setting up lay readers to misinterpret the science in preprint articles. He says that "every significant claim in a preprint is basically hypothetical (i.e. not confirmed by peers)." On this view, peers are needed to transition claims from hypotheses to truth. Forsdyke comments on the need to make the tasks of peer review easier, particularly the burdensome task of checking the references cited in a journal manuscript and what post-publication readers have said about them. Dayal is optimistic about the truth-telling nature of science, whether through preprints or through accelerated peer review: "The moderation during preprint submission takes care of useless pieces of information on most occasions, the open forum for comments also helps reduce misinterpretation, and finally the ultimate consumer who is either a clinician or researchers does have the right mind to see through the truth." Together these commentators reflect beliefs in their respective scientific communities that are also commonly held among laypeople: that peer review confirms the findings of others, that a key task is checking cited references, and that science conveys truth after screening and correction by online moderators, commentators, journal editors, peer reviewers, and readers, such that clinicians, researchers, and others can always identify truth in the text.

As a statistician with 20 years of experience publishing scientific research, I focus on the factors that distinguish good science from shoddy science, and, being more realistic, I do not grant peer review the high status that many scientists and laypersons give it. Beginning with the first claim, that peer reviewers confirm scientific claims and thereby transform them from mere hypotheses into truth, I give several reasons why peer reviewers never "confirm" the findings of other scientists.

The difference between good science and shoddy science is that good science uses good study designs, good data collection methods, and reliable and valid indicators. The data are then managed well, analyzed appropriately, and interpreted reasonably, neither under-interpreted (a minor sin) nor over-interpreted (a more serious sin), and integrated with the literature to arrive at a generalized conclusion about the larger issue at hand. In good science, the authors clearly state all of the methods needed to replicate the study and all of the important factors that could affect the interpretation of the data. Shoddy science falls short on some of these counts because of problems with the study design or its execution, a rush to submit an underdeveloped manuscript, weak thinking, weak writing, or ignorance on the part of the coauthors. This is equally true of preprints and peer-reviewed articles.

So which of these criteria do peer reviewers "confirm" in their assessment? They can agree with a study design, a choice of indicators, and the data collection and analysis methods. They can determine whether sufficient detail is given about the methods for other researchers to replicate the study with other subjects. They can assess whether the analytical methods are appropriate and comprehensive for a given topic and study design. But reviewers cannot know whether an important factor affecting the study design, its execution, or its analysis was hidden from view and omitted from the write-up. Nor can they know how the data were actually collected and managed, where sloppiness can introduce error and bias. It is nearly impossible for any reviewer to discern whether a data analyst made an error in the data management algorithms and failed to find it by not checking computed variables against the raw data--a common occurrence, especially under time constraints and when analysts are impatient with big data.
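To make that last point concrete, here is a minimal sketch of the kind of raw-data check an analyst can run but a reviewer never sees. It is written in Python with the pandas library, and the dataset and variable names are hypothetical, chosen only for illustration:

    import pandas as pd

    # Hypothetical raw records: minutes of activity reported per subject per day.
    raw = pd.DataFrame({
        "subject_id":     [1, 1, 2, 2, 3],
        "day":            [1, 2, 1, 2, 1],
        "minutes_active": [30, 45, 0, 60, 20],
    })

    # Computed variable: mean minutes of activity per subject.
    derived = raw.groupby("subject_id")["minutes_active"].mean()

    # Checks of the computed variable against the raw data -- the step that is
    # easy to skip under time pressure and that is invisible to a peer reviewer.
    assert len(derived) == raw["subject_id"].nunique(), "expect one value per subject"
    assert derived.between(raw["minutes_active"].min(),
                           raw["minutes_active"].max()).all(), \
        "a per-subject mean cannot fall outside the raw range"

    # Spot-check one subject by hand against the raw records.
    assert derived.loc[1] == raw.loc[raw["subject_id"] == 1, "minutes_active"].mean()

A reviewer reading only the manuscript sees the derived means, not these checks; whether they were ever run is entirely a matter of trust in the authors.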

Aside from the heart of any research study--its data--astute reviewers can judge the reasonableness of the interpretation of the findings and the clarity of the writing. Less astute reviewers may overlook the omission of important items from the methods, or omissions disguised by vague and ambiguous writing--both are common occurrences. Contrary to common belief, reviewers never have their hands on someone else's data as part of the review process, nor do they ever replicate a study from scratch as part of their duties. Peer reviewers therefore never "confirm" the findings of someone else's study; they fall far short of "confirmation."

Claim two is about the value of checking references. Most reviewers should be able to judge how well the findings were integrated with the literature to draw sound, logical conclusions about the larger question at hand. The literature cited should reflect the current state of the discussion within the scientific community working on the problem, and it should not focus on a tangential issue or a controversial paper without a good argument for why readers should care. These are the important factors related to references. Reviewers can check that claims made about the literature are accurate interpretations and that the citations (titles, authors, page numbers, etc.) are correct. However, scientific articles commonly cite 30-100 references, and each reference might contribute minor support to one or two claims by the citing authors, so each citation carries relatively little weight by itself. References are thus more important as sources of context with which to integrate the present study's findings than as a source of important claims for peer reviewers to reckon with, as Forsdyke suggested. I fail to see how checking the cited references makes much difference in peer review. Cited references tie a study to a community of researchers, a research question, and the study's conclusions, but they do not affect the quality of the research itself.

A thorough review of a mediocre paper that has the potential to contribute value to the literature can take eight hours and several passes through the manuscript, each pass bringing new weaknesses and logical problems to light. A polished manuscript might take two hours to read, evaluate, and write up for a journal editor. When editors push reviewers to turn papers around faster, the reviews can be more superficial, missing important critiques of mediocre papers that would come to light with a deeper, more thoughtful evaluation.

Considering all of the factors involved in conducting, writing, and reviewing a scientific study, I do not regard peer review or moderation as a mechanism that can reliably separate good science from shoddy science, as is commonly believed. In addition to the factors described above, authors can, for whatever reason, present mediocre data in a positive light by selectively omitting findings or discussion from a write-up and by stretching their interpretation with positive adjectives that are not quite warranted. Even after the research-integrity scandals of a few years ago, this behavior remains very common and will probably continue to be, because scientists are human beings with egos and careers to protect and promote. While researchers may compete to be the first to publish an important finding, tenure committees, collaborators, funders, and department heads are all interested in productivity and return on investment in research. In addition, journal word limits often force decisions about what information to include in the write-up and what to omit. All of these constrain what peer review can do.

The final claim in the comments on the Scholarly Kitchen article regards science as the criterion for human knowledge of truth. It is certainly not true that, as Dayal claims, "Science has an inbuilt filter that ultimately separates out truth from untruth." I hope to have shown how much untruth can be hidden in a scientific journal article. This is even more true of papers that "need" to be published because the researchers have invested so much time and so many resources in them that they will do whatever it takes to get them published--even when the results are null or negative.

The bottom line is that peer review is an essential component of the scientific process today, but it cannot "confirm" the veracity of research findings. Moreover, while reviewers can identify and correct weaknesses born of carelessness or ignorance, the process often fails to uncover the potentially more consequential weaknesses that authors wish to hide.