Fixing published research mistakes is not easy; fixing the publishing system may be harder

Articles in peer-reviewed research journals sometimes have mistakes, and a UAB study shows the process to correct such mistakes is flawed.

A commentary published today in Nature suggests that the process for fixing mistakes in peer-reviewed research articles is flawed. The article, written by scientists at the University of Alabama at Birmingham, points out that journals are slow to respond and even slower to take action when questions regarding the accuracy of a published research paper are raised.

The authors say that, in the course of assembling weekly lists of articles on obesity and nutrition, they began to notice a growing number of peer-reviewed articles containing what they refer to as ‘substantial or invalidating errors.’ “What was striking was how severe some of these errors were, involving mathematically impossible values, probabilities greater than one, weight loss results that, if true, would have required that adults had grown over 6 centimeters in height in two months, to name just a few,” said David B. Allison, Ph.D., leader of the research team and associate dean for Science in the UAB School of Public Health.
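
To make the height example concrete: because BMI is weight in kilograms divided by height in meters squared, the weight and BMI reported at each time point jointly imply a height, so a quick recalculation can reveal when the numbers cannot all be true. The Python sketch below uses made-up values, not figures from any paper the authors examined, purely to show the kind of arithmetic check involved.

```python
# Hypothetical sanity check: BMI = weight_kg / height_m**2, so a reported
# weight and BMI together imply a height. If the implied height shifts
# noticeably between baseline and follow-up, the reported numbers cannot
# all be correct (adults do not grow several centimeters in two months).
import math

def implied_height_cm(weight_kg: float, bmi: float) -> float:
    """Height in centimeters implied by a reported weight and BMI."""
    return math.sqrt(weight_kg / bmi) * 100

# Made-up example values, not taken from any published study
baseline = implied_height_cm(weight_kg=95.0, bmi=33.0)   # about 169.7 cm
followup = implied_height_cm(weight_kg=88.0, bmi=29.0)   # about 174.2 cm

print(f"Implied height at baseline:  {baseline:.1f} cm")
print(f"Implied height at follow-up: {followup:.1f} cm")
print(f"Implied 'growth' during the study: {followup - baseline:.1f} cm")
```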

“These errors involved factual mistakes or practices which veered substantially from clearly accepted procedures in ways that, if corrected, might alter a paper’s conclusions,” said Andrew Brown, Ph.D., a scientist in the UAB School of Public Health and co-author of the commentary. “In several cases, our noting these errors led to retractions of the papers containing them.”

Brown says the team attempted to address more than 25 of these errors with letters to authors or journals. Their efforts revealed invalidating practices that occur repeatedly and showed how journals and authors react when faced with mistakes that need correction.

“We learned that post-publication peer review is not consistent, smooth or rapid,” Allison said. “Many journal editors and staff seemed unprepared to investigate, take action or even respond. Too often, the process spiraled through layers of ineffective emails among authors, editors and unidentified journal representatives, often without any public statement’s being added to the original article.”

During their informal 18-month review of the literature, the authors found a number of recurring problems:

  • Editors are often unprepared or reluctant to take speedy and appropriate action
  • Where to send expressions of concern is unclear
  • Journal staff who acknowledged invalidating errors were reluctant to issue retractions or even timely expressions of concern
  • Some journals may charge fees, in some cases exceeding $1,000, to authors who submit corrections of others’ mistakes
  • No standard mechanism exists to request raw data for review to confirm the errors
  • Concerns raised in online forums are easily overlooked and are not linked to the article in question, so its readers may never see them

The authors observed that there is little formal guidance for post-publication corrections. They recommend that journals standardize their submission and peer-review processes, establish clear protocols for addressing expressions of concern, and waive publication fees associated with those expressions of concern.

Further suggestions include creating an environment in which readers’ concerns are addressed rapidly and providing clear information on how and to whom such concerns should be directed.

“We also think it is very important to create an understanding that such expressions of concern are not a condemnation of the work, but should be viewed as an alert that the work is undergoing further scrutiny,” said co-author Kathryn A. Kaiser, Ph.D.

Additionally, the authors recommend that journals and statistical experts work together to identify common statistical mistakes, and that authors and journals be prepared to share data and analysis code quickly when questions arise.

The authors noted common statistical errors in many of the studies, including mistaken design or analysis of cluster randomized trials, miscalculation in meta-analyses, and inappropriate baseline comparisons.
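
One common form of inappropriate baseline comparison is to test each group against its own baseline and treat a difference in statistical significance as evidence of a treatment effect, instead of comparing the groups directly. The Python sketch below illustrates that pitfall with simulated change scores; the numbers and the scipy-based tests are assumptions made for this example only, not an analysis drawn from the commentary.

```python
# Hypothetical illustration: why separate within-group baseline tests can
# mislead compared with a direct between-group comparison of change scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated change-from-baseline scores (e.g., kg of weight lost) for two groups
treatment_change = rng.normal(loc=-2.0, scale=3.0, size=30)
control_change = rng.normal(loc=-1.2, scale=3.0, size=30)

# Flawed approach: one-sample tests of each group's change from its own baseline.
# One p-value may land below 0.05 and the other above it, which is often
# misreported as evidence that the treatment outperformed the control.
p_treatment = stats.ttest_1samp(treatment_change, 0).pvalue
p_control = stats.ttest_1samp(control_change, 0).pvalue
print(f"Within-group p-values: treatment={p_treatment:.3f}, control={p_control:.3f}")

# Appropriate approach: test the between-group difference in change directly.
p_between = stats.ttest_ind(treatment_change, control_change).pvalue
print(f"Between-group p-value: {p_between:.3f}")
```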

The authors acknowledge that their work did not constitute a formal survey and suggest that a more formal, systematic survey is needed to establish whether their experiences are representative of science in general.

“Ideally, anyone who detects a potential problem with a study will engage, whether by writing to authors and editors or by commenting online, and will do so in a collegial way,” Brown said. “Scientists who engage in post-publication review often do so out of a sense of duty to their community, but this important work does not come with the same prestige as other scientific endeavors.”

“Robust science needs robust corrections,” Allison added. “It is time to make the process less onerous.”

Co-authors of the commentary are Brown, Allison, Kaiser and Brandon J. George, Ph.D., of the UAB School of Public Health.