Ten major errors in obesity research discussed

UAB statistician and multi-institution team discuss common statistical errors in obesity research and how to avoid them.

Concerns about rigor in science, particularly obesity research, have been raised in recent years, and a movement is underway to proactively help investigators structure the design and reproducibility of their science. A paper from investigators at the University of Alabama at Birmingham recently published in Obesity identifies several key statistical errors commonly seen in obesity research with discussions on how to identify and avoid making these mistakes.

“Our goal is to provide researchers and reviewers with a tutorial to improve the rigor of the science in future obesity studies,” said Brandon George, Ph.D., statistician in the University of Alabama at Birmingham Office of Energetics. “Investigators who conduct primary research may find the paper useful to read or share with statistical collaborators to obtain a deeper understanding of statistical issues, avoid making the discussed errors, and increase the reproducibility and rigor of the field. Editors, reviewers and consumers will find valuable information allowing them to properly identify these common errors while critically reading the work of others.”

Most notable are errors related to tests of pre-post differences between groups, inappropriate design or analysis of cluster randomized trials, and calculation errors in meta-analyses. Ten of the most common types of statistical errors stem from errors in statistical design, analysis, interpretation and reporting. George and colleagues further discuss ways to identify, avoid and correct such errors when researching obesity.

Ten common errors in obesity research include:

  • Misinterpretation of statistical significance
  • Inappropriate testing against baseline values
  • Excessive and undisclosed multiple testing and “p-value hacking”
  • Mishandling of clustering in cluster randomized trials
  • Misconceptions about nonparametric tests
  • Mishandling of missing data
  • Miscalculation of effect sizes
  • Ignoring regression to the mean
  • Ignoring confirmation bias
  • Insufficient statistical reporting
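The "excessive multiple testing" error in the list above can be made concrete with a short calculation. The sketch below is illustrative, not from the paper: it uses the standard family-wise error rate formula for independent tests and a Bonferroni correction, with function names of my own choosing.

```python
# Sketch: for k independent tests each run at level alpha, the chance of
# at least one false positive (family-wise error rate) is 1 - (1 - alpha)**k,
# which grows quickly with k. A Bonferroni correction (testing at alpha / k)
# keeps it near the nominal alpha.

def family_wise_error_rate(alpha: float, k: int) -> float:
    """Probability of at least one false positive among k independent tests."""
    return 1 - (1 - alpha) ** k

for k in (1, 5, 20):
    fwer = family_wise_error_rate(0.05, k)
    corrected = family_wise_error_rate(0.05 / k, k)
    print(f"{k:2d} tests: uncorrected FWER={fwer:.3f}, Bonferroni FWER={corrected:.3f}")
```

Running 20 uncorrected tests at the 0.05 level gives roughly a 64 percent chance of at least one spurious "significant" finding, which is why undisclosed multiple testing is so damaging.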

“There have been valid critiques of the science in obesity research based on errors in statistical aspects of the studies,” George said. “We have seen that many of these errors tend to repeat from study to study.”

When testing for the effects of an intervention on a given outcome over time with a treatment group and a control group, the appropriate analysis looks at the “difference of differences” between groups.

According to George, it is not acceptable to compare the nominal significance of within-group changes versus baseline to make inference about between-group differences.
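The contrast George describes can be sketched in a few lines. This is a minimal illustration with made-up data, assuming SciPy is available; a real analysis would typically use ANCOVA or a mixed model rather than a plain t-test on change scores.

```python
# Sketch: test the "difference of differences" between groups,
# rather than testing each group against its own baseline.
from scipy import stats

# Illustrative pre/post measurements (e.g., body weight in kg)
treat_pre  = [92.0, 88.5, 101.2, 95.4, 90.1]
treat_post = [89.5, 86.0,  98.8, 93.0, 88.2]
ctrl_pre   = [91.3, 89.9,  97.5, 94.0, 92.6]
ctrl_post  = [90.8, 89.1,  97.0, 93.5, 92.2]

# Within-subject change scores
treat_change = [post - pre for pre, post in zip(treat_pre, treat_post)]
ctrl_change  = [post - pre for pre, post in zip(ctrl_pre, ctrl_post)]

# Appropriate: test whether the mean change differs BETWEEN groups
t, p = stats.ttest_ind(treat_change, ctrl_change)
print(f"between-group test of change scores: t={t:.2f}, p={p:.4f}")

# The shortcut the paper warns against: separate within-group paired
# tests, with inference based on comparing their nominal significance
_, p_treat = stats.ttest_rel(treat_pre, treat_post)
_, p_ctrl  = stats.ttest_rel(ctrl_pre, ctrl_post)
print(f"within-group p-values (not a valid between-group inference): "
      f"treatment p={p_treat:.4f}, control p={p_ctrl:.4f}")
```

Only the first test addresses the actual question of whether the intervention outperformed the control; one group reaching nominal significance while the other does not says nothing direct about the between-group difference.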

Cluster randomized trials, where groups of subjects are randomized together, such as school-based interventions, need to be identified and analyzed using a method that properly accounts for within-cluster correlation between subjects.

George advises that these types of studies should have the involvement of a statistician with specialized expertise in this topic.
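One standard way to see why within-cluster correlation matters is the design effect. The sketch below uses the textbook formula for equal cluster sizes; the numbers and function names are illustrative, not from the paper.

```python
# Sketch: with intraclass correlation (ICC) rho and m subjects per cluster,
# the design effect inflates the variance of estimates:
#   DEFF = 1 + (m - 1) * rho
# so a clustered sample of n subjects behaves like n / DEFF independent ones.

def design_effect(cluster_size: int, icc: float) -> float:
    """Variance inflation from randomizing clusters instead of individuals."""
    return 1 + (cluster_size - 1) * icc

def effective_sample_size(n_total: int, cluster_size: int, icc: float) -> float:
    """Number of independent subjects the clustered sample is 'worth'."""
    return n_total / design_effect(cluster_size, icc)

# Example: 20 schools of 30 students each, with a modest ICC of 0.05
n = 20 * 30
deff = design_effect(30, 0.05)  # 1 + 29 * 0.05 = 2.45
print(f"design effect: {deff:.2f}")
print(f"effective sample size: {effective_sample_size(n, 30, 0.05):.0f} of {n}")
```

Even a small ICC more than halves the effective sample size here, which is why analyzing a cluster randomized trial as if subjects were independently randomized badly overstates precision.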

In meta-analysis, a statistical technique for combining the findings from independent studies, effect sizes are frequently calculated incorrectly. Two common sources of error are confusion about how to handle incomplete or nonstandard reporting in the original papers, and confusion about which variances to use in which context.

“This frequently leads to miscalculation of effect sizes or their variances,” George said. “Investigators performing meta-analyses may benefit from including someone with advanced training in meta-analytic calculations in their study team.”
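As one concrete example of the calculations involved, the sketch below computes a standardized mean difference (Cohen's d) with its usual large-sample variance, the quantity a random-effects meta-analysis would weight by. The formulas are the standard ones; the input numbers are made up for illustration.

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def d_variance(d, n1, n2):
    """Approximate large-sample sampling variance of d."""
    return (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

# Illustrative study: treatment mean 85 kg (SD 10, n=50) vs control 90 kg (SD 12, n=50)
d = cohens_d(85.0, 10.0, 50, 90.0, 12.0, 50)
var_d = d_variance(d, 50, 50)
print(f"d = {d:.3f}, var(d) = {var_d:.4f}")
```

Using the wrong standard deviation in the denominator, or the wrong variance formula when weighting studies, is exactly the kind of mistake George describes, and it propagates directly into the pooled estimate.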

“With the increased emphasis on rigor, reproducibility and transparency at the National Institutes of Health and in the field overall, obesity and other researchers are hungry for concrete practical advice on how to proceed, and we hope this paper is a step in that direction,” said David Allison, Ph.D., senior investigator of the authorship group and director of the UAB Nutrition Obesity Research Center.

Given how frequently these errors appear in the literature, journals need more statistical support when evaluating submitted papers. Obesity researchers would benefit from rigorous statistical training, through coursework during graduate or postdoctoral training or through workshops and short courses, while additional support could come from institutional changes that increase access to statisticians, Allison says.