Riaz Qureshi, PhD, explains how spin creeps into systematic reviews and how the scientific community can better identify it to improve the validity of published research.
When it comes to scientific reporting, words can make a difference. As is the case with media, spin — even when it is done unintentionally — is something researchers say study authors and readers alike should be aware of and try to avoid.
According to a University of Colorado School of Medicine news release,1 “spin” is defined as inappropriate reporting, interpretation, or the extrapolation of results that can mislead the reader as to the findings and conclusions of a study.2
Riaz Qureshi, PhD, an assistant professor of ophthalmology at the University of Colorado School of Medicine, recently published a paper in Annals of Internal Medicine with an international team of researchers. The paper establishes a framework that guides authors, peer reviewers, and editors in identifying and rectifying spin about harms in systematic reviews.1
“Everybody does it, even if they don’t mean to,” Qureshi said. “Spin is part of the human condition, and it’s often not deliberate, but being able to spot it and fix it can make a meaningful contribution to scientific research.”
The paper draws on a random sample of 100 systematic reviews of interventions — 58 that assessed harms and 42 that did not — to identify and address instances of spin. Nearly half of the 58 reviews that assessed harms had at least 1 of 12 types of spin, while 14% of the 42 reviews that did not assess harms still showed signs of spin for harms.2
In the paper, the researchers revised examples to remove spin, taking into consideration the context, findings for harms, and methodological limitations of the original reviews. The researchers set out to come up with a way for researchers and reviewers alike to avoid spin, ultimately improving the clarity and accuracy of harms reporting in systematic review publications.
The researchers developed the framework through an iterative process involving an international group of researchers specializing in spin and reporting bias. They point out that the framework includes 12 specific types of spin for harms, grouped into 7 categories across 3 domains: reporting, interpretation, and extrapolation.1
Of the 58 reviews that assessed harms and the 42 that did not, the researchers found that 28 and 6, respectively, had at least 1 of the 12 types of spin. The most common category of spin, found in 17 of the 100 reviews, was inappropriate extrapolation of results and conclusions to populations, interventions, outcomes, or settings that were not assessed in the review.
Qureshi noted that the three domains sit at the highest level of the framework.
“You can have misleading or selective reporting, misleading interpretation, and misleading or selective extrapolation,” Qureshi explained. “Within those, there are seven categories, which are considered our ‘medium level’ of spin – the general ways that spin can be classified. From there, we define 12 types of spin for harms that are the specific ways that the categories manifest in reviews.”
According to the study, a paper can feature multiple types of spin within and across the categories. For example, a review could examine multiple harms but highlight only specific ones in its conclusions, over- or understating the harms of an intervention, or it could offer a misleading graphical summary of harms that distorts the true findings — both of which fall under "selective reporting of or over-/under-emphasis on harm outcomes." The same review could also downplay harms when summarizing net benefits and risks, one way that reviews inappropriately extrapolate their findings to settings outside the scope of the review.2
Moreover, Qureshi explained that some categories of spin for harms are more common than others. In 17% of all the intervention systematic reviews assessed, results and conclusions for harms were inappropriately extrapolated to a population, intervention, outcome, or setting that was not assessed in the review.
The researchers also found that 14% of all reviews ignored the limitations for methods used to assess harms and 12% had selective reporting of harm outcomes.
Further, the researchers highlighted a frequent instance of spin: a paper failing to justify the selection criteria used to report and assess only a subset of all identified harms, which occurred in 11% of the reviews. In these instances, the researchers explain, it is always best to be specific and "depict the evidence for all assessed harms and note any with potentially important effects."
According to the researchers, their framework could help improve the entire scientific research process, from groups performing systematic reviews all the way to journal editors and people reading the studies.
“For researchers and systematic reviewers, we hope that this framework is incorporated into their work in a way that helps them think carefully about the words that they use to describe their findings,” Qureshi pointed out.
He added that for harms, which often are overlooked in health research, it is important to remember that a lack of evidence does not constitute evidence of an absence of effect.
“Just because you didn’t find any harms in your systematic review, doesn’t mean that there are no harms out there for that intervention, especially since there’s so many known problems with how harms are reported in primary literature,” he said. “Because of these challenges, it is almost never appropriate to simply conclude an intervention is ‘safe.’”
According to the news release, the researchers' framework also can benefit clinicians who read systematic reviews and use them in their practice. By helping them recognize spin, it supports thinking critically about the literature and avoiding incorrect inferences based on a review's potentially inappropriate conclusions.1
“Ultimately, if as researchers we are not focusing on this, it results in a degradation of the quality of what is available because of inappropriate reporting, interpretation, and extrapolation,” Qureshi explained. “Secondly, it has a potential to affect clinical care, guidelines, and anything else that relies on systematic reviews. If the conclusions are not accurate, people will take away the wrong message and that might inform crucial decisions.”
As a result, the best way of addressing spin ultimately may be to avoid it altogether.
“Be as transparent and objective in your findings as possible and use language that describes exactly what was found, as this minimizes the potential for incorrect interpretation,” Qureshi concluded.