LOS ANGELES – Roughly one in five clinical trials in neurotrauma published in top-ranked journals contains spin, reporting that casts the results in a more favorable light than the data support, according to a new systematic review.
“This is a concerning result,” said general physician Lucas Piason F. Martins, MD, of the Bahiana School of Medicine and Public Health, Salvador, Brazil. “Many of these trials have been included in clinical guidelines and cited extensively in systematic reviews and meta-analyses, especially those related to hypothermia therapy.”
Dr. Martins presented the findings at the annual meeting of the American Association of Neurological Surgeons.
Defining spin
In recent years, medical researchers have sought to define and identify spin in medical literature. According to a 2017 report in PLOS Biology, “spin refers to reporting practices that distort the interpretation of results and mislead readers so that results are viewed in a more favorable light.”
Any spin can be dangerous, Dr. Martins said, because it “can potentially mislead readers and affect the interpretation of study results, which in turn can impact clinical decision-making.”
For the new report, a systematic review, Dr. Martins and colleagues examined 150 studies published in 18 top-ranked journals including the Journal of Neurotrauma (26%), the Journal of Neurosurgery (15%), Critical Care Medicine (9%), and the New England Journal of Medicine (8%).
Studies were published between 1960 and 2020. The review protocol was previously published in BMJ Open.
According to the report, most of the 32 studies with spin (75%) had a “focus on statistically significant results not based on primary outcome.”
For example, Dr. Martins said in an interview that the abstract for a study about drug treatment of brain contusions highlighted a secondary result instead of the main finding that the medication had no effect. Another study of treatment for severe closed head injuries focused on a subgroup outcome.
As Dr. Martins noted, it’s potentially problematic for studies to have several outcomes, measure outcomes in different ways, and have multiple time points without a predefined primary outcome. “A positive finding based on such strategies could potentially be explained by chance alone,” he said.
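To see why unplanned looks at many outcomes are risky, consider a simple simulation (an illustrative sketch, not part of the study; the 20 outcomes and 10,000 simulated trials are assumed numbers): if a trial examines 20 independent outcomes that a treatment does not actually affect, the chance that at least one of them crosses the conventional P < .05 threshold by luck alone is about 64%, versus 5% for a single predefined primary outcome.

# Illustrative simulation only; parameters are assumptions, not data from the review.
import random

random.seed(0)
n_trials = 10_000   # simulated trials with no true treatment effect
n_outcomes = 20     # outcomes/time points examined per trial
alpha = 0.05        # conventional significance threshold

trials_with_false_positive = 0
for _ in range(n_trials):
    # Under the null hypothesis, each outcome's P value is uniform on [0, 1].
    p_values = [random.random() for _ in range(n_outcomes)]
    if any(p < alpha for p in p_values):
        trials_with_false_positive += 1

print(f"Trials with at least one 'significant' outcome: {trials_with_false_positive / n_trials:.0%}")
# Expected: 1 - 0.95**20, or about 64%, versus 5% when a single primary outcome is prespecified.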
The researchers also reported that 65% of the studies with spin highlighted “the beneficial effect of the treatment despite statistically nonsignificant results” and that 9% had incorrect statistical analysis.
The findings are especially noteworthy because “the trials we analyzed were deemed to have the highest quality of methodology,” Dr. Martins said.
The researchers didn’t identify the specific studies they deemed to have spin, and they don’t plan to, Dr. Martins said. The authors do intend to reveal which journals had the most spin, but only when the findings are published.
Were the study authors trying to mislead readers? Not necessarily. Researchers “may search for positive results to confirm their beliefs, although with good intentions,” Dr. Martins said, adding that the researchers found that “positive research tends to be more cited.”
They also reported that studies with smaller sample sizes were more likely to have spin (P = .04).
At 21%, the percentage of studies with spin was lower than that found in some previous reports that analyzed medical literature in other specialties.
A 2019 study of 93 randomized clinical studies in cardiology, for example, found spin in 57% of abstracts and 67% of full texts. The lower number in the new study may be due to its especially conservative definition of spin, Dr. Martins said.
Appropriate methodology
Cardiologist Richard Krasuski, MD, of Duke University Medical Center, Durham, N.C., who coauthored the 2019 analysis of spin in cardiology trials, told this news organization that the new analysis follows appropriate methodology and appears to be valid.
It makes sense, he said, that smaller studies had more spin: “It is much harder to show statistical significance in small studies and softer endpoints can be harder to predict. Small neutral trials are also much harder to publish in high-level journals. This all increases the tendency to spin the results so the reviewer and eventually the reader is more captivated.”
Why is there so much spin in medical research? “As an investigator, you always hope to positively impact patient health and outcomes, so there is a tendency to look at secondary analyses to have something good to emphasize,” he said. “This is an inherent trait in most of us, to find something good we can focus on. I do believe that much of this is subconscious and perhaps with noble intent.”
Dr. Krasuski said that he advises trainees to look at the methodology of studies, not just the abstract or discussion sections. “You don’t have to be a trained statistician to identify how well the findings match the author’s interpretation.
“Always try to identify what the primary outcome of the study was at the time of the design and whether the investigators achieved their objective. As a reviewer, my own personal experience in research into spin makes me more cognizant of its existence, and I generally require authors to reword and tone down their message if it is not supported by the data.”
What’s next? The investigators want to look for spin in the wider neurosurgery literature, Dr. Martins said, with an eye toward developing “practical strategies to assess spin and give pragmatic recommendations for good practice in clinical research.”
No study funding was reported. Dr. Martins reported no disclosures; several study authors reported funding from the UK National Institute for Health Research. Dr. Krasuski reported no disclosures.
A version of this article first appeared on Medscape.com.
FROM AANS 2023