Analysis reveals common sources of bias in scientific research
Wed, 04/19/2017

Photo by Daniel Sone
Researcher in the lab

A recent study suggested that bias in research varies across scientific disciplines, but the fields share some common risk factors.

The data consistently showed that small studies, early studies, and highly cited studies overestimated effect size.

In addition, a scientist’s early career status, isolation from other researchers, and involvement in misconduct appeared to be risk factors for unreliable results.

John P. A. Ioannidis, MD, of Stanford University School of Medicine in Stanford, California, and his colleagues reported these findings in PNAS.

The team reviewed more than 3,000 meta-analyses that included nearly 50,000 individual studies across 22 scientific fields.

“I think that this is a mapping exercise,” Dr Ioannidis said. “It maps all the main biases that have been proposed across all 22 scientific disciplines. Now, we have a map for each scientific discipline, which biases are important, and which have a bigger impact, and, therefore, scientists can think about where do they want to go next with their field.”

Types of bias

The researchers examined several hypothesized kinds of scientific bias, including:

  • Small-study effect: When studies with small sample sizes report large effect sizes.
  • Gray literature bias: The tendency of smaller or statistically nonsignificant effects to be reported in PhD theses, conference proceedings, or personal communications rather than in the peer-reviewed literature.
  • Early extremes effect: When extreme or controversial findings are published early just because they are astonishing.
  • Decline effect: When reports of extreme effects are followed by subsequent reports of reduced effects.
  • Citation bias: The larger the effect size, the more likely the study will be cited.
  • United States effect: When US researchers overestimate effect sizes.
  • Industry bias: When industry sponsorship and affiliation affect the direction and size of reported effects.
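The small-study effect is largely a selection phenomenon: when only statistically significant results get reported, a small study must, by chance, observe an inflated effect to clear the significance threshold, while a large study can detect the true effect directly. A minimal simulation (illustrative only, not part of the study's methods; the sample sizes and true effect are arbitrary assumptions) makes this concrete:

```python
# Sketch of the small-study effect: among studies that reach p < .05,
# small studies report much larger effects than large ones, even though
# every simulated study draws from the same true effect.
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # assumed true standardized mean difference
N_SIMS = 2000       # simulated studies per sample size

def significant_effects(n_per_group):
    """Simulate two-group studies; return observed effects reaching p < .05."""
    kept = []
    for _ in range(N_SIMS):
        a = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(n_per_group)]
        b = [random.gauss(0.0, 1.0) for _ in range(n_per_group)]
        diff = statistics.mean(a) - statistics.mean(b)
        se = (2.0 / n_per_group) ** 0.5   # SE of the difference (sd = 1)
        if abs(diff / se) > 1.96:         # crude z-test at alpha = .05
            kept.append(diff)
    return kept

small = significant_effects(20)    # "small" studies: 20 per group
large = significant_effects(400)   # "large" studies: 400 per group
print(f"mean significant effect, n=20:  {statistics.mean(small):.2f}")
print(f"mean significant effect, n=400: {statistics.mean(large):.2f}")
```

The significant small studies overestimate the true effect of 0.2 severely, while the significant large studies land close to it, which is the overestimation pattern the bullet above describes.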

Dr Ioannidis and his colleagues also looked at other factors that might potentially affect the risk of bias, such as size and types of collaborations, the gender of the researchers, and pressure to publish.

Results

Small studies, highly cited studies, and those published in peer-reviewed journals seemed more likely to overestimate effects. US studies and early studies seemed to report more extreme effects.

Early career researchers and researchers working in small or long-distance collaborations were more likely to overestimate effect sizes. And researchers with a history of misconduct tended to overestimate effect sizes.

On the other hand, studies by highly cited authors who published frequently were not more affected by bias than average. And there was no difference in bias according to gender.

In addition, scientists in countries with strong incentives to publish were not more affected by bias than scientists from countries where there was less pressure to publish.

Dr Ioannidis said that, in the data he and his colleagues examined, the influence of different kinds of bias changed over time and seemed to depend on the individual scientist.

“We show that some of the patterns and risk factors seem to be getting worse in intensity over time,” he said. “This is particularly driven by the social sciences . . . . It seems that the social sciences are seeing the more prominent worsening of these biases over time.”

Another finding of this study is that the kinds and amounts of bias were unevenly distributed across the literature.

“Although bias may be worryingly high in specific research areas, it is nonexistent in many others,” said study author Daniele Fanelli, PhD, of the Meta-Research Innovation Center at Stanford (METRICS), Stanford University in Palo Alto. “So bias does not undermine the scientific enterprise as a whole.”


Yet another finding is that the relative magnitude of biases closely reflects the level of attention they receive in the literature. That is, the kinds of biases researchers are most concerned about are, in fact, the ones they should be concerned about.

“Our understanding of bias is improving, and our priorities are set on the right targets,” Dr Fanelli said, though he noted that researchers should not become complacent when it comes to bias.

“We perhaps understand bias better, but we are far from having rid science of it. Indeed, our results suggest that the challenge might be greater than many think because interventions might need to be tailored to the needs and problems of individual disciplines or fields. One-size-fits-all solutions are unlikely to work.”
