Article Type
Changed
Wed, 03/12/2014 - 05:00
Display Headline
Trial registry info differs from journal publication

Science journals

Credit: CDC/James Gathany

New research has revealed discrepancies between information posted on the ClinicalTrials.gov website and information published in journals.

During a 1-year period, nearly all of the trials published in “high-impact” journals and listed on ClinicalTrials.gov had at least 1 discrepancy between the 2 sources.

This included differences in study group data, intervention information, and primary and secondary endpoints.

Jessica E. Becker, of the Yale University School of Medicine in New Haven, Connecticut, and her colleagues reported these findings in a research letter to JAMA.

The researchers identified 96 trials reporting results on ClinicalTrials.gov that were also published in 19 “high-impact” journals from July 1, 2010, to June 30, 2011. The trials were most frequently published in NEJM (n=23; 24%), The Lancet (n=18; 19%), and JAMA (n=11; 12%).

Common conditions investigated in these studies included cardiovascular disease, diabetes, and hyperlipidemia (n=21; 23%); cancer (n=20; 21%); and infectious disease (n=19; 20%). Seventy-three percent of the trials (n=70) were primarily funded by industry.

Cohort, intervention, and efficacy endpoint information was reported in both the journal and on ClinicalTrials.gov for most of the trials (93% to 100%, depending on the type of information).

For 97% of the trials (93/96), there was at least 1 difference in information between the registry and the journal article. The level of discordance between the sources was lowest for enrollment numbers (2%) and highest for completion rates (22%).

Discordance was also quite high for the trial interventions (16%). This included differences in dosage descriptions, frequencies, and duration of the intervention.

There were 132 primary efficacy endpoints described in both sources. Results for 52% of these endpoints could be compared between the 2 sources and were concordant. Results for 23% (n=30) could not be compared, and 16% (n=21) were discordant.

For the majority (n=15) of discordant primary endpoint results, the discrepancy did not alter the interpretation of the trial. But for 6 trials, the discordance did affect interpretation.

These trials had differences in time to disease progression, rate of disease recurrence, time to resolution of a condition, progression-free survival, and results of statistical analyses.

Among the 619 secondary efficacy endpoints that were described in both sources, results for 37% (n=228) could not be compared, and 9% (n=53) were discordant. Overall, only 16% of all secondary efficacy endpoints were both described in both sources and reported with concordant results.

The researchers said this study raises questions about the accuracy of information published on ClinicalTrials.gov and in journals.

Furthermore, because the journals studied have rigorous peer review processes, the trials in this sample may represent best-case scenarios with regard to the quality of results reporting.
