Photo by Rhoda Baer
Researchers have developed a metric that uses citation rates to determine the influence of a scientific article.
The team says the metric, known as the Relative Citation Ratio (RCR), measures a scientific publication’s influence in a way that is article-level and field-independent.
George Santangelo, PhD, of the National Institutes of Health in Bethesda, Maryland, and his colleagues described this metric in PLOS Biology.
The researchers noted that citation is the primary mechanism for scientists to recognize the importance of each other’s work, but citation practices vary widely between fields.
RCR incorporates a novel method for field-normalization: the co-citation network. This network is formed from the reference lists of articles that cite the article in question.
For example, if Article X is cited by Article A, Article B, and Article C, the co-citation network of Article X would contain all the articles from the reference lists of Articles A, B, and C. Comparing the citation rate of Article X to the citation rate in the co-citation network allows each article to create its own individualized field.
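The co-citation idea above can be sketched in a few lines of Python. This is an illustrative toy, not NIH's actual implementation: the article IDs and citation rates are made up, and the field rate here is a simple mean over the network.

```python
# Reference lists of the articles (A, B, C) that cite Article X.
citing_refs = {
    "A": ["P1", "P2", "P3"],
    "B": ["P2", "P4"],
    "C": ["P3", "P4", "P5"],
}

# Hypothetical citations-per-year for each article.
citations_per_year = {
    "X": 6.0,
    "P1": 2.0, "P2": 4.0, "P3": 3.0, "P4": 5.0, "P5": 1.0,
}

def cocitation_network(citing_refs):
    """Union of the reference lists of all articles citing X."""
    return set().union(*citing_refs.values())

def field_rate(citing_refs, citations_per_year):
    """Mean citation rate over the co-citation network --
    a stand-in for Article X's 'individualized field'."""
    network = cocitation_network(citing_refs)
    return sum(citations_per_year[p] for p in network) / len(network)

# Compare X's own rate to the rate of its co-citation field.
ratio = citations_per_year["X"] / field_rate(citing_refs, citations_per_year)
print(ratio)  # 2.0: X is cited twice as often as its field average
```

Here the field rate is (2 + 4 + 3 + 5 + 1) / 5 = 3.0 citations per year, so Article X, at 6.0, scores a ratio of 2.0 against its own individualized field.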
In addition to using the co-citation network, RCR is also benchmarked to a peer comparison group so that it’s easy to determine the relative impact of an article.
The researchers said this benchmarking step is particularly important because it allows "apples-to-apples" comparisons for groups of papers, such as comparing research output between similar types of institutions or among developing nations.
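The benchmarking step can be sketched the same way. In this hedged toy version (the peer-group values are hypothetical, and the real procedure may differ), each article's field-normalized ratio is rescaled so the peer comparison group averages 1.0, making scores directly comparable across groups.

```python
# Hypothetical field-normalized ratios for a peer comparison group.
peer_ratios = [0.8, 1.5, 2.0, 0.5, 1.2]

# Baseline: the peer group's mean ratio.
baseline = sum(peer_ratios) / len(peer_ratios)  # 1.2

def benchmark(ratio, baseline):
    """Rescale a field-normalized ratio against the peer baseline,
    so a score of 1.0 means 'typical for the comparison group'."""
    return ratio / baseline

# An article with ratio 2.0 scores about 1.67 against this peer group.
score = benchmark(2.0, baseline)
```

With this convention, a score above 1.0 marks an article as more influential than its peer group's average, and a score below 1.0 as less so.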
To test RCR, Dr Santangelo and his colleagues analyzed 88,835 articles published between 2003 and 2010.
The team found that the National Institutes of Health awardees listed as authors of those articles "occupy relatively stable positions of influence across all disciplines." Furthermore, the values generated by RCR correlated with the opinions of subject matter experts.
Still, the researchers acknowledged that RCR should not be used as a substitute for expert opinion.
“No number can fully represent the impact of an individual work or investigator,” Dr Santangelo said. “Neither RCR nor any other metric can quantitate the underlying value of a study nor measure the importance of making progress in solving a particular problem.”
Dr Santangelo said that, although expert opinion will remain the gold standard, RCR can assist in “the dissemination of a dynamic way to measure the influence of articles on their respective fields.”
A beta version of “iCite,” a web tool for calculating the RCR of articles listed in PubMed, is available at https://icite.od.nih.gov.