Bivalent Polio Vaccine Performs as Well as Monovalents
A new bivalent polio vaccine is as effective in immunizing infants as are existing monovalent vaccines and more effective than a widely used trivalent vaccine, researchers with the World Health Organization have found.
In an article published online Oct. 26 in the Lancet, Dr. Roland W. Sutter of WHO in Geneva and his colleagues presented findings from a randomized, double-blind controlled trial, conducted in India with 900 infants enrolled, of a two-dose oral vaccine containing antigens to wild poliovirus types 1 and 3. The researchers randomized the infants into five groups and examined the immunogenicity of three existing monovalent vaccines, the bivalent type 1 and 3 vaccine, and a trivalent vaccine containing antigens to types 1, 2, and 3.
Recent polio eradication efforts have favored the use of monovalent vaccines because trivalent vaccines, while offering the convenience of delivering all three antigens, have shown disappointing results. These have been attributed to interference by the type 2 antigen with the induction of type-specific immunity to types 1 and 3, weakening the effectiveness of the vaccine.
Wild poliovirus type 2, whose antigens are included in the trivalent vaccine, was last isolated in 1999, and a type 2 vaccine is therefore considered a less essential weapon in the fight to eradicate polio. Monovalent vaccines for types 1 and 3, while effective, have the drawback of complicating decision making about vaccine selection, Dr. Sutter and colleagues wrote (Lancet 2010 Oct. 26 [doi:10.1016/S0140-6736(10)61230-5]).
For their research, Dr. Sutter and colleagues randomly assigned 900 newborns of healthy birth weight to one of five vaccine groups (about 180 patients per group); of these, 70 (8%) discontinued, leaving 830 for analysis. Parents and health care workers were blinded to vaccine allocation, and all five vaccines were supplied by Panacea Biotec, the designer of the study and one of its sponsors; samples were verified for potency at WHO collaborating laboratories in Europe.
The first dose of each vaccine was given at birth, when cord blood was also drawn; the second dose was administered at 30 days, after which more blood was taken, and at 60 days, final blood samples were drawn for analysis.
After two doses, seroconversion to poliovirus type 1 was 90% for monovalent type 1 and 86% for bivalent, compared with 63% for trivalent vaccine. Seroconversion to type 2 was 90% with monovalent type 2 vaccine, and 91% with trivalent vaccine. Conversion to poliovirus type 3 was 84% for monovalent and 74% for bivalent, compared with 52% for the trivalent vaccine.
The authors noted that because all the study sites were in southern and central India, one limitation of the study is the generalizability of its findings to poliomyelitis-endemic areas in northern India and elsewhere.
Nonetheless, the results "confirmed that the bivalent vaccine leads to significantly more seroconversion than the trivalent vaccine," the investigators said. Further, they wrote, the bivalent vaccine "will enhance individual and population immunity simultaneously for both poliovirus types 1 and 3, without any serious loss in immunogenicity" compared with monovalent vaccines.
Bivalent vaccine is already in wide use in India, the authors noted, “to increase population immunity against and accelerate the elimination of the final chains of transmission of these two remaining wild polioviruses, especially in areas where both poliovirus types 1 and 3 cocirculate.”
Yet while the bivalent vaccine covers both polio types known to be circulating, allowing for the eventual phasing out of trivalent vaccines, "a stockpile of [type 2 vaccine] should be kept once poliomyelitis eradication has been achieved to allow type-specific control measures should type 2 poliomyelitis be reintroduced," the researchers wrote.
In an editorial comment, Dr. Nigel W. Crawford, MBBS, MPH, of the Royal Children's Hospital in Melbourne, Australia, called the bivalent vaccine "important for the poliomyelitis endgame" and said that it was likely responsible for the recent dramatic reduction in Indian polio cases, which numbered 32 in 2010, compared with 260 in 2009. However, Dr. Crawford also cautioned, WHO's "plan of action for poliomyelitis eradication – with bOPV as the centerpiece – is only 50% funded for 2010-12" (Lancet 2010 Oct. 26 [doi:10.1016/S0140-6736(10)61427-4]).
The study was funded by the GAVI Alliance, the World Health Organization, and Panacea Biotec. Two of its authors are employees of Panacea Biotec; no other conflicts of interest were reported.
FDA Approves Trastuzumab for HER2-Positive Gastric Cancer
The Food and Drug Administration has approved trastuzumab, along with chemotherapy, to treat metastatic HER2-positive gastric cancers in people who have not been previously treated for metastatic disease.
The agency’s decision, announced late Oct. 20, follows a January move by the European Medicines Agency to grant marketing authorization to trastuzumab (Herceptin, Genentech) for the same patient group. Trastuzumab, already approved in both the United States and Europe for the treatment of HER2-overexpressing breast cancers, works by blocking the HER2 (human epidermal growth factor receptor 2) protein on the surface of some cancer cells, possibly interrupting signals that make them grow.
In a phase III, manufacturer-sponsored randomized controlled trial comparing trastuzumab with chemotherapy vs. chemotherapy alone in patients with advanced gastric cancers, 594 patients had tumors expressing HER2 at high levels.* (HER2 overexpression has been reported in between 6% and 35% of all stomach and gastroesophageal tumors.)
For these trial subjects, trastuzumab added to dual chemotherapy (capecitabine or 5-fluorouracil, plus cisplatin) resulted in a 37% improvement in overall survival over the chemotherapy-alone group, with median overall survival of 13.5 vs. 11.0 months.
An updated analysis based on an additional year of follow-up showed a 25% improvement in overall survival, with a median of 13.1 months in the trastuzumab arm vs. 11.7 months in the chemotherapy-alone arm (J Clin Oncol 27:18s, 2009 [suppl; abstr LBA4509]).
The survival benefit of trastuzumab was seen to increase with the level of HER2 expressed in tumors. On Sept. 29, England’s National Institute for Health and Clinical Excellence recommended trastuzumab to the National Health Service only for patients whose gastric tumors express the highest measurable levels of HER2, the subgroup of the same phase III randomized controlled trial that showed the greatest survival improvement with trastuzumab. In this subgroup (n = 279), overall survival was 18 months in the treatment arm compared with 12.4 months in the chemotherapy-alone group, a 5.6-month improvement.
* CORRECTION, 11/19/2010: The original version of this article misstated the percentage of patients with gastroesophageal and gastric tumors expressing HER2 at high levels. HER2-overexpression was seen in all 594 patients. This version has been updated.
The Food and Drug Administration has approved trastuzumab, along with chemotherapy, to treat metastatic HER2-positive gastric cancers in people who have not been previously treated for metastatic disease.
The agency’s decision, announced late Oct. 20, follows a January move by the European Medicines Agency to grant marketing authorization to trastuzumab (Herceptin, Genentech) for the same patient group. Trastuzumab, already approved in both the United Sates and Europe for the treatment of HER2-overexpressing breast cancers, works by blocking the HER2 (human epidermal growth factor 2) protein on the surface of some cancer cells, possibly interrupting signals that make them grow.
In a phase III, manufacturer-sponsored randomized controlled trial comparing trastuzumab with chemotherapy vs. chemotherapy alone in patients with advanced gastric cancers, 594 patients had tumors expressing HER2 at high levels.* (HER2 overexpression has been reported in between 6% and 35% of all stomach and gastroesophageal tumors.)
For these trial subjects trastuzumab added to a dual chemotherapy (capecitabine or 5-fluorouracil and cisplatin) resulted in improved overall survival of 37% over the chemotherapy alone group, with median overall survival of 13.5 vs. 11.0 months.
An updated analysis based on an additional year of follow-up showed a 25% improvement in overall survival, with a median 13.1 months in the trastuzumab arm vs. 11.7 months in the chemotherapy alone arm. (J Clin Oncol 27:18s, 2009 [suppl; abstr LBA4509])
The survival benefit of trastuzumab was seen to increase with the level of HER2 expressed in tumors. On September 29, England’s National Institute for Health and Clinical Excellence recommended trastuzumab to the National Health Service only for patients whose gastric tumors express the highest measurable levels of HER-2, who were seen in a subgroup of the same Phase III randomized controlled trial to have had the best survival improvement with trastuzumab. In this subgroup (n=279) overall survival was 18 months in the treatment arm compared with 12.4 months for the chemotherapy alone group, a 5.6 month improvement.
* CORRECTION, 11/19/2010: The original version of this article misstated the percentage of patients with gastroesophageal and gastric tumors expressing HER2 at high levels. HER2-overexpression was seen in all 594 patients. This version has been updated.
India's Malaria Deaths Grossly Underestimated
Malaria kills an estimated 205,000 people per year in India, not the 15,000 estimated annually by the World Health Organization, researchers have learned.
Using data from a large cohort study of rural deaths in India, Dr. Neeraj Dhingra and Dr. Prabhat Jha of St. Michael's Hospital and the University of Toronto, and their colleagues, determined that 3.6% of unattended febrile deaths of people between 1 month and 70 years of age were attributable to malaria.
Modeling based on known population statistics yielded an estimated 55,000 early-childhood, 30,000 childhood, and 120,000 adult deaths each year from malaria in India, the researchers wrote, while acknowledging lower and upper limits of 125,000 and 277,000 malaria deaths annually.
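The age-group figures can be reconciled with the headline estimate by simple addition (a quick check, not drawn from the study itself):

```python
# Age-group malaria death estimates reported by the researchers (per year).
estimates = {
    "early childhood": 55_000,
    "childhood": 30_000,
    "adult": 120_000,
}

total = sum(estimates.values())
print(f"Total estimated annual malaria deaths: {total:,}")
```

The components sum to the 205,000 annual deaths cited in the article's lead.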
The findings, published online ahead of print Oct. 21 in the Lancet, suggest that the WHO, which relies heavily on India's hospital-based epidemiologic surveillance, has woefully underestimated India's true malaria burden.
"Because the Indian national malaria program cures nearly all the cases it treats, it detects only about 1,000 malaria deaths each year," Dr. Dhingra and Dr. Jha wrote, adding that the WHO estimates, while taking into consideration the likelihood of some undiagnosed cases, nonetheless "[depend] indirectly on the low death rates in diagnosed patients." The malaria death rates did correspond, however, with the Indian national program's reported malaria transmission trends by geographic region.
For their research, Dr. Dhingra and Dr. Jha examined results from verbal autopsies – interviews with household members of the deceased – with data recorded using standardized questionnaire forms. The verbal autopsies were conducted between 2001 and 2003, by trained nonmedical field workers, in 6,671 randomly selected areas of India. Of the 122,291 autopsies conducted, 75,342 were of people between 1 month and 70 years of age.
Of the 2,681 deaths attributable to malaria, 90% were in rural areas and 86% were not in a hospital or clinic, Dr. Dhingra and Dr. Jha noted: "Most deaths in rural India take place at home, without prior intervention by any qualified health care worker."
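If the 2,681 malaria-coded deaths are taken against the 75,342 deaths analyzed (an assumption about which denominator underlies the 3.6% figure), the proportion is consistent with the article's summary statistic:

```python
# Hypothetical check of the reported malaria share of analyzed deaths.
malaria_deaths = 2_681     # deaths coded to malaria by physician reviewers
deaths_analyzed = 75_342   # verbal autopsies of people aged 1 month to 70 years

fraction = malaria_deaths / deaths_analyzed
print(f"Malaria share of analyzed deaths: {fraction:.1%}")
```

The ratio rounds to 3.6%, matching the figure quoted in the study summary.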
In an accompanying editorial in the Lancet, Robert W. Snow, Ph.D., of the KEMRI–University of Oxford–Wellcome Trust Research Programme in Nairobi, said the finding that 86% of India’s malaria deaths did not occur in hospitals or clinics suggests that "the health-management information system in India is not fit for purpose for the recording of malaria morbidity and mortality." This, Dr. Snow said, "is particularly surprising for a country that boasts a space program and is an emerging global economic leader."
The verbal autopsy results were used to determine the onset and severity of the fever leading to death, among other clinical characteristics of malaria such as shivering, jaundice, vomiting, breathlessness, decreased urine output, headache, convulsions, or unconsciousness. Blood tests for malaria were rarely reported. Two physicians analyzed each autopsy record, assigning a code for cause of death. Malaria deaths were catalogued separately from other febrile deaths such as those caused by dengue or typhoid.
"The major source of uncertainty in our estimates arises from the possible misclassification of malaria deaths as deaths from other diseases," Dr. Dhingra and Dr. Jha wrote, saying that their lower and upper estimates – of 125,000 and 277,000 annual deaths – were calculated by including only those deaths immediately coded by two physicians as malaria and, for the high end, all deaths with malaria as the initial diagnosis by one coder, a quarter of which were later attributed to other causes.
In his editorial, Dr. Snow praised the researchers' methodology and conclusions. "First, there was a strong geographical correlation with state-reported malaria mortality statistics; second, the malaria mortality data showed credible temporal trends with peaks after the wet season in every district; third, there was striking correspondence with malaria transmission rates calculated independently at the district level; and fourth, this spatial correlation was not seen in three other diseases whose symptoms are often confused with malaria (dengue, typhoid, and meningitis)," Dr. Snow wrote.
Similar disparities in the WHO malaria statistics and disease burden, Dr. Snow wrote, "could exist in other heavily populated, remote regions that are exposed to malaria and have unreliable access to health care, such as Burma, Bangladesh, Pakistan, Afghanistan, and Indonesia."
All those countries, except Pakistan and Afghanistan, fall within the WHO's South-East Asia region, which also includes India. In its 2009 global report on malaria, the World Health Organization said the region "received the least money per person at risk for malaria and saw the lowest increase in external financing between 2000 and 2007," adding that, in general, "[h]igh levels of external assistance are associated with increased procurement of commodities and decreases in malaria incidence."
The Indian study largely examined disease caused by the parasite Plasmodium falciparum, Dr. Snow noted in his Lancet editorial, and the less-studied burden of a second species, P. vivax, might represent a still larger threat in India.
The malaria deaths study was funded by the National Institutes of Health, the Canadian Institutes of Health Research, and the Li Ka Shing Knowledge Institute. Neither the study authors nor Dr. Snow declared conflicts of interest.
FROM THE LANCET
Major Finding: 3.6% of unattended febrile deaths of people between 1 month and 70 years of age were attributable to malaria.
Data Source: Verbal autopsies, with data recorded using standardized questionnaire forms, conducted between 2001 and 2003 in 6,671 randomly selected areas of India. Of the 122,291 autopsies conducted, 75,342 were of people between 1 month and 70 years of age.
Disclosures: The investigators reported having none.
In his editorial, Dr. Snow praised the researchers' methodology and conclusions. "First, there was a strong geographical correlation with state-reported malaria mortality statistics; second, the malaria mortality data showed credible temporal trends with peaks after the wet season in every district; third, there was striking correspondence with malaria transmission rates calculated independently at the district level; and fourth, this spatial correlation was not seen in three other diseases whose symptoms are often confused with malaria (dengue, typhoid, and meningitis)," Dr. Snow wrote.
Similar disparities in the WHO malaria statistics and disease burden, Dr. Snow wrote, "could exist in other heavily populated, remote regions that are exposed to malaria and have unreliable access to health care, such as Burma, Bangladesh, Pakistan, Afghanistan, and Indonesia."
All those countries except Pakistan and Afghanistan are in the WHO's South-East Asia region, which also includes India. In its 2009 global report on malaria, the World Health Organization said that the region "received the least money per person at risk for malaria and saw the lowest increase in external financing between 2000 and 2007," adding that, in general, "High levels of external assistance are associated with increased procurement of commodities and decreases in malaria incidence."
The Indian study largely examined disease caused by the parasite Plasmodium falciparum, Dr. Snow noted in his Lancet editorial, and the less-studied burden of a second species, P. vivax, might represent a still larger threat in India.
The malaria deaths study was funded by the National Institutes of Health, the Canadian Institutes of Health Research, and the Li Ka Shing Knowledge Institute. Neither the study authors nor Dr. Snow declared conflicts of interest.
FROM THE LANCET
Study Finds Anorexia Linked to Eye Damage
Anorexia and bulimia can do measurable and likely irreversible damage to women’s eyes, researchers in Greece have found.
In a small study whose results were published online Oct. 20 in the British Journal of Ophthalmology, Marilita M. Moschos, Ph.D., and her colleagues at the University of Athens reported finding “a significant anatomical and functional impairment, marked by a decrease in macular and retinal nerve fiber layer thickness as well as a decrease in electrical activity in the macula,” among women with a history of anorexia or bulimia.
“The good thing is that the [anorexic and bulimic subjects] still had good vision,” Dr. Moschos said in an interview. “But there is a crucial moment where if they lose more photoreceptors – for example, with untreated disease – this will cause an irreversible vision loss.”
For their research, Dr. Moschos and her colleagues evaluated macular and retinal nerve fiber layer thickness, as well as the electrical activity of the macula, in 13 female patients (mean age 28.6 years) with a diagnosis of anorexia nervosa (AN) – either of the calorie-restricting (n = 6) or binge-purge (n = 7) type – along with 20 healthy controls matched for age. Anorexic and bulimic patients had been diagnosed at least 8 years prior to the study and were in treatment at the time of the study, without current marked vitamin deficiencies (Br. J. Ophthalmol. 2010 [doi:10.1136/bjo.2009.177899]).
None of the anorexic or control patients had evidence of any visual failure; visual acuity for all remained normal. What the researchers found was subclinical damage to the structure of the anorexic women’s eyes. The anorexic women had a mean foveal thickness of 140.04 mcm, compared with 150.85 in the control group. Retinal nerve fiber layers were also thinner – 116.42 mcm – in the superior area (vs. 123.15 in the control group) and 121.08 mcm in the inferior area (compared with 137.6 in the control group) around the optic nerve. In patients who self-induced vomiting, the damage was worse: in the left eye only, the calorie-restricting anorexics had a greater foveal thickness (median 142 mcm) than did bulimics (median 134 mcm).
“Our results show that the retinal thickness of the macula is higher in restrictive-type anorectic patients than in binge-purge type patients, which means that the anatomical impairment of the fovea is greater in the AN binge-purge type,” Dr. Moschos and colleagues wrote.
The possible reason for this, said Dr. Moschos in an interview, is that while calorie-restricting anorexics manage to obtain some vitamins, women who purge absorb fewer. “My opinion is that there is a correlation to vitamin deficiencies” over prolonged periods, she said.
Dr. Moschos and her colleagues noted that deficiencies of vitamin A in particular, a presumed culprit in one case study they cited of an anorexic with retinal lesions (J. Fr. Ophtalmol. 2007;30:15), were not seen among their subjects, whose own ocular changes, they speculated, were either caused by deficiencies of other nutrients or occurred in relation to dopamine, “an important neurotransmitter in the visual pathway.”
The researchers mentioned several previous studies examining dopamine and physical changes to the retina. In people with Parkinson’s disease, “where there is a reduction in dopamine in the retina,” they wrote, changes to retinal structure and function have been observed (Invest. Ophthalmol. Vis. Sci. 1990;31:2473-5).
And documented instances of impairment in visual discrimination learning among anorexics (Appetite 2003;40:85e9) “may be related to decreased appetitive function, possibly resulting from impaired dopaminergic neurotransmission, either as a result of food restriction or, more intriguingly, related to the underlying pathophysiology of AN itself,” Dr. Moschos and her colleagues wrote.
The investigators acknowledged the limitations posed by the small size of their study, which is ongoing. Now, the group is extending the study to seek longer-term evidence of decline or even recovery in the young women’s maculae following treatment for their anorexia or bulimia. Currently, Dr. Moschos said, the wisdom that macular damage is irreversible stems from the fact that “what we know about maculae concerns much older people” than the anorexics in the study.
The study was funded by the University of Athens. Dr. Moschos and her colleagues reported no conflicts of interest.
FROM BRITISH JOURNAL OF OPHTHALMOLOGY
Glucosamine, Chondroitin Didn't Ease Joint Pain
Neither glucosamine nor chondroitin, alone or combined, reduced joint pain or preserved joint space, according to Swiss researchers, who conclude that these supplements should not be prescribed, and if they are, health insurance should not cover them.
Meanwhile, despite a growing body of recent evidence showing the popular supplements to be ineffective, global sales of glucosamine and chondroitin have more than doubled since 2003. As of 2008, the sales of these supplements approached $2 billion and are projected to reach $2.3 billion in 2013, according to the same research team, which published its findings from a meta-analysis of data from 10 randomized, controlled trials.
The paradox of a market for a medication growing as its evidence base shrinks is probably merely the result of a predictable delay between evidence and adoption, said Dr. Peter Jüni, an epidemiologist at the University of Bern (Switzerland), lead author of the study.
High-quality evidence from large randomized controlled trials is relatively recent in the field of osteoarthritis, Dr. Jüni said in a Sept. 17 interview. “Only in the last 5-10 years has it become established in this field to do large-scale clinical trials,” he said. Of the 10 published randomized placebo-controlled trials Dr. Jüni and colleagues identified for their analysis, one was published in 1994 and the rest in the past decade, with the most recent in 2008.
“At the end of the 1990s and beginning of the 2000s, there were moderately small studies that actually made it into meta-analysis and into treatment guidelines” showing favorable results from glucosamine and chondroitin, Dr. Jüni said. “Physicians were very reluctant to accept these then.” Eventually, of course, they did, and now “it will take time for the bad news to sink in, just as it took time for the good news in the 1990s.” Currently, Dr. Jüni noted, two more large nonindustry trials of glucosamine and chondroitin are underway. These “could put the nail in the coffin – or, you never know, could reopen the book.”
For their research, Dr. Jüni and colleagues analyzed results from randomized, placebo-controlled trials – seven of them industry-sponsored – enrolling 200 or more patients with knee or hip osteoarthritis (3,803 patients total). Using complex statistical modeling that allowed for comparisons at varied time points, the team assessed changes in levels of perceived pain after patients took glucosamine, chondroitin, or placebo daily for between 1 and 36 months. Six of the trials also measured joint narrowing (BMJ 2010;341:c4675[doi:10.1136/bmj.c4675]).
The 10 trials differed significantly in design. The majority enrolled patients with osteoarthritis of the knee only, though one enrolled patients with osteoarthritis of the hip or knee, and another included just patients with osteoarthritis of the hip. Supplements used included glucosamine sulfate, glucosamine hydrochloride, chondroitin sulfate, and combinations of these. All the glucosamine supplements were tested at 1,500 mg daily, while the chondroitin supplements varied between 800 and 1,200 mg daily.
Some of the trials took place in the United States, where supplements are not standardized for quality, and some in Europe, where they are. In eight, the supplements were evaluated to ensure correct concentrations of glucosamine or chondroitin, and in two the quality of the supplements was unclear. Patients ranged in age from 58 to 66 years, and the median percentage of women participants was 68%.
On a 10-cm visual analogue scale, Dr. Jüni and colleagues found, the overall difference in pain intensity compared with placebo was −0.4 cm (95% credible interval, −0.7 to −0.1 cm) for glucosamine, −0.3 cm (−0.7 to 0.0 cm) for chondroitin, and −0.5 cm (−0.9 to 0.0 cm) for the combination. “For none of the estimates did the 95% credible intervals cross the boundary of the minimal clinically important difference,” the investigators wrote. “The differences in changes in minimal width of joint space were all minute, with 95% credible intervals overlapping zero.”
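The quoted conclusion rests on a simple check: does the interval around each estimate extend beyond a prespecified clinically important threshold? The sketch below (Python, for illustration) applies that check to the intervals reported above; the −0.9 cm threshold is an assumption for illustration, not a figure taken from this article.

```python
# Assumed minimal clinically important difference (MCID) on a
# 10-cm visual analogue scale, more negative = more pain relief.
MCID = -0.9  # cm; illustrative threshold, not from the article

# 95% intervals (lower, upper) reported in the article, in cm.
intervals = {
    "glucosamine": (-0.7, -0.1),
    "chondroitin": (-0.7, 0.0),
    "combination": (-0.9, 0.0),
}

def crosses_mcid(lower, upper, mcid=MCID):
    """True if the interval extends beyond (is more negative than) the MCID."""
    return lower < mcid

for name, (lo, hi) in intervals.items():
    print(name, crosses_mcid(lo, hi))  # all False: none cross the threshold
```

Under this assumed threshold, none of the three intervals reaches past the MCID, matching the investigators' statement.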
The seven industry-sponsored trials were more likely to detect an effect, however limited, than the non-industry trials (P = .02 for interaction). In industry-independent trials, estimated treatment effects “were minute to zero and by no means clinically relevant,” Dr. Jüni and colleagues wrote in their analysis.
A possible reason that glucosamine and chondroitin are widely perceived as effective, Dr. Jüni said, is that osteoarthritic pain tends to fluctuate naturally. If people take a supplement when their symptoms are worse, “it leads you to perceive that it works perfectly” as the symptoms gradually subside, and the theoretical mechanism of the supplement is biologically plausible. In the end, it may come down to need: as much as 30% of the adult population suffers joint pain, he said. Glucosamine and chondroitin are demonstrably safe, and there are few truly safe long-term treatments for joint pain.
The study was funded by grants from the Swiss National Science Foundation's National Research Program. Neither Dr. Jüni nor any of his colleagues declared conflicts of interest.
Neither glucosamine nor chondroitin, alone or combined, reduced joint pain or preserved joint space, according to Swiss researchers, who conclude that these supplements should not be prescribed, and if they are, health insurance should not cover them.
Meanwhile, despite a growing body of recent evidence showing the popular supplements to be ineffective, global sales of glucosamine and chondroitin have more than doubled since 2003. As of 2008, the sales of these supplements approached $2 billion and are projected to reach $2.3 billion in 2013, according to the same research team, which published its findings from a meta-analysis of data from 10 randomized, controlled trials.
The paradox of a market for a medication growing as its evidence base shrinks is probably merely the result of a predictable delay between evidence and adoption, said Dr. Peter Jüni, an epidemiologist at the University of Bern (Switzerland), lead author of the study.
High-quality evidence from large randomized controlled trials is relatively recent in the field of osteoarthritis, Dr. Jüni said in an interview, Sept. 17. “Only in the last 5-10 years has it become established in this field to do large-scale clinical trials,” he said. Of the 10 published randomized placebo-controlled trials Dr. Jüni and colleagues identified for their analysis, one was published in 1994 and the rest in the past decade, with the most recent in 2008.
“At the end of the 1990s and beginning of the 2000s, there were moderately small studies that actually made it into meta-analysis and into treatment guidelines” showing favorable results from glucosamine and chondroitin,” Dr. Jüni said. “Physicians were very reluctant to accept these then.” Eventually, of course, they did, and now “it will take time for the bad news to sink in, just as it took time for the good news in the 1990s.” Currently, Dr. Jüni noted, two more large nonindustry trials of glucosamine and chondroitin are underway. These “could put the nail in the coffin – or, you never know, could reopen the book.”
For their research, Dr. Jüni and colleagues analyzed results from randomized, placebo-controlled trials – seven of them industry-sponsored – enrolling 200 or more patients with knee or hip osteoarthritis (3,803 patients total). Using complex statistical modeling that allowed for comparisons at varied time points, the team assessed changes in levels of perceived pain after patients took glucosamine, chondroitin, or placebo daily for between 1 and 36 months. Six of the trials also measured joint narrowing (BMJ 2010;341:c4675[doi:10.1136/bmj.c4675]).
The 10 trials differed significantly in design. The majority enrolled patients with osteoarthritis of the knee only, though one enrolled patients with osteoarthritis of the hip or knee, and another included just patients with osteoarthritis of the hip. Supplements used included glucosamine sulfate, glucosamine hydrochloride, chondroitin sulfate, and combinations of these. All the glucosamine supplements were tested at 1,500 mg daily, while the chondroitin supplements varied between 800 and 1,200 mg daily.
Some of the trials took place in the United States, where supplements are not standardized for quality, and some in Europe, where they are. In eight, the supplements were evaluated to ensure correct concentrations of glucosamine or chondroitin, and in two the quality of the supplements was unclear. Patients ranged in age from 58 to 66 years, and the median percentage of women participants was 68%.
On a 10-cm visual analogue scale, Dr. Jüni and colleagues found, the overall difference in pain intensity compared with placebo was −0.4 cm (95% confidence interval, −0.7 to −0.1 cm) for glucosamine, −0.3 cm (−0.7 to 0.0 cm) for chondroitin, and −0.5 cm (−0.9 to 0.0 cm) for the combination. “For none of the estimates did the 95% credible intervals cross the boundary of the minimal clinically important difference,” the investigators wrote. “The differences in changes in minimal width of joint space were all minute, with 95% credible intervals overlapping zero.”
The seven industry-sponsored trials were more likely to detect an effect, however limited, than the non-industry trials (P = .02 for interaction). In industry independent trials, estimated treatment effects “were minute to zero and by no means clinically relevant,” Dr. Jüni and colleagues wrote in their analysis.
A possible reason that glucosamine and chondroitin are perceived widely as effective, Dr. Jüni said, is because osteoarthritic pain tends to fluctuate naturally. If people take a supplement when their symptoms are worse, “it leads you to perceive that it works perfectly” as they gradually subside, and the theoretical mechanism of the supplement is biologically plausible. In the end, he said, it may come down to need – as much as 30% of the adult population suffers joint pain, he said. Glucosamine and chondroitin are demonstrably safe, and there are few truly safe long-term treatments for joint pain.
The study was funded by grants from the Swiss National Science Foundation's National Research Program. Neither Dr. Jüni nor any of his colleagues declared conflicts of interest.
Neither glucosamine nor chondroitin, alone or combined, reduced joint pain or preserved joint space, according to Swiss researchers, who conclude that these supplements should not be prescribed, and if they are, health insurance should not cover them.
Meanwhile, despite a growing body of recent evidence showing the popular supplements to be ineffective, global sales of glucosamine and chondroitin have more than doubled since 2003. As of 2008, the sales of these supplements approached $2 billion and are projected to reach $2.3 billion in 2013, according to the same research team, which published its findings from a meta-analysis of data from 10 randomized, controlled trials.
The paradox of a market for a medication growing as its evidence base shrinks is probably merely the result of a predictable delay between evidence and adoption, said Dr. Peter Jüni, an epidemiologist at the University of Bern (Switzerland), lead author of the study.
High-quality evidence from large randomized controlled trials is relatively recent in the field of osteoarthritis, Dr. Jüni said in an interview, Sept. 17. “Only in the last 5-10 years has it become established in this field to do large-scale clinical trials,” he said. Of the 10 published randomized placebo-controlled trials Dr. Jüni and colleagues identified for their analysis, one was published in 1994 and the rest in the past decade, with the most recent in 2008.
“At the end of the 1990s and beginning of the 2000s, there were moderately small studies that actually made it into meta-analysis and into treatment guidelines” showing favorable results from glucosamine and chondroitin,” Dr. Jüni said. “Physicians were very reluctant to accept these then.” Eventually, of course, they did, and now “it will take time for the bad news to sink in, just as it took time for the good news in the 1990s.” Currently, Dr. Jüni noted, two more large nonindustry trials of glucosamine and chondroitin are underway. These “could put the nail in the coffin – or, you never know, could reopen the book.”
For their research, Dr. Jüni and colleagues analyzed results from randomized, placebo-controlled trials – seven of them industry-sponsored – enrolling 200 or more patients with knee or hip osteoarthritis (3,803 patients total). Using complex statistical modeling that allowed for comparisons at varied time points, the team assessed changes in levels of perceived pain after patients took glucosamine, chondroitin, or placebo daily for between 1 and 36 months. Six of the trials also measured joint narrowing (BMJ 2010;341:c4675[doi:10.1136/bmj.c4675]).
The 10 trials differed significantly in design. The majority enrolled patients with osteoarthritis of the knee only, though one enrolled patients with osteoarthritis of the hip or knee, and another included just patients with osteoarthritis of the hip. Supplements used included glucosamine sulfate, glucosamine hydrochloride, chondroitin sulfate, and combinations of these. All the glucosamine supplements were tested at 1,500 mg daily, while the chondroitin supplements varied between 800 and 1,200 mg daily.
Some of the trials took place in the United States, where supplements are not standardized for quality, and some in Europe, where they are. In eight, the supplements were evaluated to ensure correct concentrations of glucosamine or chondroitin, and in two the quality of the supplements was unclear. Patients ranged in age from 58 to 66 years, and the median percentage of women participants was 68%.
On a 10-cm visual analogue scale, Dr. Jüni and colleagues found, the overall difference in pain intensity compared with placebo was −0.4 cm (95% confidence interval, −0.7 to −0.1 cm) for glucosamine, −0.3 cm (−0.7 to 0.0 cm) for chondroitin, and −0.5 cm (−0.9 to 0.0 cm) for the combination. “For none of the estimates did the 95% credible intervals cross the boundary of the minimal clinically important difference,” the investigators wrote. “The differences in changes in minimal width of joint space were all minute, with 95% credible intervals overlapping zero.”
The seven industry-sponsored trials were more likely to detect an effect, however limited, than the non-industry trials (P = .02 for interaction). In industry independent trials, estimated treatment effects “were minute to zero and by no means clinically relevant,” Dr. Jüni and colleagues wrote in their analysis.
A possible reason that glucosamine and chondroitin are widely perceived as effective, Dr. Jüni said, is that osteoarthritic pain tends to fluctuate naturally. If people take a supplement when their symptoms are worse, “it leads you to perceive that it works perfectly” as the symptoms gradually subside; the supplements’ theoretical mechanism is also biologically plausible. In the end, he said, it may come down to need: as much as 30% of the adult population suffers joint pain, and while glucosamine and chondroitin are demonstrably safe, there are few truly safe long-term treatments for joint pain.
The study was funded by grants from the Swiss National Science Foundation's National Research Program. Neither Dr. Jüni nor any of his colleagues declared conflicts of interest.
Chest Compression CPR Offers Better Survival Odds Than Mouth to Mouth
Cardiopulmonary resuscitation using only chest compressions saves more lives than standard CPR when performed by nonprofessionals, possibly because of its simplicity, according to an article published online Oct. 15 in the Lancet.
In the United Kingdom, compression-first CPR already is the standard recommendation for treating sudden adult cardiac arrest; guidelines since 2005 have reduced (though not eliminated) the recommended amount of mouth-to-mouth or mouth-to-nose ventilation from earlier recommendations. The current findings add weight to the case for compression-only CPR without any rescue ventilation as the default for nonprofessional bystanders confronted with a cardiac arrest.
For their research, Dr. Michael Hüpfl of the department of anesthesiology at the Medical University of Vienna and his colleagues performed a meta-analysis pooling data from three randomized trials, analyzing results for 3,031 patients. They found that chest-compression-only CPR performed by bystanders under directions from a telephone dispatcher was associated with an improved chance of survival, compared with standard CPR performed under the same conditions (14% vs. 12%), in adult patients experiencing cardiac arrest outside a hospital. The absolute increase in survival was 2.4 percentage points, and the relative chance of survival was increased 22% by chest compression–only CPR (Lancet 2010 [doi:10.1016/S0140-6736(10)61454-7]).
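The absolute and relative figures quoted above come from the trials’ unrounded survival rates; a quick sketch shows why the rounded 14% vs. 12% alone would not reproduce the 22% relative increase. The rates below are the article’s rounded percentages, not the underlying trial counts, which the article does not give:

```python
# Back-of-the-envelope check of the survival figures quoted above,
# using the rounded percentages from the article (not the trial data).

compression_only = 0.14   # survival with chest-compression-only CPR
standard = 0.12           # survival with standard CPR

absolute_diff = compression_only - standard                   # risk difference
relative_increase = (compression_only - standard) / standard  # relative change

print(f"absolute difference: {absolute_diff:.1%}")   # ~2 percentage points
print(f"relative increase:  {relative_increase:.0%}")  # ~17%, not 22%
```

The rounded rates yield an absolute difference of about 2 percentage points and a relative increase of about 17%; the article’s 2.4-point and 22% figures evidently reflect the unrounded survival rates.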
In a secondary meta-analysis of seven observational cohort studies, the researchers saw no significant difference between the compression-only and standard CPR arms.
Compression-only CPR, the investigators concluded, should become the default instructions for dispatchers to give to bystanders. “The pooled effect size of about 22% might seem small, but rates of survival after out-of-hospital cardiac arrest have been about 4%-8% for the past few decades, so our result could represent important progress,” they wrote in their analysis.
The reason for the relative success of compression-only CPR, the researchers wrote, may lie in its simplicity to perform. “By avoidance of rescue ventilation during CPR, which is often fairly time consuming for lay bystanders, a continuous uninterrupted coronary perfusion pressure is maintained, which increases the probability of a successful outcome. These considerations were the main reason to increase the compression-to-ventilation ratio for standard basic life support from 15:2 to 30:2 in the 2005 resuscitation guidelines.”
Even before the Lancet article, the Resuscitation Council UK, whose CPR guidelines are widely followed in the United Kingdom and in Europe, had new bystander guidelines in the works that do away with the recommendation for rescue ventilation.
Dr. Jerry P. Nolan of the Royal United Hospital NHS Trust in Bath, England, and an author of existing Resuscitation Council guidelines, said in an interview that the council’s new guidelines, scheduled to be published Oct. 18, were somewhat coincidental to the Lancet article – but that the coincidence was fortuitous.
“The guidelines went ahead on less strong data, but this really seals it,” said Dr. Nolan, who contributed editorial comment on the findings in the Lancet. “If people have not been trained, they should be no doubt doing compression only. The act of stopping compression [to ventilate] most certainly leads to long delays.”
In cases of adult cardiac arrest where a trained professional is on the scene, standard CPR with ventilation remains preferable, Dr. Nolan said, noting that compression-only CPR “works for only about the first 4 or 5 minutes. The whole thing comes down to what is ideal for the bystander’s level of training.”
In terms of improving the general public’s ability to provide a first response in cases of cardiac arrest, Dr. Nolan said he had great hope for compression-only CPR. “In the U.K. right now in about 30% of [cardiac arrest] cases, someone attempts CPR,” he said. “What we would like to see is a big increase in the number of bystanders that are prepared to do CPR. A good percentage of people will benefit.”
Dr. Nolan said that he expects that CPR guidelines throughout Europe and the United States will soon be updated to reflect the compression-only emphasis.
Dr. Hüpfl and his colleagues’ study was funded by the National Institutes of Health and the American Heart Association. Coauthor Dr. Peter Nagele disclosed that his institution, Washington University in St. Louis, had received research support from Roche Diagnostics, unrelated to the study, and that he had received consultancy fees from Gerson Lehrman Group. Another study author, Dr. Harald F. Selig, reported receiving a salary from St. John’s Ambulance Service, Vienna, and other support from Novo Nordisk.
Dr. Nolan and his coauthor on the editorial, Dr. Jasmeet Soar of the North Bristol (England) NHS Trust, declared no conflicts of interest.
H1N1 Stole Focus from Other Threats: ECDC
Europe’s top epidemiologists say that they’ve vastly improved their monitoring of emerging disease threats, including obscure ones such as anthrax and plague. They acknowledge, however, that the H1N1 pandemic may have drawn attention away from other threats last year.
This week the European Centre for Disease Prevention and Control issued a report describing disease outbreaks and threats from 2009 – both within Europe and with the potential to affect Europe – and assessing its response to each.
During the peak of the H1N1 pandemic in Europe, between April and September 2009, the agency noted a dip in reporting of other threats, particularly the type of food and waterborne diseases that tend to increase in the summer. This suggested that the member states’ attention to H1N1 may have resulted in under-monitoring and under-reporting of these threats to ECDC, according to Dr. Denis Coulombier, the head of the ECDC’s preparedness and response unit in Stockholm.
“It’s very clear that some of the other notifications that we should have received were not coming in,” Dr. Coulombier said in an interview. The dip seemed to be limited to the diarrheal illnesses and not the potentially graver threats on ECDC’s radar that year, such as the ongoing Q fever outbreak in the Netherlands, which saw more than 2,000 cases in 2009. “My main concern was to miss something else because of the pandemic,” Dr. Coulombier said. “Thankfully, I don’t think we missed much.”
The ECDC is a young organization, established in 2005 to increase information sharing among European Union member states and, to some degree, relieve them of the responsibility of monitoring global infectious disease threats – such as severe acute respiratory syndrome (SARS) and avian influenza – with the potential to affect Europe. Threats can range from a tuberculosis case on a plane to a multiyear threat such as West Nile virus, or even an endemic threat such as Q fever. Like criminal cases, the threats are considered open or closed. The vast majority are open for two weeks or less, while a small number of standing threats, such as avian influenza, remain open for years.
Since instituting centralized monitoring, the agency noted in its report, the number of threats it has watched annually increased from 99 in 2005 (ECDC started monitoring in June of that year) to 251 in 2008. The increase, Dr. Coulombier said, was mainly due to the ECDC’s intelligence work on global threats, while threats reported by the member states remained relatively constant.
But in 2009, the number of threats dropped to 192, suggesting that the emergence of pandemic H1N1 may have siphoned off the attention of the reporting countries and even the agency itself. “Events such as a pandemic require the extensive mobilization of public health resources, which seems to significantly reduce vigilance for other threats,” the ECDC noted in its report.
While the most public threat of 2009, pandemic influenza H1N1, gobbled most of the headlines, the agency was also monitoring and helping to investigate outbreaks of Q fever, anthrax, measles, mumps, E. coli in petting zoos, an accidental exposure to Ebola, locally transmitted malaria, and a report of bubonic plague at a terrorist camp in Algeria, later deemed a hoax.
One threat from 2009, detailed in the report, involved a possible deliberate contamination of pool water in Italy. Dr. Coulombier said that the ECDC was “really strengthening the investigation” of cases involving the potential intentional release of pathogens and wants member states to be more vigilant about them. Last year the ECDC investigated a biosafety threat involving Ebola exposure in a lab and dealt with mysterious cases of anthrax among drug users in Scotland, issuing warnings to member states with protocols for handling patients, corpses, and biological or drug samples. The anthrax threat, believed to be linked to contaminated heroin, is considered ongoing.
Notified cases of Q fever rose from fewer than 20 annually to 168 in 2007 and 1,007 in 2008; in 2009, cases doubled again. All the cases occurred in the Netherlands and are likely related to goat and sheep farming near densely populated areas, the agency said in its threat report, but the disease has the potential to cross borders.
Dr. Coulombier said that the ECDC is continuing to monitor Q fever intensely, along with a panoply of obscure but real threats: vector-borne diseases such as dengue, malaria, West Nile disease, and Chikungunya fever, all of which occurred both last year and in 2010 in Europe.
But the polio epidemic in Tajikistan and Russia is considered one of the most serious emerging threats, with devastating potential for Europe, and Dr. Coulombier said the agency is urging member states not to lower their guard.
“This is a major one,” Dr. Coulombier said. “We have under-vaccinated populations in the EU. We know that the EU could be at risk of seeing the re-emergence of polio in these populations.”