Combo Tx Improves BMD at Spine, Hip
Major Findings: Treatment with teriparatide plus zoledronic acid increased spinal bone mineral density more than either drug alone.
Source of Data: One-year study of 412 postmenopausal women with previously untreated osteoporosis.
Disclosures: The study was sponsored by Novartis, which makes Zometa. Dr. Cosman reported that she has received consulting fees from Novartis and other pharmaceutical companies.
DENVER — Bone mineral density at the spine and hip increased more rapidly and to a greater degree with combined teriparatide and zoledronic acid than with either agent alone in a 1-year study of 412 postmenopausal women with previously untreated osteoporosis.
“Combination therapy could therefore be considered in some patients at high risk for hip and other fractures,” Dr. Felicia Cosman said at the annual meeting of the American Society for Bone and Mineral Research.
Clinical fractures were assessed as part of serious adverse event monitoring and were confirmed using radiographic reports. There were 13 fractures in the zoledronic acid (Zometa) group, 8 in the teriparatide (Forteo) group, and 4 in the combination therapy group.
At 1 year, the increase in spine BMD was 4.4% with zoledronic acid alone, 7.1% with teriparatide alone, and 7.5% with combination therapy. Spine BMD increased more rapidly with combination therapy, but teriparatide alone eventually reached similar levels.
Similarly significant increases in total hip BMD occurred in all treatment groups, said Dr. Cosman, medical director of the clinical research center at Helen Hayes Hospital in West Haverstraw, N.Y.
The study included three active treatment groups: 5 mg zoledronic acid at baseline (open arm), 20 mcg daily subcutaneous teriparatide (placebo infusion at baseline), and a combination of the two. All patients received calcium and vitamin D supplements. Average age at baseline was 65 years. The women had a mean spine T score of −2.9, and a mean total hip T score of −1.9. Baseline variables did not differ among the three groups.
The researchers also measured two bone markers: beta C-terminal telopeptide of type I collagen (beta CTx), a marker of bone resorption, and amino-terminal propeptide of type I procollagen (P1NP), a marker of bone formation.
“In the combination group, there is first a small increase and then a brief but modest decline in P1NP, followed by a progressive rise thereafter,” she said. The decline in P1NP in the combination group was not as great as in those on zoledronic acid alone.
For patients on zoledronic acid alone, there was a rapid and robust suppression of beta CTx up to 4 weeks, when the levels trended back toward baseline. There was no change in beta CTx in patients on teriparatide alone for the first month. Then beta CTx levels began to increase, peaking at about 6 months. In the combination group, there was a prominent suppression of beta CTx (bone resorption) similar to that of zoledronic acid over the first 2 months. A gradual increase followed, with levels greater than at baseline for the latter half of the year.
With zoledronic acid treatment, suppression of P1NP lagged behind that of beta CTx, followed by prominent suppression with a nadir/plateau at 6 months. For those on teriparatide alone, baseline P1NP levels doubled by 4 weeks and peaked at about 6 months.
CA-MRSA Found Less Often in Kids With Atopic Dermatitis
PHILADELPHIA — Community-associated methicillin-resistant Staphylococcus aureus skin infections occur significantly less often among children with atopic dermatitis than among other outpatients with skin and soft tissue infections, based on a retrospective study of 78 children.
Children with atopic dermatitis (AD) and Staphylococcus aureus skin infections seen at a pediatric and adolescent dermatology clinic had a relatively low incidence (14%) of methicillin resistance, much lower than the rate noted (45.5%) in other outpatient services during the same period (2007–2008), Dr. Catalina Matiz and her colleagues wrote in a poster presented at the annual meeting of the Society for Pediatric Dermatology.
Dr. Matiz, a postdoctoral fellow at Rady Children's Hospital in San Diego, and her coinvestigators conducted a retrospective chart study of 78 children with superinfected AD seen at the Rady pediatric and adolescent dermatology clinic between June 2007 and June 2008. The children had a positive skin culture for S. aureus.
The investigators compared these data with all skin and soft tissue infection outpatient samples sent to the hospital's microbiology lab during the same period, and also with those sent during January 2000 through January 2001 (excluding samples from the dermatology clinic). The CA-MRSA rate for samples from all outpatient services from 2000 to 2001 was 4% (192 S. aureus–positive cultures). The outpatient services rates for 2000–2001 and 2007–2008 highlight the sharp increase in CA-MRSA over the last several years.
The rate of community-associated methicillin-sensitive S. aureus (CA-MSSA) among patients with AD in the dermatology clinic from 2007 to 2008 was 86%. In comparison, the CA-MSSA rate for other outpatient services during that period was 55%. The CA-MSSA rate for all outpatient services from 2000 to 2001 was 96%.
The investigators found that prior history of hospitalization, eczema severity, age, sex, and prior antibiotic treatment had no impact on the risk of methicillin resistance or sensitivity in these patients. Among the patients with AD, positive S. aureus cultures were most common in those aged 1–4 years (26%), followed by those aged 5–9 years (24%) and those less than a year old (23%).
The findings are striking. “It's absolutely counterintuitive because if you think of patients with AD as being more at risk for infection, you would think that at the very least they would have the same rate as that occurring in the regular population,” said Dr. Sheila Fallon Friedlander, a study coauthor and professor of pediatrics and medicine at the University of California, San Diego.
It may be that “because these kids are colonized already so much of the time with regular S. aureus, that it may exert sort of a protective effect against CA-MRSA,” Dr. Friedlander said.
In addition, patients with AD tend to present more often with multiple lesions. “That may also play a role in this. It may be that our atopic patients are presenting with secondarily infected lesions that are distinct from the abscesses and the folliculitis that we are seeing in the community,” she noted.
The findings “have informed the way that I prescribe medications for my patients,” she said. The results suggest that more standard antibiotic drugs with fewer side effects—like cephalosporins—can be used first, especially while waiting for culture results. This could not only reduce costs but also save patients from the more serious side effects of antibiotics used for resistant pathogens.
Disclosures: Dr. Matiz and Dr. Friedlander had no conflicts of interest related to this study.
Complementary Foods Move Beyond Rice Cereal
Rice cereal has traditionally been the first complementary food given to American infants, but there is no good reason not to introduce meats, vegetables, and fruits as the first complementary foods.
Introducing these foods early and often promotes healthy eating habits and preferences for these naturally nutrient-rich foods, according to Dr. Frank R. Greer, who is a professor of pediatrics at the University of Wisconsin in Madison and also a member of the American Academy of Pediatrics' Committee on Nutrition.
“Complementary foods introduced to infants should be based on their nutrient requirements and the nutrient density of foods, not on traditional practices that have no scientific basis,” Dr. Greer said in an interview.
In fact, the AAP's Committee on Nutrition is working on a statement that will include these new ideas, Dr. Greer said. Currently, there are no official AAP recommendations for introduction of complementary foods, which are any nutrient-containing solid or liquid foods other than breast milk or formula given to infants, excluding vitamin and mineral supplements. By 6 months of age, human milk becomes insufficient to meet the requirements of an infant for energy, protein, iron, zinc, and some fat-soluble vitamins (J. Pediatr. Gastroenterol. Nutr. 2008;46:99–110).
Rice cereal has been the first complementary food given to infants in the United States for many reasons, including cultural tradition. By the 1960s, most U.S. infants (70%–80%) were fed cereal by 1 month of age. By 1980, rice cereal predominated, as it was considered well tolerated and “hypoallergenic,” an important consideration given growing concerns about food allergies, he said.
However, newer thinking is that the emphasis for complementary foods should be on naturally nutrient-rich foods. This includes protein and fiber, along with vitamins A, C, D, and E and the B vitamins. In addition, saturated and trans fats should be limited, as should sugar, said Dr. Greer.
In light of this thinking, rice cereal is a less than perfect choice for the first complementary food given to infants. Rice cereal is low in protein and high in carbohydrates. It is often mixed with varying amounts of breast milk or formula. Although most brands of formula now have added iron, zinc, and vitamins, iron is poorly absorbed—only about 7.8% of intake is incorporated into red blood cells.
In contrast, meat is a rich source of iron, zinc, and arachidonic acid. Consumption of meat, fish, or poultry provides iron in the form of heme and promotes absorption of nonheme iron, noted Dr. Greer. Red meat and dark poultry meat have the greatest concentration of heme iron. Heme iron is absorbed intact into intestinal mucosal cells and is not affected by inhibitors of nonheme iron from the intestinal tract. Iron salts present in infant cereal are generally insoluble and poorly absorbed (with the exception of ferrous fumarate).
By 6 months, roughly three-quarters of U.S. infants have been introduced to fruit (71%) and vegetables (73%), but only 21% have been introduced to meat. In a 2008 study in Pediatrics, researchers reported that 15% of infants have less than one serving of fruit or vegetable per day by 8 months of age (Pediatrics 2008;122[suppl. 2]:S91–7).
The early introduction of a variety of complementary foods is important for several reasons. Early experiences promote healthy eating patterns, said Dr. Greer. It's known that food flavors are transmitted to breast milk; infants whose mothers eat fruits and vegetables during lactation will have greater consumption of fruits and vegetables during childhood (Public Health Nutr. 2004;7:295–302). It's also been shown that infants are more accepting of food after repeated exposure (Am. J. Clin. Nutr. 2001;73:1080–5). Dr. Greer reported having no conflicts of interest.
Strategy Shifts on Allergenic Foods
Delaying or avoiding the introduction of allergenic foods during a critical window in the first year of life doesn't appear to prevent the development of food allergies and may even put children at increased risk, according to Dr. Greer.
There is a lack of evidence to support food allergen avoidance in infants, he said. Any benefits appear to be largely in the first 3–4 months of life, when exclusive breastfeeding is of the greatest benefit for prevention of atopic disease.
Oral tolerance is an antigen-driven process and depends on regular exposure to food antigens during an early window. Allergen avoidance may be unsuccessful or detrimental in allergy prevention in infants. There is some evidence that continued breastfeeding during new food introduction is beneficial in preventing atopic disease.
In 2008, the AAP recommended that complementary foods should not be introduced before 4–6 months and noted that there is no indication that delayed introduction of certain foods, including allergenic foods such as wheat, fish, egg, and peanut-containing products, protects against atopic disease (Pediatrics 2008;121:183–91).
Likewise, the European Society for Paediatric Gastroenterology, Hepatology, and Nutrition (ESPGHAN) recommended in 2008 that complementary foods should be introduced between 17 and 26 weeks. The group also recommended against the avoidance or late introduction of allergenic foods such as wheat, fish, egg, and peanut (J. Pediatr. Gastroenterol. Nutr. 2008;46:99–110).
Most allergic reactions to foods (90%) are due to eight food types: milk, eggs, peanuts, tree nuts, fish, shellfish, soy, and wheat. However, studies generally have not supported a protective effect for a maternal exclusionary diet during pregnancy; a diet excluding cow's milk, eggs, peanuts, and fish has not been found to protect against the development of atopic disease in infants.
Rice cereal has traditionally been the first complementary food given to American infants, but there is no good reason not to introduce meats, vegetables, and fruits as the first complementary foods.
Introducing these foods early and often promotes healthy eating habits and preferences for these naturally nutrient-rich foods, according to Dr. Frank R. Greer, who is a professor of pediatrics at the University of Wisconsin in Madison and also a member of the American Academy of Pediatrics's Committee on Nutrition.
“Complementary foods introduced to infants should be based on their nutrient requirements and the nutrient density of foods, not on traditional practices that have no scientific basis,” Dr. Greer said in an interview.
In fact, the AAP's Committee on Nutrition is working on a statement that will include these new ideas, Dr. Greer said in an interview. Currently, there are no official AAP recommendations for introduction of complementary foods, which are any nutrient-containing solid or liquid foods other than breast milk or formula given to infants, excluding vitamin and mineral supplements. By 6 months of age, human milk becomes insufficient to meet the requirements of an infant for energy, protein, iron, zinc, and some fat-soluble vitamins (J. Pediatr. Gastroenterol. Nutr. 2008;46:99–110).
Rice cereal has been the first complementary food given to infants in the United States for many reasons, including cultural tradition. By the 1960s, most U.S. infants (70%-80%) were fed cereal by 1 month of age. By 1980, rice cereal predominated, as it was considered to be well tolerated and “hypoallergenic”—given growing concerns about food allergies, he said.
However, newer thinking is that the emphasis for complementary foods should be on naturally nutrient-rich foods. This includes protein and fiber, along with vitamins A, C, D, and E and the B vitamins. In addition, saturated and trans fats should be limited, as should sugar, said Dr. Greer.
In light of this thinking, rice cereal is a less than perfect choice for the first complementary food given to infants. Rice cereal is low in protein and high in carbohydrates. It is often mixed with varying amounts of breast milk or formula. Although most brands of formula now have added iron, zinc, and vitamins, iron is poorly absorbed—only about 7.8% of intake is incorporated into red blood cells.
In contrast, meat is a rich source of iron, zinc, and arachidonic acid. Consumption of meat, fish, or poultry provides iron in the form of heme and promotes absorption of nonheme iron, noted Dr. Greer. Red meat and dark poultry meat have the greatest concentration of heme iron. Heme iron is absorbed intact into intestinal mucosal cells and is not affected by inhibitors of nonheme iron from the intestinal tract. Iron salts present in infant cereal are generally insoluble and poorly absorbed (with the exception of iron fumarate).
By 6 months, roughly a third of U.S. infants have been introduced to fruit (71%) and vegetables (73%), but only 21% have been introduced to meat. In a 2008 study in Pediatrics, researchers reported that 15% of infants have less than one serving of fruit or vegetable per day by 8 months of age (Pediatrics 2008;122[suppl. 2]:S91–7).
Rice cereal has traditionally been the first complementary food given to American infants, but there is no good reason not to introduce meats, vegetables, and fruits as the first complementary foods.
Introducing these foods early and often promotes healthy eating habits and preferences for these naturally nutrient-rich foods, according to Dr. Frank R. Greer, professor of pediatrics at the University of Wisconsin in Madison and a member of the American Academy of Pediatrics' Committee on Nutrition.
“Complementary foods introduced to infants should be based on their nutrient requirements and the nutrient density of foods, not on traditional practices that have no scientific basis,” Dr. Greer said in an interview.
In fact, the AAP's Committee on Nutrition is working on a statement that will include these new ideas, he said. Currently, there are no official AAP recommendations for introduction of complementary foods, which are any nutrient-containing solid or liquid foods other than breast milk or formula given to infants, excluding vitamin and mineral supplements. By 6 months of age, human milk becomes insufficient to meet the requirements of an infant for energy, protein, iron, zinc, and some fat-soluble vitamins (J. Pediatr. Gastroenterol. Nutr. 2008;46:99–110).
Rice cereal has been the first complementary food given to infants in the United States for many reasons, including cultural tradition. By the 1960s, most U.S. infants (70%-80%) were fed cereal by 1 month of age. By 1980, rice cereal predominated, as it was considered to be well tolerated and “hypoallergenic”—given growing concerns about food allergies, he said.
However, newer thinking is that the emphasis for complementary foods should be on naturally nutrient-rich foods, which provide protein and fiber along with vitamins A, C, D, and E and the B vitamins. In addition, saturated and trans fats should be limited, as should sugar, said Dr. Greer.
In light of this thinking, rice cereal is a less than ideal choice for an infant's first complementary food. Rice cereal is low in protein and high in carbohydrates, and it is often mixed with varying amounts of breast milk or formula. Although most brands of infant cereal now have added iron, zinc, and vitamins, the iron is poorly absorbed—only about 7.8% of intake is incorporated into red blood cells.
In contrast, meat is a rich source of iron, zinc, and arachidonic acid. Consumption of meat, fish, or poultry provides iron in the form of heme and promotes absorption of nonheme iron, noted Dr. Greer. Red meat and dark poultry meat have the greatest concentration of heme iron. Heme iron is absorbed intact into intestinal mucosal cells and is not affected by inhibitors of nonheme iron from the intestinal tract. Iron salts present in infant cereal are generally insoluble and poorly absorbed (with the exception of ferrous fumarate).
By 6 months, most U.S. infants have been introduced to fruit (71%) and vegetables (73%), but only 21% have been introduced to meat. In a 2008 study in Pediatrics, researchers reported that 15% of infants have less than one serving of fruit or vegetable per day by 8 months of age (Pediatrics 2008;122[suppl. 2]:S91–7).
The early introduction of a variety of complementary foods is important for several reasons. Early experiences promote healthy eating patterns, said Dr. Greer. It's known that food flavors are transmitted to breast milk; infants whose mothers eat fruits and vegetables during lactation will have greater consumption of fruits and vegetables during childhood (Public Health Nutr. 2004;7:295–302). It's also been shown that infants are more accepting of food after repeated exposure (Am. J. Clin. Nutr. 2001;73:1080–5). Dr. Greer reported having no conflicts of interest.
Strategy Shifts on Allergenic Foods
Delaying or avoiding the introduction of allergenic foods during a critical window in the first year of life doesn't appear to prevent the development of food allergies and may even put children at increased risk, according to Dr. Greer.
There is a lack of evidence to support food allergen avoidance in infants, he said. Any benefits appear to be largely in the first 3–4 months of life, when exclusive breastfeeding is of the greatest benefit for prevention of atopic disease.
Oral tolerance is an antigen-driven process and depends on regular exposure to food antigens during an early window. Allergen avoidance may be unsuccessful or detrimental in allergy prevention in infants. There is some evidence that continued breastfeeding during new food introduction is beneficial in preventing atopic disease.
In 2008, the AAP recommended that complementary foods should not be introduced before 4–6 months and noted that there is no indication that delayed introduction of certain foods, including allergenic foods such as wheat, fish, egg, and peanut-containing products, protects against atopic disease (Pediatrics 2008;121:183–91).
Likewise, the European Society for Paediatric Gastroenterology, Hepatology, and Nutrition (ESPGHAN) recommended in 2008 that complementary foods should be introduced between 17 and 26 weeks. The group also recommended against the avoidance or late introduction of allergenic foods such as wheat, fish, egg, and peanut (J. Pediatr. Gastroenterol. Nutr. 2008;46:99–110).
Most allergic reactions to foods (90%) are due to eight food types: milk, eggs, peanuts, tree nuts, fish, shellfish, soy, and wheat. However, studies generally have not supported a protective effect for a maternal exclusionary diet during pregnancy; a diet excluding cow's milk, eggs, peanuts, and fish has not been found to protect against the development of atopic disease in infants.
Pediatricians Not Savvy About Passenger Safety
WASHINGTON — Relatively few pediatric emergency physicians could correctly answer all questions regarding the American Academy of Pediatrics' child passenger safety recommendations, and these instructions are frequently not included in discharge instructions to parents, based on a survey of 274 physicians.
Among all respondents, 36% correctly answered all questions about AAP child passenger safety recommendations and 41% correctly answered all questions about indications to replace car seats after motor vehicle crashes, Dr. Mark R. Zonfrillo reported at the annual meeting of the American Academy of Pediatrics.
The researchers sent an anonymous, cross-sectional survey to 1,088 emergency medicine physicians. Participants were eligible if they were attending physicians who routinely treated children involved in motor vehicle crashes. A total of 274 physicians completed the surveys; 90% were board-eligible or board-certified in pediatric emergency medicine.
Survey questions addressed knowledge of age- and size-appropriate restraints, and indications to replace a car seat after a motor vehicle crash.
Although relatively few respondents could answer all three questions on the AAP recommendations, 85% of the respondents affirmed that child passenger safety information should be included in a "gold standard" discharge instruction sheet following a crash. In addition, 74% affirmed that indications to replace car seats after motor vehicle crashes should be included. There were no differences in child passenger safety knowledge based on gender, residency program, or years as an attending.
A total of 54 physicians identified themselves as division/department chiefs. Regarding practices in their emergency department, 80% of chiefs reported using template-based discharge instructions. Two-thirds (67%) reported using computer-based discharge instructions. In addition, 84% reported that it was possible to customize the computer-based discharge instructions.
However, when asked specifically about motor vehicle safety, 43% reported that they provided no specific motor vehicle safety guidance, 22% provided motor vehicle safety instructions that were not pediatric specific, and 29% provided specific pediatric vehicle safety guidance.
“However, there were a few interesting subanalyses,” said Dr. Zonfrillo, who is a senior fellow in pediatric emergency medicine at the Children's Hospital of Philadelphia. Respondents were more likely to know when forward-facing car seats were appropriate if they had a child. They were more likely to know when a child could safely sit in the front seat if they had a child older than 8 years. About three-quarters of the sample had at least one child.
Respondents also were given the chance to specify additional information that they thought should be included in the child passenger safety information. Five major themes emerged: statistics on effectiveness, resources for subsidized/free car seats, resources for non-English speakers, common errors in child passenger safety, and state passenger laws.
These results underscore the need for pediatric emergency physicians to maintain current knowledge about child passenger safety and to provide this information to families after the ED evaluation for motor vehicle crashes, Dr. Zonfrillo said.
Dr. Zonfrillo reported that he has no relevant financial relationships.
AAP publications on child passenger safety can be found at www.aap.org/healthtopics/carseatsafety.cfm
NSF Uncommon After Contrast Agent Black Box Warnings
GAITHERSBURG, MD. — Black box warnings that have been added to the labels of all gadolinium-based MRI contrast agents have reduced the number of reported nephrogenic systemic fibrosis events to almost none during the past year, according to an analysis by Dr. James Kaiser.
“The numbers of new events have tapered dramatically, probably due to public awareness of the association of NSF [nephrogenic systemic fibrosis] with GBCA [gadolinium-based contrast agent] administration,” he said at a joint meeting of the Food and Drug Administration's Cardiovascular and Renal Drugs and Drug Safety and Risk Management advisory committees.
Event dates are based on either the date of administration of contrast or the date of diagnosis of NSF.
The FDA began receiving reports of NSF possibly being linked to gadolinium-based contrast agents in 2006, when 194 event dates were reported.
This reporting “probably reflects awareness of the medical community of the potential connection between GBCA administration and NSF and changes in radiologic practice,” said Dr. Kaiser of the FDA's Office of Surveillance and Epidemiology. There were 128 reported events in 2007, 55 in 2008, and 6 in 2009 (through September).
In 2007, the FDA asked manufacturers to include a boxed warning on the product labels of all gadolinium-based contrast agents. The warnings caution that patients with severe kidney insufficiency who receive gadolinium-based agents are at increased risk for the development of NSF.
In addition, patients who are in need of a liver transplantation, those who have just undergone liver transplantation, patients who have chronic liver disease, and patients experiencing kidney insufficiency of any severity also have an increased risk of NSF.
Five gadolinium-based contrast agents have been approved for use in the United States: Magnevist (gadopentetate dimeglumine); Omniscan (gadodiamide); OptiMARK (gadoversetamide); MultiHance (gadobenate dimeglumine); and ProHance (gadoteridol).
As of September 2009, a total of 382 reports of NSF had been associated with Omniscan (GE HealthCare), 195 with Magnevist (Bayer HealthCare), 35 with OptiMARK (Covidien), 1 with MultiHance (Bracco Diagnostics), and 0 with ProHance (Bracco Diagnostics). These numbers are based on reported cases in which a patient had known exposure to only one gadolinium-based contrast agent.
Although there was no formal vote during the committee meeting, the FDA asked the committees to consider whether warning labels should continue to be grouped together as a class or if there was adequate evidence to single out contrast agents that increase the risk of NSF.
“The majority of the group feels that at least two of the agents appear to be different from the other agents,” said Dr. Robert A. Harrington, who chairs the Cardiovascular and Renal Drugs Advisory Committee. The majority of the committee members recommended that the use of Omniscan and OptiMARK be contraindicated in patients who have severe kidney dysfunction. However, there was uncertainty as to how to define severe kidney dysfunction.
There was less consensus on whether a third agent, Magnevist, might also warrant contraindication language. As for the other agents, “there was no clear evidence that any one single agent was safe in this patient population,” Dr. Harrington noted.
Contrast Agent Restrictions May Help Curb NSF
Findings: No new cases of nephrogenic systemic fibrosis were reported after reduction in use of gadolinium-based contrast agents, as well as substitution of new agents.
Data Source: Patient records at two tertiary care centers.
Disclosure: One study author received research support from GE Healthcare, Bayer Healthcare Pharmaceuticals, and Bracco, makers of gadolinium-based contrast agents used in the study.
No new cases of nephrogenic systemic fibrosis occurred at two large tertiary care facilities after more restrictive policies on the use of gadolinium-based contrast agents were introduced.
Before the changes, the incidence of nephrogenic systemic fibrosis (NSF) was 1 in 33 in high-risk patients. In patients on dialysis, the incidence of NSF was 1 in 35, according to Dr. Ersan Altun, a radiologist at the University of North Carolina in Chapel Hill, and his coauthors.
“The absence of NSF cases in the postadoption period may reflect the effect of the use of different GBCAs [gadolinium-based contrast agents] and the adoption of restrictive GBCA policies on the incidence of NSF,” they wrote (Radiology 2009;253:689–96).
In 2006, reports to the Food and Drug Administration suggested a strong association between NSF and gadolinium-based contrast agents used in MRI. The exact mechanism remains unknown; however, GBCAs vary in their dissociation rates, and dissociation of the gadolinium ion from the chelating ligand may be a risk factor for NSF, the researchers said.
Cases of NSF were documented at two tertiary care centers for three periods: before the adoption of restrictive GBCA policies and a change in agents, during the transition period, and after the adoption of these changes.
The new policies included careful screening of patients for risk factors for NSF such as renal disease, hypertension, dialysis, and diabetes before they underwent gadolinium-enhanced MRI. If GBCA-enhanced imaging was unavoidable in a patient deemed to be at high risk, a half dose of gadobenate dimeglumine was used. The policies also specified greater use of other types of imaging that don't require contrast agents.
In addition, gadolinium-enhanced MRI was not performed in pregnant women unless maternal survival was at stake, was not performed in any patient twice within 48 hours unless absolutely necessary, and was not done twice within 48 hours in any patient deemed to be at high risk of NSF.
Before the adoption of the changes, both of the centers used gadodiamide (Omniscan, GE Healthcare). After the adoption of revised policies, both centers used either gadobenate dimeglumine (MultiHance, Bracco Diagnostics) or gadopentetate dimeglumine (Magnevist, Bayer Healthcare Pharmaceuticals). Gadobenate was used for all MRI studies of adults, patients younger than 1 year, and pediatric patients at risk for the development of NSF. Gadopentetate was used for pediatric patients 1 year and older who were not at risk for NSF. Both of the agents have lower dissociation rates than gadodiamide.
Patients considered to be at high risk for NSF were defined as having stage 4 or 5 chronic renal disease, undergoing dialysis, having acute renal insufficiency (including patients with hepatorenal syndrome), and being in the perioperative liver transplantation period. Patients considered to be at low risk were those who had stage 3 chronic renal disease, children less than 1 year of age, and pregnant patients. NSF was diagnosed by clinical findings and histopathologic evaluation of deep-skin biopsy.
At center A, 35 patients with NSF were identified in the preadoption period; of these, 28 underwent MRI with gadodiamide. The benchmark incidence of NSF at center A was 1/1,750 and the NSF incidence in high-risk patients was 1/33.
At center B, 14 patients with NSF were identified in the preadoption period; of these, 9 underwent MRI with gadodiamide. The benchmark incidence of NSF at center B was 1/1,803 and the NSF incidence in dialysis patients was 1/35.
There were no cases of NSF in the transitional and postadoption periods at either center.
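To put the incidence figures above in perspective, a quick back-of-the-envelope calculation converts each center's reported case count and incidence into the approximate number of contrast-enhanced exams it implies. The derived denominators below are illustrative estimates, not figures reported by the study authors:

```python
# Illustrative arithmetic only: infer the approximate exam counts implied
# by the reported NSF case totals and benchmark incidences. The case counts
# and incidences come from the study; the denominators are derived estimates.

def implied_denominator(cases: int, incidence: float) -> int:
    """Number of exams implied by a case count at a given incidence rate."""
    return round(cases / incidence)

# Center A: 35 NSF cases at a benchmark incidence of 1 in 1,750
center_a_exams = implied_denominator(35, 1 / 1750)  # about 61,250 exams

# Center B: 14 NSF cases at a benchmark incidence of 1 in 1,803
center_b_exams = implied_denominator(14, 1 / 1803)  # about 25,242 exams

print(center_a_exams, center_b_exams)
```

In other words, each center's preadoption case series reflects tens of thousands of contrast-enhanced studies, which is why even a handful of NSF cases in the postadoption period would have been meaningful.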
My Take
Prevention of NSF Remains a Challenge
It's important to note that the reported decrease in NSF may not have occurred simply because new contrast agents were used. The more restrictive policies on the use of contrast agents might account for the improvement.
Companies that manufacture alternative contrast agents have a clear financial incentive in conducting and reporting such research. This study is by no means definitive, nor should the findings be construed to mean that a flat-out switch to an alternative agent is recommended.
That being said, we've only begun to scratch the surface of NSF. We don't understand the pathophysiology of the disorder, nor do we have any factor other than known renal disease that we can use to stratify patients for NSF risk.
Clearly, we still have a long way to go in addressing the problem of NSF.
FRANK MICHOTA, M.D., is the Director of Academic Affairs in the Department of Hospital Medicine at the Cleveland Clinic. He reports no relevant conflicts of interest.
CEA Deemed Safer Than Stenting
NEW YORK — Carotid endarterectomy was deemed safer than carotid artery stenting for symptomatic patients based on results from a multicenter study of 1,710 patients, although post-procedure complications suggest that as stent technology evolves, the two approaches will need to be revisited, according to Dr. Frans Moll.
The International Carotid Stenting Study found more than twice as many strokes among carotid artery stenting (CAS) patients (58) as among carotid endarterectomy (CEA) patients (27) in the per-protocol 30-day analysis. Furthermore, 72 patients in the CAS group had a stroke or MI, or had died, by 120 days of follow-up, compared with 43 in the CEA group, for a hazard ratio of 1.73, Dr. Moll said at the Veith symposium on vascular medicine sponsored by the Cleveland Clinic.
However, “the complications occurred not so much during [stenting] but at 1–3 days after the procedure,” Dr. Moll said in an interview. “You put in the stent. You give all of the drugs in the correct way. The technology is good. Then the patient goes from the table and you get a call from the neurologist telling you that your patient has got a serious minor stroke at day 2. This [suggests] that maybe some technical features of the stent are not yet as good as we wish they were.” It may be that “the development of stent technology has not reached the level that is necessary to replace traditional surgical skills,” said Dr. Moll, a professor of vascular surgery at the University Medical Center in Utrecht, the Netherlands.
In this study, patients with symptomatic carotid artery stenosis greater than 50% were randomized to treatment with CAS (853) or CEA (857). To be included, patients had to be deemed to require treatment, and the stenosis had to be suitable for both stenting and surgery. Ultrasound study of the carotid artery to be treated was performed at or before randomization and at 1 month following treatment—and will continue annually.
Participating surgeons had to have performed more than 50 CEA or 50 CAS procedures—and more than 10 cases/stents a year—at supervised centers included in the study. Several stents were approved for use in this trial. All patients received best medical care including antiplatelet therapy or anticoagulation (when appropriate) and control of medical risk factors. Aspirin plus clopidogrel were provided before stenting.
The researchers analyzed the 853 CAS patients and 857 CEA patients by intention to treat up to 120 days post randomization. The per-protocol analysis included 828 patients in the CAS group and 821 in the CEA group. In terms of secondary outcomes at 120 days (see table), more patients in the CAS group had any stroke (65, compared with 34 in the CEA group). The hazard ratio for any stroke or death for CAS vs. CEA was 1.91.
In an MRI substudy of 108 CAS patients and 92 CEA patients at five centers, “we see a real difference between CAS and CEA” at up to 6 weeks' follow-up, said Dr. Moll. In terms of new ischemic lesions seen on diffusion-weighted MRI after the procedures, the odds ratio for CAS vs. CEA was 5.24.
“The number of serious strokes was not so much different—disabling strokes were not the biggest difference—but all of these minor strokes and lesions on diffusion-weighted imaging were striking,” he said in an interview.
Notably, protection devices were recommended for use during CAS but were not mandatory. A total of 245 patients got CAS without a protection device, and the remainder had protection. There was no significant difference in outcomes regardless of whether a protection device was used, Dr. Moll said.
Source: Elsevier Global Medical News
Mammography Experts Assail USPSTF Stance
New mammography screening recommendations from the U.S. Preventive Services Task Force will cost women's lives and essentially take the breast cancer death rate back to 1950s levels, a panel of mammography experts said at the annual meeting of the Radiological Society of North America.
The net effect of the recommendations is that “screening would begin too late and would be too little. We would save money but we would lose lives,” said Dr. Stephen A. Feig, a professor of radiology at the University of California, Irvine, and president-elect of the American Society of Breast Disease.
The task force now recommends that women aged 50–74 years need only get biennial exams instead of annual screenings and that routine mammographic screening is not necessary for women aged 40–49 years.
“What does this tell women in their 40s? It tells them basically that they can go back to the 1950s, when they waited until a cancer was too large to ignore any more and then bring it to their doctor's attention,” said Dr. Daniel B. Kopans, who is senior radiologist in the breast imaging division at Massachusetts General Hospital and a professor of radiology at Harvard Medical School, both in Boston. “They're basically saying, 'Ignore your breasts until there's an obvious cancer.'”
The recommendations—released in November (Ann. Intern. Med. 2009;151:716–26)—triggered a controversy among physicians, patients, and politicians. The recommendations were the subject of a Dec. 1 hearing before the Health subcommittee of the House Energy and Commerce Committee, at which task force members were put on the defensive.
The USPSTF guidelines were updated using evidence from two studies commissioned by the task force. One, funded by the Agency for Healthcare Research and Quality, is an updated systematic review of randomized controlled trials of screening mammography (Ann. Intern. Med. 2009;151:727–37). It concludes that mammography screening reduces breast cancer mortality by 15% for women aged 39–69 years and that both false-positive results and additional imaging are common.
The other study, by the Cancer Intervention and Surveillance Modeling Network, used estimates of screening outcomes for a range of screening strategies at different frequencies and ages of initiation and cessation (Ann. Intern. Med. 2009;151:738–47). This study concluded that “biennial intervals are more efficient and provide a better balance of benefits and harms than annual intervals.”
The use of these studies as the basis of the new recommendations angered experts on the RSNA panel. “They used selective science and they also used computer modeling as the major new analysis that they put forth,” Dr. Kopans said. “There were direct studies that were actually ignored by the task force.” These studies show that most of the decrease in breast cancer deaths is because of screening and not therapy.
Dr. Feig agreed and cited several randomized studies. “We know from these studies that women who are screened may have their risk of death from breast cancer reduced by as much as 40%–50%.”
In the Swedish Two-County trial (Lancet 1985;1:829–32), a 31% reduction in mortality was seen in women aged 40–74 years who were offered screening. “These randomized trials underestimate the benefits of screening” because the results include all women who were offered screening, not just those who underwent screening, Dr. Feig said.
In the Swedish seven-county service screening study (Cancer 2002;95:458–69), there was a 44% reduction in mortality among women who were screened.
“In the United States—where many women are being screened—the average woman with invasive breast cancer today is almost 40% less likely to die from her disease, compared with her counterpart in the 1980s,” Dr. Feig said (Cancer 2002;95:451–7).
“About 20% of all breast cancer deaths in our country are in women whose cancers were found in their 40s. Because they're younger, they have longer life expectancies. About 40% of the years of life lost to breast cancer are linked to cancers found in women in their 40s,” he said. The new recommendations would put these younger women at risk.
The RSNA panelists also expressed concern that the recommendations could prompt insurers to stop paying for screening mammography not recommended by the task force.
Disclosures: Dr. Feig and Dr. Kopans reported that they have no relevant conflicts of interest.
New mammography screening recommendations from the U.S. Preventive Services Task Force will cost women's lives and essentially take the breast cancer death rate back to 1950s levels, a panel of mammography experts said at the annual meeting of the Radiological Society of North America.
The net effect of the recommendations is that “screening would begin too late and would be too little. We would save money but we would lose lives,” said Dr. Stephen A. Feig, a professor of radiology at the University of California, Irvine, and president-elect of the American Society of Breast Disease.
The task force now recommends that women aged 50–74 years need only get biennial exams instead of annual screenings and that routine mammographic screening is not necessary for women aged 40–49 years.
“What does this tell women in their 40s? It tells them basically that they can go back to the 1950s, when they waited until a cancer was too large to ignore any more and then bring it to their doctor's attention,” said Dr. Daniel B. Kopans, who is senior radiologist in the breast imaging division at Massachusetts General Hospital and a professor of radiology at Harvard Medical School, both in Boston. “They're basically saying, 'Ignore your breasts until there's an obvious cancer.'”
The recommendations—released in November (Ann. Intern. Med. 2009;151:716–26)—triggered a controversy among physicians, patients, and politicians. The recommendations were the subject of a Dec. 1 hearing before the Health subcommittee of the House Energy and Commerce Committee, at which task force members were put on the defensive.
The USPSTF guidelines were updated using evidence from two studies commissioned by the task force. One study, funded by the Agency for Healthcare Research and Quality, is an updated systematic review of screening mammography randomized, controlled trials (Ann. Intern. Med. 2009;151: 727–37). It concludes that mammography screening reduces breast cancer mortality by 15% for women aged 39–69 years and that both false-positive results and additional imaging are common.
The other study, by the Cancer Intervention and Surveillance Modeling Network, used estimates of screening outcomes for a range of screening strategies at different frequencies and ages of initiation and cessation (Ann. Intern. Med. 2009;151:738–47). This study concluded that “biennial intervals are more efficient and provide a better balance of benefits and harms than annual intervals.”
The use of these studies as the basis of the new recommendations angered experts on the RSNA panel. “They used selective science and they also used computer modeling as the major new analysis that they put forth,” Dr. Kopans said. “There were direct studies that were actually ignored by the task force.” These studies show that most of the decrease in breast cancer deaths is because of screening and not therapy.
Dr. Feig agreed and cited several randomized studies. “We know from these studies that women who are screened may have their risk of death from breast cancer reduced by as much as 40%–50%.”
In the Swedish Two-County trial (Lancet 1985;1:829–32), a 31% reduction in mortality was seen in women aged 40–74 years who were offered screening. “These randomized trials underestimate the benefits of screening” because the results include all women who were offered screening, not just those who underwent screening, Dr. Feig said.
In the Swedish seven-county service screening study (Cancer 2002;95:458–69), there was a 44% reduction in mortality among women who were screened.
“In the United States—where many women are being screened—the average woman with invasive breast cancer today is almost 40% less likely to die from her disease, compared with her counterpart in the 1980s,” Dr. Feig said (Cancer 2002;95:451–7).
“About 20% of all breast cancer deaths in our country are found in women in their 40s. Because they're younger, they have longer life expectancies. About 40% of the years of life lost to breast cancer are linked to those that are found in their 40s,” he said. The new recommendations would put these younger women at risk.
The RSNA panelists also expressed concern that the recommendations could prompt insurers to stop paying for screening mammography not recommended by the task force.
Disclosures: Dr. Feig and Dr. Kopans reported that they have no relevant conflicts of interest.
New mammography screening recommendations from the U.S. Preventive Services Task Force will cost women's lives and essentially take the breast cancer death rate back to 1950s levels, a panel of mammography experts said at the annual meeting of the Radiological Society of North America.
The net effect of the recommendations is that “screening would begin too late and would be too little. We would save money but we would lose lives,” said Dr. Stephen A. Feig, a professor of radiology at the University of California, Irvine, and president-elect of the American Society of Breast Disease.
The task force now recommends that women aged 50–74 years need only get biennial exams instead of annual screenings and that routine mammographic screening is not necessary for women aged 40–49 years.
“What does this tell women in their 40s? It tells them basically that they can go back to the 1950s, when they waited until a cancer was too large to ignore any more and then bring it to their doctor's attention,” said Dr. Daniel B. Kopans, who is senior radiologist in the breast imaging division at Massachusetts General Hospital and a professor of radiology at Harvard Medical School, both in Boston. “They're basically saying, 'Ignore your breasts until there's an obvious cancer.'”
The recommendations—released in November (Ann. Intern. Med. 2009;151:716–26)—triggered a controversy among physicians, patients, and politicians. The recommendations were the subject of a Dec. 1 hearing before the Health subcommittee of the House Energy and Commerce Committee, at which task force members were put on the defensive.
The USPSTF guidelines were updated using evidence from two studies commissioned by the task force. One study, funded by the Agency for Healthcare Research and Quality, is an updated systematic review of screening mammography randomized, controlled trials (Ann. Intern. Med. 2009;151: 727–37). It concludes that mammography screening reduces breast cancer mortality by 15% for women aged 39–69 years and that both false-positive results and additional imaging are common.
The other study, by the Cancer Intervention and Surveillance Modeling Network, used estimates of screening outcomes for a range of screening strategies at different frequencies and ages of initiation and cessation (Ann. Intern. Med. 2009;151:738–47). This study concluded that “biennial intervals are more efficient and provide a better balance of benefits and harms than annual intervals.”
The use of these studies as the basis of the new recommendations angered experts on the RSNA panel. “They used selective science and they also used computer modeling as the major new analysis that they put forth,” Dr. Kopans said. “There were direct studies that were actually ignored by the task force.” Those studies, he said, show that most of the decrease in breast cancer deaths is attributable to screening, not therapy.
Dr. Feig agreed and cited several randomized studies. “We know from these studies that women who are screened may have their risk of death from breast cancer reduced by as much as 40%–50%.”
In the Swedish Two-County trial (Lancet 1985;1:829–32), a 31% reduction in mortality was seen in women aged 40–74 years who were offered screening. “These randomized trials underestimate the benefits of screening” because the results include all women who were offered screening, not just those who underwent screening, Dr. Feig said.
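Dr. Feig's point about why invitation-based trials understate the benefit can be illustrated with simple arithmetic. The sketch below uses made-up numbers (the attendance rate and per-attender risk reduction are assumptions, not figures from the trials) and further assumes attenders and non-attenders share the same baseline risk, which real self-selection violates:

```python
# Illustrative only: an intention-to-screen trial averages the mortality
# of women actually screened with that of invited women who never attend,
# diluting the apparent benefit of screening itself.
def itt_relative_risk(rr_screened: float, attendance: float) -> float:
    """Mortality relative risk for the whole invited group, assuming
    non-attenders have the same risk as unscreened controls (RR = 1)."""
    return attendance * rr_screened + (1.0 - attendance) * 1.0

# Hypothetical: a 44% reduction among attenders (RR 0.56) with 80%
# attendance appears as only about a 35% reduction in the invited group.
rr_invited = itt_relative_risk(0.56, 0.80)
print(round(1.0 - rr_invited, 3))  # -> 0.352
```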
In the Swedish seven-county service screening study (Cancer 2002;95:458–69), there was a 44% reduction in mortality among women who were screened.
“In the United States—where many women are being screened—the average woman with invasive breast cancer today is almost 40% less likely to die from her disease, compared with her counterpart in the 1980s,” Dr. Feig said (Cancer 2002;95:451–7).
“About 20% of all breast cancer deaths in our country are found in women in their 40s. Because they're younger, they have longer life expectancies. About 40% of the years of life lost to breast cancer are attributable to cancers diagnosed in women in their 40s,” he said. The new recommendations would put these younger women at risk.
The RSNA panelists also expressed concern that the recommendations could prompt insurers to stop paying for screening mammography not recommended by the task force.
Disclosures: Dr. Feig and Dr. Kopans reported that they have no relevant conflicts of interest.
South Asians' High Fat/Low Lean Mass Linked to Higher Insulin Levels
South Asian men and women appear to have a high fat/low lean mass phenotype that may put them at greater risk for insulin resistance and diabetes, based on a study of individuals in four ethnic groups.
“Ethnic differences in lean mass do occur, such that South Asian men and women have significantly less lean mass than Aboriginal, Chinese, and European men and women of the same body size,” noted lead author Scott A. Lear, Ph.D. “South Asians have a distinct phenotype of high fat mass and low lean mass,” which may account for a substantial portion of their higher insulin levels compared with other ethnic groups.
A total of 828 participants were recruited as part of the Multicultural Community Health Assessment Trial, involving individuals of exclusive Aboriginal, Chinese, European, and South Asian origin. Body mass index ranges were less than 24.9 kg/m².
Total body fat was moderately correlated with total body lean mass in both sexes of all four ethnic groups. Aboriginal, Chinese, and European men had 3.42 kg, 3.01 kg, and 3.57 kg more lean mass, respectively, than did South Asian men at a given total fat mass. Likewise, Aboriginal, Chinese, and European women had 1.98 kg, 2.24 kg, and 2.97 kg more lean mass than did South Asian women at a given total fat mass. “These models accounted for 66% and 63% of the variation in lean mass for men and women, respectively,” the researchers wrote (J. Clin. Endocrinol. Metab. 2009, Oct. 9 [doi: 10.1210/jc.2009-1030]).
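Offsets like “3.57 kg more lean mass at a given fat mass” are the kind of estimate produced by regressing lean mass on fat mass plus ethnicity indicators. The study's actual model is not reproduced here; the sketch below uses simulated data with an assumed 3.57-kg European-vs.-South Asian offset (borrowing the figure quoted for men) purely to show how such a coefficient is fitted and read:

```python
import numpy as np

# Simulated illustration (not study data): lean mass modeled as a linear
# function of fat mass plus an ethnicity indicator, South Asian as reference.
rng = np.random.default_rng(0)
n = 200
fat = rng.uniform(10, 40, n)                # fat mass, kg
is_european = rng.integers(0, 2, n)         # 1 = European, 0 = South Asian
lean = 30 + 0.9 * fat + 3.57 * is_european + rng.normal(0, 2, n)

# Ordinary least squares: intercept, fat-mass slope, ethnicity offset
X = np.column_stack([np.ones(n), fat, is_european])
coef, *_ = np.linalg.lstsq(X, lean, rcond=None)

# coef[2] recovers the "extra lean mass at a given fat mass" for the
# indicator group, i.e., the quantity the study reports per ethnicity.
print(round(coef[2], 1))
```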
The fat mass to lean mass (F:LM) ratio was significantly greater for South Asian men compared with Chinese and European men and there was a trend toward a greater F:LM ratio compared with Aboriginal men. Likewise, the F:LM ratio was also significantly greater for South Asian women compared with Chinese, European, and Aboriginal women. These relationships persisted after adjustment for age, height, humerus breadth, smoking status, physical activity, and diet (and menopausal status in women).
“South Asian men and women have a lower lean mass at a given fat mass even when taking into account possible confounders such as lifestyle habits known to affect body composition,” noted Dr. Lear, professor of biomedical physiology and kinesiology at Simon Fraser University, Vancouver, B.C., and his coinvestigators.
In a post hoc analysis, insulin and homeostasis model assessment (HOMA) levels were significantly greater for South Asian men than for Chinese and European men, even after adjustment for total body fat mass, age, height, humerus breadth, diet, and physical activity. When the models were adjusted for differences in the F:LM ratios instead of total body fat mass, there were no longer any differences in insulin or HOMA levels among the groups.
South Asian women had significantly greater insulin levels than did Chinese women in a similar post hoc analysis. There were no differences in HOMA between the groups. After adjustment for total body fat mass, age, menopausal status, height, humerus breadth, diet, and physical activity, South Asian women had significantly greater insulin and HOMA values than did Chinese and European women. As with men, when the models were adjusted for differences in the F:LM ratios instead of total body fat mass, there were no longer any differences in insulin or HOMA levels.
The study was supported by the Canadian Institutes of Health Research, Institute of Nutrition, Metabolism, and Diabetes. The authors reported that they had no conflicts of interest.
Menopausal Status Affects BMD/CRP Link
DENVER — Menopausal status appears to modify the relationship between inflammation and bone mineral density, on the basis of findings from the Framingham Osteoporosis Study.
Postmenopausal women on estrogen replacement therapy (ERT) with higher levels of C-reactive protein—a measure of systemic inflammation—had greater bone mineral density (BMD) at the femoral neck than did those in the same group with lower CRP levels, Dr. Robert R. McLean and his coinvestigators reported in a poster at the annual meeting of the American Society for Bone and Mineral Research. In contrast, in premenopausal women, increased CRP levels were associated with a decrease in BMD at the trochanter.
The Framingham Heart Study Offspring Cohort enrolled 5,124 children and spouses of the original Framingham cohort. From 1996 to 2001, BMD was measured in 3,035 offspring in the Framingham Osteoporosis Study. Fasting blood samples were collected from 2,095 of them during 1998-2001. C-reactive protein levels were measured after BMD in 72% of participants, with a median time between assessments of 1.4 years.
BMD was measured at the right femoral neck and trochanter, and at the lumbar spine. Other variables obtained at the time of BMD measurement included age, height, weight, physical activity, smoking status, and use of NSAIDs. In women, menopause status, current ERT use, and years since menopause were recorded. Separate analyses were performed for the 1,291 men, 229 premenopausal women, 497 postmenopausal women using ERT, and 888 postmenopausal women not using ERT. Analyses were adjusted for age, height, weight, physical activity, and smoking status, with additional adjustment for NSAID use.
Median CRP levels were higher for postmenopausal women (3.9 mg/L for those on ERT and 2.3 mg/L for those not on ERT) than for men (1.9 mg/L) or for premenopausal women (1.4 mg/L). In all, 74% of men, 62% of premenopausal women, 86% of postmenopausal women on ERT, and 77% of postmenopausal women not on ERT had CRP levels of at least 1 mg/L.
CRP level was not associated with BMD in men or in postmenopausal women using ERT. However, among the women using ERT, there was a significant association between years since menopause and BMD at all three sites. The researchers repeated the analysis for women fewer than 10 years past menopause and those at least 10 years past menopause. “The association of CRP with femoral neck BMD tended to be negative for those less than 10 years past menopause and positive for those at least 10 years past menopause, while there was no significant association at the trochanter or lumbar spine,” they wrote.
For postmenopausal women not using ERT, those with CRP levels of at least 1 mg/L had 2.5% greater BMD at the femoral neck, compared with the lower CRP level group, a significant difference. However, there were no significant associations at the trochanter or lumbar spine. “Contrary to our hypothesis, greater inflammation may be associated with higher BMD among postmenopausal women not using ERT,” wrote Dr. McLean, who is a researcher at the Institute for Aging Research, a research affiliate of Harvard Medical School, Boston.
In premenopausal women, CRP level was not associated with BMD at the femoral neck or the lumbar spine. However, each unit increase in CRP was associated with a 0.005-g/cm² decrease in BMD at the trochanter.
Preclinical studies have suggested that proinflammatory cytokines play a role in bone resorption, but the impact on BMD is not clear.
Dr. McLean reported that he has no relevant financial relationships.