Topical Steroids Underused in Atopic Dermatitis
Topical corticosteroids, a first-line therapy for atopic dermatitis, appear to be poorly prescribed for pediatric patients, based on an analysis of data from two national databases of patient visits.
Fewer than a third of the estimated 7.4 million physician visits for AD by pediatric patients between 1997 and 2004 involved a prescription for topical corticosteroids, reported Dr. Kimberly A. Horii of the dermatology section at Children's Mercy Hospitals and Clinics in Kansas City, Mo., and her colleagues.
"Topical corticosteroids are accepted first-line anti-inflammatory agents to treat flares of AD," the researchers wrote. Surprisingly, fewer than one-third of all pediatric AD visits, and fewer than one in four AD visits for patients who were younger than 2 years, involved treatment with topical corticosteroids in this cohort, they noted (Pediatrics 2007;120:e527-e34 [Epub doi:10.1542/peds.2007-0289]).
From 1997 to 2000, there were 2.8 million outpatient pediatric visits for AD; 34% of these involved the prescription of topical corticosteroids. From 2001 to 2004, there were 4.6 million AD visits, of which 25% involved the prescription of topical corticosteroids; the decline, however, was not statistically significant. The researchers suggested that steroid phobia among parents or physicians may limit steroid use. "Misunderstandings about the use and adverse effects of topical corticosteroids … may also contribute significantly to the undertreatment of pediatric AD," the researchers wrote.
They used data on outpatient encounters by patients aged 18 years or younger compiled from the National Ambulatory Medical Care Survey (NAMCS) and the National Hospital Ambulatory Medical Care Survey (NHAMCS) databases between 1997 and 2004.
There were about 620,000 annual AD visits in 1997, and the number peaked at 1.7 million in 2003. The increase in AD visits per year was statistically significant. By 2004, the number had declined to 850,000.
"Although this peak in 2003 may be an anomaly, a potential explanation for this finding may be related to the introduction of the topical calcineurin inhibitors in 2000 and 2001," the researchers wrote. "With increased public knowledge of topical calcineurin inhibitors, parents who had previously not seen a physician for their child's AD may have potentially sought this novel treatment."
However, they noted that there was an insufficient number of records in the database to allow a statistical analysis of this trend.
Between 1997 and 2000, 14% of AD visits involved the prescription of antihistamines; this increased to 21% between 2001 and 2004, although the increase was not statistically significant. Topical tacrolimus (Protopic) and pimecrolimus (Elidel) were prescribed in 10% and 13% of AD visits, respectively, between 2001 and 2004. Between 1997 and 2004, oral corticosteroids were prescribed in 17% of the 6.7 million AD visits that did not also involve asthma. More than one drug could be prescribed per visit.
The researchers also looked specifically at children younger than 2 years. Between 1997 and 2000, there were 0.7 million AD visits for this age group; there were 1.3 million between 2001 and 2004. Of these visits, 21% and 24%, respectively, involved the prescription of topical corticosteroids.
Interestingly, in the same age group, 8% and 7% of AD visits between 2001 and 2004 involved prescriptions of tacrolimus and pimecrolimus, respectively. "In this study, we found similar rates of use of topical corticosteroids and topical calcineurin inhibitors in patients who were younger than 2 years (24% vs. 22%)," the researchers said. Calcineurin inhibitors are indicated for the short-term or intermittent treatment of mild to moderate AD (topical pimecrolimus) or moderate to severe AD (topical tacrolimus) in patients older than 2 years.
The demographics of pediatric AD visits were similar to those of all pediatric visits, with the exception of race and age. Of all pediatric visits, 69% were made to a generalist, compared with 43% of pediatric AD visits, and 22% of all pediatric visits were made by children aged 2–5 years.
Black and Asian children were seen more frequently for the diagnosis of AD. Black children and adolescents accounted for 19% of AD visits and Asian children and adolescents accounted for 11%. In contrast, black children and adolescents accounted for 12% of all visits and Asian children and adolescents accounted for 4%. A total of 69% of all pediatric visits were by white patients, compared with only 51% of pediatric AD visits.
ELSEVIER GLOBAL MEDICAL NEWS
Let Patient Preferences Guide Bisphosphonates Use
WASHINGTON — Physicians and patients need to work together to decide for or against long-term bisphosphonate treatment for osteoporosis. The body of evidence is still evolving and there's no one-size-fits-all answer, said Dr. Sundeep Khosla, research chair of the division of endocrinology at the Mayo Clinic in Rochester, Minn.
“I think ultimately the patient has to decide with her physician. … Patient values factor into this,” said Dr. Khosla at an international symposium sponsored by the National Osteoporosis Foundation. A physician can inform a patient about the best information that is currently available in terms of fracture risk and the risk of complications. However, the patient has to decide what risk she is willing to take with regard to fracture.
Dr. Khosla discussed the pros and cons of long-term bisphosphonate use in the context of a hypothetical patient familiar to many physicians. A 60-year-old woman started on vitamin D/calcium supplements and 70 mg/week alendronate 5 years ago when her dual-energy x-ray absorptiometry (DXA) scan revealed a spine T score of −2.6 and a total hip T score of −2.0. She also has a family history of hip fracture. Her bone mineral density (BMD) has increased about 5% at the spine and 3% at the hip. She has not had any clinical fractures. She asks if she should continue with alendronate and if so, for how long.
So should a patient who has been on alendronate for 5 years continue with therapy? In favor of continuing, it does appear that continuation will reduce the risk of clinical vertebral fractures.
Alendronate is the longest-available bisphosphonate, with 10 years of follow-up data. In one analysis of 10 years of data for postmenopausal women on varying regimens of alendronate, those on 10 mg daily of alendronate had increased BMD for the spine and hip (N. Engl. J. Med. 2004;350:1189–99). Spine BMD increased by 13.7% from baseline over that period, and total hip BMD increased by 6.7%. Smaller gains in BMD were noted for women on 5 mg daily of alendronate: 9.3% and 2.9% for the spine and total hip, respectively. For women in the discontinuation group, spinal BMD leveled off (an increase of 0.3% from years 6–10) and total hip BMD declined slightly (a decrease of 1% from years 6–10).
There was an initial reduction in vertebral fractures for women on alendronate, but there was no difference in vertebral fractures during years 6–10. However, the study was not adequately powered to assess fractures.
This study “told us that alendronate did in fact have sustained effects over 10 years on bone density and bone turnover markers,” said Dr. Khosla. However, the fracture data were inconclusive: “At best, there was no clear evidence for an increase in vertebral or nonvertebral fractures following long-term alendronate therapy.”
Other data suggest that stopping treatment for 5 years may increase the risk of nonvertebral fractures and minor vertebral deformities.
In the FLEX (Fracture Intervention Trial [FIT] Long-Term Extension) study, published late last year, researchers assessed the effects of continuing or stopping alendronate after 5 years of treatment (JAMA 2006;296:2927–38). In this study, women who had received 5 years of alendronate therapy were randomized to continue on 5 mg/day or 10 mg/day alendronate, or to stop therapy.
For women on placebo for years 5–10, total hip BMD returned to baseline levels. Women on both doses of alendronate gained and maintained a 4% increase in hip BMD over baseline during the same period. In terms of spine BMD, women on placebo during years 5–10 had a slight increase and women on alendronate had a steeper increase.
Women who continued on alendronate for 10 years had an almost 50% reduction in clinical vertebral fractures, compared with those who stopped treatment after 5 years. There was no difference between the groups in terms of nonvertebral or morphometric vertebral fractures.
“So if you look at clinical vertebral fractures, what you see is that if the BMD was greater than −2.0, there doesn't appear to be any real benefit [to continued alendronate]. But if you have a BMD less than −2.0 or less than −2.5 … it appears that both of these subgroups benefitted from continuing alendronate for 10 years as opposed to stopping it after 5 years.”
The study provides some useful clinical answers. “It says that continuation of alendronate for 10 years does maintain bone mass and reduces bone remodeling, compared with discontinuation after 5 years,” said Dr. Khosla. Discontinuation did not increase the risk of nonvertebral fractures or x-ray-detected vertebral fractures, but the risk of clinically detected vertebral fractures was significantly increased in those who discontinued therapy after 5 years.
“For many women, stopping alendronate after 5 years for up to 5 more years does not significantly increase fracture risk, but women at high risk of vertebral fractures—such as those who already have a vertebral fracture or those [who might have] very low bone density—may benefit by continuing beyond 5 years.”
Fewer data are available for risedronate. Over 5 years, women on risedronate had continued modest increases in spine bone density, and relative stabilization of femoral-neck bone density, judging from findings from the Vertebral Efficacy With Risedronate Therapy-Multinational (VERT-MN) trial (Bone 2003;32:120–6). Women on placebo had a reduction in femoral-neck bone density and a relative stabilization of spine bone density during the 2-year extension of the trial that originally was designed to run 3 years. During the 2 years of the extension, women on risedronate had more than a 50% reduction in vertebral fractures, compared with women who stopped therapy.
Even fewer data are available for ibandronate. In a 3-year study of almost 3,000 women, the incidence of new vertebral fractures in women on oral daily ibandronate (2.5 mg) was 11%, compared with 6% for women in the placebo group (Bone 2005;37:651–4).
“There are potential concerns with long-term bisphosphonate therapy,” said Dr. Khosla. One important question is whether the continued and potent inhibition of bone turnover could be harmful because of the increased mineralization of bone that has been observed in animal models.
There is also concern about the accumulation of microdamage. “Here, the thought is that because bone constantly needs to repair microcracks and microfractures, if you [inhibit] resorption for long periods of time, these microcracks will accumulate, and you can start to see a paradoxical increase in fractures in various sites because you haven't repaired the skeleton normally,” said Dr. Khosla.
Animal and human studies do show that bisphosphonate-induced inhibition of bone resorption is associated with increased bone mineralization. Increased mineralization does increase bone strength, but only up to a point, because overly mineralized bone becomes too stiff.
However, despite the results of animal studies with high doses of bisphosphonates, there is no evidence in humans for increased accumulation of microdamage. “This is a theoretical concern,” said Dr. Khosla.
WASHINGTON — Physicians and patients need to work together to decide for or against long-term bisphosphonate treatment for osteoporosis. The body of evidence is still evolving and there's no one-size-fits-all answer, said Dr. Sundeep Khosla, research chair of the division of endocrinology at the Mayo Clinic in Rochester, Minn.
“I think ultimately the patient has to decide with her physician. … Patient values factor into this,” said Dr. Khosla at an international symposium sponsored by the National Osteoporosis Foundation. A physician can inform a patient about the best information that is currently available in terms of fracture risk and the risk of complications. However, the patient has to decide what risk she is willing to take with regard to fracture.
Dr. Khosla discussed the pros and cons of long-term bisphosphonate use in the context of a hypothetical patient familiar to many physicians. A 60-year-old woman started on vitamin D/calcium supplements and 70 mg/week alendronate 5 years ago when her dual-energy x-ray absorptiometry (DXA) scan revealed a spine T score of −2.6 and a total hip T score of −2.0. She also has a family history of hip fracture. Her bone mineral density (BMD) has increased about 5% at the spine and 3% at the hip. She has not had any clinical fractures. She asks if she should continue with alendronate and if so, for how long.
So should a patient who has been on alendronate for 5 years continue with therapy? In favor of continuing, it does appear that continuation will reduce the risk of clinical vertebral fractures.
Alendronate is the longest-available bisphosphonate, with 10 years of follow-up data. In one analysis of 10 years of data for postmenopausal women on varying regimens of alendronate, those on 10 mg daily of alendronate had increased BMD for the spine and hip (N. Engl. J. Med. 2004;350:1189–99). Spine BMD increased by 13.7% from baseline over that period, and total hip BMD increased by 6.7%. Smaller gains in BMD were noted for women on 5 mg daily of alendronate: 9.3% and 2.9% for the spine and total hip, respectively. For women in the discontinuation group, spinal BMD leveled off (an increase of 0.3% from years 6–10) and total hip BMD declined slightly (a decrease of 1% from years 6–10).
There was an initial reduction in vertebral fractures for women on alendronate, but there was no difference in vertebral fractures during years 6–10. However, the study was not adequately powered to assess fractures.
This study “told us that alendronate did in fact have sustained effects over 10 years on bone density and bone turnover markers,” said Dr. Khosla. However, the fracture data were inconclusive: “At best, there was no clear evidence for an increase in vertebral or nonvertebral fractures following long-term alendronate therapy.”
Other data suggest that stopping treatment for 5 years will increase the risk of nonvertebral fractures and minor vertebral deformities.
In the FLEX (Fracture Intervention Trial [FIT] Long-Term Extension) study, published late last year, researchers assessed the effects of continuing or stopping alendronate after 5 years of treatment (JAMA 2006;296:2927–38). In this study, women who had received 5 years of alendronate therapy were randomized to continue on 5 mg/day or 10 mg/day alendronate, or to stop therapy.
For women on placebo for years 5–10, total hip BMD returned to baseline levels. Women on both doses of alendronate gained and maintained a 4% increase in hip BMD over baseline during the same period. In terms of spine BMD, women on placebo during years 5–10 had a slight increase and women on alendronate had a steeper increase.
Women who continued on alendronate for 10 years had an almost 50% reduction in clinical vertebral fractures, compared with those who stopped treatment after 5 years. There was no difference between the groups in terms of nonvertebral or morphometric vertebral fractures.
“So if you look at clinical vertebral fractures, what you see is that if the BMD was greater than −2.0, there doesn't appear to be any real benefit [to continued alendronate]. But if you have a BMD less than −2.0 or less than −2.5 … it appears that both of these subgroups benefitted from continuing alendronate for 10 years as opposed to stopping it after 5 years.”
The study provides some useful clinical answers. “It says that continuation of alendronate for 10 years does maintain bone mass and reduces bone remodeling, compared with discontinuation after 5 years,” said Dr. Khosla. Discontinuation did not increase the risk of nonvertebral fractures or x-ray-detected vertebral fractures, but the risk of clinically detected vertebral fractures was significantly increased in those who discontinued therapy after 5 years.
“For many women, stopping alendronate after 5 years for up to 5 more years does not significantly increase fracture risk, but women at high risk of vertebral fractures—such as those who already have a vertebral fracture or those [who might have] very low bone density—may benefit by continuing beyond 5 years.”
Fewer data are available for risedronate. Over 5 years, women on risedronate had continued modest increases in spine bone density, and relative stabilization of femoral-neck bone density, judging from findings from the Vertebral Efficacy With Risedronate Therapy-Multinational (VERT-MN) trial (Bone 2003;32:120–6). Women on placebo had a reduction in femoral-neck bone density and a relative stabilization of spine bone density during the 2-year extension of the trial that originally was designed to run 3 years. During the 2 years of the extension, women on risedronate had more than a 50% reduction in vertebral fractures, compared with women who stopped therapy.
Even fewer data are available for ibandronate. In a 3-year study of almost 3,000 women, the incidence of new vertebral fractures in women on oral daily ibandronate (2.5 mg) was 11%, compared with 6% for women in the placebo group (Bone 2005;37:651–4).
“There are potential concerns with long-term bisphosphonate therapy,” said Dr. Khosla. One important question is whether the continued and potent inhibition of bone turnover could be harmful because of the increased mineralization of bone that has been observed in animal models.
There is also concern about the accumulation of microdamage. “Here, the thought is that because bone constantly needs to repair microcracks and microfractures, if you [inhibit] resorption for long periods of time, these microcracks will accumulate, and you can start to see a paradoxical increase in fractures in various sites because you haven't repaired the skeleton normally,” said Dr. Khosla.
Animal and human studies do show that bisphosphonate-induced inhibition of bone resorption is associated with increased bone mineralization. Increased bone mineralization does increase bone strength, but only up to a point because bone also becomes too stiff.
However, despite the results of animal studies with high doses of bisphosphonates, there is no evidence in humans for increased accumulation of microdamage. “This is a theoretical concern,” said Dr. Khosla.
WASHINGTON — Physicians and patients need to work together to decide for or against long-term bisphosphonate treatment for osteoporosis. The body of evidence is still evolving and there's no one-size-fits-all answer, said Dr. Sundeep Khosla, research chair of the division of endocrinology at the Mayo Clinic in Rochester, Minn.
“I think ultimately the patient has to decide with her physician. … Patient values factor into this,” said Dr. Khosla at an international symposium sponsored by the National Osteoporosis Foundation. A physician can inform a patient about the best information that is currently available in terms of fracture risk and the risk of complications. However, the patient has to decide what risk she is willing to take with regard to fracture.
Dr. Khosla discussed the pros and cons of long-term bisphosphonate use in the context of a hypothetical patient familiar to many physicians. A 60-year-old woman started on vitamin D/calcium supplements and 70 mg/week alendronate 5 years ago when her dual-energy x-ray absorptiometry (DXA) scan revealed a spine T score of −2.6 and a total hip T score of −2.0. She also has a family history of hip fracture. Her bone mineral density (BMD) has increased about 5% at the spine and 3% at the hip. She has not had any clinical fractures. She asks if she should continue with alendronate and if so, for how long.
So should a patient who has been on alendronate for 5 years continue with therapy? In favor of continuing, it does appear that continuation will reduce the risk of clinical vertebral fractures.
Alendronate is the longest-available bisphosphonate, with 10 years of follow-up data. In one analysis of 10 years of data for postmenopausal women on varying regimens of alendronate, those on 10 mg daily of alendronate had increased BMD for the spine and hip (N. Engl. J. Med. 2004;350:1189–99). Spine BMD increased by 13.7% from baseline over that period, and total hip BMD increased by 6.7%. Smaller gains in BMD were noted for women on 5 mg daily of alendronate: 9.3% and 2.9% for the spine and total hip, respectively. For women in the discontinuation group, spinal BMD leveled off (an increase of 0.3% from years 6–10) and total hip BMD declined slightly (a decrease of 1% from years 6–10).
There was an initial reduction in vertebral fractures for women on alendronate, but there was no difference in vertebral fractures during years 6–10. However, the study was not adequately powered to assess fractures.
This study “told us that alendronate did in fact have sustained effects over 10 years on bone density and bone turnover markers,” said Dr. Khosla. However, the fracture data were inconclusive: “At best, there was no clear evidence for an increase in vertebral or nonvertebral fractures following long-term alendronate therapy.”
Other data suggest that stopping treatment for 5 years will increase the risk of nonvertebral fractures and minor vertebral deformities.
In the FLEX (Fracture Intervention Trial [FIT] Long-Term Extension) study, published late last year, researchers assessed the effects of continuing or stopping alendronate after 5 years of treatment (JAMA 2006;296:2927–38). In this study, women who had received 5 years of alendronate therapy were randomized to continue on 5 mg/day or 10 mg/day alendronate, or to stop therapy.
For women on placebo for years 5–10, total hip BMD returned to baseline levels. Women on both doses of alendronate gained and maintained a 4% increase in hip BMD over baseline during the same period. In terms of spine BMD, women on placebo during years 5–10 had a slight increase and women on alendronate had a steeper increase.
Women who continued on alendronate for 10 years had an almost 50% reduction in clinical vertebral fractures, compared with those who stopped treatment after 5 years. There was no difference between the groups in terms of nonvertebral or morphometric vertebral fractures.
“So if you look at clinical vertebral fractures, what you see is that if the BMD was greater than −2.0, there doesn't appear to be any real benefit [to continued alendronate]. But if you have a BMD less than −2.0 or less than −2.5 … it appears that both of these subgroups benefitted from continuing alendronate for 10 years as opposed to stopping it after 5 years.”
The study provides some useful clinical answers. “It says that continuation of alendronate for 10 years does maintain bone mass and reduces bone remodeling, compared with discontinuation after 5 years,” said Dr. Khosla. Discontinuation did not increase the risk of nonvertebral fractures or x-ray-detected vertebral fractures, but the risk of clinically detected vertebral fractures was significantly increased in those who discontinued therapy after 5 years.
“For many women, stopping alendronate after 5 years for up to 5 more years does not significantly increase fracture risk, but women at high risk of vertebral fractures—such as those who already have a vertebral fracture or those [who might have] very low bone density—may benefit by continuing beyond 5 years.”
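The relative reductions quoted for these trials follow from simple risk arithmetic. A minimal sketch, using hypothetical event counts rather than FLEX data:

```python
def relative_risk_reduction(events_treated, n_treated, events_control, n_control):
    """1 - (event risk in the treated group / event risk in the control group)."""
    risk_treated = events_treated / n_treated
    risk_control = events_control / n_control
    return 1 - risk_treated / risk_control

# Hypothetical counts: 25 of 1,000 women fracture on continued therapy
# vs. 50 of 1,000 after stopping -> a 50% relative risk reduction.
print(relative_risk_reduction(25, 1000, 50, 1000))  # 0.5
```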
Fewer data are available for risedronate. In the Vertebral Efficacy With Risedronate Therapy-Multinational (VERT-MN) trial, which was originally designed to run 3 years and was extended for 2 more, women on risedronate had continued modest increases in spine bone density and relative stabilization of femoral-neck bone density over the full 5 years (Bone 2003;32:120–6). During the 2-year extension, women on placebo had a reduction in femoral-neck bone density and relative stabilization of spine bone density. Over those same 2 years, women who continued risedronate had more than a 50% reduction in vertebral fractures, compared with women who stopped therapy.
Even fewer data are available for ibandronate. In a 3-year study of almost 3,000 women, the incidence of new vertebral fractures in women on oral daily ibandronate (2.5 mg) was 11%, compared with 6% for women in the placebo group (Bone 2005;37:651–4).
“There are potential concerns with long-term bisphosphonate therapy,” said Dr. Khosla. One important question is whether the continued and potent inhibition of bone turnover could be harmful because of the increased mineralization of bone that has been observed in animal models.
There is also concern about the accumulation of microdamage. “Here, the thought is that because bone constantly needs to repair microcracks and microfractures, if you [inhibit] resorption for long periods of time, these microcracks will accumulate, and you can start to see a paradoxical increase in fractures in various sites because you haven't repaired the skeleton normally,” said Dr. Khosla.
Animal and human studies do show that bisphosphonate-induced inhibition of bone resorption is associated with increased bone mineralization. Increased mineralization does increase bone strength, but only up to a point, because overly mineralized bone also becomes too stiff.
However, despite the results of animal studies with high doses of bisphosphonates, there is no evidence in humans for increased accumulation of microdamage. “This is a theoretical concern,” said Dr. Khosla.
CT May Be Used to Assess Myocardial Ischemia
WASHINGTON — Multidetector CT imaging using a rest/dipyridamole-induced stress protocol shows promise for the assessment of myocardial ischemia, on the basis of data presented at the annual meeting of the Society of Cardiovascular Computed Tomography.
Researchers compared rest-stress multidetector CT (MDCT) with stress-rest Tc 99m sestamibi SPECT (single-photon emission computed tomography) imaging to detect myocardial ischemia in 47 patients.
For ischemic regions, a reduction in contrast enhancement on MDCT during stress was 77% sensitive and 99% specific, compared with SPECT. A reduction in contrast enhancement at rest on MDCT identified scarred tissue with 96% sensitivity and 98% specificity, vs. SPECT.
“Our results show that dipyridamole stress-rest multidetector CT can evaluate both myocardial ischemia induced by dipyridamole and coronary artery stenosis,” said Dr. Patricia Carrascosa of Diagnóstico Maipú in Buenos Aires.
Patients included in the study had an average age of 60 years; most were men (68%). Only 12 patients were symptomatic. All patients underwent rest-stress MDCT and stress-rest SPECT. A 17-segment model was used to assess myocardial ischemia. Based on the SPECT results, segments were classified as normal, ischemic, or scarred. Normal segments appeared homogeneous. Ischemic segments had hypodense areas seen only on stress images. Scarred segments had hypodense areas visible on both rest and stress images.
In a subgroup of 20 patients, MDCT results were compared with the results of invasive coronary angiography. Significant stenosis was defined as stenosis of at least 50%. The sensitivity of MDCT, compared with that of invasive angiography, was 92% and the specificity was 97%.
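Sensitivity and specificity figures like those above reduce to counts of true and false positives and negatives against the reference standard (here, SPECT or invasive angiography). A minimal sketch with hypothetical segment counts, not the study's actual tallies:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: MDCT flags 77 of 100 truly ischemic segments
# and correctly clears 693 of 700 normal segments.
sens, spec = sensitivity_specificity(tp=77, fn=23, tn=693, fp=7)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")
```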
Individualized A1c Levels Urged for Type 2 Diabetes
Physicians should base target hemoglobin A1c levels for patients with type 2 diabetes on individualized assessments of comorbidity, life expectancy, risk for complications, and patient preferences, while striving for glycemic control as low as is feasible, according to new clinical guidelines released by the American College of Physicians.
The new guidelines —“Glycemic Control and Type 2 Diabetes Mellitus: The Optimal Hemoglobin A1c Targets. A Guidance Statement from the American College of Physicians”—are based on a review of existing guidelines on glycemic control from nine medical organizations (Ann. Intern. Med. 2007;147:417–22).
Instead of developing another guideline, the authors “felt that it would be more useful to provide clinicians with a rigorous review of the currently available guidelines so that they could make evidence-based care decisions,” wrote Dr. Amir Qaseem and his colleagues on the Clinical Efficacy Assessment Subcommittee of the ACP.
The guidelines made three recommendations regarding optimal hemoglobin A1c (HbA1c) levels for patients with type 2 diabetes:
▸ The goal for glycemic control should be as low as is feasible without undue risk for adverse events or an unacceptable burden on patients. Physicians should also discuss with the patient the benefits and harms of specific levels of glycemic control. “A hemoglobin A1c level less than 7% based on individualized assessment is a reasonable goal for many but not all patients,” the group wrote.
▸ Target HbA1c levels should be based on individualized assessments of comorbidity, life expectancy, risk for complications from diabetes, and patient preferences.
▸ Further research is needed to assess the optimal level of glycemic control, particularly in the presence of comorbid conditions.
To develop the guideline, the group started with a MEDLINE search using the keyword “diabetes” limited to “guideline.” The search identified 416 articles. In addition, group members searched the National Guideline Clearinghouse for guidelines on diabetes. They excluded articles that did not address glycemic control, were duplicates, or were primary research studies. They also excluded articles that were not in English.
The group followed the AGREE (Appraisal of Guidelines Research and Evaluation in Europe) collaboration method. This method asks 23 questions in six domains: scope and purpose; stakeholder involvement; rigor of development; clarity and presentation; applicability; and editorial independence. Each guideline was evaluated using an additive score. The group considered the lack of an explicit link between evidence and recommendations a major flaw for a guideline.
The guidelines that remained were independently reviewed by two reviewers, using the AGREE method. Guidelines were scored by the reviewers, and scores were tabulated and compared across domains. The group then pulled out specific recommendations about glycemic control from each guideline.
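The additive scoring described above can be sketched as follows. The per-domain item counts follow the original AGREE instrument (23 items across the six named domains); the rating scale and data layout here are assumptions for illustration, not details from the review.

```python
# Items per AGREE domain (23 items total across six domains).
DOMAIN_ITEMS = {
    "scope and purpose": 3,
    "stakeholder involvement": 4,
    "rigor of development": 7,
    "clarity and presentation": 4,
    "applicability": 3,
    "editorial independence": 2,
}

def additive_scores(reviewer_ratings):
    """Sum all reviewers' per-item ratings within each domain.

    reviewer_ratings: list with one entry per reviewer, each a dict
    mapping a domain name to its list of per-item ratings.
    """
    return {
        domain: sum(sum(ratings[domain]) for ratings in reviewer_ratings)
        for domain in DOMAIN_ITEMS
    }
```

Scores tabulated this way can then be compared across domains and across guidelines, as the subcommittee did.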
In total, nine guidelines were evaluated. These included those from the American Association of Clinical Endocrinologists, the American Academy of Family Physicians (AAFP), the American Diabetes Association, the American Geriatrics Society, the Canadian Diabetes Association, the Institute for Clinical Systems Improvement, the National Institute for Health and Clinical Excellence, the Scottish Intercollegiate Guidelines Network, and the Veterans Health Administration.
All of the guidelines except the AAFP's set HbA1c targets, although they differed in the specific level chosen; most used a target HbA1c level of 7%.
Some guidelines recommended 7% as a general target, but also suggested tailoring target HbA1c levels according to various factors, such as comorbid conditions.
Image of the Month
In most children with Sturge-Weber syndrome (SWS) only one hemisphere of the brain is involved. For Csaba Juhasz, Ph.D., an assistant professor of pediatrics and neurology at Wayne State University in Detroit, and colleagues, this provided an opportunity to use magnetic resonance imaging (MRI) to assess cortical gray matter and hemispheric white matter volumes and any possible relationships with global intellectual function in a prospective study, with the unaffected hemisphere serving as an internal control for each child (Arch. Neurol. 2007;64:1169–74).
The researchers used MRI to study 21 children (13 girls) with SWS plus a history of partial seizures, which are common in this disorder. The children ranged in age from 18 months to 10.3 years (mean age 5.3 years). Most of the children (18) were on daily antiepileptic medication, either mono- or polytherapy with oxcarbazepine, valproate semisodium, levetiracetam, carbamazepine, phenobarbital, and/or topiramate.
The children underwent neuropsychologic testing 1 day prior to scanning. Those aged 18–36 months were assessed using the Bayley Scales of Infant Development, which yielded the Mental Developmental Index. Children aged 3–6 years were assessed using the Wechsler Preschool and Primary Scales of Intelligence, third edition. Children older than 6 years were assessed using the Wechsler Intelligence Scales for Children, third edition. Both Wechsler scales yielded a full-scale IQ. The Mental Developmental Index and the full-scale IQ are highly correlated. Verbal and nonverbal intellectual functions were available for a subgroup of 15 children. Manual dexterity scores were also obtained using age-appropriate tests.
MRI was performed using a 1.5-T scanner. Researchers calculated volumetric measurements using findings from an axial three-dimensional T1-weighted scan.
The volumetric MR images were processed using statistical parametric mapping software. Dr. Juhasz and colleagues created a study-specific template using the brains of the children in this study because most software only includes an adult template. The template is used to spatially normalize the volumetric images of individual children.
By comparing the new template to each child's MRI, researchers produced three new volumetric maps with voxel values between 0 and 1, allowing them to assess volumes of gray matter, white matter, and cerebrospinal fluid.
Left and right hemispheric regions of interest were defined in all supratentorial image planes to derive cortical gray matter and hemispheric white matter volumes. The researchers calculated cortical gray matter and hemispheric white matter volumes for each hemisphere ipsilateral and contralateral to the angioma.
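As a rough illustration of how tissue volume can be derived from such probability maps (a sketch under stated assumptions, not the authors' pipeline): each voxel's value in [0, 1] is treated as the fraction of that voxel occupied by the tissue, so the volume is the sum of the probabilities times the volume of a single voxel.

```python
import numpy as np

def tissue_volume_ml(prob_map, voxel_dims_mm=(1.0, 1.0, 1.0)):
    """Tissue volume from a probability map: sum of voxel probabilities
    times the volume of one voxel, converted from mm^3 to mL."""
    voxel_ml = float(np.prod(voxel_dims_mm)) / 1000.0
    return float(prob_map.sum()) * voxel_ml

# A uniform 0.5-probability map of 100x100x100 voxels at 1 mm isotropic
# resolution: 1,000,000 voxels x 0.5 x 1 mm^3 = 500,000 mm^3 = 500 mL.
gray_matter_map = np.full((100, 100, 100), 0.5)
print(tissue_volume_ml(gray_matter_map))  # 500.0
```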
White matter volume in the affected hemisphere was a major predictor of cognitive impairment. Multivariate regression analysis showed a strong correlation between ipsilateral hemispheric white matter volume and full-scale IQ. Age was negatively correlated with full-scale IQ; cortical gray matter volumes showed no link with full-scale IQ. Ipsilateral white matter volumes also correlated significantly with both verbal and nonverbal IQ. “The implication is that the white matter has to be preserved or targeted by some kind of treatment in the future,” said Dr. Juhasz. Currently, SWS is treated primarily with seizure drugs. “There is no rational therapy right now to somehow protect or preserve white matter, but it looks like that would be important.”
Given the findings, abnormal development or loss of white matter may be a critical factor in cognitive decline in SWS. Incomplete maturation or disruption of the fiber tracts that connect various cortical and subcortical structures may lead to loss of functional connectivity, impaired efficiency of information processing, and abnormal cognitive development.
The researchers are also using diffusion tensor imaging (DTI) to view fiber tracts. Susceptibility-weighted imaging (SWI) allows for visualization of small-vessel abnormalities. In SWS, vessel abnormalities typically start in the parietal and occipital regions; the frontal lobe becomes involved as the disease progresses. Preliminary DTI appears to show more extensive diffusion changes than would be expected from conventional MR images, said Dr. Juhasz.
“We are trying to figure out why for many of these children their brain involvement looks quite limited on conventional MR imaging but the neurocognitive outcomes are very variable,” said Dr. Juhasz.
Mismatch between the brain's large metabolic demands in the first few years of life and the limited blood supply in children with SWS may provide one answer. Dr. Juhasz, Dr. Harry T. Chugani (director of the PET center at the Children's Hospital of Michigan, Detroit), and their colleagues also recently studied children with SWS using [18F]-2-fluoro-deoxy-D-glucose (FDG) PET imaging. They showed that major metabolic progression in children with SWS occurs in the first 3–4 years of life, coinciding with a huge increase in metabolic demand as the brain develops.
A major aim of the larger (MRI and PET) longitudinal study is to find imaging markers to flag children with SWS who need aggressive therapy. Currently, there are not many treatment options. Aspirin is sometimes used; it may diminish the effects of ischemia. “The bottom line is that it only works if we can catch the patients early,” he said.
These findings are part of a longitudinal study that is continuing to recruit patients for neuroimaging. With a larger population—on the order of 30–40 patients—researchers can start looking at the effects of age, gender, and side of the angioma along with focusing on brain regions rather than hemispheres.
Dr. Juhasz noted that once progression has started, it is not always a bad thing if the angioma destroys the affected hemisphere very, very early. “The only plausible explanation is that they undergo a reorganization to the other hemisphere.”
“We use the contralateral hemisphere assuming that it is basically a healthy, normal hemisphere. But actually in some cases where this happened very early, the other hemisphere is not just normal but I would say 'supernormal,'” he said.
Neurologists wishing to enroll their SWS patients in Dr. Juhasz's longitudinal study should contact him directly at [email protected]
Nonsegmented T1-weighted MRI (left), gray matter mask (center), and white matter mask (right) show the same plane in the same child: The red dots delineate the 39% loss of gray matter and 48% loss of white matter in the right hemisphere. Photos courtesy Dr. Csaba Juhasz
In most children with Sturge-Weber syndrome (SWS) only one hemisphere of the brain is involved. For Csaba Juhasz, Ph.D., an assistant professor of pediatrics and neurology at Wayne State University in Detroit, and colleagues, this provided an opportunity to use magnetic resonance imaging (MRI) to assess cortical gray matter and hemispheric white matter volumes and any possible relationships with global intellectual function in a prospective study, with the unaffected hemisphere serving as an internal control for each child (Arch. Neurol. 2007;64:1169–74).
The researchers used MRI to study 21 children (13 girls) with SWS plus a history of partial seizures, which are common in this disorder. The children ranged in age from 18 months to 10.3 years (mean age 5.3 years). Most of the children (18) were on daily antiepileptic medication, either mono- or polytherapy with oxcarbazepine, valproate semisodium, levetiracetam, carbamazepine, phenobarbital, and/or topiramate.
The children underwent neuropsychologic testing 1 day prior to scanning. Those aged 18–36 months were assessed using the Bayley Scales of Infant Development, which yielded the Mental Developmental Index. Children aged 3–6 years were assessed using the Wechsler Preschool and Primary Scales of Intelligence, third edition. Children older than 6 years were assessed using the Wechsler Intelligence Scales for Children, third edition. Both indices yielded the full-scale IQ. The Mental Developmental Index and the full-scale IQ are highly correlated. Verbal and nonverbal intellectual functions were available for a subgroup of 15 children. Manual dexterity scores were also obtained using age-appropriate tests.
MRI was performed using a 1.5-T scanner. Researchers calculated volumetric measurements using findings from an axial three-dimensional T1-weighted scan.
The volumetric MR images were processed using statistical parametric mapping software. Dr. Juhasz and colleagues created a study-specific template using the brains of the children in this study because most software only includes an adult template. The template is used to spatially normalize the volumetric images of individual children.
By comparing the new template to each child's MRI, researchers produced three new volumetric maps with voxel values between 0 and 1, allowing them to assess volumes of gray matter, white matter, and cerebrospinal fluid.
Left and right hemispheric regions of interest were defined in all supratentorial image planes to derive cortical gray matter and hemispheric white matter volumes. The researchers calculated cortical gray matter and hemispheric white matter volumes for each hemisphere ipsilateral and contralateral to the angioma.
White matter volume in the affected hemisphere was a major predictor of cognitive impairment. Multivariate regression analysis showed a strong correlation between ipsilateral hemispheric white matter volume and full-scale IQ. Age was negatively correlated with full-scale IQ; cortical gray matter volumes showed no link with full-scale IQ. Ipsilateral white matter volumes correlated significantly for both verbal and nonverbal IQ. “The implication is that the white matter has to be preserved or targeted by some kind of treatment in the future,” he added. Currently, SWS is treated primarily with seizure drugs. “There is no rational therapy right now to somehow protect or preserve white matter, but it looks like that would be important.”
Given the findings, abnormal development or loss of white matter may be a critical factor of cognitive decline in SWS. It may be that incomplete maturation or disruption of fiber tracts that connect various cortical and subcortical structures may lead to loss of functional connectivity, impaired efficiency of information processing, and abnormal cognitive development.
In most children with Sturge-Weber syndrome (SWS) only one hemisphere of the brain is involved. For Csaba Juhasz, Ph.D., an assistant professor of pediatrics and neurology at Wayne State University in Detroit, and colleagues, this provided an opportunity to use magnetic resonance imaging (MRI) to assess cortical gray matter and hemispheric white matter volumes and any possible relationships with global intellectual function in a prospective study, with the unaffected hemisphere serving as an internal control for each child (Arch. Neurol. 2007;64:1169–74).
The researchers used MRI to study 21 children (13 girls) with SWS plus a history of partial seizures, which are common in this disorder. The children ranged in age from 18 months to 10.3 years (mean age 5.3 years). Most of the children (18) were on daily antiepileptic medication, either mono- or polytherapy with oxcarbazepine, valproate semisodium, levetiracetam, carbamazepine, phenobarbital, and/or topiramate.
The children underwent neuropsychologic testing 1 day prior to scanning. Those aged 18–36 months were assessed using the Bayley Scales of Infant Development, which yielded the Mental Developmental Index. Children aged 3–6 years were assessed using the Wechsler Preschool and Primary Scales of Intelligence, third edition. Children older than 6 years were assessed using the Wechsler Intelligence Scales for Children, third edition. Both Wechsler scales yielded a full-scale IQ, which is highly correlated with the Mental Developmental Index. Verbal and nonverbal intellectual functions were available for a subgroup of 15 children. Manual dexterity scores were also obtained using age-appropriate tests.
MRI was performed using a 1.5-T scanner. Researchers calculated volumetric measurements using findings from an axial three-dimensional T1-weighted scan.
The volumetric MR images were processed using statistical parametric mapping software. Dr. Juhasz and colleagues created a study-specific template using the brains of the children in this study because most software only includes an adult template. The template is used to spatially normalize the volumetric images of individual children.
By comparing the new template to each child's MRI, researchers produced three new volumetric maps with voxel values between 0 and 1, allowing them to assess volumes of gray matter, white matter, and cerebrospinal fluid.
Left and right hemispheric regions of interest were defined in all supratentorial image planes to derive cortical gray matter and hemispheric white matter volumes. The researchers calculated cortical gray matter and hemispheric white matter volumes for each hemisphere ipsilateral and contralateral to the angioma.
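The volumetric computation described above can be sketched in a few lines. This is a simplified illustration under stated assumptions (SPM-style probability maps whose voxels hold values between 0 and 1; the function names are ours), not the authors' actual pipeline:

```python
import numpy as np

def tissue_volume_ml(prob_map, voxel_dims_mm):
    """Estimate tissue volume from a probability map whose voxels hold
    membership values between 0 and 1, as produced by statistical
    parametric mapping segmentation."""
    voxel_vol_mm3 = float(np.prod(voxel_dims_mm))
    return prob_map.sum() * voxel_vol_mm3 / 1000.0  # mm^3 -> mL

def percent_loss(ipsilateral_ml, contralateral_ml):
    """Volume loss in the affected hemisphere, relative to the
    unaffected (internal control) hemisphere."""
    return 100.0 * (contralateral_ml - ipsilateral_ml) / contralateral_ml

# Hypothetical numbers echoing the figure caption's 39% gray matter loss.
gm_loss = percent_loss(ipsilateral_ml=61.0, contralateral_ml=100.0)
```

Summing a probability map weights each voxel by its tissue membership, so partial-volume voxels at tissue boundaries contribute fractionally to the total rather than being counted all-or-nothing.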
White matter volume in the affected hemisphere was a major predictor of cognitive impairment. Multivariate regression analysis showed a strong correlation between ipsilateral hemispheric white matter volume and full-scale IQ. Age was negatively correlated with full-scale IQ; cortical gray matter volumes showed no link with full-scale IQ. Ipsilateral white matter volumes also correlated significantly with both verbal and nonverbal IQ. “The implication is that the white matter has to be preserved or targeted by some kind of treatment in the future,” Dr. Juhasz said. Currently, SWS is treated primarily with seizure drugs. “There is no rational therapy right now to somehow protect or preserve white matter, but it looks like that would be important.”
Given the findings, abnormal development or loss of white matter may be a critical factor in cognitive decline in SWS. Incomplete maturation or disruption of the fiber tracts that connect cortical and subcortical structures may lead to loss of functional connectivity, impaired efficiency of information processing, and abnormal cognitive development.
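The kind of relationship the researchers report can be illustrated with ordinary least squares on synthetic data. This toy sketch (not the study's data or code; the ranges and effect sizes are invented) shows white matter volume entering a regression for IQ alongside age:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 21  # same sample size as the study; everything else is synthetic
age = rng.uniform(1.5, 10.3, n)            # years
wm_volume = rng.uniform(100.0, 250.0, n)   # mL, hypothetical range
# Simulated full-scale IQ: higher with more white matter, lower with age.
iq = 60.0 + 0.2 * wm_volume - 2.0 * age + rng.normal(0.0, 5.0, n)

# Design matrix: intercept, ipsilateral white matter volume, age.
X = np.column_stack([np.ones(n), wm_volume, age])
coef, *_ = np.linalg.lstsq(X, iq, rcond=None)
wm_coef, age_coef = coef[1], coef[2]
```

With real data one would also report confidence intervals and examine residuals; the point here is only the structure of the multivariate model, in which each predictor's coefficient is adjusted for the other.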
The researchers are also using diffusion tensor imaging (DTI) to map fiber tracts, and susceptibility-weighted imaging (SWI) to visualize small vessel abnormalities. In SWS, vessel abnormalities typically start in the parietal and occipital regions; the frontal lobe becomes involved as the disease progresses. Preliminary DTI appears to show more extensive diffusion changes than would be expected from conventional MR images, said Dr. Juhasz.
“We are trying to figure out why for many of these children their brain involvement looks quite limited on conventional MR imaging but the neurocognitive outcomes are very variable,” said Dr. Juhasz.
A mismatch between the brain's large metabolic demands in the first few years of life and the limited blood supply in children with SWS may provide one answer. Dr. Juhasz, Dr. Harry T. Chugani, director of the PET center at the Children's Hospital of Michigan, Detroit, and their colleagues also recently studied children with SWS using [18F]-2-fluoro-deoxy-D-glucose (FDG) PET imaging. They showed that major metabolic progression in children with SWS occurs in the first 3–4 years of life, coinciding with a large increase in metabolic demand as the brain develops.
A major aim of the larger (MRI and PET) longitudinal study is to find imaging markers to flag children with SWS who need aggressive therapy. Currently, there are not many treatment options. Aspirin is sometimes used; it may diminish the effects of ischemia. “The bottom line is that it only works if we can catch the patients early,” he said.
These findings are part of a longitudinal study that is continuing to recruit patients for neuroimaging. With a larger population—on the order of 30–40 patients—researchers can start looking at the effects of age, gender, and side of the angioma along with focusing on brain regions rather than hemispheres.
Dr. Juhasz noted that once progression has started, destruction of the affected hemisphere by the angioma is not always a bad outcome if it happens very early. “The only plausible explanation is that they undergo a reorganization to the other hemisphere.”
“We use the contralateral hemisphere assuming that it is basically a healthy, normal hemisphere. But actually in some cases where this happened very early, the other hemisphere is not just normal but I would say 'supernormal,'” he said.
Neurologists wishing to enroll their SWS patients in Dr. Juhasz's longitudinal study should contact him directly at [email protected]
Nonsegmented T1-weighted MRI (left), gray matter mask (center), and white matter mask (right) show the same plane in the same child: The red dots delineate the 39% loss of gray and 48% loss of white matter in the right hemisphere. Photos courtesy Dr. Csaba Juhasz
Temozolomide/Vaccination Combo May Work as Glioblastoma Therapy
CHICAGO — Temozolomide may not be incompatible with immunologic approaches for the treatment of glioblastoma, based on data from a small analysis presented at the annual meeting of the American Society of Clinical Oncology.
Vaccinating patients who have glioblastoma multiforme (GBM) with dendritic cells and either acid-eluted peptides or an antigen-specific peptide has shown promising results in extending patient survival. Likewise, temozolomide (Temodar) has been shown to prolong survival in these patients and has become part of a standard treatment regimen.
However, temozolomide also often induces a profound and long-lasting lymphopenia that could limit immunotherapeutic approaches, such as vaccination.
“This preliminary experience suggests that sequential administration of chemotherapy and immunotherapy may not be deleterious,” wrote Dr. John H. Sampson, who is an associate professor of surgery at Duke University in Durham, N.C., and his colleagues.
The analysis involved patients from two ongoing trials who were newly diagnosed with GBM, were epidermal growth factor receptor variant III (EGFRvIII) positive, and had undergone complete resection.
In the ACTIVATE trial, patients received radiation (approximately 60 Gy) and concurrent temozolomide (50–75 mg/m²).
Peripheral blood counts were monitored in patients. Grade 2 lymphopenia (less than 800 lymphocytes per mcL of blood) was induced in all patients receiving temozolomide after the first cycle. Grade 3 lymphopenia (less than 500 lymphocytes per mcL of blood) was induced in 70% of patients after the first cycle of temozolomide. However, lymphocyte counts returned to normal after treatment with the drug was stopped.
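The grading cutoffs quoted here can be captured in a small helper. This sketch uses only the two thresholds stated in the article (finer common-toxicity-criteria boundaries are omitted, and the function name is ours):

```python
def lymphopenia_grade(lymphocytes_per_mcl):
    """Grade lymphopenia by absolute lymphocyte count (cells/mcL),
    using the cutoffs cited in the article:
      grade 3: fewer than 500
      grade 2: fewer than 800
    Counts of 800 or more are not graded by these cutoffs."""
    if lymphocytes_per_mcl < 500:
        return 3
    if lymphocytes_per_mcl < 800:
        return 2
    return 0
```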
Regulatory T cells increased from 5% to 12% after the combination of temozolomide and radiation. Cycles of temozolomide did not appear to diminish EGFRvIII-specific CD3-positive/CD8-positive T cells producing interferon-gamma, and EGFRvIII-specific IgG responses were induced and maintained during temozolomide treatment.
Dr. Sampson did not report having any relevant conflicts of interest.
Dendritic Cell Vaccine Shows Promise in GBM
CHICAGO — Combining dendritic cell vaccination with imiquimod for the treatment of glioblastoma more than doubled survival in an intervention group compared to a conventionally treated control group in a small phase I trial presented as a poster at the annual meeting of the American Society of Clinical Oncology.
Two-year survival—the primary clinical end point—for 19 patients treated with the dendritic cell vaccines was 68%, and 3-year survival was 43%. In comparison, only 26% of control patients at the University of California, Los Angeles, survived to 2 years, and only 20% survived to 3 years.
To date, the median progression-free survival and median overall survival in the vaccinated group are 18 months and 34 months, respectively. This compared with 7 months and 15 months, respectively, for control patients from the published literature.
Dr. Linda Liau, a neurosurgeon at UCLA, and her colleagues presented immunologic response data for 13 patients with newly diagnosed glioblastoma multiforme (GBM). Patients underwent resection, followed by a 6-week course of radiation and chemotherapy with temozolomide. Two weeks prior to the first immunization, patients underwent MRI. One week before the first immunization, patients underwent leukapheresis and immunologic assays.
The vaccines were composed of autologous dendritic cells that had been pulsed with lysates from GBM tumor cells. Preclinical studies have demonstrated that dendritic cells are preferentially responsible for the sensitization of naive T cells in their first exposure to antigen.
Each patient initially received three vaccinations at 2-week intervals. Four patients received 1 million dendritic cells per immunization; four others received 5 million dendritic cells per immunization, and the remaining five received 10 million dendritic cells per immunization.
Patients without tumor progression subsequently received booster injections every 3 months combined with topical administration of imiquimod, which is a toll-like receptor-7 agonist that enhances both the innate and acquired immune response. Imiquimod (Aldara) is indicated for the treatment of actinic keratosis, superficial basal cell carcinoma, and external genital and perianal warts.
Immunologic responses to tumor antigens were monitored using several methods. Clinical tumor growth was monitored by MRI every 2 months.
The control group consisted of a total of 191 patients with GBM at UCLA, who received standard treatment. The average age was 51 years in the vaccinated group and 49 years in the control group.
“It appears that vaccination approaches in general are very successful,” said Dr. Albert Wong of Stanford University, Palo Alto, Calif., who reviewed the poster during a discussion session.
Almost all of the patients had de novo infiltration of T lymphocytes into CNS tumors. In addition, CNS tumors were found to be expressing known tumor-associated antigens. Five patients also had an increase in tumor antigen-specific CD8-positive T cells with vaccination.
The relationship between response to tumor antigens and patient survival was somewhat disappointing. “In my opinion, there was not a strong correlation between the response to these defined tumor antigens and patient response,” Dr. Wong said.
In general there is a need for better surrogate markers to assess immune response. Perhaps the best may be the infiltration of T cells into the tumor, he added.
In terms of safety, no grade 3 or 4 adverse events were reported. The most frequent adverse events were low-grade fever, injection-site itching and pain, and arthralgia and myalgia. Seizures also occurred that were possibly related to the vaccines; however, seizures are also typical in GBM patients.
An important next step is to identify what the true tumor antigens are, in order to better refine the vaccine. Dr. Wong likened the current generation of dendritic cell vaccines to using foxglove to treat “x,” when it would really be better to extract and use the active component, digitalis.
Dr. Liau did not disclose any conflicts of interest. The study was sponsored in part by Northwest Biotherapeutics Inc., which is developing the technology behind the vaccines. A phase II clinical trial, sponsored by Northwest Biotherapeutics Inc., is underway.
Image of the Month
Extremity MRI allows office-based rheumatologists to diagnose rheumatoid arthritis early in its course and then to make clinical decisions as to whether to continue a given therapy, add an additional agent to the regimen, or switch to another agent, said Dr. Norman B. Gaylis, a rheumatologist in private practice in Aventura, Fla., who performs extremity MRI in the office.
Erosions and bone marrow inflammation can be seen on MRI but not on x-ray, which makes MRI a better tool for early diagnosis. “The other way in which I think in-office MRI is extremely helpful is to see whether the treatment is working or is not working,” said Dr. Gaylis. Treatment-related changes may not be apparent on x-ray for 2 years or longer. “That's a whole lot of time to be on a drug that is… very expensive and… maybe not working,” he said.
Standard MRI (0.75 tesla or greater) has been shown to be useful in diagnosing RA and for following therapy. The great demand for MRI on the larger machines that are based in hospitals and imaging centers results in long lead times for appointments. As a result, rheumatologists don't typically take advantage of this tool. In addition, patients with active RA sometimes find it intolerably painful to hold the position necessary for imaging in large machines.
Dr. Gaylis noted that extremity MRI (0.2 tesla) is performed in the office and allows for the patient to assume a more comfortable position during imaging. In addition, slices with this type of MRI average less than 1 mm and are contiguous, which is not always true of machines with stronger magnets. This can be important because it is possible for erosions to “hide” between the slices of larger machines, he added.
Extremity MRI is twice as sensitive as radiography in detecting erosions at baseline, according to a recent study by Dr. Gaylis and his colleagues (Mod. Rheumatol. 2007;17:273-8). In the study, 31 patients underwent both baseline extremity MRI and x-ray examinations. For 108 metacarpophalangeal joints, with MRI as the reference standard, radiography had a sensitivity of 55.8% and a specificity of 95.4%; its positive predictive value was 88.9% and its negative predictive value was 76.5%.
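The accuracy figures reported for radiography follow from a standard 2x2 comparison against a reference standard. This is a generic sketch (the counts below are invented for illustration, not the study's joint-level data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 table comparing
    a test (here, radiography) against a reference standard (here, MRI)."""
    return {
        "sensitivity": tp / (tp + fn),  # reference-positive joints the test catches
        "specificity": tn / (tn + fp),  # reference-negative joints the test clears
        "ppv": tp / (tp + fp),          # test positives that are truly positive
        "npv": tn / (tn + fn),          # test negatives that are truly negative
    }

# Invented counts for a 2x2 table of 30 joints.
m = diagnostic_metrics(tp=8, fp=1, fn=2, tn=19)
```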
In terms of in-office set up, smaller extremity MRI machines don't have many special requirements. Extremity MRI can be set up in a standard exam room. The floors need to provide sufficient support because the magnets are heavy. “One thing that you have to be careful of is that you have an environment where there is not that much noise [which interferes with the software],” said Dr. Gaylis. It's also important to keep the room cool because of the magnet.
Although the cost varies, in-office extremity MRI equipment costs around $250,000. However, the machines are typically leased, as are many other pieces of medical equipment, said Dr. Gaylis.
Once an extremity MRI is performed, Dr. Gaylis digitally sends the image to a radiologist, who reads the image and sends back a report, usually the next day. “I like this format because it combines my knowledge of the patient with the expertise of a musculoskeletal radiologist,” he said.
MRIs are more complicated to read than are x-rays because every joint imaged with MRI yields a number of slices. “So at the end of my day, after I've seen x number of patients, for me to go and read MRIs is really not practical,” said Dr. Gaylis. In addition, musculoskeletal radiologists have a high level of expertise in reading MRIs.
“At the end of the day, I think it allows me more credibility to say that my radiologist is reading it,” he said. The rheumatologist's responsibility is to react to the MRI findings and treat the patient appropriately, Dr. Gaylis noted.
Extremity MRIs can also help improve patient compliance. Patients can see the erosions for themselves. “When they see them and they understand why we want to put them through the process of a biologic … it absolutely makes the patient more responsive to [our] therapeutic suggestions,” said Dr. Gaylis.
The MRIs can also help keep patients on the right drugs. “They get a lot more understanding when they see an MRI that reflects what's going on,” said Dr. Gaylis.
Reimbursement of extremity MRI is a tricky subject, however. Even though extremity MRI is commonplace in the orthopedic setting, there is no reimbursement code that is specific to extremity MRI. Instead, codes for the larger conventional machines are used. Getting third-party payers to foot the bill for extremity MRI can be tough, but it can be done. “We've been able to show them that they actually would save money by getting [the patient] an MRI annually. If you give someone Remicade [infliximab] and it's not working … why not find out and stop it and stop paying all that money if it's not working,” said Dr. Gaylis. He estimates that 70% of his payers are paying for extremity MRI.
The American College of Rheumatology has yet to endorse the use of the extremity MRI for RA. The organization issued a white paper 2 years ago on extremity MRI, indicating that more evidence was needed to demonstrate the validity of the technique for RA.
The International Society of Extremity MRI—which comprises rheumatologists and radiologists—currently is working on providing the ACR with enough data to review the white paper findings, according to Dr. Gaylis.
Extremity MRI. Top, carpal bone erosion; bottom, healed erosions after infliximab. Left are T1, right are STIR sequence.
The MRI images on the left (T1) and on the right (STIR) show erosions of the lunate and scaphoid bones.
A normal x-ray is shown for comparison. Photos courtesy Dr. Norman B. Gaylis/Dr. Steven Needell
Benefits of Low-Dose Aspirin Alone Offset Costs
WASHINGTON — Low-dose aspirin alone may be the most cost-effective antiplatelet therapy despite the risk of adverse gastrointestinal outcomes, according to an analysis presented at the annual Digestive Disease Week.
“In average-risk patients, aspirin alone optimized the economic balance between cardiovascular and GI outcomes,” said Dr. Martin Van Oijen of Radboud University, Nijmegen Medical Centre, the Netherlands.
Aspirin and clopidogrel, the two most commonly used antiplatelet therapies, increase the risk of GI bleeding, peptic ulcers, and dyspepsia. Concurrent use of proton pump inhibitors (PPIs) can reduce the risk of GI events, but at increased cost.
Dr. Van Oijen and his colleagues assessed the cost-effectiveness of various combinations of aspirin, clopidogrel, and PPIs. The base case for this analysis was a 60-year-old man with a 5-year MI risk of more than 3%. Six strategies were considered: aspirin alone, aspirin plus a PPI, clopidogrel alone, clopidogrel plus a PPI, aspirin in combination with clopidogrel, and the combination of aspirin, clopidogrel, and a PPI.
They derived probability estimates from a review of the literature. Costs were evaluated from the perspective of a third-party payer; Medicare reimbursement schedules and average wholesale drug prices were used in the calculations. The primary outcome was the incremental cost per quality-adjusted life-year (QALY).
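The cost-per-QALY outcome described above is conventionally reported as an incremental cost-effectiveness ratio (ICER): the extra cost of a strategy divided by the extra QALYs it delivers. A minimal sketch of that arithmetic, using purely hypothetical costs and QALYs (the study's actual inputs were not reported in this summary):

```python
# Incremental cost-effectiveness ratio (ICER): extra cost per extra QALY
# when moving from one strategy to a more effective, more expensive one.
# All figures below are hypothetical placeholders, not the study's data.

def icer(cost_a, qaly_a, cost_b, qaly_b):
    """Incremental cost per QALY gained by strategy B over strategy A."""
    return (cost_b - cost_a) / (qaly_b - qaly_a)

# Hypothetical comparison: aspirin alone vs. aspirin plus a PPI
aspirin = {"cost": 1_000.0, "qaly": 10.00}
aspirin_ppi = {"cost": 4_000.0, "qaly": 10.05}

ratio = icer(aspirin["cost"], aspirin["qaly"],
             aspirin_ppi["cost"], aspirin_ppi["qaly"])
print(f"ICER: ${ratio:,.0f} per QALY gained")
```

A strategy is typically judged cost effective when its ICER falls below a willingness-to-pay threshold, which is why cheaper PPIs (as noted below) can flip the aspirin-plus-PPI strategy into the viable range.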
Aspirin alone optimized the economic balance between cardiovascular and GI outcomes. In contrast, clopidogrel-based strategies appeared to be cost ineffective.
The researchers hypothesized that the cost of PPIs would have an important effect on the outcome. At a PPI cost of $2.50 per tablet, the aspirin-plus-PPI strategy became cost effective.
Statins May Reduce Hepatocellular Carcinoma Risk
WASHINGTON — The use of statins may reduce the risk of hepatocellular carcinoma by half, according to the results of a case-control study of more than 6,500 U.S. veterans.
“Statin use may be associated with a 40%–50% risk reduction of HCC,” said Dr. Hashem El-Serag, chief of gastroenterology and hepatology at Baylor College of Medicine in Houston, at the annual Digestive Disease Week.
Although there have been a number of experimental studies indicating a potential cancer-reducing effect for statins, there has been little epidemiologic evidence for a protective effect of statins against HCC.
Dr. El-Serag and his colleagues performed a nested, case-control study within a large cohort of Veterans Affairs patients who were newly diagnosed with diabetes between 1997 and 2002. Health records for these patients have been linked to the VA pharmacy benefits data. VA health records were also linked with Medicare records for patients 65 years and older.
Cases were defined as patients with incident HCC, identified by ICD-9 codes indicative of the disease. Control patients were drawn from the diabetes cohort among those who had no diagnosis of HCC as of the corresponding case's date of cancer diagnosis.
Case patients were matched with four control patients based on age, gender, and incidence density, which allows for matching of the duration and timing of the potential exposure period between cases and controls. In all, 1,303 case patients with HCC were identified. They were matched with 5,212 control patients.
The main exposure was filled statin prescriptions prior to the index date (the date of HCC diagnosis) or the corresponding date for control patients. All statins on the VA formulary were included; simvastatin was the most commonly used during the study period.
The unadjusted odds ratio for the association of HCC and any statin was 0.45, and the adjusted odds ratio was 0.63.
“On average there's a 55% reduction in the risk among those prescribed statins compared to those without statins,” said Dr. El-Serag, who is also gastrointestinal section chief at the Michael E. DeBakey VA Medical Center in Houston.
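The 55% figure follows directly from the unadjusted odds ratio: for a rare outcome such as HCC, the odds ratio approximates the relative risk, so the implied risk reduction is roughly 1 minus the OR. A quick check of the numbers quoted above:

```python
# For a rare outcome, an odds ratio approximates relative risk,
# so the implied risk reduction is approximately 1 - OR.

def risk_reduction(odds_ratio):
    """Approximate relative risk reduction implied by an odds ratio."""
    return 1.0 - odds_ratio

print(f"unadjusted OR 0.45 -> {risk_reduction(0.45):.0%} reduction")  # 55%
print(f"adjusted   OR 0.63 -> {risk_reduction(0.63):.0%} reduction")  # 37%
```

The adjusted figure of roughly 37% is what Dr. El-Serag rounds to "approximately a 40% reduction" later in the article.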
Duration of statin use did not appear to affect the reduction in HCC risk associated with statin use. Excluding statin use within 1 year of HCC diagnosis produced similar results. Likewise, restricting the analysis to simvastatin alone did not appear to affect the association.
The researchers assessed confounding by presence of liver disease, because patients with liver disease are less likely to receive statins. Liver disease was broadly defined from codes indicative of mere elevation of liver function tests to those indicative of cirrhosis. However, no actual laboratory results were available from the database.
Patients with liver disease had an odds ratio for HCC of 0.53 and those without liver disease had an odds ratio of 0.63. “If you adjust for liver disease, the odds ratios lessen a little bit or the association is attenuated, but again you have approximately a 40% reduction in the risk,” Dr. El-Serag said.
To assess confounding by indication, they also examined the association between HCC and nonstatin cholesterol-lowering medications and triglyceride-lowering medications. However, there were no significant associations between HCC and use of nonstatin cholesterol-lowering or triglyceride-lowering prescription drugs.
The researchers also performed a chart validation study to verify exposure and case-control status. To do this, they identified 9 case patients and 95 control patients who had visits to the local VA facility. “We found 100% agreement between the presence or absence of a filled statin prescription in the database as compared with the record,” Dr. El-Serag said.
They also found a 100% negative predictive value, meaning that patients without an ICD-9 code for HCC were correctly identified as control patients. The positive predictive value was 89%, meaning that 89% of patients with an ICD-9 code for HCC had the diagnosis confirmed in their medical record.
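Those predictive values can be reproduced from the validation counts. Assuming 8 of the 9 ICD-9-flagged case patients had HCC confirmed on chart review (a count inferred from the reported 89%, not stated in the source), the arithmetic is:

```python
# Predictive values from the chart validation study.
# true_positives = 8 is an inferred figure: the article reports only that
# PPV was 89% among the 9 reviewed case patients (8/9 ≈ 0.889).

cases_reviewed = 9        # patients flagged by an ICD-9 code for HCC
true_positives = 8        # assumed: HCC confirmed in the medical record
controls_reviewed = 95    # patients without an HCC code
true_negatives = 95       # all controls correctly lacked HCC

ppv = true_positives / cases_reviewed       # confirmed HCC / flagged as HCC
npv = true_negatives / controls_reviewed    # confirmed non-HCC / not flagged
print(f"PPV: {ppv:.0%}, NPV: {npv:.0%}")    # PPV: 89%, NPV: 100%
```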
Dr. El-Serag reported that the study was funded by a grant from the American College of Gastroenterology.