Doug Brunk is a San Diego-based award-winning reporter who began covering health care in 1991. Before joining the company, he wrote for the health sciences division of Columbia University and was an associate editor at Contemporary Long Term Care magazine when it won a Jesse H. Neal Award. His work has been syndicated by the Los Angeles Times and he is the author of two books related to the University of Kentucky Wildcats men's basketball program. Doug has a master’s degree in magazine journalism from the S.I. Newhouse School of Public Communications at Syracuse University. Follow him on Twitter @dougbrunk.
Study Looks at Cost of Lowering HbA1c Cutoffs
Results from a novel study showed that lowering the glycosylated hemoglobin (HbA1c) cutoff for prediabetes makes preventive interventions progressively less cost effective.
The study, which used a simulation sample of U.S. adults from the NHANES (National Health and Nutrition Examination Survey) from 1999 to 2006, found that the cost per quality-adjusted life-year (QALY) associated with an HbA1c cutoff of 5.7% or higher was below $50,000, "a widely recognized threshold for the cost-effective use of [health care] resources," wrote the researchers, who reported their findings in the April issue of the American Journal of Preventive Medicine.
Led by Xiaohui Zhuo, Ph.D., of the division of diabetes translation at the National Center for Chronic Disease Prevention and Health Promotion, the researchers used a Markov simulation model to assess the cost-effectiveness associated with a progressive 0.1% decrease in the HbA1c cutoff from 6.4% to 5.5%. "Previous studies have evaluated the cost-effectiveness of interventions to prevent type 2 diabetes," the researchers wrote. "However, no study has compared the cost-effectiveness of type 2 diabetes preventive intervention when using alternative HbA1c cutoffs to determine eligibility for intervention."
Establishment of an HbA1c cutoff for prediabetes, however, has been more challenging than that for diabetes because the relationship between the incidence of type 2 diabetes and HbA1c below 6.5% is continuous, with no clearly demarcated threshold that is associated with an accelerated risk of diabetes or other morbidities. Therefore, the debate continues over what HbA1c level should be used to define prediabetes, and professional organizations have independently recommended at least three different cutoffs: 6.0%, 5.7%, and 5.5%.
The goal of the current study was to examine how the cost-effectiveness of diabetes-preventive interventions changes with progressive 0.1% reductions in the HbA1c cutoff from 6.4% to 5.5% (Am. J. Prev. Med. 2012;42:374-81).
The simulation sample included NHANES participants who had baseline HbA1c values below 6.5%. Analysis was conducted under two scenarios for type 2 diabetes prevention interventions: a high-cost intervention, as implemented in the Diabetes Prevention Program (DPP) clinical trial, which costs about $1,000 per year, and a low-cost intervention as implemented in the PLAN4WARD (Promoting a Lifestyle of Activity and Nutrition for Working to Alter the Risk of Diabetes) study, which costs about $300 per year.
The following costs were considered: the cost of a one-time HbA1c test for all participants; the costs of lifestyle interventions, annual screening tests, and the associated direct medical costs for people with prediabetes; and – after the diagnosis of type 2 diabetes – the direct medical costs of type 2 diabetes and diabetes-related complications. Direct medical cost associated with prediabetes was based on the DPP, and costs were expressed in 2009 dollars.
Dr. Zhuo and associates determined that in the high-cost intervention, lowering the HbA1c cutoff from 6.0% to 5.9% would cost $27,000 per QALY gained, whereas lowering the cutoff from 5.9% to 5.8% would cost $34,000 per QALY gained. In addition, they found that lowering the cutoff from 5.8% to 5.7%, from 5.7% to 5.6%, and from 5.6% to 5.5% would cost $45,000, $58,000, and $96,000 per QALY gained, respectively.
In the low-cost intervention, the researchers determined that lowering the HbA1c cutoff from 6.0% to 5.9% would cost $24,000 per QALY gained, whereas lowering the cutoff from 5.9% to 5.8% would cost $27,000 per QALY gained. In addition, they found that lowering the cutoff from 5.8% to 5.7%, from 5.7% to 5.6%, and from 5.6% to 5.5% would cost $34,000, $43,000, and $70,000 per QALY gained, respectively.
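The dollar figures above are incremental cost-effectiveness ratios (ICERs): the additional cost incurred by adopting the broader cutoff, divided by the additional QALYs it yields. A minimal sketch of that arithmetic follows; the totals here are hypothetical for illustration, since the paper reports only the final ratios.

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra dollars spent per extra QALY gained."""
    return delta_cost / delta_qaly

# Hypothetical illustration: suppose lowering a cutoff one step adds $2.7 million
# in intervention and screening costs while yielding 100 additional QALYs.
print(icer(2_700_000, 100))  # $27,000 per QALY gained
```

A ratio below the conventional $50,000-per-QALY benchmark is considered a cost-effective use of resources, which is why the steps down to 5.7% pass in the high-cost scenario but the steps below it do not.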
"Assuming a conventional cost-effectiveness benchmark of $50,000 [per QALY], setting the HbA1c cutoff at no lower than 5.7% was found to be cost effective," Dr. Zhuo and associates wrote. "However, lowering the cutoff from 5.7% to 5.6% or even lower also might be cost effective, if the costs of preventive interventions could be reduced."
The researchers acknowledged certain limitations of the study, including the fact that the analysis did not include other measures, such as fasting plasma glucose and body mass index; that the population studied was restricted to middle-aged and elderly adults; and that study participants who were identified as having prediabetes "were assumed to receive the same lifestyle intervention and to attain the same relative diabetes risk reduction regardless of baseline HbA1c level."
This study used data from a trial supported by the National Heart, Lung, and Blood Institute. Dr. Zhuo and associates stated that they had no relevant financial conflicts to disclose.
The researchers take it as a given that HbA1c is highly useful in diagnosing diabetes. Their findings are limited by the fact that the study population is drawn from a specific, relatively healthy population: people who participated in NHANES from 1999 to 2006, with the diagnosis made from glucose tolerance testing.
One should then ask whether diabetes can be effectively diagnosed using HbA1c levels alone, or whether HbA1c should instead serve as a secondary measure. Considerable variability exists in the relationship between blood sugar and HbA1c. For example, African Americans tend to have a higher HbA1c level than do non-Hispanic whites, so a fixed HbA1c standard would incorrectly diagnose diabetes more frequently in African Americans than in non-Hispanic whites. Similarly, HbA1c levels appear to increase with age, so in 70-year-olds the use of HbA1c as a screening test may lead to overdiagnosis of diabetes. Conversely, people who have kidney disease or who are in a variety of states of illness tend to have lower HbA1c for a given level of blood sugar, related to more rapid red blood cell turnover.
In a certain sense, then, this study is a bit simplistic in taking a best-case scenario approach and using it to assess the cost-benefit of use of HbA1c for the diagnosis of diabetes.
Zachary T. Bloomgarden, M.D., of the division of endocrinology, diabetes, and bone disease at Mount Sinai School of Medicine, New York, is also the editor of the Journal of Diabetes. Dr. Bloomgarden said he is a consultant/advisor for Novartis, Dainippon Sumitomo Pharma America, Forest Laboratories, Nastech, Medtronic, Takeda, Merck, AtheroGenics, CV Therapeutics, Daiichi Sankyo, Bristol-Myers Squibb, AstraZeneca, Pfizer, Amylin, and Johnson & Johnson, as well as a speaker and stockholder of various drug manufacturers.
FROM THE AMERICAN JOURNAL OF PREVENTIVE MEDICINE
Major Finding: In a high-cost type 2 diabetes prevention intervention, lowering the HbA1c cutoff from 6.0% to 5.9% would cost $27,000 per QALY gained, whereas lowering the cutoff from 5.9% to 5.8% would cost $34,000 per QALY gained. In a low-cost type 2 diabetes prevention intervention, lowering the HbA1c cutoff from 6.0% to 5.9% would cost $24,000 per QALY gained, whereas lowering the cutoff from 5.9% to 5.8% would cost $27,000 per QALY gained.
Data Source: Data are from a simulation sample of participants in the National Health and Nutrition Examination Survey in 1999-2006 who had baseline HbA1c values below 6.5%.
Disclosures: This study uses data from a trial supported by the National Heart, Lung, and Blood Institute. Dr. Zhuo and associates stated that they had no relevant financial conflicts to disclose.
Extension Study of Bydureon Shows Positive Results
Following 84 weeks of treatment, patients with type 2 diabetes who received exenatide once weekly experienced better glycemic control with sustained overall weight loss and a lower risk of hypoglycemia, compared with those who received insulin glargine.
The findings come from an extension of a previously reported 26-week, open-label trial of exenatide once weekly (EQW, marketed as Bydureon) compared with insulin glargine (IG) in patients with type 2 diabetes who failed to maintain sufficient glycemic control using metformin alone or in combination with a sulfonylurea (Lancet 2010;375:2234-43).
"These results, coupled with the significantly lower incidence of hypoglycemia observed with EQW compared with IG therapy, suggest that EQW can be a therapeutic option for patients with type 2 diabetes for whom the convenience of once-a-week dosing, weight loss, and reduction of risk for hypoglycemia are important," Dr. Michaela Diamant of the diabetes center at Free University Medical Center, Amsterdam, and associates reported in the April 2012 issue of Diabetes Care.
For the current study, the researchers enrolled 390 of the 415 patients who completed the 26-week trial and followed them for a total of 84 weeks, making this the longest controlled trial of EQW reported to date. The patients had been randomly assigned to add EQW (2 mg) or once-daily IG (10 IU/day) to their existing blood glucose–lowering regimen.
The key efficacy measure was change in HbA1C level from baseline to study treatment end point (defined as the last nonmissing postbaseline measurement prior to any change to treatment regimen after week 48). Secondary measures included the proportion of patients achieving HbA1C targets of less than 7.0% and 6.5% or lower; change in body weight; and fasting serum lipids (Diabetes Care 2012;35:683-9).
After 84 weeks, the change in HbA1C levels from baseline to end point was significantly greater among patients in the EQW group, compared with those in the IG group (–1.2% vs. –1.0%, respectively; P = .029). A greater proportion of patients in the EQW group reached HbA1C levels of less than 7.0% at 84 weeks (44.6% vs. 36.8% among patients in the IG group), although this difference did not reach statistical significance (P = .084). However, a significantly greater proportion of patients in the EQW group reached HbA1C levels of 6.5% or less (31.3% vs. 20.2% among patients in the IG group; P = .009).
Effects on body weight significantly favored patients in the EQW group, who lost a mean of 2.1 kg from baseline to end point, whereas their counterparts in the IG group gained a mean of 2.4 kg (P < .001).
The mean time to failure to maintain glycemic control was significantly longer for patients taking EQW than for those taking IG (57 weeks vs. 47 weeks; P = .0007).
Among the subgroup of study participants who were taking metformin plus sulfonylurea, the incidence of minor hypoglycemia was significantly lower in the EQW group, compared with the IG group (24% vs. 54%, respectively; P < .001). A similar effect was observed among the subgroup of patients who were taking metformin alone (8% vs. 32%; P < .001).
Nasopharyngitis was the most common adverse event (AE) occurring in 5% or more of all study participants, affecting 21% of patients in the EQW group and 23% of those in the IG group, a difference that was not statistically significant. Diarrhea occurred more frequently in the EQW group than in the IG group (12% vs. 6%; P < .05), as did nausea (15% vs. 1%; P < .05).
"In general, after 26 weeks, new cases of AEs slowed compared with the period of time between baseline and 26 weeks," the researchers observed. "This is noteworthy with regard to nausea and vomiting, a typical AE of exenatide therapy. Our observation is consistent with several clinical trials of exenatide twice daily, which showed decreased gastrointestinal distress over time. There were some AEs that emerged only after 26 weeks of therapy. Among the most prevalent were injection site nodule at 6% of the EQW group and bronchitis, cough, and toothache at [approximately] 5% of the IG group."
Dr. Diamant and her associates acknowledged certain limitations of the study, including the "open-label nature of the design" and the fact that the study population was predominantly white. "It should be noted that [approximately] 30% of patients in both groups required a reduction from baseline in their sulfonylurea dose. Although a reduction in sulfonylurea dose may reduce the risk of hypoglycemia, such a change in sulfonylurea dose may be associated with a negative impact on glucose control. In addition, the possibility for bias introduced through patient self-selection for continuation into the extension study exists."
The study was sponsored by Eli Lilly. Dr. Diamant disclosed that she is a consultant and speaker for Lilly; Novo Nordisk; and Merck Sharp and Dohme, as well as a consultant for Sanofi-Aventis. Many of the study’s coauthors also disclosed relevant financial conflicts with a variety of drug manufacturers: Coauthor Leigh MacConell, Ph.D., is an employee of Amylin, which markets exenatide; Dr. Michael Trautmann is an employee of Lilly; and Harry Haber and Jamie Scism-Bacon, Ph.D., are employees of i3 Statprobe, a part of the inVentiv Health Company.
Following 84 weeks of treatment, patients with type 2 diabetes who received exenatide once weekly experienced better glycemic control with sustained overall weight loss and a lower risk of hypoglycemia, compared with those who received insulin glargine.
The findings come from an extended trial of a previously reported, 26-week, open-label trial of exenatide once weekly (EQW, marketed as Bydureon) compared with insulin glargine (IG) in patients with type 2 diabetes who failed to maintain sufficient glycemic control using metformin alone or in combination with sulfonylurea (Lancet 2010; 375:2234-43).
"These results, coupled with the significantly lower incidence of hypoglycemia observed with EQW compared with IG therapy, suggest that EQW can be a therapeutic option for patients with type 2 diabetes for whom the convenience of once-a-week dosing, weight loss, and reduction of risk for hypoglycemia are important," Dr. Michaela Diamant of the diabetes center at Free University Medical Center, Amsterdam, and associates reported in the April 2012 issue of Diabetes Care.
For the current study, the researchers enrolled 390 of the 415 patients who completed the 26-week trial, and followed them for a total of 84 weeks, which made this the longest controlled trial of EQW reported to date. The patients were randomly assigned to add EQW (2 mg) or once-daily IG (10 IU/day) to their existing blood glucose–lowering regimen.
The key efficacy measure was change in HbA1C level from baseline to study treatment end point (defined as the last nonmissing postbaseline measurement prior to any change to treatment regimen after week 48). Secondary measures included the proportion of patients achieving HbA1C targets of less than 7.0% and 6.5% or lower; change in body weight; and fasting serum lipids (Diabetes Care 2012;35:683-9).
Following 84 weeks of treatment, patients with type 2 diabetes who received exenatide once weekly experienced better glycemic control with sustained overall weight loss and a lower risk of hypoglycemia, compared with those who received insulin glargine.
The findings come from an extension of a previously reported 26-week, open-label trial of exenatide once weekly (EQW, marketed as Bydureon) versus insulin glargine (IG) in patients with type 2 diabetes who failed to maintain sufficient glycemic control using metformin alone or in combination with sulfonylurea (Lancet 2010;375:2234-43).
"These results, coupled with the significantly lower incidence of hypoglycemia observed with EQW compared with IG therapy, suggest that EQW can be a therapeutic option for patients with type 2 diabetes for whom the convenience of once-a-week dosing, weight loss, and reduction of risk for hypoglycemia are important," Dr. Michaela Diamant of the diabetes center at Free University Medical Center, Amsterdam, and associates reported in the April 2012 issue of Diabetes Care.
For the current study, the researchers enrolled 390 of the 415 patients who completed the 26-week trial, and followed them for a total of 84 weeks, which made this the longest controlled trial of EQW reported to date. The patients were randomly assigned to add EQW (2 mg) or once-daily IG (10 IU/day) to their existing blood glucose–lowering regimen.
The key efficacy measure was change in HbA1C level from baseline to study treatment end point (defined as the last nonmissing postbaseline measurement prior to any change to treatment regimen after week 48). Secondary measures included the proportion of patients achieving HbA1C targets of less than 7.0% and 6.5% or lower; change in body weight; and fasting serum lipids (Diabetes Care 2012;35:683-9).
After 84 weeks, the change in HbA1C levels from baseline to end point was significantly greater among patients in the EQW group, compared with those in the IG group (–1.2% vs. –1.0%, respectively; P = .029). A greater proportion of patients in the EQW group reached HbA1C levels of less than 7.0% at 84 weeks (44.6% vs. 36.8% among patients in the IG group), although this difference did not reach statistical significance (P = .084). However, a significantly greater proportion of patients in the EQW group reached HbA1C levels of 6.5% or less (31.3% vs. 20.2% among patients in the IG group; P = .009).
Effects on body weight significantly favored patients in the EQW group, who lost a mean of 2.1 kg from baseline to end point, whereas their counterparts in the IG group gained a mean of 2.4 kg (P less than .001).
The mean time to failure to maintain glycemic control was significantly longer for patients taking EQW than for those taking IG (57 weeks vs. 47 weeks; P = .0007).
Among the subgroup of study participants who were taking metformin plus sulfonylurea, the incidence of minor hypoglycemia was significantly lower in the EQW group, compared with the IG group (24% vs. 54%, respectively; P less than .001). A similar effect was observed among the subgroup of patients who were taking metformin alone (8% vs. 32%; P less than .001).
Nasopharyngitis was the most common adverse event (AE) occurring in 5% or more of all study participants, affecting 21% of patients in the EQW group and 23% of those in the IG group, a difference that was not statistically significant. Diarrhea occurred more frequently in the EQW group than in the IG group (12% vs. 6%; P less than .05), as did nausea (15% vs. 1%; P less than .05).
"In general, after 26 weeks, new cases of AEs slowed compared with the period of time between baseline and 26 weeks," the researchers observed. "This is noteworthy with regard to nausea and vomiting, a typical AE of exenatide therapy. Our observation is consistent with several clinical trials of exenatide twice daily, which showed decreased gastrointestinal distress over time. There were some AEs that emerged only after 26 weeks of therapy. Among the most prevalent were injection site nodule at 6% of the EQW group and bronchitis, cough, and toothache at [approximately] 5% of the IG group."
Dr. Diamant and her associates acknowledged certain limitations of the study, including the "open-label nature of the design" and the fact that the study population was predominantly white. "It should be noted that [approximately] 30% of patients in both groups required a reduction from baseline in their sulfonylurea dose. Although a reduction in sulfonylurea dose may reduce the risk of hypoglycemia, such a change in sulfonylurea dose may be associated with a negative impact on glucose control. In addition, the possibility for bias introduced through patient self-selection for continuation into the extension study exists."
The study was sponsored by Eli Lilly. Dr. Diamant disclosed that she is a consultant and speaker for Lilly; Novo Nordisk; and Merck Sharp and Dohme, as well as a consultant for Sanofi-Aventis. Many of the study’s coauthors also disclosed relevant financial conflicts with a variety of drug manufacturers: Coauthor Leigh MacConell, Ph.D., is an employee of Amylin, which markets exenatide; Dr. Michael Trautmann is an employee of Lilly; and Harry Haber and Jamie Scism-Bacon, Ph.D., are employees of i3 Statprobe, a part of the inVentiv Health Company.
FROM DIABETES CARE
Major Finding: After 84 weeks, the change in HbA1C levels from baseline to end point was significantly greater among patients in the EQW group, compared with those in the IG group (–1.2% vs. –1.0%, respectively; P = .029).
Data Source: Data are from an extension study that investigated efficacy and safety of once-weekly exenatide, compared with insulin glargine, in 390 patients with type 2 diabetes who failed to maintain sufficient glycemic control using metformin either alone or in combination with sulfonylurea.
Disclosures: The study was sponsored by Eli Lilly. Dr. Diamant disclosed that she is a consultant and speaker for Lilly; Novo Nordisk; and Merck Sharp and Dohme, as well as a consultant for Sanofi-Aventis. Many of the study’s coauthors also disclosed relevant financial conflicts with a variety of drug manufacturers: Coauthor Leigh MacConell, Ph.D., is an employee of Amylin; Dr. Michael Trautmann is an employee of Lilly; and Harry Haber and Jamie Scism-Bacon, Ph.D., are employees of i3 Statprobe, a part of the inVentiv Health Co.
Hormonal Contraceptives Lower Vitamin B12 Levels, but Not Significantly
Hormonal contraception caused serum levels of vitamin B12 to decrease over a period of 3 years, but the reduction did not appear to be clinically significant or to impact bone mineral density in a large study limited to women of child-bearing age.
"This longitudinal study adds to the literature by demonstrating a cause and effect relationship between OC use and serum B12 levels. In addition, we observed lower B12 levels among DMPA [depot medroxyprogesterone acetate] users, which has not been previously reported in the literature," lead author Dr. Abbey B. Berenson reported online March 28 in the journal Contraception (Contraception 2012 March 28 [doi: 10.1016/j.contraception.2012.02.015]). "However, levels still remained in the normal range for almost all users. By and large, these findings agree with previous cross-sectional and case-control studies on the effect of OCs on serum B12 levels."
Dr. Berenson, of the department of obstetrics and gynecology and the Center for Interdisciplinary Research at The University of Texas Medical Branch, Galveston, and her associate, Dr. Mahbubur Rahman, recruited 703 women aged 16-33 years. The women were asked to choose one of three birth control methods: 245 chose an OC (0.15 mg desogestrel plus 20 mcg ethinyl estradiol [EE], taken for 21 days, followed by 2 days of placebo and 5 days of 10 mcg EE); 240 chose DMPA; and 218 chose a nonhormonal method (barrier method, 53%; tubal ligation, 33%; copper T, 10%; and partner vasectomy, 4%). DMPA and the OC were dispensed every 3 months. Nonhormonal contraceptive users also attended a clinic every 3 months and were offered a supply of condoms at each visit.
At each 6-month visit over the course of 3 years, study participants underwent weight and height measurements, BMD measures of the lumbar spine and femoral neck via dual energy X-ray absorptiometry, and serum B12 measures via immunoassay.
The mean age of the study participants was 24 years, 36% were Hispanic, 35% were non-Hispanic white, and 29% were non-Hispanic black.
Women in all three groups had a decrease in mean B12 levels over the 3-year time period. Those taking an OC or DMPA had greater B12 decreases than women using nonhormonal contraception (P less than .001). During the first 6 months of use, for example, women in the OC and DMPA groups experienced B12 decreases of 97 pg/mL and 64 pg/mL, respectively, which represented declines of 20% and 13% from their baseline values. At the same time, women in the nonhormonal group experienced a decrease of 14 pg/mL, or a decline of 3% from the baseline value. Over the remaining 30 months of the study, B12 levels did not further decline among women in the OC group, but dropped another 22 pg/mL in the women in the DMPA group, and fell another 17 pg/mL in the women using nonhormonal contraception.
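For readers who want to sanity-check these figures, the reported absolute and percentage declines imply approximate baseline B12 values for each group. The short Python sketch below works the arithmetic; the baseline values it prints are inferred by us from the reported declines, not figures published in the paper.

```python
# Back-of-envelope check: the reported 6-month declines (in pg/mL) and the
# corresponding percentage declines imply approximate baseline B12 levels.
# The declines are as reported in the study; the baselines are inferred.
groups = {
    "OC":          (97, 0.20),  # (decline in pg/mL, fraction of baseline)
    "DMPA":        (64, 0.13),
    "nonhormonal": (14, 0.03),
}

for name, (decline_pg_ml, frac) in groups.items():
    implied_baseline = decline_pg_ml / frac
    print(f"{name}: implied baseline about {implied_baseline:.0f} pg/mL")
```

All three implied baselines fall in the same rough range (roughly 465-490 pg/mL), which is consistent with the groups having started from comparable B12 levels.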
After adjusting for age, whole body lean mass, duration of use, contraceptive methods, race/ethnicity, smoking status, and duration of previous DMPA use, the researchers found no significant association between mean B12 levels and mean BMD at the spine (P = .107) or femoral neck (P = .877).
Very few women had B12 deficiency, defined as a B12 level of less than 180 pg/mL: There were four in the nonhormonal group, two in the DMPA group, and nine in the OC group.
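Expressed as rates against the enrollment numbers reported earlier (our arithmetic, not percentages stated in the paper), the deficiency counts remain small in every group:

```python
# Convert the raw counts of B12 deficiency (< 180 pg/mL) into per-group
# rates, using the enrollment numbers reported earlier in the article.
enrolled  = {"OC": 245, "DMPA": 240, "nonhormonal": 218}
deficient = {"OC": 9,   "DMPA": 2,   "nonhormonal": 4}

for group in enrolled:
    pct = 100 * deficient[group] / enrolled[group]
    print(f"{group}: {deficient[group]}/{enrolled[group]} ({pct:.1f}%)")
```

Even in the OC group, where deficiency was most common, fewer than 4% of women fell below the 180 pg/mL threshold.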
The researchers noted that the manner by which hormonal contraception causes a decrease in serum B12 remains unclear. "One possible mechanism is a deficiency in the level of serum B12 binders resulting in a false low B12 level in OC users," they hypothesized. "However, several studies have shown no difference in the level of mean unsaturated B12 binders between OC users and nonusers, suggesting that absorption is not affected and that redistribution of B12 throughout the body could be responsible. However, the mechanism of this redistribution is unknown. Furthermore, the mechanism of suppression of B12 levels among DMPA users, which was less severe than that observed among OC users in our study, has not been determined."
The observation that B12 levels were not associated with BMD, regardless of type of contraceptive use, contrasted with findings from several previous studies. "All but one of these prior studies, however, focused on postmenopausal women, who may react differently," the researchers noted. "The one study that did include reproductive-age women was limited to European adolescents who adhered to a specific diet plan (one group consumed a macrobiotic diet for up to 6 years followed by a vegetarian/omnivorous diet, while the other group ate an omnivorous diet throughout their life). Furthermore, it differed from the current study in that it was cross-sectional in design and included males as well as females. These differences could explain the variation in findings between their study and ours."
The study was supported by grants from the Eunice Kennedy Shriver National Institute of Child Health and Human Development and by the National Institutes of Health.
FROM CONTRACEPTION
Major Finding: During the first 6 months after the women started hormonal contraception, B12 levels declined by 97 pg/mL (20%) in the OC group and by 64 pg/mL (13%) in the DMPA group. Women in the nonhormonal contraception group experienced a decrease of 14 pg/mL (3%).
Data Source: A 3-year study of 703 women, aged 16-33, who chose one of three birth control methods and who had measures of serum B12 and BMD every 6 months.
Disclosures: The study was supported by grants from the Eunice Kennedy Shriver National Institute of Child Health and Human Development and by the National Institutes of Health.
Psoriasis Not Independent Cardiac Risk Factor, Study Finds
SAN DIEGO – A large study has determined that psoriasis is not an independent risk factor for ischemic heart disease, based on Framingham Risk Scores.
While previous studies have demonstrated that cardiac mortality is increased in patients with psoriasis, it is also known that people with the skin condition have a higher prevalence of smoking, alcohol consumption, obesity, diabetes, and dyslipidemia, researchers reported in a poster session at the annual meeting of the American Academy of Dermatology.
"Is this increase in ischemic heart disease due to traditional risk factors, or is psoriasis an additional independent risk factor?" wrote the investigators, who were led by Dr. Marian T. McEvoy, in the abstract.
They performed a population-based analysis of 1,338 adults with psoriasis who resided in Olmsted County, Minn., between 1998 and 2008 to evaluate the validity of the Framingham Risk Score (FRS) in predicting the incidence of ischemic heart disease in the study cohort. The FRS is a validated measure of standard ischemic heart disease risk factors.
Dr. McEvoy, a dermatologist at the Mayo Clinic in Rochester, Minn., and her associates compared the risk of cardiac death and myocardial infarction based on the FRS with the actual incidence of myocardial infarction and cardiac death in the study population, which was limited to patients older than age 30 but younger than age 80. They used Poisson regression models and standardized incidence ratios for statistical analysis.
The researchers reported that full Framingham risk factors were available for 974 of the 1,338 patients (73%). The median FRS-predicted 10-year risk of cardiac events was 3.8%, while the observed 10-year risk was 5.5%. However, there were 44 observed cardiac events, compared with 47.7 FRS-predicted events, which translated into a standardized incidence ratio (observed to predicted) of 0.9. Standardized incidence ratios also showed no statistically significant differences between observed and predicted events when the cohort was analyzed by gender, by age greater or less than 65 years, and by whether or not patients were receiving systemic treatment.
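The standardized incidence ratio here is simply observed events divided by FRS-predicted events. The Python sketch below reproduces that arithmetic and adds an approximate 95% confidence interval using Byar's approximation for a Poisson count; the interval is our illustration, not a figure from the poster.

```python
import math

# Standardized incidence ratio (SIR) = observed / expected cardiac events.
observed = 44
expected = 47.7

sir = observed / expected  # about 0.92, consistent with the reported 0.9

# Approximate 95% CI for the observed Poisson count (Byar's approximation),
# then divided by the expected count to put it on the SIR scale.
z = 1.96
lower = observed * (1 - 1 / (9 * observed) - z / (3 * math.sqrt(observed))) ** 3
upper = (observed + 1) * (
    1 - 1 / (9 * (observed + 1)) + z / (3 * math.sqrt(observed + 1))
) ** 3
print(f"SIR = {sir:.2f} (approx. 95% CI {lower / expected:.2f}-{upper / expected:.2f})")
```

Because that interval comfortably spans 1.0, the observed event count is statistically compatible with the FRS prediction, which is the basis for the investigators' conclusion below.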
"If psoriasis was an independent risk factor for ischemic heart disease, the observed incidence of cardiac events would have been in excess of predicted," the researchers wrote in their poster. "Since there was no statistical difference between actual and predicted events, psoriasis is not an independent risk factor for ischemic heart disease."
In a later interview, Dr. McEvoy acknowledged certain limitations of the study, including the fact that the small number of observed cardiac events (44) limited the statistical power of the study. "Since this is a retrospective study, we did not have a good tool to assess severity of psoriasis," she said. "We used surrogates based on therapy to identify those with ‘more severe disease.’ The Framingham Risk Score was valid for this group also."
The study was partially funded by a grant from Pfizer, and was made possible by a grant from Amgen/Wyeth and by the Rochester Epidemiology Project.
FROM THE ANNUAL MEETING OF THE AMERICAN ACADEMY OF DERMATOLOGY
New Results Challenge Laser Effectiveness for Onychomycosis
SAN DIEGO – Although previous studies of 1,064-nm Nd:YAG lasers for the treatment of onychomycosis have reported significant mycological cure rates, results from a newer, ongoing study challenge such findings.
At the annual meeting of the American Academy of Dermatology, Dr. Boni E. Elewski reported findings from a trial in which she and her colleagues set out to answer two questions: Can fungal organisms be killed by heat at temperatures tolerable to patients? And, can a laser be directly fungicidal on a growing fungal colony or dilute suspension?
Dr. Elewski and her colleagues conducted a heat kill study of three fungi: Trichophyton rubrum, Epidermophyton floccosum, and Scytalidium.
For T. rubrum, they found that growth was halted after the nail was treated at 50° C for 15 minutes. For E. floccosum, they found that growth was halted after the nail was treated at 50° C for 10 minutes, and for Scytalidium, growth was halted after the nail was treated at 55° C for 5 minutes. "However, nail temperatures that reach 40-41° C cause enough pain for patients to pull away, and the maximum nail temperature patients could tolerate was 45° C," said Dr. Elewski, professor of dermatology at the University of Alabama at Birmingham.
In part two of the study, the researchers used a 1,064-nm Nd:YAG laser to treat colonies and dilute solutions before growth occurred. They used various laser parameters, including spot sizes ranging from 3 to 5 mm, pulse durations that ranged from 100 to 300 milliseconds, fluences of 15 to 50 J/cm2, and frequencies of 2 to 10 Hz. "We found that there was no effect on fungal growth at numerous settings, and the temperature of the agar plate reached about 40° C," Dr. Elewski said. "We decided to take this knowledge and move it into our clinical practice."
In an ongoing study that has enrolled 10 patients to date, she and her associates use a 1,064-nm Nd:YAG laser for a protocol that involves five treatments at settings of 16 J/cm2, 0.3 microseconds, and 2 Hz. They use a 5-mm spot size with more than 300 pulses over the nail in a predetermined pattern. "You see some clearance," Dr. Elewski said of the results so far. "We have no mycological cures to date, but improvement as shown by other companies is noted. Some improvement may be seen, but in my opinion a durable response is needed to satisfy patients."
In a 2010 study from Yugoslavia, mycological cures were reported using a fluence of 35-40 J/cm2, Dr. Elewski said, "but our patients could not tolerate anything above 16 J/cm2, at least without a digital block. All I can assume is that patients in Yugoslavia are significantly heartier than patients in Alabama" (J. Laser and Health Academy 2010;1:1-8).
She concluded her remarks by noting that photodynamic therapy "may be an option for your patients who have failed other treatments or who could not tolerate another treatment. Laser treatment is still under investigation, but might be desirable for those who would be satisfied with an improvement."
Dr. Elewski disclosed that Cutera provided research funds for the study. She said that she had no financial conflicts of interest to disclose, and emphasized that she has never been a paid consultant to any laser device company.
EXPERT ANALYSIS FROM THE ANNUAL MEETING OF THE AMERICAN ACADEMY OF DERMATOLOGY
Ask Alzheimer's Patient Caregiver: Patch or Pill Therapy?
LAS VEGAS – If you’re stumped about which cholinesterase inhibitor to prescribe for your patients with newly diagnosed Alzheimer’s disease, rest assured that the clinical effects are similar with all such agents.
"There is no substantive scientific evidence that says one of the cholinesterase inhibitors is better than another, so get comfortable with one or two of them," Dr. Jeffrey L. Cummings advised during a psychopharmacology conference sponsored by the Nevada Psychiatric Association. "The effect is demonstrable late in the disease, so if you take patients with Mini-Mental State Exam scores of less than 10, you still get the same response that you do in somebody whose Mini-Mental State Exam score is 20. There’s no proven effect on the underlying disease state."
Clinical evidence from nearly 20 years of cholinesterase inhibitor use suggests that 25% of patients who take them will experience modest cognitive improvements, defined as a 2-4 point increase on the Alzheimer’s Disease Assessment Scale-cognitive subscale or a 1-2 point increase on the Mini-Mental State Exam. This "makes it very difficult to see a response if it’s not on the upper end of that [response]," Dr. Cummings said. "But about 80% of patients on cholinesterase inhibitors have a delay in decline of 6-9 months. That’s worthwhile, because patients are usually only on therapy for about 5 or 6 years over the course of their disease. So if you can delay almost 20% of that, that’s fantastic."
Dr. Cummings, director of the Cleveland Clinic’s Lou Ruvo Center for Brain Health, lets convenience drive the choice of which cholinesterase inhibitor to prescribe. "I sit with the patient and the caregiver and I ask: ‘Do you want a pill or a transdermal patch?’ If they say, ‘I want a pill,’ I give them donepezil. If they say, ‘I want a patch,’ I give them the rivastigmine transdermal patch," he said.
"I’m trying to respond to the perceived convenience of the caregiver. That’s the question I pose, and those are the two drugs I use."
He went on to note that donepezil "is more likely to give you diarrhea than rivastigmine is, and the rivastigmine patch will give you a rash in 5-10% of patients. Bradycardia is a contraindication for all cholinesterase inhibitors."
Donepezil is available in 5-mg, 10-mg, and 23-mg formulations. The 23-mg form is approved only for patients with moderate to severe disease. "There is more diarrhea with the 23-mg dose; maybe 15% of patients will get diarrhea with that higher dose," he said. "To ameliorate this, for 1 month I have patients go from 10 mg to 15 mg before jumping to the 23-mg dose. I think that helps rather than going directly from 10 mg to 23 mg."
Rivastigmine is approved for mild to moderate Alzheimer’s and for patients with mild to moderate Parkinson’s disease dementia. It’s available in 1.5-mg, 3-mg, 4.5-mg, and 6-mg capsules, or as a 4.6-mg or 9.5-mg transdermal patch.
Galantamine is another cholinesterase inhibitor approved for patients with mild to moderate Alzheimer’s disease, and it has dosing options of 6 mg, 8 mg, or 12 mg b.i.d. Extended formulations are available in 12-mg and 24-mg doses.
The NMDA (N-methyl-d-aspartate) receptor antagonist memantine is also approved, for patients with moderate to severe Alzheimer’s, with optimal dosing titration to 10 mg b.i.d. "Side effects are quite rare, but can include somnolence, headache, and dizziness," said Dr. Cummings, who is also chair of neurotherapeutics at the Cleveland Clinic’s Neurological Institute. "Most patients, by the time they reach mid-disease, are on both a cholinesterase inhibitor and memantine."
Other treatment options include the medical foods CerefolinNAC and Axona, which are generally recognized as safe by the Food and Drug Administration and are available by prescription. "No demonstration of clinical benefit is required for these agents," Dr. Cummings said. "The data set supporting medical foods is not as robust as the data set supporting cholinesterase inhibitors."
CerefolinNAC is a combination of vitamin B6, vitamin B12, and folate that Dr. Cummings uses for hyperhomocysteinemia. "I know that high levels of homocysteine are correlated with cognitive impairment, so I try to reduce that by giving CerefolinNAC," he said. "However, there are no data which prove that lowering will necessarily improve the prognosis of the patient. What you are doing is piecing together various types of data to support that use, but it’s not as strong as a double-blind, placebo-controlled trial showing a direct benefit."
Axona is a proprietary formulation of medium-chain triglycerides that increase plasma concentrations of ketone bodies. "This is considered an energy source for neurons," Dr. Cummings said.
Dr. Cummings disclosed that he has provided consultation to the following pharmaceutical companies: Abbott, Acadia, Adamas, Anavex, Astellas, Avanir, Bayer, BMS, Eisai, EnVivo, ExonHit, Janssen, Forest, Genentech, GSK, Lundbeck, Merck, Neurokos, Novartis, Otsuka, Pfizer, Prana, QR Pharma, Sanofi-Aventis, and Takeda.
EXPERT ANALYSIS FROM A PSYCHOPHARMACOLOGY CONFERENCE SPONSORED BY THE NEVADA PSYCHIATRIC ASSOCIATION
Thorough Work-Up Crucial in Sarcoidosis Cases
SAN DIEGO – While it’s well known that sarcoidosis commonly affects pulmonary function, it’s perhaps less well known that the disorder can impair cardiac function in approximately 5% of cases.
"A common way that patients present with cardiac sarcoidosis is with sudden cardiac death," Dr. Misha Rosenbach said at the annual meeting of the American Academy of Dermatology. "This is a terrible way to present to your doctor with a problem."
A multisystem disorder of unknown cause, sarcoidosis commonly affects young and middle-aged adults and frequently presents with bilateral hilar lymphadenopathy, pulmonary infiltration, and ocular and skin lesions. Other organs may be involved. The diagnosis is established when clinicoradiologic findings are supported by histologic evidence of noncaseating epithelioid cell granulomas.
"Sarcoidosis is primarily a pulmonary disease, but patients can also present with profound systemic symptoms," said Dr. Rosenbach of the departments of dermatology and internal medicine at the University of Pennsylvania, Philadelphia. "When you’re evaluating a patient with cutaneous sarcoidosis, and making a diagnosis of granulomatous disease of the skin, and looking for extracutaneous involvement, it’s important to know what else can be affected."
Although pulmonary function is affected in more than 90% of cases, other commonly affected sites include the eyes (25%-50% of cases), lymph nodes (about 33% of cases), musculoskeletal system (25%-40% of cases), endocrine system (10%-25% of cases), and liver (20%-50% of cases). The initial evaluation should consist of history and physical exam; chest x-ray; pulmonary function tests (including carbon monoxide diffusing capacity); ophthalmologic examination; complete blood count and serum chemistries (including calcium); urinalysis; EKG (plus additional testing if there is a history of palpitations); tuberculin skin test (TST) or interferon (IFN)–gamma release assay; and thyroid and vitamin D testing.
"Patients with sarcoidosis often have low levels of 25-hydroxyvitamin D, but elevated levels of 1,25-dihydroxyvitamin D3," Dr. Rosenbach said. "Inappropriate supplementation can lead to hypercalcemia."
For latent tuberculosis testing, he pointed out that the IFN-gamma release assay (IGRA) is thought to be more accurate than the TST. "IGRA significantly reduces false-positive results" in bacille Calmette-Guérin–vaccinated patients, said Dr. Rosenbach, who is also director of the cutaneous sarcoidosis clinic at the University of Pennsylvania. "Cost-benefit analyses suggest that IGRA [is] cost equivalent to TST, and the Centers for Disease Control and Prevention recommends that IGRA may be used in all circumstances in which the TST is currently used. However, both TST and IGRA have decreased responsiveness and lower sensitivity in patients with impaired immune systems."
In terms of the impact of sarcoidosis on the thyroid gland, a recent analysis of a large database in the United Kingdom found that hyper- and hypothyroidism were twice as common in patients with sarcoidosis, compared with a control population (Postgrad. Med. J. 2009;85:233-7).
A more recent study of 50 patients with cutaneous sarcoidosis conducted by Dr. Rosenbach and his colleagues found that 25% of patients had abnormal thyroid laboratory test results (J. Am. Acad. Dermatol. 2012;66:167-8).
The precise association between sarcoidosis and malignancy remains unclear, he said, but the best available studies suggest that the incidence of lymphoproliferative disorder may be increased in patients with sarcoidosis. Other granulomatous dermatitides may be associated with hematologic abnormalities. Authors of one review found that granulomatous dermatitides may be the first sign of underlying myelodysplastic syndrome (MDS), and recommended that clinicians consider looking for underlying MDS in patients with unexplained or atypical granulomatous skin eruptions (Arch. Dermatol. 2011;147:331-5).
A common stepwise approach for treating patients, Dr. Rosenbach said, begins with skin-directed therapies in the form of steroids or injections. The second step involves the use of antimalarials and tetracycline-class antibiotics; the third step involves methotrexate and/or prednisone, and the fourth step involves consideration for treatment with infliximab or adalimumab. "At this point, etanercept should probably not be used," Dr. Rosenbach said. "It appears to be less effective, and in a few reports has been associated with worsening of disease."
The data are strongest for infliximab, he said, at a recommended dosage of 5 mg/kg at 0, 2, and 6 weeks, and then with maintenance therapy every 6-8 weeks. Adalimumab appeared to work best at 40 mg every week, he said, "but the addition of low-dose methotrexate is sometimes necessary to either regimen."
Dr. Rosenbach disclosed that he was an investigator for a clinical trial sponsored by Centocor and Johnson & Johnson to investigate biologics for chronic/refractory sarcoidosis.
The precise association between sarcoidosis and malignancy remains unclear, he said, but the best available studies suggest that the incidence of lymphoproliferative disorder may be increased in patients with sarcoidosis. Other granulomatous dermatitides may be associated with hematologic abnormalities. Authors of one review found that granulomatous dermatitides may be the first sign of underlying myelodysplastic syndrome (MDS), and recommended that clinicians consider looking for underlying MDS in patients with unexplained or atypical granulomatous skin eruptions (Arch. Dermatol. 2011;147:331-5).
A common stepwise approach for treating patients, Dr. Rosenbach said, begins with skin-directed therapies in the form of steroids or injections. The second step involves the use of antimalarials and tetracycline-class antibiotics; the third step involves methotrexate and/or prednisone, and the fourth step involves consideration for treatment with infliximab or adalimumab. "At this point, etanercept should probably not be used," Dr. Rosenbach said. "It appears to be less effective, and in a few reports has been associated with worsening of disease."
The data are strongest for infliximab, he said, at a recommended dosage of 5 mg/kg at 0, 2, and 6 weeks, and then with maintenance therapy every 6-8 weeks. Adalimumab appeared to work best at 40 mg every week, he said, "but the addition of low-dose methotrexate is sometimes necessary to either regimen."
Dr. Rosenbach disclosed that he was an investigator for a clinical trial sponsored by Centocor and Johnson & Johnson to investigate biologics for chronic/refractory sarcoidosis.
EXPERT ANALYSIS FROM THE ANNUAL MEETING OF THE AMERICAN ACADEMY OF DERMATOLOGY
Appendectomy Outcomes in Elderly Compared
SAN DIEGO – Elderly patients who underwent a laparoscopic appendectomy had less minor morbidity, less overall morbidity, a lower rate of superficial surgical site infection, and a shorter length of hospital stay compared with their counterparts who underwent an open appendectomy, results from a study of national data demonstrated.
"Laparoscopic appendectomy is becoming the procedure of choice for appendicitis due to the lower rate of surgical site infection, lower length of hospital stay, and faster return to normal life," Dr. Ashkan Moazzez said at the annual meeting of the Society of American Gastrointestinal and Endoscopic Surgeons.
However, most of the current published studies on the topic are limited by having small sample sizes, comparing laparoscopy in elderly versus younger adults only, and having no analysis of 30-day outcomes, said Dr. Moazzez of the H. Claude Hudson Comprehensive Health Center, Los Angeles. At the same time, he continued, life expectancy in the United States has increased over the last few years, "and many people have projected that there will be an increased rate of appendicitis in the elderly."
Using the American College of Surgeons National Surgical Quality Improvement Program databases from 2005 to 2009, he and his associates identified 3,674 patients aged 65 and older who underwent a single laparoscopic or open appendectomy and had a discharge diagnosis of appendicitis.
To compare 30-day outcomes in the two groups, the researchers conducted statistical analysis in two cohorts: an aggregate cohort, which included all 3,674 patients, and a matched cohort, which included 2,060 patients: 1,030 from the laparoscopic appendectomy group and 1,030 from the open appendectomy group, determined by propensity score matching based on 25 preoperative risk factors. This was done because patients in the study "were not randomized to a particular treatment; that can introduce selection bias in the data, which can affect the outcomes," Dr. Moazzez explained.
In the aggregate cohort, the mean age of patients in the open appendectomy group was 74 years compared with 73 years in the laparoscopic group. The mean age of patients in the matched cohort was 74 years. Overall sex distribution was almost 1:1 and 88% of patients were white.
In the aggregate cohort, the rate of overall morbidity in the open group was 13.4% vs. 8.2% in the laparoscopic group, a difference that was statistically significant (P less than .001). This group of patients also had significantly higher rates of mortality (2% vs. 0.9%, P = .003), superficial surgical site infection (3.8% vs. 1.4%, P less than .001), and deep incisional surgical site infection (0.8% vs. 0.2%, P = .003), yet the rate of serious morbidity was statistically similar (6.7% vs. 5.2%, P = .08).
In the matched cohort, the difference in overall morbidity remained statistically significant (10.1% in the open group, P = .020), but the difference in mortality did not (1.5%, P = .313). "This shows that when elderly patients are matched based on their preoperative risk factors, laparoscopic surgery does not have a benefit over open surgery as far as mortality," Dr. Moazzez said.
Patients who underwent an open appendectomy in the matched cohort had significantly higher rates of superficial surgical site infection (3.8% vs. 1.4%, P = .001), but the rate of deep incisional surgical site infections did not reach statistical significance (0.8% vs. 0.3%, P = .131).
In the aggregate cohort, patients in the open group had a significantly longer hospital length of stay compared with their counterparts in the laparoscopic group (a mean of 4.7 vs. 2.9 days, P less than .001). The mean length of stay among patients in the laparoscopic group in the matched cohort was 3.6 days (P less than .001).
In a subgroup analysis, aggregate cohort patients with an American Society of Anesthesiologists physical classification of 3 or 4 had higher overall morbidity (19.4% vs. 11.9%, P less than .001) and mortality (3.9% vs. 1.8%, P = .009) rates in the open appendectomy group.
Dr. Moazzez said that he had no relevant financial disclosures.
FROM THE ANNUAL MEETING OF THE SOCIETY OF AMERICAN GASTROINTESTINAL AND ENDOSCOPIC SURGEONS
Major Finding: The rate of overall morbidity among elderly patients who underwent open appendectomy was 13.4% vs. 8.2% in those who underwent laparoscopic appendectomy, a statistically significant difference (P less than .001). The open appendectomy group also had significantly higher rates of mortality (2% vs. 0.9%, P = .003) and superficial surgical site infection (3.8% vs. 1.4%, P less than .001).
Data Source: A group of 3,674 patients aged 65 and older who underwent a single laparoscopic or open appendectomy and had a discharge diagnosis of appendicitis were analyzed. Data were obtained from the American College of Surgeons National Surgical Quality Improvement Program databases from 2005 to 2009.
Disclosures: Dr. Moazzez said that he had no relevant financial disclosures.
Study Finds Pruritus Common in Elderly Patients
SAN DIEGO – Nearly half of elderly patients admitted to a geriatric ward reported having symptoms of pruritus, results from a single-center study found.
"Pruritus is a common complaint among the elderly," Dr. Yee Leng Teoh and colleagues wrote in an abstract presented during a poster session at the annual meeting of the American Academy of Dermatology. "It may have a significant impact on quality of life, but may be underestimated and poorly addressed."
In a study of 194 patients admitted to the geriatric ward of Changi General Hospital, Singapore, between March and May of 2010, Dr. Teoh and associates used a structured questionnaire including the Dermatology Life Quality Index (DLQI) to assess the prevalence and severity of itch and its impact on quality of life. They reviewed the patients’ hospital records regarding comorbid conditions, prior skin disease, medication use, and social background.
The mean age of the patients was 85 years, and their mean DLQI score was 6.7. Nearly half (49%) reported having a problem with itch for a mean of 15.3 months. Negative impacts on quality of life included disruption of sleep (reported by 35% of respondents) and disruption of mental concentration (reported by 31% of respondents).
More than half of patients (61%) said they had informed a physician about the problem, yet 26% believed that it was not sufficiently addressed. Twenty patients (10%) were diagnosed with a specific dermatologic condition, most commonly eczema and dermatophyte infection.
Of the participants who had informed their physician about the problem, 58% were treated with topical agents, 4% with oral antihistamines, and 33% with topical agents and oral antihistamines; 5% did not receive any treatment.
Study participants with a history of cerebrovascular accidents or transient ischemic attacks were 3.5 times more likely to have itch, compared with those who did not have this history. And patients with diabetes were 2.2 times more likely to have itch, compared with those who did not have the condition.
This may be because "sympathetic dysfunction caused hypohidrosis and resulted in xerosis," they hypothesized. "Another possible explanation is that patients with diabetic polyneuropathy, usually as a result of poor diabetic control, have damaged sensory C fibers, which can cause pruritus."
The study also found that elderly patients being treated with laxatives were 2.1 times less likely to have itch, compared with patients not taking laxatives. "We postulate that laxatives may facilitate the clearing of bile acids, resulting in a reduced incidence of itch," Dr. Teoh and associates wrote. "Lactulose can inhibit formation of deoxycholic acid from primary bile acids. This effect is thought to be mediated through the acidification of the proximal bowel, leading to reduction in 7 alpha-dehydroxylase, which converts primary to secondary bile acids."
The researchers reported having no relevant financial disclosures.
FROM THE ANNUAL MEETING OF THE AMERICAN ACADEMY OF DERMATOLOGY
Major Finding: Nearly half of elderly patients (49%) reported having a problem with itch for a mean of 15.3 months.
Data Source: The study involved 194 patients admitted to the geriatric ward of Changi General Hospital, Singapore, between March and May of 2010.
Disclosures: The researchers reported having no relevant financial disclosures.
Antibiotics Lead Outpatient Cutaneous Adverse Drug Events
SAN DIEGO – Antimicrobial agents were the most common identifiable causes of outpatient cutaneous adverse drug events (CADEs), with amoxicillin being the most frequent single causative substance, in a study of data collected from 1995 to 2005.
Those are the key findings from the analysis of data from the National Ambulatory Medical Care Survey (NAMCS) and the National Hospital Ambulatory Medical Care Survey (NHAMCS).
While cutaneous reactions are thought to account for 16%-30% of reported adverse drug events, "there is limited information in the medical literature regarding the frequency of outpatient CADEs," Dr. Cheryl L. Gustafson wrote in a poster presented at the annual meeting of the American Academy of Dermatology.
Dr. Gustafson of the department of dermatology at Wake Forest University, Winston-Salem, N.C., and her associates queried the NAMCS and the NHAMCS for data regarding CADEs (cutaneous adverse drug events) reported between 1995 and 2005. They used sample weights to estimate the national annual incidence of outpatient CADEs in the United States.
During the time period studied, a total of 635,982 CADE-related visits occurred, which translated into an annual incidence of 2.26 CADEs per 1,000 persons. Patients took an average of 2.2 medications in addition to the one causing the CADE. The incidence of CADEs increased with age, with a peak rate occurring in those aged 70-79 years.
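As a rough check on how those figures relate, the reported visit total and incidence rate are mutually consistent only if the 635,982 figure is read as the weighted annual estimate of CADE-related visits against a U.S. population of roughly 281 million (the 2000 census count); both the annual interpretation and the population figure are assumptions here, not stated in the study.

```python
# Back-of-envelope check of the reported incidence, assuming the
# 635,982 figure is the weighted annual estimate of CADE visits
# and a U.S. population of about 281 million (2000 census).
annual_visits = 635_982
us_population = 281_000_000

# Incidence expressed per 1,000 persons per year.
incidence_per_1000 = annual_visits / us_population * 1000
print(round(incidence_per_1000, 2))  # close to the reported 2.26
```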
Antibiotics were the most commonly implicated drug class (23%, chiefly amoxicillin), followed by cardiovascular agents (7%) and agents primarily affecting the skin and mucous membranes (6%). Unspecified or unknown agents accounted for a quarter of all CADEs.
Dermatitis was the most commonly reported skin reaction (71% of cases), followed by urticaria (13%).
"We also found that patients with a dermatologic diagnosis experienced a CADE caused by a drug treating the initial skin condition in 11% of cases," Dr. Gustafson said in an interview.
Dr. Gustafson reported no relevant financial disclosures.
FROM THE ANNUAL MEETING OF THE AMERICAN ACADEMY OF DERMATOLOGY
Major Finding: Antibiotics caused the most cutaneous adverse drug events (23%), followed by cardiovascular agents (7%) and agents primarily affecting the skin and mucous membranes (6%).
Data Source: A study of outpatient CADEs occurring between 1995 and 2005 was conducted using data from the National Ambulatory Medical Care Survey and the National Hospital and Ambulatory Medical Care Survey.
Disclosures: Dr. Gustafson said that she had no relevant financial disclosures.