Gestational diabetes linked to sleep apnea
Women with gestational diabetes are nearly seven times more likely to have sleep apnea than expectant mothers without the condition, and they sleep an average of one hour less nightly, a small observational study has shown.
"It is common for pregnant women to experience sleep disruptions, but the risk of developing obstructive sleep apnea increases substantially in women who have gestational diabetes," Dr. Sirimon Reutrakul of Rush University Medical Center in Chicago, said in a statement. "Nearly 75% of the participants in our study who had gestational diabetes also suffered from obstructive sleep apnea."
Reports in the literature link sleep apnea in pregnancy to complications such as preeclampsia, hypertension, low birth weights, preterm delivery, and other pregnancy-related adverse outcomes.
The investigators noted that use of continuous positive airway pressure (CPAP) early in pregnancy in women with hypertension and chronic snoring has been associated with better blood pressure control and pregnancy outcomes. CPAP treatment in nonpregnant type 2 diabetes patients with sleep apnea has occasionally been shown to be effective in improving glucose control. Because there are currently no data on the effects of sleep apnea treatment in gestational diabetes, whether CPAP treatment might affect glucose metabolism and pregnancy outcomes is unknown.
Dr. Reutrakul and his associates cited previous studies suggesting that sleep apnea leads to poor glycemic control, as well as reports that sleep apnea is a major risk factor for insulin resistance regardless of body mass index (BMI). With these data in mind, the investigators compared metabolic and sleep apnea measures in 15 pregnant women without gestational diabetes, 15 pregnant women with gestational diabetes, and 15 obese controls (BMI, 31.0 ± 4.3 kg/m2) who were neither pregnant nor diabetic. The groups were matched for age and race, and the two pregnant groups were also matched for prepregnancy BMI.
All of the pregnant women were expecting singletons and were in either the latter part of their second trimester or the early part of their third. The average gestational age was 28.2 ± 3.7 weeks in the women with gestational diabetes and 30.9 ± 2.0 weeks in the pregnant group without.
The women with gestational diabetes had notably higher prepregnancy BMIs than did the pregnant women without gestational diabetes, and the BMI of the gestational diabetes cohort at the time of the sleep apnea monitoring also tended to be higher. Based on prepregnancy BMI, most of the pregnant women were overweight (BMI 25.0-29.9 kg/m2) or obese (BMI ≥ 30 kg/m2): 93% of those with gestational diabetes and 67% of those without.
The number of apneas and hypopneas per hour of sleep was measured using polysomnography. Sleep apnea was diagnosed in women whose apnea-hypopnea index (AHI) score was greater than or equal to 5. Dr. Reutrakul and his colleagues wrote that theirs was the first study to use polysomnography to evaluate overall sleep quality, including apnea, in women with gestational diabetes, compared with pregnant women without gestational diabetes, controlling for confounding factors. Wake time after sleep onset was defined as the number of minutes participants were awake between sleep onset and the end of the session.
There were a number of statistically significant findings:
After adjusting for prepregnancy BMI, Dr. Reutrakul and his associates found that a diagnosis of gestational diabetes was strongly associated with a diagnosis of sleep apnea (odds ratio, 6.60; 95% confidence interval, 1.15-37.96). The researchers also found that pregnant women with gestational diabetes slept a median of about 1 hour less per night than the other pregnant women did (397 vs. 464 minutes). The gestational diabetes cohort also had a median AHI approximately four times that of the pregnant women without gestational diabetes (8.2 vs. 2.0) and a higher overall rate of sleep apnea (73% vs. 27%). Compared with the nonpregnant controls, pregnant women without gestational diabetes had a higher AHI (2.0 vs. 0.5) and more disrupted sleep, as reflected by a longer wake time after sleep onset (66 vs. 21 minutes); their median microarousal index was also higher (16.4 vs. 10.6).
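For readers who want to see where a figure of this kind comes from, the arithmetic behind an odds ratio can be reconstructed from the reported apnea rates (73% of 15 vs. 27% of 15, i.e., roughly 11 of 15 vs. 4 of 15). This is a hypothetical sketch only: the 2x2 counts are assumptions inferred from the percentages, and the crude ratio it yields differs from the study's 6.60, which was adjusted for prepregnancy BMI.

```python
import math

# Approximate 2x2 counts implied by the reported percentages (assumption,
# not taken from the paper): 73% of 15 ~ 11; 27% of 15 ~ 4.
gdm_apnea, gdm_no_apnea = 11, 4      # gestational diabetes group
preg_apnea, preg_no_apnea = 4, 11    # pregnant, no gestational diabetes

# Crude (unadjusted) odds ratio: odds of apnea in one group / odds in the other
odds_ratio = (gdm_apnea / gdm_no_apnea) / (preg_apnea / preg_no_apnea)

# Wald 95% confidence interval, computed on the log-odds scale
se = math.sqrt(1/gdm_apnea + 1/gdm_no_apnea + 1/preg_apnea + 1/preg_no_apnea)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se)

print(f"crude OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
```

The crude ratio comes out near 7.6 with a wide confidence interval, consistent with the small sample; the adjusted value reported in the study (6.60; 1.15-37.96) is of the same order.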
The researchers noted that women with gestational diabetes gained less weight during pregnancy than the pregnant women without gestational diabetes (BMI increased by 2.2 ± 2.0 vs. 4.6 ± 1.9 kg/m2, respectively), ruling out any "strong association" between gestational weight gain, gestational diabetes, and sleep apnea.
According to the authors, the study also was limited by its cross-sectional design, which does not indicate whether sleep apnea causes gestational diabetes, or vice versa. Regardless, Dr. Reutrakul stated, "Based on these findings, women who have gestational diabetes should be considered for evaluation for obstructive sleep apnea, especially if other risk factors such as hypertension or obesity are present, and women already diagnosed with sleep apnea should be monitored for signs of gestational diabetes during pregnancy."
A member of the research team reported financial ties with Pfizer and other industry-related research grant support. See study for list of disclosures. This study was supported by the ResMed Foundation; the Diabetes Research and Training Center at the University of Chicago; a Specialized Center of Research on Women's Health; and the National Institutes of Health.
JOURNAL OF CLINICAL ENDOCRINOLOGY & METABOLISM
Major finding: Women with gestational diabetes are nearly seven times more likely to have sleep apnea and sleep an hour less than other pregnant women do.
Data source: Observational case-control study of 45 women.
Disclosures: A member of the research team reported financial ties with Pfizer and other industry-related research grant support. See study for list of disclosures. This study was supported by the ResMed Foundation; the diabetes research training center at the University of Chicago, a specialized center of research on Women’s Health; and the National Institutes of Health.
New systemic JIA recommendations highlight thin evidence base for use of new biologics
Canakinumab, rilonacept, and tocilizumab have been added to the list of medications recommended by the American College of Rheumatology for the treatment of systemic juvenile idiopathic arthritis, although no specific dosages or durations are listed. Secondary screening for tuberculosis in patients treated with biologics is also included in the updated guidelines.
Initially published in 2011, the American College of Rheumatology’s Guidelines for Systemic JIA were revised to include "data from randomized trials of new IL-1 [interleukin-1] inhibitors and IL-6 inhibitors," wrote lead coauthors Dr. Sarah Ringold and Dr. Pamela F. Weiss and their colleagues. "The overarching objective of this project was to update the 2011 ACR recommendations for the use of nonbiologic disease-modifying antirheumatic drugs (DMARDs) and biologic DMARDs," they also wrote. Canakinumab, rilonacept, and tocilizumab are biologic DMARDs.
The recommendations are based on a systematic review of 1,226 clinical scenarios that were developed according to the RAND/University of California, Los Angeles (UCLA) appropriateness methodology. The authors include an international panel of JIA experts, an expert evidence-based medicine evaluator, and the parent of a child with systemic JIA. Biologic combination therapies were not considered due to safety concerns and a lack of data, the authors wrote. Appropriateness was defined according to whether the benefits exceeded the risks when rated on a scale from 1 to 9, with 1-3 considered "inappropriate," 4-6 "uncertain," and 7-9 "appropriate."
Evidence was then rated according to the Oxford Centre for Evidence-Based Medicine, as was the case for the original guidelines. The scale is from A through D, with A being the highest level of evidence, based on randomized controlled trials; B level is derived from nonrandomized studies; C-level evidence comes from uncontrolled trials; and D-level evidence is that based on expert opinion.
In all, the panel decided upon far more recommendations at the C and D levels, and advised caution when applying its A-level recommendations, such as the use of the IL-1 receptor antagonist anakinra after treatment with glucocorticoid (GC) monotherapy in children with active systemic features, because the study sample, while assigned randomly, numbered only 24 (Arthritis Rheum. 2013;65:2499-2512).
"The goal of this project was not to provide strict recommendations for therapy, but to provide a set of recommendations that physicians could apply to their patients. The timing of these recommendations is appropriate, as canakinumab and tocilizumab are recently FDA-approved and physicians are learning how to incorporate them into their practice. Given the nature of the field, we would not expect there to be significantly more data generated over the next year that would lead to different recommendations. As in the case of any recommendations, they will need reconsideration in the future as new therapies and new data become available," Dr. Ringold of Seattle Children’s Hospital wrote in an e-mail.
The panel evaluated scenarios delineated by three discrete phenotypes. The first was systemic JIA with active features and various degrees of synovitis. Instead of considering combinations of symptoms such as fever and evanescent rash, panel members were asked to review the treatments in patients with a physician global assessment (MD global) of less than 5, or of 5 or greater, on a 10-point scale, and by active joint count (AJC) of 0 joints, 1-4 joints, or more than 4 joints.
The second scenario included systemic JIA with active synovitis but not active systemic features. Treatments were rated for appropriateness based on a total active joint count of 4 or fewer, or more than 4. The third phenotype was systemic JIA with features of concern for macrophage activation syndrome (MAS). The update's authors wrote that the requisite manifestations considered for MAS were "left intentionally broad given the lack of validated classification criteria for MAS." They also excluded children who were admitted to intensive care.
A fourth scenario was developed by the authors to address repeat tuberculosis (TB) screening across all JIA phenotypes where the patient was receiving biologic agents. In that case, evaluators were asked to consider the appropriateness of the various screening approaches.
Canakinumab, which targets interleukin-1 beta (IL-1B) and was approved for use in systemic JIA by the Food and Drug Administration (FDA) in May of this year, was recommended at level A for patients with continued disease activity after treatment with GC monotherapy, methotrexate, or leflunomide. Treatment with canakinumab following anakinra was rated level B, and the use of canakinumab after tocilizumab, irrespective of the MD global and AJC, was considered level C.
Canakinumab was also recommended for patients with an MD global of 5 or greater regardless of the AJC, and even if there was prior monotherapy with NSAIDs. The drug received a level B recommendation for an AJC assessment of more than 4 joints, but only after a trial of a DMARD plus anakinra or tocilizumab, or a DMARD plus a tumor necrosis factor–alpha inhibitor. A level C recommendation was assigned to canakinumab after abatacept for patients with an AJC assessment of more than 4 joints. The authors wrote that use of canakinumab was "uncertain, with the exception of patients with an MD global [less than] 5 who had received no prior therapy, GC monotherapy, or calcineurin monotherapy, in which case it was inappropriate (level D)."
Rilonacept, also an IL-1 blocker, was considered by the panel to be inappropriate and at level D for evidence as initial therapy, regardless of the MD global and AJC. "Use of rilonacept was uncertain for continued disease activity after a trial of other therapeutic options irrespective of the AJC and MD global," the authors wrote.
Tocilizumab, which targets the interleukin-6 (IL-6) receptor and was FDA approved for use in systemic JIA in 2011, was generally found to be uncertain or inappropriate and was typically recommended at level D. However, it was given a level A rating for use in patients with continued disease activity following GC monotherapy, and a level B rating following methotrexate or leflunomide, or anakinra, regardless of the MD global and AJC ratings. It was also recommended at level C in patients with an MD global rating of 5 or greater, regardless of the AJC rating, despite prior NSAID monotherapy.
Although annual screening of children with systemic JIA who had an initial negative TB test prior to starting a biologic agent was rated at level D, the updated recommendations are to repeat TB screenings if a patient’s TB risk becomes moderate or high, "as determined by regional infectious disease guidelines." However, this recommendation was also only given a level D rating.
For treating MAS in JIA, the authors endorsed the current recommendation to use anakinra, calcineurin inhibitors, and GCs, but wrote that they anticipated more treatment data would soon provide "a better understanding of the disease process, [and so] these recommendations may be modified."
The authors also made reference to the recently published Childhood Arthritis and Rheumatology Research Alliance (CARRA) systemic JIA care treatment plans (CTPs), which are based on group consensus techniques, although not according to phenotype, but instead more focused on medications. "It is anticipated that data from comparative effectiveness research based upon the CARRA CTPs will likely be very informative in future guidelines projects."
Dr. Ringold reports that she was supported by the Agency for Healthcare Research and Quality. Dr. Weiss was supported by the National Institute of Arthritis and Musculoskeletal and Skin Diseases.
It is important that clinicians dealing with children with systemic-onset JIA (SoJIA) have the best available insight into the pathogenesis and mechanisms of the disease in order to provide the optimal therapy. The new guidelines provided by Dr. Ringold and others are a useful step in this direction. The task force panel has summarized the clinical literature for physicians who have not kept up with it, but provides no new insight for those who have. Surprisingly absent from this statement is any information regarding appropriate dosing of the medications, their relative speed of onset of efficacy, their relative probability of efficacy, or their relative ease of administration or their associated side effects.
Unfortunately, the vast majority of the recommendations carry only level C or D gradings on the Oxford Centre for Evidence-Based Medicine scale, meaning they are derived from uncontrolled case series or "expert opinion." There is little that can be done to surmount the obstacles to randomized trials imposed by the relative rarity of SoJIA. Nonetheless, much of this consensus guideline rests entirely on level D evidence without reference to the source or nature of the expert opinion. Indeed, the entire paragraph on abatacept for continued disease activity in children with SoJIA with active systemic features and varying degrees of synovitis is without reference. Perhaps the choice to list treatments alphabetically was unfortunate, since it gives priority of place inappropriately.
Most disappointing is the section on SoJIA with features of concern for macrophage activation syndrome. Although the standard medications are graded level C, many of us are confronted with children who have not responded to these. The task force's statements of "inappropriate or uncertain" are all level D and totally unsubstantiated. There is no help here for those of us dealing with difficult cases. I'm sure the authors would state that the disclaimers associated with the article clearly indicate it is not intended for use in such cases, but then without doses or a discussion of the relative efficacy and toxicity of the medications, one might ask, whom is this guideline intended for? It lacks both the detailed information needed by a nonspecialist confronted by a child with SoJIA and the careful discussion of difficult cases needed by specialists.
The opinion of most pediatric rheumatologists remains that because of the risks and complexity associated with SoJIA, the only appropriate guideline for those who do not specialize in their care should be "send the child to a trained pediatric rheumatologist." For those who are trained pediatric rheumatologists, there is little here.
Thomas J.A. Lehman, M.D., is chief of pediatric rheumatology at the Hospital for Special Surgery, New York. He has served as a consultant to EpiGenetics, Genentech, Genzyme, and Novartis and has served on the speakers bureaus for AbbVie, Amgen, and Pfizer.
It is important that clinicians dealing with children with systemic-onset JIA (SoJIA) have the best available insight into the pathogenesis and mechanisms of the disease in order to provide the optimal therapy. The new guidelines provided by Dr. Ringold and others are a useful step in this direction. The task force panel has summarized the clinical literature for physicians who have not kept up with it, but provides no new insight for those who have. Surprisingly absent from this statement is any information regarding appropriate dosing of the medications, their relative speed of onset of efficacy, their relative probability of efficacy, or their relative ease of administration or their associated side effects.
Unfortunately, the vast majority of the recommendations carry only level C or D gradings using the Oxford Centre for Evidence-Based Medicine scale, meaning they are derived from uncontrolled case series or "expert opinion." There is little that can be done to surmount the obstacles to randomized trials imposed by the relative rarity of SoJIA. Nonetheless, much of this consensus guideline is entirely level D evidence without reference to the source or nature of the expert opinion. Indeed, the entire paragraph on abatacept for continued disease activity for children with SoJIA with active systemic features and varying degrees of synovitis is without reference. Perhaps the choice to list treatments alphabetically was unfortunate since it gives priority of place inappropriately.
|
|
Most disappointing is the section of SoJIA with features concerning for macrophage activation syndrome. Although the standard medications are graded level C, many of us are confronted with the children who have not responded to these. The task force’s statements of "inappropriate or uncertain" are all level D and totally unsubstantiated. There is no help here for those of us dealing with difficult cases. I’m sure the authors would state that the disclaimers associated with the article clearly indicate it is not intended for use in such cases, but then without doses or a discussion of relative efficacy and toxicity of the medications, one might ask, whom is this guideline intended for? It lacks both the detailed information needed by a nonspecialist confronted by a child with SoJIA and the careful discussion of the difficult cases needed by the specialists.
The opinion of most pediatric rheumatologists remains that because of the risks and complexity associated with SoJIA, the only appropriate guideline for those who do not specialize in their care should be "send the child to a trained pediatric rheumatologist." For those who are trained pediatric rheumatologists, there is little here.
Thomas J.A. Lehman, M.D., is chief of pediatric rheumatology at the Hospital for Special Surgery, New York. He has served as a consultant to EpiGenetics, Genentech, Genzyme, and Novartis and has served on the speakers bureaus for AbbVie, Amgen, and Pfizer.
It is important that clinicians dealing with children with systemic-onset JIA (SoJIA) have the best available insight into the pathogenesis and mechanisms of the disease in order to provide the optimal therapy. The new guidelines provided by Dr. Ringold and others are a useful step in this direction. The task force panel has summarized the clinical literature for physicians who have not kept up with it, but provides no new insight for those who have. Surprisingly absent from this statement is any information regarding appropriate dosing of the medications, their relative speed of onset of efficacy, their relative probability of efficacy, or their relative ease of administration or their associated side effects.
Unfortunately, the vast majority of the recommendations carry only level C or D gradings using the Oxford Centre for Evidence-Based Medicine scale, meaning they are derived from uncontrolled case series or "expert opinion." There is little that can be done to surmount the obstacles to randomized trials imposed by the relative rarity of SoJIA. Nonetheless, much of this consensus guideline is entirely level D evidence without reference to the source or nature of the expert opinion. Indeed, the entire paragraph on abatacept for continued disease activity for children with SoJIA with active systemic features and varying degrees of synovitis is without reference. Perhaps the choice to list treatments alphabetically was unfortunate since it gives priority of place inappropriately.
Most disappointing is the section on SoJIA with features concerning for macrophage activation syndrome. Although the standard medications are graded level C, many of us are confronted with children who have not responded to them. The task force’s ratings of "inappropriate" or "uncertain" are all level D and wholly unsubstantiated. There is no help here for those of us dealing with difficult cases. I’m sure the authors would respond that the disclaimers accompanying the article clearly indicate it is not intended for such cases, but then, without doses or a discussion of the relative efficacy and toxicity of the medications, one might ask: for whom is this guideline intended? It lacks both the detailed information needed by a nonspecialist confronted by a child with SoJIA and the careful discussion of difficult cases needed by specialists.
The opinion of most pediatric rheumatologists remains that because of the risks and complexity associated with SoJIA, the only appropriate guideline for those who do not specialize in their care should be "send the child to a trained pediatric rheumatologist." For those who are trained pediatric rheumatologists, there is little here.
Thomas J.A. Lehman, M.D., is chief of pediatric rheumatology at the Hospital for Special Surgery, New York. He has served as a consultant to EpiGenetics, Genentech, Genzyme, and Novartis and has served on the speakers bureaus for AbbVie, Amgen, and Pfizer.
Canakinumab, rilonacept, and tocilizumab have been added to the list of medications recommended by the American College of Rheumatology for the treatment of systemic juvenile idiopathic arthritis, although no specific dosages or durations are listed. Repeat screening for tuberculosis in patients treated with biologics is also addressed in the updated guidelines.
Initially published in 2011, the American College of Rheumatology’s Guidelines for Systemic JIA were revised to include "data from randomized trials of new IL-1 [interleukin-1] inhibitors and IL-6 inhibitors," wrote lead coauthors Dr. Sarah Ringold and Dr. Pamela F. Weiss and their colleagues. "The overarching objective of this project was to update the 2011 ACR recommendations for the use of nonbiologic disease-modifying antirheumatic drugs (DMARDs) and biologic DMARDs," they also wrote. Canakinumab, rilonacept, and tocilizumab are biologic DMARDs.
The recommendations are based on a systematic review of 1,226 clinical scenarios that were developed according to the RAND/University of California, Los Angeles (UCLA) appropriateness methodology. The authors include an international panel of JIA experts, an expert evidence-based medicine evaluator, and the parent of a child with systemic JIA. Biologic combination therapies were not considered due to safety concerns and a lack of data, the authors wrote. Appropriateness was defined according to whether the benefits exceeded the risks when rated on a scale from 1 to 9, with 1-3 considered "inappropriate," 4-6 "uncertain," and 7-9 "appropriate."
Evidence was then rated according to the Oxford Centre for Evidence-Based Medicine, as was the case for the original guidelines. The scale is from A through D, with A being the highest level of evidence, based on randomized controlled trials; B level is derived from nonrandomized studies; C-level evidence comes from uncontrolled trials; and D-level evidence is that based on expert opinion.
In all, the panel settled on far more recommendations at the C and D levels, and advised caution when applying its A-level recommendations, such as the use of the IL-1 receptor antagonist anakinra after glucocorticoid (GC) monotherapy in children with active systemic features, because the supporting study, although randomized, enrolled only 24 patients (Arthritis Rheum. 2013;65:2499-2512).
"The goal of this project was not to provide strict recommendations for therapy, but to provide a set of recommendations that physicians could apply to their patients. The timing of these recommendations is appropriate, as canakinumab and tocilizumab are recently FDA-approved and physicians are learning how to incorporate them into their practice. Given the nature of the field, we would not expect there to be significantly more data generated over the next year that would lead to different recommendations. As in the case of any recommendations, they will need reconsideration in the future as new therapies and new data become available," Dr. Ringold of Seattle Children’s Hospital wrote in an e-mail.
The panel evaluated scenarios delineated by three discrete phenotypes. The first was systemic JIA with active systemic features and various degrees of synovitis. Instead of considering combinations of symptoms such as fever and evanescent rash, panel members were asked to review the treatments in patients with a physician global assessment (MD global) of less than 5, or of 5 or greater, on a 10-point scale, and by active joint count (AJC) assessment of 0 joints, 1-4 joints, or more than 4 joints.
The second scenario included systemic JIA with active synovitis but without active systemic features. Treatments were rated for appropriateness based on a total active joint count of 4 or fewer, or of more than 4. The third phenotype was systemic JIA with features of concern for macrophage activation syndrome (MAS). The update’s authors wrote that the requisite manifestations considered for MAS were "left intentionally broad given the lack of validated classification criteria for MAS." They also excluded children who were admitted to intensive care.
A fourth scenario was developed by the authors to address repeat tuberculosis (TB) screening across all JIA phenotypes where the patient was receiving biologic agents. In that case, evaluators were asked to consider the appropriateness of the various screening approaches.
Canakinumab, which targets interleukin-1 beta (IL-1B) and was approved for use in systemic JIA by the Food and Drug Administration (FDA) in May of this year, was recommended at level A for patients with continued disease activity after treatment with GC monotherapy, methotrexate, or leflunomide. Treatment with canakinumab following anakinra was rated level B, and the use of canakinumab after tocilizumab, irrespective of the MD global and AJC, was considered level C.
Canakinumab was also recommended for patients with an MD global of 5 or greater regardless of the AJC, and even if there was prior monotherapy with NSAIDs. The drug received a level B recommendation for an AJC assessment of more than 4 joints, but only after a trial of a DMARD plus anakinra or tocilizumab, or a DMARD plus a tumor necrosis factor–alpha inhibitor. A level C recommendation was assigned to canakinumab after abatacept for patients with an AJC assessment of more than 4 joints. The authors wrote that use of canakinumab was "uncertain, with the exception of patients with an MD global [less than] 5 who had received no prior therapy, GC monotherapy, or calcineurin monotherapy, in which case it was inappropriate (level D)."
Rilonacept, also an IL-1 blocker, was considered by the panel to be inappropriate as initial therapy, with level D evidence, regardless of the MD global and AJC. "Use of rilonacept was uncertain for continued disease activity after a trial of other therapeutic options irrespective of the AJC and MD global," the authors wrote.
Tocilizumab, which targets interleukin-6 (IL-6) receptors and was FDA-approved for use in systemic JIA in 2011, was generally found to be uncertain or inappropriate and was typically rated at level D, although it was given a level A rating for use in patients with continued disease activity following GC monotherapy, and a level B rating following methotrexate, leflunomide, or anakinra, regardless of the MD global and AJC ratings. It was also recommended at the C level in patients with an MD global rating of 5 or greater, regardless of the AJC rating, despite prior NSAID monotherapy.
Although annual screening of children with systemic JIA who had an initial negative TB test prior to starting a biologic agent was rated at level D, the updated recommendations are to repeat TB screenings if a patient’s TB risk becomes moderate or high, "as determined by regional infectious disease guidelines." However, this recommendation was also only given a level D rating.
For treating MAS in JIA, the authors endorsed the current recommendation to use anakinra, calcineurin inhibitors, and GCs, but wrote that they anticipated more treatment data would soon provide "a better understanding of the disease process, [and so] these recommendations may be modified."
The authors also pointed to the recently published Childhood Arthritis and Rheumatology Research Alliance (CARRA) systemic JIA consensus treatment plans (CTPs), which are based on group consensus techniques and are organized around medications rather than phenotypes. "It is anticipated that data from comparative effectiveness research based upon the CARRA CTPs will likely be very informative in future guidelines projects."
Dr. Ringold reports that she was supported by the Agency for Healthcare Research and Quality. Dr. Weiss was supported by the National Institute of Arthritis and Musculoskeletal and Skin Diseases.
FROM ARTHRITIS & RHEUMATISM
Selective screening missed 13% of Lynch syndrome–associated colorectal cancers
All stage IV colorectal cancer patients should be screened for Lynch syndrome, according to Dr. Douglas J. Hartman and his colleagues at the University of Pittsburgh.
Published pathology models for predicting high microsatellite instability failed to identify up to 13% of Lynch syndrome–associated colorectal carcinomas in a study conducted by Dr. Hartman and his associates.
Current predictive pathology models for the detection of high microsatellite instability (MSI-H) colorectal cancers, including PREDICT and MSPath, rely primarily on right-sided tumor detection; however, Dr. Hartman and his associates determined that MSI-H tumors within the left colon and rectum were associated with Lynch syndrome in 57% of cases. In addition, revised Bethesda Guidelines for Lynch syndrome screening in colorectal cancer patients do not call for microsatellite instability testing in patients over age 60 years, yet 32% of left-sided tumors were identified in patients over age 60 years (Hum Pathol. 2013 Sep 10 [doi: 10.1016/j.humpath.2013.06.012]).
The researchers used MSI polymerase chain reaction (MSI-PCR) and DNA mismatch repair protein immunohistochemistry to prospectively analyze 1,292 colorectal cancers between January 2009 and July 2012. During 2009-2011, cases were evaluated using MSI-PCR only. In 2012, cases were first evaluated with immunohistochemistry. MSI-PCR was performed if there were any equivocal findings in the first round of testing.
MSI-H was found in 150 tumors; 112 were sporadic and 38 were probable Lynch syndrome as determined by BRAF V600E mutation, MLH1 promoter hypermethylation, cancer history, and germline mismatch repair gene mutation. All MSI-H tumors were analyzed for their grade, location, and histology; left-sided MSI-H tumors (n = 12; 57%) were more likely to be Lynch syndrome–related than were right-sided tumors (n = 26; 20%).
Neither PREDICT nor MSPath identified all Lynch syndrome–related tumors: PREDICT found 87% (33 of the 38), while MSPath identified 92% (35 of the 38). The researchers attributed these shortcomings to current predictive models’ "reliance on right-sided location."
"At our institution, we employ a universal screening approach for all resected colorectal carcinomas and biopsies from stage IV colorectal carcinomas regardless of patient age, tumor histology, and tumor location. Patients with tumors exhibiting loss of MSH2 and/or MSH6 and isolated loss of PMS2 are referred for further genetic counseling. Tumors with loss of MLH1 and PMS2 protein expression are further evaluated with BRAF mutation analysis and MLH1 promoter hypermethylation analysis before referral to genetic counseling," the authors wrote.
Contrary to some previously published data, the researchers also found that sporadic MSI-H tumors shared similar morphologies with tumors associated with probable Lynch syndrome, with the exception of a slightly higher proportion of sporadic MSI-H tumors (81%) demonstrating tumor infiltrating lymphocytes as compared with Lynch syndrome–associated tumors (61%).
The authors wrote that this is the "largest study" to date of consecutively identified Lynch syndrome–associated and sporadic tumors. As previous reports were based on retrospective data of primarily Lynch syndrome–associated tumors, those results were "vastly enriched for patients selected for analysis based on cancer-related personal or family history."
The failure of current predictive pathology models to identify all MSI-H and Lynch syndrome–related tumors, particularly left-sided ones, is a strong argument for universal screening, which "does not require widespread application of clinicopathologic criteria by clinicians and pathologists in order to select patients for testing." Further, universal screening is cost-effective and provides useful predictive and prognostic data for reducing cancer risk in patients and relatives who may or may not benefit from genetic counseling. This applies particularly to patients over 60 years, in whom most MSI-H tumors were sporadic (91.5%) and required no additional genetic counseling or testing.
Dr. Hartman and his associates reported no relevant disclosures.
FROM HUMAN PATHOLOGY
Major finding: Left-sided tumors with high microsatellite instability (n = 12; 57%) were more likely to be Lynch syndrome–related than were right-sided tumors (n = 26; 20%).
Data source: Prospective analysis of 1,292 colorectal carcinomas using microsatellite instability polymerase chain reaction (MSI-PCR) and DNA mismatch repair protein immunohistochemistry.
Disclosures: Dr. Hartman and his associates reported no relevant financial disclosures.
Hospice usage up but not delivered soon enough, study finds
LEBANON, N.H. – Medicare patients with advanced cancer are more likely to receive hospice care than in previous years, although it is still too late in their treatment to deliver the full benefits of palliative care, according to a report issued by the Dartmouth Institute for Health Policy and Clinical Practice.
The report also states that geography and the treatment styles favored by individual health systems, rather than patient preferences, drive the level of intensive, end-of-life treatments used.
The findings are part of the Dartmouth Atlas Project, which uses Medicare data to examine how health care resources are allocated nationally. The report is the institute’s first longitudinal analysis of trends in end-of-life care for advanced cancer patients across regions, academic medical centers, and National Cancer Institute–designated cancer centers.
Controlling for patient age, sex, race, tumor type, and non–cancer-related comorbidities, the investigators found that, compared with similar data collected from 2003 to 2007 and published by the institute in 2010, the proportion of advanced cancer patients on Medicare dying in the hospital decreased from an average of 28.8% during 2003-2007 to 24.7% in 2010. The proportion of patients enrolled in hospice in the last month of life increased from 54.6% to 61.3%. However, the proportion for whom hospice was initiated only during the last 3 days of life also increased, from 8.3% during 2003-2007 to 10.9% in 2010.
When asked in an interview about the importance of starting hospice sooner in terminal care, Dr. Lorenzo Norris, director of psycho-oncology services at George Washington University Medical Center in Washington, said, "The biggest misconception is that hospice is strictly for end of life. Palliative care is just good medicine. If you limit the hospice care to the last 3 days, you’ve already limited the options a patient has. If you offer palliative care 5 or 6 months out, you can start reducing symptom burdens and increase a patient’s quality of life, which is very important because during that last year to 6 months is when patients are finishing unresolved financial and relationship issues. Palliative care allows them to more fully engage in their life."
When viewed according to the medical center delivering the care, between 13% and 50% of Medicare patients with advanced cancer died in a hospital in 2010, rather than in a hospice setting – typically the patient’s home. These figures include data from NCI-designated cancer centers. Hospice treatment in the last month of life for patients treated in mid- and northwestern states such as Oregon and Iowa trended nearly 50% higher than in places such as Alaska and New York City.
Addressing the reasons for regional variations in care, Dr. Ira Byock, director of palliative medicine at Dartmouth-Hitchcock Medical Center, Lebanon, N.H., wrote in an accompanying editorial that, "Previous research has also shown that regional supply of health care resources, such as hospital and intensive care beds and imaging equipment, is one driver of the intensity of care, irrespective of the patient’s particular condition or illness level."
The analysts found the overall rate of ICU admissions for treatments such as intubation, a feeding tube, or cardiopulmonary resuscitation during the last month of life increased from 23.7% during 2003-2007 to 28.8% in 2010. The number of days patients spent in the ICU in the last month of life varied more than fivefold across all centers, the analysts wrote.
"Our research continues to find that patients with advanced cancer are often receiving aggressive care until their final days, when we know that most patients would prefer care directed toward a better quality of life through hospice and palliative services. The increase in patients admitted to hospice care only days before death suggests that hospice services are often provided too late to provide much benefit," Dr. David C. Goodman, coprincipal investigator for the Dartmouth Atlas Project, said in a statement.
When asked why some oncologists are not referring their patients to hospice sooner, Dr. Clifford Hudis, president of the American Society of Clinical Oncology, said, "There are many circumstances, based on culture, family dynamics, and patient’s wishes, where it is hard to communicate the value of hospice services. Some patients remain fearful of the very word and, in some situations, there is an unwillingness to acknowledge the severity of illnesses. These barriers can often be overcome through an increase in communication between doctors and patients about care goals and wishes."
When asked about the potential economic implications of the data, Dr. Goodman said, "The goal of better end-of-life care is to improve patient well-being. Often, it is less expensive to provide good care that patients want, [rather] than the usual care that patients receive."
The report also indicated that the number of patients who saw 10 or more different physicians during the last 6 months of their lives rose from 46.2% to 58.5%. The analysts interpreted this to mean "more patients may have experienced fragmented care."
In a statement, Dr. Hudis encouraged the oncology community to "keep striving to deliver the right care at the right time." In an e-mail interview, Dr. Hudis wrote that, "The overall trend is a good one because it is concordant with the overall goals of ASCO: to make sure that every patient has access to the highest quality care throughout their disease experience."
Dr. Goodman and Dr. Byock report no relevant disclosures. The report was principally funded by the Robert Wood Johnson Foundation, with support from a consortium of funders including the WellPoint Foundation, the United Health Foundation, and the California HealthCare Foundation.
EXPERT ANALYSIS FROM THE DARTMOUTH ATLAS PROJECT
Chronic cocaine abuse tied to ‘profound metabolic alteration’ in men
Weight loss in chronic cocaine users appears to be associated with disturbances in body fat mass, not reduced caloric intake or increased physical activity, researchers have found.
The study challenges the widely held belief that chronic cocaine use suppresses appetite. "The cocaine-dependent men in our study reported increased food intake, specifically in foods that are high in fat and carbohydrates, but there was no concomitant increase in body weight," wrote Dr. Karen Ersche and her colleagues from the University of Cambridge, United Kingdom, in the journal Appetite (2013;71:75-80). "[Our findings] suggest a profound metabolic alteration that needs to be taken into account if we are to understand the full deleterious physical consequences of repeated use of this drug."
The cocaine users in the study also had lower levels of the energy-regulating hormone leptin, which ordinarily is equated with weight gain. These findings potentially open a new avenue of therapeutic intervention, since behavior is affected by noradrenergic manipulation of basal metabolism and inhibitory processes. "Such intervention, at a sufficiently early stage, could have the potential to prevent weight gain during recovery, thereby reducing personal suffering and improving compliance during the recovery process," the investigators wrote.
Sixty-five men were recruited from the community: 35 were cocaine dependent, according to DSM-IV-TR criteria, and had been using cocaine in powdered or freebase form for an average of 15.3 years, beginning at a mean age of 19.2 years; the remaining 30 had no history of cocaine use. The mean age of the men was about 36 years in both groups. Those with a lifetime history of psychotic disorders, neurologic illness or head trauma, metabolic or autoimmune disorders, or HIV infection were excluded.
The investigators decided to focus on men "to avoid confounding factors related to hormone fluctuations that affect appetite and weight in women," Dr. Ersche, also the lead investigator, said in an interview.
All participants were assessed via questionnaire for eating behavior, food and alcohol intake, other substance use, education and verbal intelligence, and impulsivity. Plasma leptin levels were assessed in both groups; X-ray absorptiometry scans were used to determine anthropometric measurements such as fat mass, non-bone lean mass, and bone mineral density. Other measures such as body mass index (BMI), weight, height, waist-hip ratio, and skin-fold thickness also were assessed. Indices for fat mass (the fat mass index, or FMI), fat-free mass, and lean mass were derived by dividing each measure by the person’s height squared.
A multivariate analysis of variance was conducted on all findings, and covariates such as tobacco and other substance use also were taken into account.
Dr. Ersche and her associates found no between-group differences in BMI, waist-hip ratio, or skin-fold thickness, although the cocaine users reported higher fat, carbohydrate, and alcohol consumption and higher levels of uncontrolled eating (although they were more likely to skip breakfast), while their FMI was significantly lower than that of the nonusers. Leptin levels correlated significantly with BMI and FMI in both groups. In the cocaine-user group, leptin levels, but not BMI or FMI, correlated with the duration of cocaine use. Controlling for covariates did not change the results.
The investigators concluded that while current treatment programs promote healthy eating to help recovering cocaine users manage their weight, "We argue that a more nuanced view is needed, one that acknowledges a major disturbance in eating behaviors and metabolism."
This study was funded by a research grant from the Medical Research Council and the Wellcome Trust. Dr. Ersche is supported by the Medical Research Council; neither she nor her colleagues reported any relevant disclosures.
FROM APPETITE
Chronic cocaine abuse tied to ‘profound metabolic alteration’ in men
Weight loss in chronic cocaine users appears to be associated with disturbances in body fat mass, not reduced caloric intake or increased physical activity, researchers have found.
The study challenges the widely held belief that chronic cocaine use suppresses appetite. "The cocaine-dependent men in our study reported increased food intake, specifically in foods that are high in fat and carbohydrates, but there was no concomitant increase in body weight," wrote Dr. Karen Ersche and her colleagues from the University of Cambridge, United Kingdom, in the journal Appetite (2013;71:75-80). [Our findings] "suggest a profound metabolic alteration that needs to be taken into account if we are able to understand the fully deleterious physical consequences of repeated use of this drug."
The cocaine users in the study also had lower levels of the energy-regulating hormone leptin, which ordinarily is equated with weight gain. These findings potentially open a new avenue of therapeutic intervention, since behavior is affected by noradrenergic manipulation of basal metabolism and inhibitory processes. "Such intervention, at a sufficiently early stage, could have the potential to prevent weight gain during recovery, thereby reducing personal suffering and improving compliance during the recovery process," the investigators wrote.
Sixty-five men were recruited from the community, 35 of whom were cocaine dependent, according to DSM-IV-TR criteria, and had been using cocaine either in powdered or freebase forms for an average of 15.3 years since the age of 19.2 years. The mean age of the men was about 36 years in both groups. The other half (n = 30) had no history of cocaine use. Those with a lifetime history of psychotic disorders, neurologic illness or head trauma, metabolic or autoimmune disorders, or HIV infection were excluded.
The investigators decided to focus on men "to avoid confounding factors related to hormone fluctuations that affect appetite and weight in women," Dr. Ersche, also the lead investigator, said in an interview.
All participants were assessed via questionnaire for eating behavior, food and alcohol intake, other substance use, education and verbal intelligence, and impulsivity. Plasma leptin levels were assessed in both groups; X-ray absorptiometry scans were used to determine anthropometric measurements such as fat mass, non-bone lean mass, and bone mineral density. Other measures such as body mass index (BMI), weight, height, waist-hip ratio, and skin-fold thickness also were assessed. The fat mass index (FMI) was derived by dividing the individual’s fat mass, fat free mass, and lean mass by the person’s height squared.
A multivariate analysis for variance was conducted on all findings, and covariates such as tobacco or other substance use also was taken into account.
Dr. Ersche and her associates found no differences between BMI, waist-hip ratio, and skin-fold thickness, although cocaine users had higher rates of fat, carbohydrate, and alcohol consumption, and higher levels of uncontrolled eating (although they were more likely to skip breakfast), while their FMI was significantly less than the nonuser group. Leptin levels were found to correlate significantly in both groups, with BMI and FMI. In the cocaine-user group, leptin levels, but not BMI or FMI, correlated with the duration of cocaine use. Controlling for covariates did not change the results.
The investigators concluded that while current treatment programs promote healthy eating to help recovering cocaine users manage their weight, "We argue that a more nuanced view is needed, one that acknowledges a major disturbance in eating behaviors and metabolism."
This study was funded by a research grant from the Medical Research Council and the Wellcome Trust. Dr. Ersche is supported by the Medical Research Council; neither she nor her colleagues reported any relevant disclosures.
Weight loss in chronic cocaine users appears to be associated with disturbances in body fat mass, not reduced caloric intake or increased physical activity, researchers have found.
The study challenges the widely held belief that chronic cocaine use suppresses appetite. "The cocaine-dependent men in our study reported increased food intake, specifically in foods that are high in fat and carbohydrates, but there was no concomitant increase in body weight," wrote Dr. Karen Ersche and her colleagues from the University of Cambridge, United Kingdom, in the journal Appetite (2013;71:75-80). [Our findings] "suggest a profound metabolic alteration that needs to be taken into account if we are able to understand the fully deleterious physical consequences of repeated use of this drug."
The cocaine users in the study also had lower levels of the energy-regulating hormone leptin, which ordinarily is equated with weight gain. These findings potentially open a new avenue of therapeutic intervention, since behavior is affected by noradrenergic manipulation of basal metabolism and inhibitory processes. "Such intervention, at a sufficiently early stage, could have the potential to prevent weight gain during recovery, thereby reducing personal suffering and improving compliance during the recovery process," the investigators wrote.
Sixty-five men were recruited from the community; 35 were cocaine dependent according to DSM-IV-TR criteria and had been using cocaine in powdered or freebase form for an average of 15.3 years, having started at a mean age of 19.2 years. The mean age of the men was about 36 years in both groups. The remaining 30 had no history of cocaine use. Those with a lifetime history of psychotic disorders, neurologic illness or head trauma, metabolic or autoimmune disorders, or HIV infection were excluded.
The investigators decided to focus on men "to avoid confounding factors related to hormone fluctuations that affect appetite and weight in women," Dr. Ersche, also the lead investigator, said in an interview.
All participants were assessed via questionnaire for eating behavior, food and alcohol intake, other substance use, education and verbal intelligence, and impulsivity. Plasma leptin levels were assessed in both groups, and X-ray absorptiometry scans were used to determine body-composition measures such as fat mass, non-bone lean mass, and bone mineral density. Other measures, including body mass index (BMI), weight, height, waist-hip ratio, and skin-fold thickness, also were assessed. The fat mass index (FMI) was derived by dividing fat mass by height squared; analogous indices were derived for fat-free mass and lean mass.
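As a hypothetical illustration of the index calculation described above (the values below are invented examples, not study data), each body-composition index is simply the relevant mass divided by height squared:

```python
def mass_index(mass_kg: float, height_m: float) -> float:
    """Divide a body-composition mass (kg) by height squared (m^2)."""
    return mass_kg / height_m ** 2

# Invented example values for a 1.75 m participant
fmi = mass_index(18.0, 1.75)         # fat mass index
lean_index = mass_index(55.0, 1.75)  # analogous lean-mass index

print(round(fmi, 2))         # 5.88
print(round(lean_index, 2))  # 17.96
```

The same convention underlies BMI, which uses total body mass in the numerator.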
A multivariate analysis of variance was conducted on all findings, and covariates such as tobacco and other substance use were taken into account.
Dr. Ersche and her associates found no between-group differences in BMI, waist-hip ratio, or skin-fold thickness. Cocaine users, however, reported higher fat, carbohydrate, and alcohol consumption and higher levels of uncontrolled eating (although they were more likely to skip breakfast), yet their FMI was significantly lower than that of the nonusers. Leptin levels correlated significantly with BMI and FMI in both groups. In the cocaine-user group, leptin levels, but not BMI or FMI, correlated with the duration of cocaine use. Controlling for covariates did not change the results.
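A minimal sketch of the kind of correlation analysis reported above, using synthetic values (not study data) and SciPy's `pearsonr`; the leptin values are constructed to track BMI, so a strong positive correlation is expected by design:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
bmi = rng.normal(26.0, 3.0, size=30)               # synthetic BMI values
leptin = 0.8 * bmi + rng.normal(0.0, 1.0, size=30)  # correlated by construction

r, p = pearsonr(bmi, leptin)
print(f"r = {r:.2f}, p = {p:.3g}")  # strong positive correlation
```

The study's duration-of-use analysis would follow the same pattern, substituting years of cocaine use for BMI.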
The investigators concluded that while current treatment programs promote healthy eating to help recovering cocaine users manage their weight, "We argue that a more nuanced view is needed, one that acknowledges a major disturbance in eating behaviors and metabolism."
This study was funded by a research grant from the Medical Research Council and the Wellcome Trust. Dr. Ersche is supported by the Medical Research Council; neither she nor her colleagues reported any relevant disclosures.
FROM APPETITE
Major finding: Weight loss in chronic cocaine users appears to reflect altered fat regulation rather than appetite suppression.
Data source: Cross-sectional, case-control study of 65 men recruited from community, including 35 self-reported cocaine-dependent men.
Disclosures: This study was funded by a research grant from the Medical Research Council and the Wellcome Trust. Dr. Ersche is supported by the Medical Research Council; neither she nor her colleagues reported any relevant disclosures.
Poor sleep quality associated with poor cannabis cessation outcomes
Poor sleep quality was associated with higher rates of mean cannabis use and lower rates of cessation during the first 6 months following a self-guided attempt to quit, although sleep efficiency/duration was not linked to cannabis use outcomes, a study of U.S. veterans has shown.
Citing previous research associating sleep quality with cannabis quit outcomes, including a study that found 48%-77% of cannabis users reported either relapsing or turning to other substances in order to improve their sleep, Kimberly A. Babson, Ph.D., and her colleagues hypothesized that perceived sleep quality rather than actual sleep efficiency/duration would affect cannabis use following a self-guided quit attempt. The results were published recently in Addictive Behaviors (2013;38:2707-13).
Dr. Babson and her colleagues, all affiliated with the VA Palo Alto Health Care System, Menlo Park, Calif., recruited 102 veterans (95% male) with a mean age of 51 years. All participants met DSM-5 criteria for cannabis dependence, had a self-reported desire to quit of 5 or greater on a scale of 0 to 10 (0 = no interest in quitting, 10 = definite interest), and had a self-reported desire to follow a self-guided cessation program. Candidates were excluded if they already had decreased their cannabis use by 25% or more in the previous month, were pregnant or breast-feeding, or had suicidal ideation.
Anxiety disorders were present in 88% of participants, and 43% had a co-occurring mood disorder. Those with and without an anxiety disorder or mood disorder (analyzed separately) did not differ significantly in terms of perceived sleep quality (self-reported overall quality of sleep), sleep efficiency/duration (self-reported quantity of sleep), or cannabis use over the course of the study, the authors reported.
Dependence on other substances was present in 29% of the study group; 4% had a substance abuse disorder. Just more than half of the entire group reported using a sleep medication at baseline, although this group "did not differ from those who did not use a sleep medication at baseline in terms of perceived sleep quality during the study period." Participants using a sleep medication used less cannabis at baseline than did those not using a sleep medication; however, the two groups did not differ when it came to cannabis use over the course of the study.
Participants were rated on the Clinician-Administered PTSD Scale, minus evaluation for two sleep-related symptoms – nightmares and insomnia. The severity of cannabis withdrawal symptoms also was assessed at baseline and at each follow-up assessment over the 6-month course, using the Marijuana Withdrawal Checklist-Short Form. Posttraumatic stress disorder (PTSD) symptom severity was used as a covariate; withdrawal symptom severity was treated as a time-varying covariate.
The Pittsburgh Sleep Quality Index was used to determine perceived sleep quality in terms of subjective sleep quality, sleep latency, sleep disturbances, and daytime dysfunction. Sleep efficiency/duration was assessed in terms of the combined amount of time spent in bed, whether or not the person was asleep (efficiency), and the actual amount of time spent sleeping (duration).
To determine links between cannabis use and pre- and post-quit sleep disturbances, the investigators created two generalized linear mixed models, each including linear and quadratic terms for time. The first model was not adjusted for covariates; the second was adjusted for baseline age, PTSD symptom severity, and withdrawal symptom severity as measured at each follow-up over the 6-month period.
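A minimal sketch (not the authors' code) of a mixed model with linear and quadratic time terms, fit to synthetic data with statsmodels; the variable names (`use`, `time`, `sleep_quality`, `subject`) are illustrative assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_obs = 30, 6
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_obs),
    "time": np.tile(np.arange(n_obs), n_subj).astype(float),
})
df["sleep_quality"] = rng.normal(size=len(df))
# Build in a sharp initial decline that levels off (negative linear,
# positive quadratic term) plus a sleep-quality effect on mean use.
df["use"] = (10.0 - 2.0 * df["time"] + 0.2 * df["time"] ** 2
             - 0.5 * df["sleep_quality"]
             + rng.normal(scale=0.5, size=len(df)))

# Random intercept per subject; fixed linear and quadratic time effects
fit = smf.mixedlm("use ~ time + I(time ** 2) + sleep_quality",
                  df, groups=df["subject"]).fit()
print(fit.params["time"] < 0, fit.params["I(time ** 2)"] > 0)
```

The quadratic term is what lets the model capture use that drops quickly after the quit attempt and then flattens out.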
The researchers found a significant association between perceived sleep quality and the intercept (mean level) of cannabis use, as well as significant linear and quadratic slopes showing that cannabis use declined sharply at first and then leveled off.
"This indicates that over the course of the study (i.e., aggregated across time points), lower perceived sleep quality was associated with higher mean cannabis use," the investigators wrote. "However, perceived sleep quality did not interact with the linear or quadratic slopes, meaning that the association between perceived sleep quality and mean cannabis use could not be tied to a discrete time during the follow-up period." When the investigators adjusted the results for covariates, they found that the results were consistent.
Dr. Babson and her colleagues concluded that their findings indicated one of two possibilities: Either individuals use cannabis to regulate sleep, or chronic use disrupts sleep. Further prospective studies of mechanisms such as emotional regulation would help explain the associations, they wrote. However, their findings, while potentially limited by the study’s self-report nature and the question of whether it is generalizable to a larger population, could serve to help "inform the timing of sleep interventions in cannabis treatments in order to optimize outcomes."
This study was supported by a Veterans Affairs (VA) Clinical Science Research and Development Career Development Award, and by funds from the VA Health Services Research and Development Service. The authors reported no relevant disclosures.
FROM ADDICTIVE BEHAVIORS
Major finding: Lower perceived sleep quality was linked to higher mean cannabis use during cessation attempts.
Data source: A study of 102 U.S. veterans engaged in a self-guided cannabis cessation program.
Disclosures: This study was supported by a Veterans Affairs (VA) Clinical Science Research and Development Career Development Award, and by funds from the VA Health Services Research and Development Service. The authors reported no relevant disclosures.
Antibiotic-resistant infections in U.S. top 2 million annually
Each year in the United States, more than 2 million people contract drug-resistant infections and 23,000 die, primarily in a hospital setting. The figures are part of a first-of-its-kind report from the Centers for Disease Control and Prevention, detailing in actual numbers the extent of the nation’s growing antibiotic crisis.
"One of the reasons we’re issuing the report now is that it is not too late. If we’re not careful, the medicine chest will be empty when we go there to look for a life-saving antibiotic for a deadly infection. But if we act now, we can preserve these medications, while we work on the development of new medications," Dr. Thomas R. Frieden, the CDC’s director, said in a media briefing.
Noting that the numbers are only "bare minimum, very conservative estimates," Dr. Frieden said that many infections are resistant to more than just one medication and that for health care-associated infections, some cases are still unaccounted for in nonhospital settings such as nursing homes and dialysis facilities.
Of particular concern, according to the report, is Clostridium difficile, whose annual death toll of an estimated 14,000 accounts for more than half of all deaths from drug-resistant infection. This is the case even though C. difficile itself is not especially resistant to treatment; rather, infection takes hold when antibiotics used to treat other conditions disrupt the normal gut flora.
The report delineates three tiers of public health threat. The first, "urgent," includes C. difficile, carbapenem-resistant Enterobacteriaceae (CRE), and drug-resistant Neisseria gonorrhoeae, the last included because of concern that it will develop cephalosporin resistance. "These threats may not be currently widespread but have the potential to become so and require urgent public health attention to identify infections and to limit transmission," the report states.
The second tier comprises "serious" threats: multidrug-resistant Acinetobacter; drug-resistant Campylobacter; fluconazole-resistant Candida; extended-spectrum beta-lactamase-producing Enterobacteriaceae; vancomycin-resistant Enterococcus; multidrug-resistant Pseudomonas aeruginosa; drug-resistant nontyphoidal Salmonella; drug-resistant Salmonella Typhi; drug-resistant Shigella; methicillin-resistant Staphylococcus aureus; drug-resistant Streptococcus pneumoniae; and drug-resistant tuberculosis.
The CDC considers these "significant" threats that "will worsen and may become urgent without ongoing public health monitoring and prevention activities."
The third level is "concerning," characterized by currently low threat of antibiotic resistance with several antibiotic therapies available. Vancomycin-resistant S. aureus, erythromycin-resistant Streptococcus group A and clindamycin-resistant Streptococcus group B are in this category.
Seven criteria were used to assess each threat: clinical and economic impact, total number of infections, incidence rate, 10-year projected incidence, transmissibility, availability of effective antibiotics, and barriers to prevention. Dr. Frieden outlined four measures the CDC believes are key to turning the tide in the public’s favor, including infection prevention (safe food handling, hand washing, etc.) and state-of-the-art surveillance to track drug-resistant infections nationally. But antibiotic stewardship, and research and development, received the most discussion during the briefing.
Dr. Michael Bell, deputy director of the CDC’s division of health care quality promotion, said that since the advent of penicillin: "We’ve seen every last antibiotic end up having substantial resistance. Just having a new drug is not going to be enough.
"We applaud FDA’s efforts to make new drug development less burdensome and more rapid, but at the same time, we are reassured to know they are proven and safe. We need to make sure we intensely maintain stewardship and that we don’t waste yet another precious drug."
Dr. Frieden pointed to collaboration with the Centers for Medicare & Medicaid Services, which he said has begun "incentivizing" hospitals to track infection rates and use good stewardship practices. He also stated that patients who insist their doctors prescribe antibiotics need to understand that getting "more medication" isn’t the solution, but the "right medication" is.
Although drug-resistance–related mortality occurs in the community at large, both Dr. Frieden and Dr. Bell stressed that the first line of action was to curb infection rates in the health care setting and pointed to several recent CDC initiatives, such as the National Healthcare Safety Network, a surveillance database that allows health care facilities and departments of health across the country to share data about outbreaks in their communities, among other information.
When asked about environmental factors contributing to the rising levels of drug-resistance, such as the use of antibiotics in agricultural livestock production, Dr. Bell responded: "We support appropriate antibiotic use across the board. There is always going to be bleed-over in the environment and the ecosystem. Where some of these bacteria are making people sick, there is an overlap with intensive care units, so we do continue to focus a great deal on the health care system. That’s the priority."
In June, Sen. Dianne Feinstein (D-Calif.) sponsored the Preventing Antibiotic Resistance Act, which is aimed at tightening the way in which antibiotics are dispensed to animals. The bill was written in response to a March statement from Dr. Frieden calling CRE "a nightmare bacteria" for which there is no treatment and issuing an "urgent warning" to leaders to take action. In the briefing about the report ranking the level of threat, Dr. Frieden said that there had been a "cross-government focus" in response to the CDC’s warnings and that he could see "a ray of hope."
Neither Dr. Frieden nor Dr. Bell had relevant disclosures to report.
Although drug-resistance–related mortality occurs in the community at large, both Dr. Frieden and Dr. Bell stressed that the first line of action was to curb infection rates in the health care setting and pointed to several recent CDC initiatives, such as the National Healthcare Safety Network, a surveillance database that allows health care facilities and departments of health across the country to share data about outbreaks in their communities, among other information.
When asked about environmental factors contributing to the rising levels of drug-resistance, such as the use of antibiotics in agricultural livestock production, Dr. Bell responded: "We support appropriate antibiotic use across the board. There is always going to be bleed-over in the environment and the ecosystem. Where some of these bacteria are making people sick, there is an overlap with intensive care units, so we do continue to focus a great deal on the health care system. That’s the priority."
In June, Sen. Dianne Feinstein, (D-Calif.) sponsored the Preventing Antibiotic Resistance Act, which is aimed at tightening the way in which antibiotics are dispensed to animals. The bill was written in response to a March statement from Dr. Frieden calling CRE "a nightmare bacteria" for which there is no treatment and issuing an "urgent warning" to leaders to take action. In the briefing about the report ranking the level of threat, Dr. Frieden said that there had been a "cross-government focus" in response to the CDC’s warnings and that he could see "a ray of hope."
Neither Dr. Frieden nor Dr. Bell had relevant disclosures to report.
Each year in the United States, more than 2 million people contract drug-resistant infections and 23,000 die, primarily in a hospital setting. The figures are part of a first-of-its-kind report from the Centers for Disease Control and Prevention, detailing in actual numbers the extent of the nation’s growing antibiotic crisis.
"One of the reasons we’re issuing the report now is that it is not too late. If we’re not careful, the medicine chest will be empty when we go there to look for a life-saving antibiotic for a deadly infection. But if we act now, we can preserve these medications, while we work on the development of new medications," Dr. Thomas R. Frieden, the CDC’s director, said in a media briefing.
Noting that the numbers are only "bare minimum, very conservative estimates," Dr. Frieden said that many infections are resistant to more than just one medication and that for health care-associated infections, some cases are still unaccounted for in nonhospital settings such as nursing homes and dialysis facilities.
Of particular concern, according to the report, is Clostridium difficile, because its annual death toll of an estimated 14,000, accounts for more than half of all deaths from drug-resistant infection. This is the case even though C. difficile is a bacterium that – although it is not particularly resistant to treatment – is made stronger when antibiotics are used to treat other infections.
The public health threat, as delineated in the report, is tertiary. The first level is "urgent" and includes C. difficile, carbapenem-resistant Enterobacteriaceae, and drug-resistant Neisseria gonorrhoeae for fear of it developing cephalosporin resistance. "These threats may not be currently widespread but have the potential to become so and require urgent public health attention to identify infections and to limit transmission," the report states.
Second most threatening are "serious" infections and include multidrug-resistant Acinetobacter; drug-resistant Campylobacter; fluconazole-resistant Candida; extended spectrum beta-lactamase producing Enterobacteriaceae; vancomycin-resistant Enterococcus; multidrug-resistant Pseudomonas aeruginosa; drug-resistant, nontyphoidal Salmonella; drug-resistant S. typhi; drug-resistant Shigella; methicillin-resistant Staphylococcus aureus; drug-resistant Streptococcus pneumoniae; and drug-resistant tuberculosis.
The CDC considers these "significant" threats that "will worsen and may become urgent without ongoing public health monitoring and prevention activities."
The third level is "concerning," characterized by currently low threat of antibiotic resistance with several antibiotic therapies available. Vancomycin-resistant S. aureus, erythromycin-resistant Streptococcus group A and clindamycin-resistant Streptococcus group B are in this category.
Seven criteria were used in the assessment of threat: the clinical and economic impact of each, the combined numbers of infection, the incidence rate, the 10-year projection of incidence, transmissibility, availability of effective antibiotics, and barriers to prevention. Dr. Frieden outlined what he said the CDC believed were four important measures for turning the tide in the public’s favor, including infection prevention (safe food handling, hand washing, etc.) and using state-of-the-art surveillance to track the occurrence of drug-resistant infections nationally. But antibiotic stewardship, and research and development received the most discussion during the briefing.
Dr. Michael Bell, deputy director of the CDC’s division of health care quality promotion, said that since the advent of penicillin: "We’ve seen every last antibiotic end up having substantial resistance. Just having a new drug is not going to be enough.
"We applaud FDA’s efforts to make new drug development less burdensome and more rapid, but at the same time, we are reassured to know they are proven and safe. We need to make sure we intensely maintain stewardship and that we don’t waste yet another precious drug."
Dr. Frieden pointed to collaboration with the Center for Medicare and Medicaid Services, which he said has begun "incentivizing" hospitals to track infection rates and use good stewardship practices. He also stated that patients who insist their doctors prescribe antibiotics need to understand that getting "more medication" isn’t the solution, but the "right medication" is.
Although drug-resistance–related mortality occurs in the community at large, both Dr. Frieden and Dr. Bell stressed that the first line of action was to curb infection rates in the health care setting and pointed to several recent CDC initiatives, such as the National Healthcare Safety Network, a surveillance database that allows health care facilities and departments of health across the country to share data about outbreaks in their communities, among other information.
When asked about environmental factors contributing to the rising levels of drug-resistance, such as the use of antibiotics in agricultural livestock production, Dr. Bell responded: "We support appropriate antibiotic use across the board. There is always going to be bleed-over in the environment and the ecosystem. Where some of these bacteria are making people sick, there is an overlap with intensive care units, so we do continue to focus a great deal on the health care system. That’s the priority."
In June, Sen. Dianne Feinstein, (D-Calif.) sponsored the Preventing Antibiotic Resistance Act, which is aimed at tightening the way in which antibiotics are dispensed to animals. The bill was written in response to a March statement from Dr. Frieden calling CRE "a nightmare bacteria" for which there is no treatment and issuing an "urgent warning" to leaders to take action. In the briefing about the report ranking the level of threat, Dr. Frieden said that there had been a "cross-government focus" in response to the CDC’s warnings and that he could see "a ray of hope."
Neither Dr. Frieden nor Dr. Bell had relevant disclosures to report.
Antibiotic-related illness in U.S. tops 2 million annually
Each year in the United States, more than 2 million people contract drug-resistant infections and 23,000 die, primarily in a hospital setting. The figures are part of a first-of-its-kind report from the Centers for Disease Control and Prevention, detailing in actual numbers the extent of the nation’s growing antibiotic crisis.
"One of the reasons we’re issuing the report now is that it is not too late. If we’re not careful, the medicine chest will be empty when we go there to look for a life-saving antibiotic for a deadly infection. But if we act now, we can preserve these medications, while we work on the development of new medications," Dr. Thomas R. Frieden, the CDC’s director, said in a media briefing.
Noting that the numbers are only "bare minimum, very conservative estimates," Dr. Frieden said that many infections are resistant to more than just one medication and that for health care-associated infections, some cases are still unaccounted for in nonhospital settings such as nursing homes and dialysis facilities.
Of particular concern, according to the report, is Clostridium difficile: its annual death toll, an estimated 14,000, accounts for more than half of all deaths from drug-resistant infection. This is the case even though C. difficile is not itself particularly resistant to treatment; rather, it gains a foothold when antibiotics are used to treat other infections.
The public health threat, as delineated in the report, is divided into three tiers. The first level is "urgent" and includes C. difficile, carbapenem-resistant Enterobacteriaceae (CRE), and drug-resistant Neisseria gonorrhoeae, the last out of fear that it will develop cephalosporin resistance. "These threats may not be currently widespread but have the potential to become so and require urgent public health attention to identify infections and to limit transmission," the report states.
Second most threatening are "serious" infections, which include multidrug-resistant Acinetobacter; drug-resistant Campylobacter; fluconazole-resistant Candida; extended-spectrum beta-lactamase–producing Enterobacteriaceae; vancomycin-resistant Enterococcus; multidrug-resistant Pseudomonas aeruginosa; drug-resistant nontyphoidal Salmonella; drug-resistant Salmonella typhi; drug-resistant Shigella; methicillin-resistant Staphylococcus aureus; drug-resistant Streptococcus pneumoniae; and drug-resistant tuberculosis.
The CDC considers these "significant" threats that "will worsen and may become urgent without ongoing public health monitoring and prevention activities."
The third level is "concerning," characterized by a currently low threat of antibiotic resistance and the availability of several antibiotic therapies. Vancomycin-resistant S. aureus, erythromycin-resistant group A Streptococcus, and clindamycin-resistant group B Streptococcus fall into this category.
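The report's three-tier ranking lends itself to a compact summary. The sketch below is illustrative only; it encodes the structure described above, with a few representative pathogens drawn from each category as named in this article (the full report lists more):

```python
# Illustrative summary of the CDC report's three threat tiers,
# using representative pathogens named in the article (not the full list).
THREAT_TIERS = {
    "urgent": [
        "Clostridium difficile",
        "carbapenem-resistant Enterobacteriaceae",
        "drug-resistant Neisseria gonorrhoeae",
    ],
    "serious": [
        "multidrug-resistant Acinetobacter",
        "methicillin-resistant Staphylococcus aureus",
        "drug-resistant tuberculosis",
    ],
    "concerning": [
        "vancomycin-resistant Staphylococcus aureus",
        "erythromycin-resistant group A Streptococcus",
        "clindamycin-resistant group B Streptococcus",
    ],
}

def tier_of(pathogen: str) -> str:
    """Return the threat tier a pathogen is listed under, or 'unlisted'."""
    for tier, pathogens in THREAT_TIERS.items():
        if pathogen in pathogens:
            return tier
    return "unlisted"

print(tier_of("Clostridium difficile"))  # urgent
```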
Seven criteria were used in the threat assessment: the clinical and economic impact of each infection, the total number of infections, the incidence rate, the 10-year projection of incidence, transmissibility, the availability of effective antibiotics, and barriers to prevention. Dr. Frieden outlined what he said the CDC believed were four important measures for turning the tide in the public's favor, including infection prevention (safe food handling, hand washing, etc.) and state-of-the-art surveillance to track drug-resistant infections nationally; antibiotic stewardship and research and development, however, received the most discussion during the briefing.
Dr. Michael Bell, deputy director of the CDC’s division of healthcare quality promotion, said that, since the advent of penicillin, "we’ve seen every last antibiotic end up having substantial resistance. Just having a new drug is not going to be enough.
"We applaud FDA’s efforts to make new drug development less burdensome and more rapid, but at the same time, we are reassured to know they are proven and safe. We need to make sure we intensely maintain stewardship and that we don’t waste yet another precious drug."
Dr. Frieden pointed to collaboration with the Centers for Medicare & Medicaid Services, which he said has begun "incentivizing" hospitals to track infection rates and use good stewardship practices. He also said that patients who insist their doctors prescribe antibiotics need to understand that the solution is not "more medication" but the "right medication."
Although drug-resistance–related mortality occurs in the community at large, both Dr. Frieden and Dr. Bell stressed that the first line of action was to curb infection rates in the health care setting and pointed to several recent CDC initiatives, such as the National Healthcare Safety Network, a surveillance database that allows health care facilities and departments of health across the country to share data about outbreaks in their communities, among other information.
When asked about environmental factors contributing to the rising levels of drug-resistance, such as the use of antibiotics in agricultural livestock production, Dr. Bell responded: "We support appropriate antibiotic use across the board. There is always going to be bleed-over in the environment and the ecosystem. Where some of these bacteria are making people sick, there is an overlap with intensive care units, so we do continue to focus a great deal on the health care system. That’s the priority."
In June, Sen. Dianne Feinstein (D-Calif.) sponsored the Preventing Antibiotic Resistance Act, which aims to tighten the way antibiotics are dispensed to animals. The bill was written in response to a March statement in which Dr. Frieden called CRE "a nightmare bacteria" for which there is no treatment and issued an "urgent warning" to leaders to take action. In the briefing about the report ranking the level of threat, Dr. Frieden said that there had been a "cross-government focus" in response to the CDC’s warnings and that he could see "a ray of hope."
Neither Dr. Frieden nor Dr. Bell had relevant disclosures to report.
Major finding: Annually, 23,000 Americans die after contracting antibiotic-resistant infections.
Data source: U.S. Centers for Disease Control and Prevention report ranking threat levels of drug-resistant microbes.
Disclosures: Neither Dr. Frieden nor Dr. Bell had relevant disclosures to report.
MammoSite has comparatively high long-term complication rate
The rate of long-term complications such as palpable masses and telangiectasias was nearly five times higher in women who had MammoSite therapy, compared with those who underwent whole breast radiation therapy, results from a retrospective study showed.
Dr. Kari Rosenkranz and her associates at Dartmouth-Hitchcock Medical Center, Lebanon, N.H., reviewed the charts of all women who met criteria for brachytherapy and underwent MammoSite (n = 71) or whole breast radiation therapy (WBRT) (n = 245) at the center between 2003 and 2008.
The studied endpoints were the incidence of palpable masses at the lumpectomy site, telangiectasias, and local recurrence. The groups did not differ significantly in mean age (63.5 years), mean tumor size (1.1 cm), the percentage of patients with estrogen receptor–positive tumors (92% overall), or length of follow-up (median, 4 years).
In the MammoSite cohort with hormone receptor–positive tumors, 83% received adjuvant endocrine therapy; 94% of the WBRT group with hormone receptor–positive tumors had endocrine therapy. No significant difference was found in systemic chemotherapy rates.
Long-term complications (palpable masses, telangiectasias, or both) occurred in 42% of MammoSite patients, compared with 9% of the WBRT group (J. Am. Coll. Surg. 2013;217:497-502).
During follow-up in the MammoSite group, the incidence rate of palpable mass detection at the lumpectomy site was nearly 27%; for the WBRT group, the rate was approximately 7%. MammoSite patients were three times more likely to require a core biopsy of the mass to rule out malignancy than were WBRT patients (16.9% vs. 4.9%, respectively). Telangiectasia was six times more likely to develop in MammoSite patients than in WBRT patients (24% vs. 4%).
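The risk multiples quoted above follow directly from the reported complication rates. As a quick illustrative check (all rates are the percentages reported in the study; the helper function is ours, not the authors'):

```python
# Quick check of the risk ratios quoted in the article
# (rates are the percentages reported for MammoSite vs. WBRT).
def risk_ratio(mammosite_rate: float, wbrt_rate: float) -> float:
    """Ratio of the complication rate in the MammoSite group to the WBRT group."""
    return mammosite_rate / wbrt_rate

print(round(risk_ratio(42.0, 9.0), 1))   # any long-term complication: 4.7
print(round(risk_ratio(16.9, 4.9), 1))   # core biopsy: 3.4
print(round(risk_ratio(24.0, 4.0), 1))   # telangiectasia: 6.0
```

The 42% vs. 9% comparison yields the "nearly five times" figure in the lead, and 24% vs. 4% the sixfold telangiectasia risk.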
Dr. Rosenkranz and her colleagues noted that a prospective, randomized clinical trial that began in 2005, sponsored by the National Surgical Adjuvant Breast and Bowel Project (NSABP)/Radiation Therapy Oncology Group (RTOG), is currently underway to compare WBRT with partial breast radiation therapies such as MammoSite. The primary endpoint of this study, conducted in women who have had surgery for ductal carcinoma in situ or stage I or stage II breast cancer, is breast tumor recurrence; secondary endpoints include toxicity. The study is expected to end in 2015, according to the researchers.
"Until this prospective, randomized trial reports, the increased rate of long-term local toxicity found in our institution’s experience with MammoSite brachytherapy should be considered when counseling women on options for adjuvant radiation therapy after breast-conserving surgery," concluded Dr. Rosenkranz and her associates.
The researchers reported no relevant disclosures.
The rate of long-term complications such as palpable masses and telangiectasias was nearly five times higher in women who had MammoSite therapy, compared with those who underwent whole breast radiation therapy, results from a retrospective study showed.
Dr. Kari Rosenkranz and her associates at Dartmouth Hitchcock Medical Center, Hanover, N.H., analyzed the data charts of all women who met criteria for brachytherapy and underwent MammoSite (n = 71) or whole breast radiation therapy (WBRT) (n = 245) at the center between 2003 and 2008.
The incidence of palpable masses at the site of the lumpectomy, telangiectasias, and local recurrence were the studied endpoints. No significant differences existed between the study groups regarding age (average was 63.5 years), mean size of tumor (average was 1.1 cm), the percentage of patients with estrogen receptor–positive tumors (92% in total), or the length of follow-up (median was 4 years).
In the MammoSite cohort with hormone receptor–positive tumors, 83% received adjuvant endocrine therapy; 94% of the WBRT group with hormone receptor–positive tumors had endocrine therapy. No significant difference was found in systemic chemotherapy rates.
Long-term complications (palpable masses, telangiectasias, or both) occurred in 42% of MammoSite patients, compared with 9% of the WBRT group (J. Am. Coll. Surg. 2013;217:497-502).
During follow-up, a palpable mass was detected at the lumpectomy site in nearly 27% of the MammoSite group, compared with approximately 7% of the WBRT group. MammoSite patients were more than three times as likely as WBRT patients to require a core biopsy of the mass to rule out malignancy (16.9% vs. 4.9%), and six times as likely to develop telangiectasia (24% vs. 4%).
Dr. Rosenkranz and her colleagues noted that a prospective, randomized clinical trial sponsored by the National Surgical Adjuvant Breast and Bowel Project (NSABP)/Radiation Therapy Oncology Group (RTOG), which began in 2005, is underway to compare WBRT with partial breast radiation therapies such as MammoSite. In women who have had surgery for ductal carcinoma in situ or stage I or stage II breast cancer, the trial's primary endpoint is breast tumor recurrence; secondary endpoints include toxicity. The study is expected to end in 2015, according to the researchers.
"Until this prospective, randomized trial reports, the increased rate of long-term local toxicity found in our institution’s experience with MammoSite brachytherapy should be considered when counseling women on options for adjuvant radiation therapy after breast-conserving surgery," concluded Dr. Rosenkranz and her associates.
The researchers reported no relevant disclosures.
FROM THE JOURNAL OF THE AMERICAN COLLEGE OF SURGEONS
Major finding: Forty-two percent of MammoSite patients had long-term complications, vs. 9% of those who had whole breast radiation therapy.
Data source: Retrospective study of 71 women who underwent MammoSite brachytherapy and 245 who had whole breast radiation therapy at a single academic medical center between 2003 and 2008.
Disclosures: Dr. Rosenkranz and her associates reported no relevant disclosures.