Are Mortality Benefits from Bariatric Surgery Observed in a Nontraditional Surgical Population? Evidence from a VA Dataset
Study Overview
Objective. To determine the association between bariatric surgery and long-term mortality rates among patients with severe obesity.
Design. Retrospective cohort study.
Setting and participants. This analysis relied upon data from Veterans Affairs (VA) patients who underwent bariatric surgery between 2000 and 2011 and a group of matched controls. For this data-only study, a waiver of informed consent was obtained. Investigators first used the VA Surgical Quality Improvement Program (SQIP) dataset to identify all bariatric procedures performed at VA hospitals between 2000 and the end of 2011, excluding patients with any evidence of a body mass index (BMI) less than 35 kg/m2, those with certain baseline diagnoses considered contraindications to surgery, and those with prolonged inpatient stays immediately before their surgical date. No upper or lower age limits, and no upper BMI limit, appear to have been specified.
Once all surgical patients were identified, the investigators sought a group of similar control patients who had not undergone surgery. They first pulled candidate matches for each surgical patient based on having the same sex, age-group (within 5 years), BMI category (35-40, 40-50, >50), diabetes status (present or absent), racial category, and VA region. From these candidates, they selected up to 3 of the closest matches on age, BMI, and a composite comorbidity score based on inpatient and outpatient claims in the year prior to surgery. The authors specified that controls could convert to surgical patients during the follow-up period, in which case their data were censored beginning at the date of the surgical procedure. However, if a control patient underwent surgery during 2012 or 2013, censoring was not possible because the dataset used to identify surgeries contained only procedures performed through the end of 2011.
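As a rough illustration of this two-stage matching logic (the column names, distance metric, and matching with replacement are assumptions for this sketch, not details from the study), the selection of up to 3 controls per surgical patient might look like the following in Python:

```python
import pandas as pd

def select_controls(surgical: pd.DataFrame, candidates: pd.DataFrame, k: int = 3) -> pd.DataFrame:
    """Two-stage matching sketch: exact strata first, then nearest neighbors.

    Stage 1: exact match on sex, 5-year age band, BMI category, diabetes
             status, race category, and VA region.
    Stage 2: among exact matches, keep the k candidates closest on age,
             BMI, and a claims-based comorbidity score.
    Column names are hypothetical; matching is with replacement for simplicity.
    """
    strata = ["sex", "age_band", "bmi_cat", "diabetes", "race", "va_region"]
    selected = []
    for _, pt in surgical.iterrows():
        pool = candidates
        for col in strata:  # stage 1: restrict to the exact-match stratum
            pool = pool[pool[col] == pt[col]]
        if pool.empty:
            continue
        # stage 2: crude absolute-difference distance (a real analysis would scale these)
        dist = (
            (pool["age"] - pt["age"]).abs()
            + (pool["bmi"] - pt["bmi"]).abs()
            + (pool["comorbidity_score"] - pt["comorbidity_score"]).abs()
        )
        best = pool.loc[dist.nsmallest(k).index].copy()
        best["matched_to"] = pt["patient_id"]
        selected.append(best)
    return pd.concat(selected, ignore_index=True) if selected else pd.DataFrame()
```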
Main outcome measures. The primary outcome of interest was time to death (any cause) beginning at the date of surgery (or baseline date for nonsurgical controls) through the end of 2013. The investigators built Cox proportional hazards models to evaluate survival using multivariable models to adjust for baseline characteristics, including those involved in the matching process, as well as others that might have differentially impacted both likelihood of undergoing surgery and mortality risk. These included marital status, insurance markers of low income or disability, and a number of comorbid medical and psychiatric diagnoses.
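For readers who want to see what such a model looks like in code, here is a minimal sketch using the Python lifelines package; the file name, column names, and covariate set are placeholders rather than the study's actual specification:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis file with one row per patient:
#   time_years - follow-up from surgery (or baseline for controls) to death or end of 2013
#   died       - 1 if death from any cause occurred, 0 if censored
#   surgery    - 1 for bariatric surgery, 0 for matched control
#   remaining columns - baseline covariates used for adjustment
df = pd.read_csv("va_cohort.csv")

cph = CoxPHFitter()
# By default, every column other than the duration and event columns
# enters the model as a covariate.
cph.fit(df, duration_col="time_years", event_col="died")
cph.print_summary()                    # exp(coef) gives adjusted hazard ratios with 95% CIs
print(cph.hazard_ratios_["surgery"])   # adjusted HR for surgery vs. control
```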
In addition to the main analyses, the investigators also looked for effect modification of the surgery-mortality relationship by a patient’s sex and presence or absence of diabetes at the time of surgery, as well as the time period in which their surgery was conducted, dichotomized around the year 2006. This year was selected for several reasons, including that it was the year in which a VA-wide comprehensive weight management and surgical selection program was instituted.
Results. The surgical cohort comprised 2500 patients, and there were 7462 matched controls. The surgical and control groups were similar with respect to matched baseline characteristics, with balance assessed using standardized differences (rather than t tests or chi-square tests). Mean (SD) age was 52 (8.8) years for surgical patients versus 53 (8.7) years for controls. In both groups, 74% of patients were men and 81% were white (ethnicity not specified). Mean (SD) baseline BMI was 47 (7.9) kg/m2 in the surgical group and 46 (7.3) kg/m2 among controls.
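For readers less familiar with standardized differences as a balance metric, the sketch below shows the conventional two-group formula for a continuous covariate, applied to the reported age values; the formula is the standard one and is not quoted from the paper:

```python
from math import sqrt

def standardized_difference(mean_a: float, sd_a: float, mean_b: float, sd_b: float) -> float:
    """Standardized difference between two group means for a continuous covariate.

    Unlike t tests or chi-square tests, this balance measure does not grow
    with sample size; smaller absolute values indicate better balance.
    """
    return (mean_a - mean_b) / sqrt((sd_a ** 2 + sd_b ** 2) / 2)

# Reported mean (SD) age: 52 (8.8) years for surgical patients, 53 (8.7) years for controls.
print(round(standardized_difference(52, 8.8, 53, 8.7), 2))  # approximately -0.11
```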
Some between-group differences were present for baseline characteristics that had not been included in the matching protocol. More surgical patients than controls had diagnoses of hypertension (80% vs. 70%), dyslipidemia (61% vs. 52%), arthritis (27% vs. 15%), depression (44% vs. 32%), GERD (35% vs. 19%), and fatty liver disease (6.6% vs. 0.6%). In contrast, more control patients than surgical patients had diagnoses of alcohol abuse (6.2% vs. 3.9%) and schizophrenia (4.9% vs. 1.8%). Although several procedure types were represented in the cohort, most procedures were Roux-en-Y gastric bypasses (RYGB): 53% of procedures were open RYGB and 21% were laparoscopic RYGB, while 10% were adjustable gastric bands (AGB) and 15% were vertical sleeve gastrectomies (VSG).
Mortality was lower among surgical patients than among matched controls during a mean follow-up of 6.9 years for surgical patients and 6.6 years for controls. The 1-, 5-, and 10-year cumulative mortality rates for surgical patients were 2.4%, 6.4%, and 13.8%, respectively. Unadjusted mortality rates for nonsurgical controls were lower initially (1.7% at 1 year) but substantially higher at years 5 (10.4%) and 10 (23.9%). In multivariable Cox models, the hazard ratio (HR) for mortality in bariatric patients versus controls was nonsignificant at 1 year of follow-up. However, between 1 and 5 years after surgery (or after baseline), multivariable models showed an HR (95% CI) of 0.45 (0.36–0.56) for mortality among surgical patients versus controls. Beyond 5 years of follow-up, the HR was similar (0.47, 95% CI 0.39–0.58). The year in which a patient underwent surgery (before or after 2006) did affect mortality during the first postoperative year: those who had earlier procedures (2000–2005) had a significantly higher risk of death in that year relative to nonoperative controls (HR 1.66, 95% CI 1.19–2.33). No significant sex or diabetes interactions were observed for the surgery-mortality relationship in multivariable Cox models. No information was provided on the breakdown of causes of death within the broader all-cause mortality outcome.
Conclusion. Bariatric surgery was associated with significantly lower all-cause mortality among surgical patients in the VA over a 5- to 14-year follow-up period compared with a group of severely obese VA patients who did not undergo surgery.
Commentary
Rates of severe obesity (BMI ≥ 35 kg/m2) have risen faster than rates of obesity overall in the United States over the past decade [1], driving clinicians, patients, and payers to search for effective treatments for this condition. Bariatric surgery has emerged as the most effective treatment for severe obesity; however, the existing surgical literature is dominated by studies with short- or medium-term postoperative follow-up and homogeneous participant populations containing large numbers of younger non-Hispanic white women. Research from the Swedish Obese Subjects (SOS) study, as well as smaller US-based studies, has suggested that severely obese patients who undergo bariatric surgery have better long-term survival than their nonsurgical counterparts [2,3]. Counter to this finding, a previous medium-term study using data from VA hospitals did not find that surgery conferred a mortality benefit in this largely male, older, and sicker patient population [4]. The current study, by the same group of investigators, updates that earlier work with more recent surgical data and a longer follow-up period to determine whether a survival benefit emerges for VA patients undergoing bariatric surgery.
A major strength of this study was its use of a large and comprehensive clinical dataset, an advantage shared by many studies that draw on VA data. The availability of clinical data such as BMI, along with diagnostic codes and sociodemographic variables, allowed the authors to match and adjust for a number of potential confounders of the surgery-mortality relationship. Another valuable feature of VA data is that members of this health care system can often be followed regardless of their location, because the unified medical record transfers between states. This contrasts with many claims-based or single-center studies of surgery, in which patients are lost to follow-up if they move or change insurance providers. This study clearly benefited from this aspect of VA data, with a mean postoperative follow-up of over 5 years in both study groups, much longer than is typically observed in bariatric surgical studies and probably a necessary feature for examining a rarer outcome such as mortality (as opposed to weight loss or diabetes remission). Another clear contribution of this study is its focus on a group of patients not typical of bariatric cohorts: this group was slightly older and sicker, with far more men than women, and therefore at much higher risk of mortality than the typically younger female patients who make up most study populations.
Although the authors adjusted for many factors when comparing the surgical and nonsurgical groups, it is possible, as with any observational study, that unmeasured confounders were present. Of particular concern are psychosocial and behavioral characteristics that may be linked both to a person's likelihood of undergoing surgery and to their mortality risk. It is worth noting, for example, that far more patients in the nonsurgical group carried a diagnosis of schizophrenia, and that the rate of schizophrenia in this severely obese group was much higher than in the general population. This pattern may reflect the weight-gain-promoting effects of antipsychotic medications and the unfortunate reality that patients with severe obesity and severe mental illness may be less equipped to seek out surgery (or less likely to be viewed as acceptable candidates) than those without severe mental illness. One limitation mentioned by the authors was that control group patients who underwent surgery in 2012 or 2013 would not have been recognized (and thus would not have had their data censored), possibly leading to misclassification of exposure for some amount of person-time during follow-up. In general, though, this phenomenon is unlikely to have affected the findings, given both the relative infrequency of crossover observed in the cohort before 2011 and the relatively small amount of person-time that any later crossovers would have contributed in the final years of the study.
Although codes for baseline disease states were adjusted for in multivariable analyses, the surgical patients were in general a medically sicker group at baseline than control patients. As the authors point out, if anything, this should have biased the findings toward a higher mortality rate in the surgical group, the opposite of what was found. Further strengthening the observed association between surgery and survival is the mix of procedure types included in this study. Over half of the procedures were open RYGB surgeries, with far fewer of the more modern and lower-risk procedures (eg, laparoscopic RYGB) represented. This mix of procedures would also be expected to overestimate mortality in surgical patients relative to what might have been observed if all patients had been drawn from later years of the cohort, as surgical technique evolved.
Applications for Clinical Practice
This study adds to the evidence that patients with severe obesity who undergo bariatric surgery have a lower risk of death up to 10 years after surgery compared with patients who do not have these procedures. The findings should provide encouragement, particularly for the management of older adults with longstanding comorbidities. Those who are strongly motivated to pursue weight loss surgery, and who are deemed good candidates by bariatric teams, may add years to their lives by undergoing one of these procedures. As always, however, the quality of life patients can expect after surgery, and a realistic understanding of the ways in which surgery will fundamentally change their lifestyle, must be a central part of the discussion.
—Kristina Lewis, MD, MPH
1. Sturm R, Hattori A. Morbid obesity rates continue to rise rapidly in the United States. Int J Obes 2013;37:889–91.
2. Sjostrom L, Narbro K, Sjostrom CD, et al. Effects of bariatric surgery on mortality in Swedish obese subjects. N Engl J Med 2007;357:741–52.
3. Adams TD, Gress RE, Smith SC, et al. Long-term mortality after gastric bypass surgery. N Engl J Med 2007;357:753–61.
4. Maciejewski ML, Livingston EH, Smith VA, et al. Survival among high-risk patients after bariatric surgery. JAMA 2011;305:2419–26.
Perfect Depression Care Spread: The Traction of Zero Suicides
From The Menninger Clinic, Houston, TX.
Abstract
- Objective: To summarize the Perfect Depression Care initiative and describe recent work to spread this quality improvement initiative.
- Methods: We summarize the background and methodology of the Perfect Depression Care initiative within the specialty behavioral health care setting and then describe the application of this methodology to 2 examples of spreading Perfect Depression Care to general medical settings: primary care and general hospitals.
- Results: In the primary care setting, Perfect Depression Care spread successfully in association with the development and implementation of a practice guideline for managing the potentially suicidal patient. In the general hospital setting, Perfect Depression Care is spreading successfully in association with the development and implementation of a simple and efficient tool to screen not for suicide risk specifically, but for common psychiatric conditions associated with increased risk of suicide.
- Conclusion: Both examples of spreading Perfect Depression Care to general medical settings illustrate the social traction of “zero suicides,” the audacious and transformative goal of the Perfect Depression Care Initiative.
Each year depression affects roughly 10% of adults in the United States [1]. The leading cause of disability in developed countries, depression results in substantial medical care expenditures, lost productivity, and absenteeism [1]. It is a chronic condition, and one that is associated with tremendous comorbidity from multiple chronic general medical conditions, including congestive heart failure, coronary artery disease, and diabetes [2]. Moreover, the presence of depression has deleterious effects on the outcomes of those comorbid conditions [2]. Untreated or poorly treated, depression can be deadly; as many as 10% of patients with major depression die from suicide [1].
In 1999 the Behavioral Health Services (BHS) division of Henry Ford Health System in Detroit, Michigan, set out to eliminate suicide among all patients with depression in our HMO network. This audacious goal was a key lever in a broader aim, which was to build a system of perfect depression care. We aimed to achieve breakthrough improvement in quality and safety by completely redesigning the delivery of depression care using the 6 aims and 10 new rules set forth in the Institute of Medicine’s (IOM) report Crossing the Quality Chasm [3]. To communicate our bold vision, we called the initiative Perfect Depression Care. Today, we can report a dramatic and sustained reduction in suicide that is unprecedented in the clinical and quality improvement literature [4].
In the Chasm report, the IOM cast a spotlight on behavioral health care, placing depression and anxiety disorders on the short list of priority conditions for immediate national attention and improvement. Importantly, the IOM called for a focus on not only behavioral health care benefits and coverage, but access and quality of care for all persons with depression. Finding inspiration from our success in the specialty behavioral health care setting, we decided to answer the IOM’s call. We set out to build a system of depression care that is not confined to the specialty behavioral health care setting, a system that delivers perfect care to every patient with depression, regardless of general medical comorbidity or care setting. We called this work Perfect Depression Care Spread.
In this article, we first summarize the background and methodology of the Perfect Depression Care initiative. We then describe the application of this methodology to spreading Perfect Depression Care into 2 nonspecialty care settings—primary care and general hospitals. Finally, we review some of the challenges and lessons learned from our efforts to sustain this important work.
Building a System of Perfect Depression Care
One example of the transformative power of a “zero defects” approach is the case of the Effectiveness aim. Our team engaged in vigorous debate about the goal for this aim. While some team members eagerly embraced the “zero defects” ambition and argued that truly perfect care could only mean “no suicides,” others challenged it, viewing it as lofty but unrealistic. After all, we had been taught that for some number of individuals with depression, suicide was the tragic yet inevitable outcome of their illness. How could it be possible to eliminate every single suicide? The debate was ultimately resolved when one team member asked, “If zero isn’t the right number of suicides, then what is? Two? Four? Forty?” The answer was obvious and undeniable. It was at that moment that setting “zero suicides” as the goal became a galvanizing force within BHS for the Perfect Depression Care initiative.
The pursuit of zero defects must take place within a “just culture,” an organizational environment in which frontline staff feel comfortable disclosing errors, especially their own, while still maintaining professional accountability [6]. Without a just culture, good but imperfect performance can breed disengagement and resentment. By contrast, within a just culture, it becomes possible to implement specific strategies and tactics to pursue perfection. Along the way, each step towards “zero defects” is celebrated because each defect that does occur is identified as an opportunity for learning.
One core strategy for Perfect Depression Care was organizing care according to the planned care model, a locally tailored version of the chronic care model [7]. We developed a clear vision for how each patient’s care would change in a system of Perfect Depression Care. We partnered with patients to ensure their voice in the redesign of our depression care services. We then conceptualized, designed, and tested strategies for improvement in 4 high-leverage domains (patient partnership, clinical practice, access to care, and information systems), which were identified through mapping our current care processes. Once this new model of care was in place, we implemented relevant measures of care quality and began continually assessing progress and then adjusting the plan as needed (ie, following the Model for Improvement).
Spread to Primary Care
The spread to primary care began in 2005, about 5 years after the initial launch of Perfect Depression Care in BHS. (Some earlier work had aimed at integrating depression screening into a small number of specialty chronic disease management initiatives, although that work was not sustained.) We based the overall clinical structure on the IMPACT model of integrated behavioral health care [10]. Primary care providers collaborated with depression care managers, typically nurses, who had been trained to provide education to primary care providers and problem-solving therapy to patients. The care managers were supervised by a project leader (a full-time clinical psychologist) and supported by 2 full-time psychiatric nurse practitioners who were embedded in each clinic during the early phases of implementation. An electronic medical record (EMR) was already in place and facilitated the delivery of evidence-based depression care, as well as the collection of relevant process and outcome measures, which were fed back to the care teams on a regular basis. And, importantly, the primary care leadership team formally sanctioned the spread of depression care to all 27 primary care clinics.
Overcoming the Challenges of the Primary Care Visits
From 2005 to 2010, the model was spread tenuously to 5 primary care clinics. At that rate (1 clinic per year), it would have taken over 20 years to spread depression care through all 27 primary care clinics. Not satisfied with this progress, we stepped back to consider why adoption was happening so slowly. First, we spoke with leaders. Although the project was on a shoestring budget, our leaders understood the business case for integrating some version of depression care into the primary care setting [11]. They advised limiting the scope of the project to focus only on adults with 1 of 6 chronic diseases: diabetes mellitus, congestive heart failure, coronary artery disease, chronic obstructive pulmonary disease (COPD), asthma, and chronic kidney disease. This narrower focus was aimed at using the project’s limited resources more effectively on behalf of patients who were more frequent utilizers of care and statistically more likely to have a comorbid depressive illness. Through the use of time studies, however, we learned that the time consumed discerning which patients each day were eligible for depression screening created delays in clinic workflow that were untenable. It turned out that the process of screening all patients was far more efficient than the process of identifying which patients “should” be screened and then screening only those who were identified. This pragmatic approach to daily workflow in the clinics was a key driver of successful spread.
Next, we spoke to patients. In an effort to assess patient engagement, we reviewed the records of 830 patients who had been seen in one of the clinics where depression care was up and running. Among this group, less than 1% had declined to receive depression screening. In fact, during informal discussions with patients and clinic staff, patients were thanking their primary care providers for talking with them about depression. When it came to spreading depression care, patient engagement was not the problem.
Finally, we spoke with primary care providers, physicians who were viewed as leaders in their clinics. They described trepidation among their teams about adopting an innovation that would lead to patients being identified as at risk for suicide. Their concern was not that integrating depression care was not the right thing to do in the primary care setting; indeed, they had a strong and genuine desire to provide better depression care for their patients. Their concern was that the primary care clinic was not equipped to manage a suicidal patient safely and effectively. This concern was real, and it was pervasive. After all, the typical primary care office visit was already replete with problem lists too long to be managed effectively in the diminishing amount of time allotted to each visit. Screening for depression would only make matters worse [12]. Furthermore, identifying a patient at risk for suicide was not uncommon in our primary care setting. Between 2006 and 2012, an average of 16% of primary care patients screened each year reported some degree of suicidal ideation (as measured by a positive response on question 9 of the PHQ-9). These discussions showed us that the model of depression care we were trying to spread into primary care was not designed with an explicit and confident approach to suicide—it was not Perfect Depression Care.
Leveraging Suicide As a Driver of Spread
When we realized that the anxiety surrounding the management of a suicidal patient was the biggest obstacle to Perfect Depression Care spread to primary care, we decided to turn this obstacle into an opportunity. First, an interdisciplinary team developed a practice guideline for managing the suicidal patient in general medical settings. The guideline was based on the World Health Organization’s evidence-based guidelines for addressing mental health disorders in nonspecialized health settings [13] and modified into a single page to make it easy to adopt. Following the guideline was not at all a requirement, but doing so made it very easy to identify patients at potential risk for suicide and to refer them safely and seamlessly to the next most appropriate level of care.
Second, and most importantly, BHS made a formal commitment to provide immediate access for any patient referred by a primary care provider following the practice guideline. BHS pledged to perform the evaluation on the same day as the referral was made and without any questions asked. Delivering on this promise required BHS to develop and implement reliable processes for its ambulatory centers to receive same-day referrals from any one of 27 primary care clinics. Success meant delighting our customers in primary care while obviating the expense and trauma associated with sending patients to local emergency departments. This work was hard. And it was made possible by the culture within BHS of pursuing perfection.
During this time of successful spread, project resources remained similar, no new or additional financial support was provided, and no new leadership directives had been communicated. The only new features of Perfect Depression Care spread were a 1-page practice guideline and a promise. Making suicide an explicit target of the intervention, and doing so in a ruthlessly practical way, created the conditions for the intervention to diffuse and be adopted more readily.
Spread to General Hospitals
In 2006, the Joint Commission established National Patient Safety Goal (NPSG) 15.01.01 for hospitals and health care facilities “to identify patients at risk for suicide” [14]. NPSG 15.01.01 applies not just to patients in psychiatric hospitals, but to all patients “being treated for emotional or behavioral disorders in general hospitals,” including emergency departments. As a measure of safety, suicide is the second most common sentinel event among hospitalized patients—only wrong-site surgery occurs more often. And when a suicide does take place in a hospital, the impact on patients, families, health care workers, and administrators is profound.
Still, completed suicide among hospitalized patients is statistically a very rare event. As a result, general hospitals find it challenging to meet the expectations set forth in NPSG 15.01.01, which seemingly asks hospitals to search for a needle in a haystack. Is it really valuable to ask a patient about suicide when that patient is a 16-year-old who presented to the emergency department for minor scrapes and bruises sustained while skateboarding? Should all patients with “do not resuscitate” orders receive a mandatory, comprehensive suicide risk assessment? In 2010, general hospitals in our organization enlisted our Perfect Depression Care team to help them develop a meaningful approach to NPSG 15.01.01, and so the spread of Perfect Depression Care to general hospitals began.
The goal of NPSG 15.01.01 is “to identify patients at risk for suicide.” To accomplish this goal, hospital care teams need simple, efficient, evidence-based tools for identifying such patients and responding appropriately to the identified risk. In a general hospital setting, implementing targeted suicide risk assessments is simply not feasible. Assessing every single hospitalized patient for suicide risk seems clinically unnecessary, if not wasteful, and yet the processes needed to identify reliably which patients ought to be assessed end up taking far longer than simply screening everybody. With these considerations in mind, our Perfect Depression Care team took a different approach.
The DAPS Tool
We developed a simple and easy tool to screen, not for suicide risk specifically, but for common psychiatric conditions associated with increased risk of suicide. The Depression, Anxiety, Polysubstance Use, and Suicide screen (DAPS) [15] consists of 7 questions coming from 5 individual evidence-based screening measures: the PHQ-2 for depression, the GAD-2 for anxiety, question 9 from the PHQ-9 for suicidal ideation, the SASQ for problem alcohol use, and a single drug use question for substance use. Each of these questionnaires has been validated as a sensitive screening measure for the psychiatric condition of interest (eg, major depression, generalized anxiety, current problem drinking). Some of them have been validated specifically in general medical settings or among general medical patient populations. Moreover, each questionnaire is valid whether clinician-administered or self-completed. Some have also been validated in languages other than English.
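A minimal sketch of how the 7-item bundle might be represented in code follows; the item wording and the cutoffs are commonly cited values supplied for illustration, not the published DAPS scoring rules:

```python
from dataclasses import dataclass

@dataclass
class DapsResponse:
    """Responses to the 7 DAPS items, grouped by source instrument (illustrative fields)."""
    phq2_interest: int         # PHQ-2 item 1, scored 0-3
    phq2_depressed: int        # PHQ-2 item 2, scored 0-3
    gad2_nervous: int          # GAD-2 item 1, scored 0-3
    gad2_worry: int            # GAD-2 item 2, scored 0-3
    phq9_item9: int            # PHQ-9 question 9 (suicidal ideation), scored 0-3
    sasq_heavy_drinking: bool  # single alcohol screening question
    drug_use: bool             # single drug use question

def positive_screens(r: DapsResponse) -> dict[str, bool]:
    """Flag each component screen using commonly cited cutoffs (assumed here)."""
    return {
        "depression": (r.phq2_interest + r.phq2_depressed) >= 3,  # PHQ-2 score of 3 or more
        "anxiety": (r.gad2_nervous + r.gad2_worry) >= 3,          # GAD-2 score of 3 or more
        "suicidal_ideation": r.phq9_item9 > 0,                    # any endorsement of item 9
        "alcohol": r.sasq_heavy_drinking,
        "drugs": r.drug_use,
    }
```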
The DAPS tool bundles these separate screening measures into one easy-to-use and efficient instrument. As a bundle, the DAPS tool offers 3 major advantages over traditional screening tools. First, the tool takes a broader approach to suicide risk with the aim of increasing utility. Suicide is a statistically rare event, especially in general medical settings. On the other hand, psychiatric conditions that themselves increase people’s risk of suicide are quite common, particularly in hospital settings. Rather than screening exclusively for suicidal thoughts and behavior, the DAPS tool screens for psychiatric conditions associated with an increased risk of suicide that are common in general medical settings. This approach to suicide screening is novel. It allows for the recognition of a higher number of patients who may benefit from behavioral health interventions, whether or not they are “actively suicidal” at that moment. By not including extensive assessments of numerous suicide risk factors, the DAPS tool offers practical utility without losing much specificity. After all, persons in general hospital settings who are at acutely increased risk of suicide (eg, a person admitted to the hospital following a suicide attempt via overdose) are already being identified.
The second advantage of the DAPS tool is that the information it obtains is actionable. Suicide screening tools, whether brief or comprehensive, are not immediately predictive and arrive at essentially the same conclusion—the person screened is deemed to fall into some risk stratification (eg, high, medium, low risk; acute vs non-acute risk). In general hospital settings, the responses to these stratifications are limited (eg, order a sitter, call a psychiatry consultation) and not specific to the level of risk. Furthermore, persons with psychiatric disorders may be at increased risk of suicide even if they deny having suicidal thoughts. The DAPS tool allows for the recognition of these persons, thus identifying opportunities for intervention. For example, a person who screens positive on the PHQ-2 portion of the DAPS but who denies having recent suicidal thoughts or behavior may not benefit from an immediate safety measure (eg, ordering a sitter) but may benefit from an evaluation and, if indicated, treatment for depression. Treating that person’s depression would decrease the longitudinal risk of suicide. If another person screens negative on the PHQ-2 but positive on the SASQ, then that person may benefit most from interventions targeting problem alcohol use, such as the initiation of a CIWA protocol in order to prevent the emergence of alcohol withdrawal during the hospitalization, but not necessarily from depression treatment.
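Continuing the sketch above, the component screens could be mapped to differentiated responses of the kind described here; the specific actions are illustrative examples drawn from the text, not a validated clinical protocol:

```python
def suggested_actions(screens: dict[str, bool]) -> list[str]:
    """Map positive DAPS component screens to possible follow-up actions.

    This mirrors the narrative examples only; actual decisions rest with
    the treating team, not a lookup table.
    """
    actions: list[str] = []
    if screens["suicidal_ideation"]:
        actions.append("immediate safety measures (eg, sitter) and psychiatric evaluation")
    if screens["depression"]:
        actions.append("evaluation and, if indicated, treatment for depression")
    if screens["anxiety"]:
        actions.append("evaluation for an anxiety disorder")
    if screens["alcohol"]:
        actions.append("assess withdrawal risk (eg, consider a CIWA protocol)")
    if screens["drugs"]:
        actions.append("substance use assessment and brief intervention")
    return actions or ["no positive screens; no behavioral health action triggered"]
```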
The third main advantage of the DAPS tool is its ease of use. There are a limited number of psychiatrists and other mental health care workers in general hospitals, and that number is not adequate to have all psychiatric screens and assessments performed by a specialist. The DAPS tool consists of scripted questions that any health care provider can read and follow. This type of instruction may be especially beneficial to health care providers who are unsure or uncomfortable about how to screen patients for suicide or psychiatric disorders. The DAPS tool provides these clinicians with language they can use comfortably when talking with patients. Alternatively, patients themselves can complete the DAPS questions, which frees up valuable time for providers to deliver other types of care. During a pilot project at one of our general hospitals, 20 general floor nurses were asked to implement the DAPS with their patients after receiving only a very brief set of instructions. On average, it took a nurse less than 4 minutes to complete the DAPS. Ninety percent of the nurses stated the DAPS tool would take “less time” or “no additional time” compared with the behavioral health questions in the current nursing admission assessment they were required to complete on every patient. Eighty-five percent found the tool “easy” or “very easy” to use.
At the time of publication of this article, one of our general hospitals is set to roll out DAPS screening hospital-wide, with the goal of prospectively identifying patients who might benefit from some form of behavioral health intervention and thereby reducing length of stay. Another of our general hospitals is already using the DAPS to reduce hospital readmissions [15]. What began as an initiative simply to meet a regulatory requirement has turned into a novel and efficient means of bringing mental health care services to hospitalized patients.
Lessons Learned
Our goal in the Perfect Depression Care initiative was to eliminate suicide, and we have come remarkably close to achieving that goal. Our determination to strive for perfection rather than incremental targets had a powerful effect on our results. Moving to a different order of performance required us to challenge our most basic assumptions and demanded new learning and new behavior from the people doing the work.
This social aspect of our improvement work was fundamental to every effort made to spread Perfect Depression Care outside of the specialty behavioral health care setting. Indeed, the diffusion of all innovation occurs within a social context [16]. Ideas do not spread by themselves—they are spread from one person (the messenger) to another (the adopter). Successful spread, therefore, depends in large part on the communication between messenger and adopter.
Implementing Perfect Depression Care within BHS involved like-minded messengers and adopters from the same department, whereas spreading the initiative to the general medical setting involved messengers from one specialty and adopters from another. The nature of such a social system demands that the goals of the messenger be aligned with the incentives of the adopter. In health service organizations, such alignment requires effective leadership, not just local champions [17]. For example, spreading the initiative to the primary care setting really only became possible when our departmental leaders made a public promise to the leaders of primary care that BHS would see any patient referred from primary care on the same day of referral with no questions asked. And while it is true that operationalizing that promise was a more arduous task than articulating it, the promise itself is what created a social space within which the innovation could diffuse.
Even if leaders are successful at aligning the messenger’s goals and the adopter’s incentives, spread still must actually occur locally between 2 people. This social context means that a “good” idea in the mind of the messenger must be a “better” idea in the mind of the adopter. In other words, an idea or innovation is more likely to be adopted if it is better than the status quo [18]. And it is the adopter’s definition of “better” that matters. For example, our organization’s primary care clinics agreed that improving their depression care was a good idea. However, specific interventions were not adopted (or adoptable) until they became a way to make daily life easier for the front-line clinic staff (eg, by facilitating more efficient referrals to BHS). Furthermore, because daily life in each clinic was a little bit different, the specific interventions adopted were allowed to vary. Similarly, in the general hospital setting, DAPS screening was nothing more than a good idea until the nurses learned that it took less time and yielded more actionable results than the long list of behavioral health screening questions they were currently required to complete on every patient being admitted. When replacing those questions with the DAPS screen saved time and added value, the DAPS became better than the status quo, a tipping point was reached, and spread took place.
Future Spread
The 2 examples of Perfect Depression Care Spread described herein are testaments to the social traction of “zero suicides.” Importantly, the success of each effort has hinged on its creative, practical approach to suicide, even though there is scant scientific evidence to support suicide prevention initiatives in general medical settings [19].
As it turns out, there is also little scientific knowledge about how innovations in health service organizations are successfully sustained [16]. It is our hope that the 15 years of Perfect Depression Care shed some light on this question, and that the initiative can continue to be sustained in today’s turbulent and increasingly austere health care environment. We are confident that we will keep improving as long as we keep learning.
In addition, we find tremendous inspiration in the many others who are learning and improving with us. In 2012, for instance, the US Surgeon General promoted the adoption of “zero suicides” as a national strategic objective [1]. And in 2015, the Deputy Prime Minister of the United Kingdom called for the adoption of “zero suicides” across the entire National Health Service [20]. As the Perfect Depression Care team continues to grow, the pursuit of perfection becomes even more stirring.
Acknowledgment: The author acknowledges Brian K. Ahmedani, PhD, Charles E. Coffey, MD, MS, C. Edward Coffey, MD, Terri Robertson, PhD, and the entire Perfect Depression Care team.
Corresponding author: M. Justin Coffey, MD, The Menninger Clinic, 12301 S. Main St., Houston, TX 77035, [email protected].
Financial disclosures: None.
1. U.S. Department of Health and Human Services (HHS) Office of the Surgeon General and National Action Alliance for Suicide Prevention. 2012 National Strategy for Suicide Prevention: goals and objectives for action. Washington, DC: HHS; 2012.
2. Druss BG, Walker ER. Mental disorders and medical comorbidity: research synthesis report no. 21. Robert Wood Johnson Foundation; 2011.
3. Committee on Quality Health Care in America, Institute of Medicine. Crossing the Quality Chasm. Washington, DC: National Academy Press; 2001.
4. Coffey CE, Coffey MJ, Ahmedani BK. An update on Perfect Depression Care. Psychiatr Serv 2013;64:396.
5. Robert Wood Johnson Foundation. Pursuing Perfection: Raising the bar in health care performance. Robert Wood Johnson Foundation; 2014.
6. Marx D. Patient safety and the “just culture”: a primer for health care executives. New York: Columbia University; 2001.
7. Coleman K, Austin BT, Brach C, Wagner EH. Evidence on the chronic care model in the new millennium. Health Aff 2009;28:75–85.
8. Coffey CE. Building a system of perfect depression care in behavioral health. Jt Comm J Qual Patient Saf 2007;33:193–9.
9. Hampton T. Depression care effort brings dramatic drop in large HMO population’s suicide rate. JAMA 2010;303:1903–5.
10. Unützer J, Powers D, Katon W, Langston C. From establishing an evidence-based practice to implementation in real-world settings: IMPACT as a case study. Psychiatr Clin North Am 2005;28:1079–92.
11. Melek SP, Norris DT, Paulus J. Economic impact of integrated medical-behavioral healthcare: implications for psychiatry. Milliman; 2014.
12. Schmitt MR, Miller MJ, Harrison DL, Touchet BK. Relationship of depression screening and physician office visit duration in a national sample. Psychiatr Serv 2010;61:1126–31.
13. mhGAP intervention guide for mental, neurological, and substance use disorders in non-specialized health settings: Mental Health Gap Action Programme (mhGAP). World Health Organization; 2010.
14. The Joint Commission. National Patient Safety Goals 2008. Oakbrook Terrace, IL: The Joint Commission; 2008.
15. Coffey CE, Johns J, Veliz S, Coffey MJ. The DAPS tool: an actionable screen for psychiatric risk factors for rehospitalization. J Hosp Med 2012;7(suppl 2):S100–101.
16. Greenhalgh T, Robert G, Macfarlane F, et al. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q 2004;82:581–629.
17. Berwick DM. Disseminating innovations in health care. JAMA 2003;289:1969–75.
18. Rogers EM. Diffusion of innovations. 4th ed. New York: The Free Press; 1995.
19. LeFevre ML. Screening for suicide risk in adolescents, adults, and older adults in primary care: US Preventive Services Task Force recommendation statement. Ann Intern Med 2014;160:719–26.
20. Clegg N. Speech at mental health conference. Available at www.gov.uk/government/speeches/nick-clegg-at-mental-health-conference.
From The Menninger Clinic, Houston, TX.
Abstract
- Objective: To summarize the Perfect Depression Care initiative and describe recent work to spread this quality improvement initiative.
- Methods: We summarize the background and methodology of the Perfect Depression Care initiative within the specialty behavioral health care setting and then describe the application of this methodology to 2 examples of spreading Perfect Depression Care to general medical settings: primary care and general hospitals.
- Results: In the primary care setting, Perfect Depression Care spread successfully in association with the development and implementation of a practice guideline for managing the potentially suicidal patient. In the general hospital setting, Perfect Depression Care is spreading successfully in association with the development and implementation of a simple and efficient tool to screen not for suicide risk specifically, but for common psychiatric conditions associated with increased risk of suicide.
- Conclusion: Both examples of spreading Perfect Depression Care to general medical settings illustrate the social traction of “zero suicides,” the audacious and transformative goal of the Perfect Depression Care Initiative.
Each year depression affects roughly 10% of adults in the United States [1]. The leading cause of disability in developed countries, depression results in substantial medical care expenditures, lost productivity, and absenteeism [1]. It is a chronic condition, and one that is associated with tremendous comorbidity from multiple chronic general medical conditions, including congestive heart failure, coronary artery disease, and diabetes [2]. Moreover, the presence of depression has deleterious effects on the outcomes of those comorbid conditions [2]. Untreated or poorly treated, depression can be deadly—each year as many as 10% of patients with major depression die from suicide [1].
In 1999 the Behavioral Health Services (BHS) division of Henry Ford Health System in Detroit, Michigan, set out to eliminate suicide among all patients with depression in our HMO network. This audacious goal was a key lever in a broader aim, which was to build a system of perfect depression care. We aimed to achieve breakthrough improvement in quality and safety by completely redesigning the delivery of depression care using the 6 aims and 10 new rules set forth in the Institute of Medicine’s (IOM) report Crossing the Quality Chasm [3]. To communicate our bold vision, we called the initiative Perfect Depression Care. Today, we can report a dramatic and sustained reduction in suicide that is unprecedented in the clinical and quality improvement literature [4].
In the Chasm report, the IOM cast a spotlight on behavioral health care, placing depression and anxiety disorders on the short list of priority conditions for immediate national attention and improvement. Importantly, the IOM called for a focus on not only behavioral health care benefits and coverage, but access and quality of care for all persons with depression. Finding inspiration from our success in the specialty behavioral health care setting, we decided to answer the IOM’s call. We set out to build a system of depression care that is not confined to the specialty behavioral health care setting, a system that delivers perfect care to every patient with depression, regardless of general medical comorbidity or care setting. We called this work Perfect Depression Care Spread.
In this article, we first summarize the background and methodology of the Perfect Depression Care initiative. We then describe the application of this methodology to spreading Perfect Depression Care into 2 nonspecialty care settings—primary care and general hospitals. Finally, we review some of the challenges and lessons learned from our efforts to sustain this important work.
Building a System of Perfect Depression Care
One example of the transformative power of a “zero defects” approach is the case of the Effectiveness aim. Our team engaged in vigorous debate about the goal for this aim. While some team members eagerly embraced the “zero defects” ambition and argued that truly perfect care could only mean “no suicides,” others challenged it, viewing it as lofty but unrealistic. After all, we had been taught that for some number of individuals with depression, suicide was the tragic yet inevitable outcome of their illness. How could it be possible to eliminate every single suicide? The debate was ultimately resolved when one team member asked, “If zero isn’t the right number of suicides, then what is? Two? Four? Forty?” The answer was obvious and undeniable. It was at that moment that setting “zero suicides” as the goal became a galvanizing force within BHS for the Perfect Depression Care initiative.
The pursuit of zero defects must take place within a “just culture,” an organizational environment in which frontline staff feel comfortable disclosing errors, especially their own, while still maintaining professional accountability [6]. Without a just culture, good but imperfect performance can breed disengagement and resentment. By contrast, within a just culture, it becomes possible to implement specific strategies and tactics to pursue perfection. Along the way, each step towards “zero defects” is celebrated because each defect that does occur is identified as an opportunity for learning.
One core strategy for Perfect Depression Care was organizing care according to the planned care model, a locally tailored version of the chronic care model [7]. We developed a clear vision for how each patient’s care would change in a system of Perfect Depression Care. We partnered with patients to ensure their voice in the redesign of our depression care services. We then conceptualized, designed, and tested strategies for improvement in 4 high-leverage domains (patient partnership, clinical practice, access to care, and information systems), which were identified through mapping our current care processes. Once this new model of care was in place, we implemented relevant measures of care quality and began continually assessing progress and then adjusting the plan as needed (ie, following the Model for Improvement).
Spread to Primary Care
The spread to primary care began in 2005, about 5 years after the initial launch of Perfect Depression Care in BHS. (There had been some previous work done aimed at integrating depression screening into a small number of specialty chronic disease management initiatives, although that work was not sustained.) We based the overall clinical structure on the IMPACT model of integrated behavioral health care [10]. Primary care providers collaborated with depression care managers, typically nurses, who had been trained to provide education to primary care providers and problem solving therapy to patients. The care managers were supervised by a project leader (a full-time clinical psychologist) and supported by 2 full-time psychiatric nurse practitioners who were embedded in each clinic during the early phases of implementation. An electronic medical record (EMR) was comfortably in place and facilitated the delivery of evidence-based depression care, as well as the collection of relevant process and outcome measures, which were fed back to the care teams on a regular basis. And, importantly, the primary care leadership team formally sanctioned depression care to be spread to all 27 primary care clinics.
Overcoming the Challenges of the Primary Care Visits
From 2005 to 2010, the model was spread tenuously to 5 primary care clinics. At that rate (1 clinic per year), it would have taken over 20 years to spread depression care through all 27 primary care clinics. Not satisfied with this progress, we stepped back to consider why adoption was happening so slowly. First, we spoke with leaders. Although the project was on a shoestring budget, our leaders understood the business case for integrating some version of depression care into the primary care setting [11]. They advised limiting the scope of the project to focus only on adults with 1 of 6 chronic diseases: diabetes mellitus, congestive heart failure, coronary artery disease, chronic obstructive pulmonary disease (COPD), asthma, and chronic kidney disease. This narrower focus was aimed at using the project’s limited resources more effectively on behalf of patients who were more frequent utilizers of care and statistically more likely to have a comorbid depressive illness. Through the use of time studies, however, we learned that the time consumed discerning which patients each day were eligible for depression screening created delays in clinic workflow that were untenable. It turned out that the process of screening all patients was far more efficient that the process of identifying which patients “should” be screened and then screening only those who were identified. This pragmatic approach to daily workflow in the clinics was a key driver of successful spread.
Next, we spoke to patients. In an effort to assess patient engagement, we reviewed the records of 830 patients who had been seen in one of the clinics where depression care was up and running. Among this group, less than 1% had declined to receive depression screening. In fact, during informal discussions with patients and clinic staff, patients were thanking their primary care providers for talking with them about depression. When it came to spreading depression care, patient engagement was not the problem.
Finally, we spoke with primary care providers, physicians who were viewed as leaders in their clinics. They described trepidation among their teams about adopting an innovation that would lead to patients being identified as at risk for suicide. Their concern was not that integrating depression care was not the right thing to do in the primary care setting; indeed, they had a strong and genuine desire to provide better depression care for their patients. Their concern was that the primary care clinic was not equipped to manage a suicidal patient safely and effectively. This concern was real, and it was pervasive. After all, the typical primary care office visit was already replete with problem lists too long to be managed effectively in the diminishing amount of time allotted to each visit. Screening for depression would only make matters worse [12]. Furthermore, identifying a patient at risk for suicide was not uncommon in our primary care setting. Between 2006 and 2012, an average of 16% of primary care patients screened each year had reported some degree of suicidal ideation (as measures by a positive response on question 9 of the PHQ-9). These discussions showed us that the model of depression care we were trying to spread into primary care was not designed with an explicit and confident approach to suicide—it was not Perfect Depression Care.
Leveraging Suicide As a Driver of Spread
When we realized that the anxiety surrounding the management of a suicidal patient was the biggest obstacle to Perfect Depression Care spread to primary care, we decided to turn this obstacle into an opportunity. First, an interdisciplinary team developed a practice guideline for managing the suicidal patient in general medical settings. The guideline was based on the World Health Organization’s evidence-based guidelines for addressing mental health disorders in nonspecialized health settings [13] and modified into a single page to make it easy to adopt. Following the guideline was not at all a requirement, but doing so made it very easy to identify patients at potential risk for suicide and to refer them safely and seamlessly to the next most appropriate level of care.
Second, and most importantly, BHS made a formal commitment to provide immediate access for any patient referred by a primary care provider following the practice guideline. BHS pledged to perform the evaluation on the same day as the referral was made and without any questions asked. Delivering on this promise required BHS to develop and implement reliable processes for its ambulatory centers to receive same-day referrals from any one of 27 primary care clinics. Success meant delighting our customers in primary care while obviating the expense and trauma associated with sending patients to local emergency departments. This work was hard. And it was made possible by the culture within BHS of pursuing perfection.
During this time of successful spread, project resources remained similar, no new or additional financial support was provided, and no new leadership directives had been communicated. The only new features of Perfect Depression Care spread were a 1-page practice guideline and a promise. Making suicide an explicit target of the intervention, and doing so in a ruthlessly practical way, created the conditions for the intervention to diffuse and be adopted more readily.
Spread to General Hospitals
In 2006, the Joint Commission established National Patient Safety Goal (NPSG) 15.01.01 for hospitals and health care facilities “to identify patients at risk for suicide” [14]. NPSG 15.01.01 applies not just to patients in psychiatric hospitals, but to all patients “being treated for emotional or behavioral disorders in general hospitals,” including emergency departments. As a measure of safety, suicide is the second most common sentinel event among hospitalized patients—only wrong-site surgery occurs more often. And when a suicide does take place in a hospital, the impact on patients, families, health care workers, and administrators is profound.
Still, completed suicide among hospitalized patients is statistically a very rare event. As a result, general hospitals find it challenging to meet the expectations set forth in NPSG 15.01.01, which seemingly asks hospitals to search for a needle in a haystack. Is it really valuable to ask a patient about suicide when that patient is a 16-year-old teenager who presented to the emergency department for minor scrapes and bruises sustained while skateboarding? Should all patients with “do not resuscitate” orders receive a mandatory, comprehensive suicide risk assessment? In 2010, general hospitals in our organization enlisted our Perfect Depression Care team to help them develop a meaningful approach to NPSG 15.01.01, and so Perfect Depression Care spread to general hospitals began.
The goal of NPSG 15.01.01 is “to identify patients at risk for suicide.” To accomplish this goal, hospital care teams need simple, efficient, evidence-based tools for identifying such patients and responding appropriately to the identified risk. In a general hospital setting, implementing targeted suicide risk assessments is simply not feasible. Assessing every single hospitalized patient for suicide risk seems clinically unnecessary, if not wasteful, and yet the processes needed to identify reliably which patients ought to be assessed end up taking far longer than simply screening everybody. With these considerations in mind, our Perfect Depression Care team took a different approach.
The DAPS Tool
We developed a simple and easy tool to screen, not for suicide risk specifically, but for common psychiatric conditions associated with increased risk of suicide. The Depression, Anxiety, Polysubstance Use, and Suicide screen (DAPS) [15] consists of 7 questions coming from 5 individual evidence-based screening measures: the PHQ-2 for depression, the GAD-2 for anxiety, question 9 from the PHQ-9 for suicidal ideation, the SASQ for problem alcohol use, and a single drug use question for substance use. Each of these questionnaires has been validated as a sensitive screening measure for the psychiatric condition of interest (eg, major depression, generalized anxiety, current problem drinking). Some of them have been validated specifically in general medical settings or among general medical patient populations. Moreover, each questionnaire is valid whether clinician-administered or self-completed. Some have also been validated in languages other than English.
The DAPS tool bundles together these separate screening measures into one easy to use and efficient tool. As a bundle, the DAPS tool offers 3 major advantages over traditional screening tools. First, the tool takes a broader approach to suicide risk with the aim of increasing utility. Suicide is a statistically rare event, especially in general medical settings. On the other hand, psychiatric conditions that themselves increase people’s risk of suicide are quite common, particularly in hospital settings. Rather than screening exclusively for suicidal thoughts and behavior, the DAPS tool screens for psychiatric conditions associated with an increased risk of suicide that are common in general medical settings. This approach to suicide screening is novel. It allows for the recognition of higher number of patients who may benefit from behavioral health interventions, whether or not they are “actively suicidal” at that moment. By not including extensive assessments of numerous suicide risk factors, the DAPS tool offers practical utility without losing much specificity. After all, persons in general hospital settings who at acutely increased risk of suicide (eg, a person admitted to the hospital following a suicide attempt via overdose) are already being identified.
The second advantage of the DAPS tool is that the information it obtains is actionable. Suicide screening tools, whether brief or comprehensive, are not immediately predictive and arrive at essentially the same conclusion—the person screened is deemed to fall into some risk stratification (eg, high, medium, low risk; acute vs non-acute risk). In general hospital settings, the responses to these stratifications are limited (eg, order a sitter, call a psychiatry consultation) and not specific to the level of risk. Furthermore, persons with psychiatric disorders may be at increased risk of suicide even if they deny having suicidal thoughts. The DAPS tool allows for the recognition of these persons, thus identifying opportunities for intervention. For example, a person who screens positive on the PHQ-2 portion of the DAPS but who denies having recent suicidal thoughts or behavior may not benefit from an immediate safety measure (eg, ordering a sitter) but may benefit from an evaluation and, if indicated, treatment for depression. Treating that person’s depression would decrease the longitudinal risk of suicide. If another person screens negative on the PHQ-2 but positive on the SASQ, then that person may benefit most from interventions targeting problem alcohol use, such as the initiation of a CIWA protocol in order to prevent the emergence of alcohol withdrawal during the hospitalization, but not necessarily from depression treatment.
The third main advantage of the DAPS tool is its ease of use. There are a limited number of psychiatrists and other mental health care workers in general hospitals, and that number is not adequate to have all psychiatric screens and assessments in performed by a specialist. The DAPS tool consists of scripted questions that any health care provider can read and follow. This type of instruction may be especially beneficial to health care providers who are unsure or uncomfortable about how to screen patients for suicide or psychiatric disorders. The DAPS tool provides these clinicians with language they can use comfortably when talking with patients. Alternatively, patients themselves can complete the DAPS questions, which frees up valuable time for providers to deliver other types of care. During a pilot project at one of our general hospitals, 20 general floor nurses were asked to implement the DAPS with their patients after receiving only a very brief set of instructions. On average, it took a nurse less than 4 minutes to complete the DAPS. Ninety percent of the nurses stated the DAPS tool would take “less time” or “no additional time” compared with the behavioral health questions in the current nursing admission assessment they were required to complete on every patient. Eighty-five percent found the tool “easy” or “very easy” to use.
At the time of publication of this article, one of our general hospitals is set to roll out DAPS screening hospital wide with the goal of prospectively identifying patients who might benefit from some form of behavioral health intervention and thus reducing length of stay. Another of our general hospitals is already using the DAPS to reduce hospital readmissions [15]. What started out as an initiative simply to meet a regulatory requirement turned into a novel and efficient means to bring mental health care services to hospitalized patients.
Lessons Learned
Our goal in the Perfect Depression Care initiative was to eliminate suicide, and we have come remarkably close to achieving that goal. Our determination to strive for perfection rather than incremental goals had a powerful effect on our results. To move to a different order of performance required us to challenge our most basic assumptions and required new learning and new behavior.
This social aspect of our improvement work was fundamental to every effort made to spread Perfect Depression Care outside of the specialty behavioral health care setting. Indeed, the diffusion of all innovation occurs within a social context [16]. Ideas do not spread by themselves—they are spread from one person (the messenger) to another (the adopter). Successful spread, therefore, depends in large part on the communication between messenger and adopter.
Implementing Perfect Depression Care within BHS involved like-minded messengers and adopters from the same department, whereas spreading the initiative to the general medical setting involved messengers from one specialty and adopters from another. The nature of such a social system demands that the goals of the messenger be aligned with the incentives of the adopter. In health service organizations, such alignment requires effective leadership, not just local champions [17]. For example, spreading the initiative to the primary care setting really only became possible when our departmental leaders made a public promise to the leaders of primary care that BHS would see any patient referred from primary care on the same day of referral with no questions asked. And while it is true that operationalizing that promise was a more arduous task than articulating it, the promise itself is what created a social space within which the innovation could diffuse.
Even if leaders are successful at aligning the messenger’s goals and the adopter’s incentives, spread still must actually occur locally between 2 people. This social context means that a “good” idea in the mind of the messenger must be a “better” idea in the mind of the adopter. In other words, an idea or innovation is more likely to be adopted if it is better than the status quo [18]. And it is the adopter’s definition of “better” that matters. For example, our organization’s primary care clinics agreed that improving their depression care was a good idea. However, specific interventions were not adopted (or adoptable) until they became a way to make daily life easier for the front-line clinic staff (eg, by facilitating more efficient referrals to BHS). Furthermore, because daily life in each clinic was a little bit different, the specific interventions adopted were allowed to vary. Similarly, in the general hospital setting, DAPS screening was nothing more than a good idea until the nurses learned that it took less time and yielded more actionable results than the long list of behavioral health screening questions they were currently required to complete on every patient being admitted. When replacing those questions with the DAPS screen saved time and added value, the DAPS became better than the status quo, a tipping point was reached, and spread took place.
Future Spread
The 2 examples of Perfect Depression Care Spread described herein are testaments to the social traction of “zero suicides.” Importantly, the success of each effort has hinged on its creative, practical approach to suicide, even though there is scant scientific evidence to support suicide prevention initiatives in general medical settings [19].
As it turns out, there is also little scientific knowledge about how innovations in health service organizations are successfully sustained [16]. It is our hope that the 15 years of Perfect Depression Care shed some light on this question, and that the initiative can continue to be sustained in today’s turbulent and increasingly austere health care environment. We are confident that we will keep improving as long as we keep learning.
In addition, we find tremendous inspiration in the many others who are learning and improving with us. In 2012, for instance, the US Surgeon General promoted the adoption “zero suicides” as a national strategic objective [1]. And in 2015, the Deputy Prime Minister of the United Kingdom called for the adoption of “zero suicides” across the entire National Health Service [20]. As the Perfect Depression Care team continues to grow, the pursuit of perfection becomes even more stirring.
Acknowledgment: The author acknowledges Brian K. Ahmedani, PhD, Charles E. Coffey, MD, MS, C. Edward Coffey, MD, Terri Robertson, PhD, and the entire Perfect Depression Care team.
Corresponding author: M. Justin Coffey, MD, The Menninger Clinic, 12301 S. Main St., Houston, TX 77035, [email protected].
Financial disclosures: None.
From The Menninger Clinic, Houston, TX.
Abstract
- Objective: To summarize the Perfect Depression Care initiative and describe recent work to spread this quality improvement initiative.
- Methods: We summarize the background and methodology of the Perfect Depression Care initiative within the specialty behavioral health care setting and then describe the application of this methodology to 2 examples of spreading Perfect Depression Care to general medical settings: primary care and general hospitals.
- Results: In the primary care setting, Perfect Depression Care spread successfully in association with the development and implementation of a practice guideline for managing the potentially suicidal patient. In the general hospital setting, Perfect Depression Care is spreading successfully in association with the development and implementation of a simple and efficient tool to screen not for suicide risk specifically, but for common psychiatric conditions associated with increased risk of suicide.
- Conclusion: Both examples of spreading Perfect Depression Care to general medical settings illustrate the social traction of “zero suicides,” the audacious and transformative goal of the Perfect Depression Care Initiative.
Each year depression affects roughly 10% of adults in the United States [1]. The leading cause of disability in developed countries, depression results in substantial medical care expenditures, lost productivity, and absenteeism [1]. It is a chronic condition, and one that is associated with tremendous comorbidity from multiple chronic general medical conditions, including congestive heart failure, coronary artery disease, and diabetes [2]. Moreover, the presence of depression has deleterious effects on the outcomes of those comorbid conditions [2]. Untreated or poorly treated, depression can be deadly—each year as many as 10% of patients with major depression die from suicide [1].
In 1999 the Behavioral Health Services (BHS) division of Henry Ford Health System in Detroit, Michigan, set out to eliminate suicide among all patients with depression in our HMO network. This audacious goal was a key lever in a broader aim, which was to build a system of perfect depression care. We aimed to achieve breakthrough improvement in quality and safety by completely redesigning the delivery of depression care using the 6 aims and 10 new rules set forth in the Institute of Medicine’s (IOM) report Crossing the Quality Chasm [3]. To communicate our bold vision, we called the initiative Perfect Depression Care. Today, we can report a dramatic and sustained reduction in suicide that is unprecedented in the clinical and quality improvement literature [4].
In the Chasm report, the IOM cast a spotlight on behavioral health care, placing depression and anxiety disorders on the short list of priority conditions for immediate national attention and improvement. Importantly, the IOM called for a focus on not only behavioral health care benefits and coverage, but access and quality of care for all persons with depression. Finding inspiration from our success in the specialty behavioral health care setting, we decided to answer the IOM’s call. We set out to build a system of depression care that is not confined to the specialty behavioral health care setting, a system that delivers perfect care to every patient with depression, regardless of general medical comorbidity or care setting. We called this work Perfect Depression Care Spread.
In this article, we first summarize the background and methodology of the Perfect Depression Care initiative. We then describe the application of this methodology to spreading Perfect Depression Care into 2 nonspecialty care settings—primary care and general hospitals. Finally, we review some of the challenges and lessons learned from our efforts to sustain this important work.
Building a System of Perfect Depression Care
One example of the transformative power of a “zero defects” approach is the case of the Effectiveness aim. Our team engaged in vigorous debate about the goal for this aim. While some team members eagerly embraced the “zero defects” ambition and argued that truly perfect care could only mean “no suicides,” others challenged it, viewing it as lofty but unrealistic. After all, we had been taught that for some number of individuals with depression, suicide was the tragic yet inevitable outcome of their illness. How could it be possible to eliminate every single suicide? The debate was ultimately resolved when one team member asked, “If zero isn’t the right number of suicides, then what is? Two? Four? Forty?” The answer was obvious and undeniable. It was at that moment that setting “zero suicides” as the goal became a galvanizing force within BHS for the Perfect Depression Care initiative.
The pursuit of zero defects must take place within a “just culture,” an organizational environment in which frontline staff feel comfortable disclosing errors, especially their own, while still maintaining professional accountability [6]. Without a just culture, good but imperfect performance can breed disengagement and resentment. By contrast, within a just culture, it becomes possible to implement specific strategies and tactics to pursue perfection. Along the way, each step towards “zero defects” is celebrated because each defect that does occur is identified as an opportunity for learning.
One core strategy for Perfect Depression Care was organizing care according to the planned care model, a locally tailored version of the chronic care model [7]. We developed a clear vision for how each patient’s care would change in a system of Perfect Depression Care. We partnered with patients to ensure their voice in the redesign of our depression care services. We then conceptualized, designed, and tested strategies for improvement in 4 high-leverage domains (patient partnership, clinical practice, access to care, and information systems), which were identified through mapping our current care processes. Once this new model of care was in place, we implemented relevant measures of care quality and began continually assessing progress and then adjusting the plan as needed (ie, following the Model for Improvement).
Spread to Primary Care
The spread to primary care began in 2005, about 5 years after the initial launch of Perfect Depression Care in BHS. (There had been some previous work done aimed at integrating depression screening into a small number of specialty chronic disease management initiatives, although that work was not sustained.) We based the overall clinical structure on the IMPACT model of integrated behavioral health care [10]. Primary care providers collaborated with depression care managers, typically nurses, who had been trained to provide education to primary care providers and problem solving therapy to patients. The care managers were supervised by a project leader (a full-time clinical psychologist) and supported by 2 full-time psychiatric nurse practitioners who were embedded in each clinic during the early phases of implementation. An electronic medical record (EMR) was comfortably in place and facilitated the delivery of evidence-based depression care, as well as the collection of relevant process and outcome measures, which were fed back to the care teams on a regular basis. And, importantly, the primary care leadership team formally sanctioned depression care to be spread to all 27 primary care clinics.
Overcoming the Challenges of the Primary Care Visits
From 2005 to 2010, the model was spread tenuously to 5 primary care clinics. At that rate (1 clinic per year), it would have taken over 20 years to spread depression care through all 27 primary care clinics. Not satisfied with this progress, we stepped back to consider why adoption was happening so slowly. First, we spoke with leaders. Although the project was on a shoestring budget, our leaders understood the business case for integrating some version of depression care into the primary care setting [11]. They advised limiting the scope of the project to focus only on adults with 1 of 6 chronic diseases: diabetes mellitus, congestive heart failure, coronary artery disease, chronic obstructive pulmonary disease (COPD), asthma, and chronic kidney disease. This narrower focus was aimed at using the project’s limited resources more effectively on behalf of patients who were more frequent utilizers of care and statistically more likely to have a comorbid depressive illness. Through the use of time studies, however, we learned that the time consumed discerning which patients each day were eligible for depression screening created delays in clinic workflow that were untenable. It turned out that the process of screening all patients was far more efficient that the process of identifying which patients “should” be screened and then screening only those who were identified. This pragmatic approach to daily workflow in the clinics was a key driver of successful spread.
Next, we spoke to patients. In an effort to assess patient engagement, we reviewed the records of 830 patients who had been seen in one of the clinics where depression care was up and running. Among this group, less than 1% had declined to receive depression screening. In fact, during informal discussions with patients and clinic staff, patients were thanking their primary care providers for talking with them about depression. When it came to spreading depression care, patient engagement was not the problem.
Finally, we spoke with primary care providers, physicians who were viewed as leaders in their clinics. They described trepidation among their teams about adopting an innovation that would lead to patients being identified as at risk for suicide. Their concern was not that integrating depression care was not the right thing to do in the primary care setting; indeed, they had a strong and genuine desire to provide better depression care for their patients. Their concern was that the primary care clinic was not equipped to manage a suicidal patient safely and effectively. This concern was real, and it was pervasive. After all, the typical primary care office visit was already replete with problem lists too long to be managed effectively in the diminishing amount of time allotted to each visit. Screening for depression would only make matters worse [12]. Furthermore, identifying a patient at risk for suicide was not uncommon in our primary care setting. Between 2006 and 2012, an average of 16% of primary care patients screened each year had reported some degree of suicidal ideation (as measures by a positive response on question 9 of the PHQ-9). These discussions showed us that the model of depression care we were trying to spread into primary care was not designed with an explicit and confident approach to suicide—it was not Perfect Depression Care.
Leveraging Suicide As a Driver of Spread
When we realized that the anxiety surrounding the management of a suicidal patient was the biggest obstacle to Perfect Depression Care spread to primary care, we decided to turn this obstacle into an opportunity. First, an interdisciplinary team developed a practice guideline for managing the suicidal patient in general medical settings. The guideline was based on the World Health Organization’s evidence-based guidelines for addressing mental health disorders in nonspecialized health settings [13] and modified into a single page to make it easy to adopt. Following the guideline was not at all a requirement, but doing so made it very easy to identify patients at potential risk for suicide and to refer them safely and seamlessly to the next most appropriate level of care.
Second, and most importantly, BHS made a formal commitment to provide immediate access for any patient referred by a primary care provider following the practice guideline. BHS pledged to perform the evaluation on the same day as the referral was made and without any questions asked. Delivering on this promise required BHS to develop and implement reliable processes for its ambulatory centers to receive same-day referrals from any one of 27 primary care clinics. Success meant delighting our customers in primary care while obviating the expense and trauma associated with sending patients to local emergency departments. This work was hard. And it was made possible by the culture within BHS of pursuing perfection.
During this time of successful spread, project resources remained similar, no new or additional financial support was provided, and no new leadership directives had been communicated. The only new features of Perfect Depression Care spread were a 1-page practice guideline and a promise. Making suicide an explicit target of the intervention, and doing so in a ruthlessly practical way, created the conditions for the intervention to diffuse and be adopted more readily.
Spread to General Hospitals
In 2006, the Joint Commission established National Patient Safety Goal (NPSG) 15.01.01 for hospitals and health care facilities “to identify patients at risk for suicide” [14]. NPSG 15.01.01 applies not just to patients in psychiatric hospitals, but to all patients “being treated for emotional or behavioral disorders in general hospitals,” including emergency departments. As a measure of safety, suicide is the second most common sentinel event among hospitalized patients—only wrong-site surgery occurs more often. And when a suicide does take place in a hospital, the impact on patients, families, health care workers, and administrators is profound.
Still, completed suicide among hospitalized patients is statistically a very rare event. As a result, general hospitals find it challenging to meet the expectations set forth in NPSG 15.01.01, which seemingly asks hospitals to search for a needle in a haystack. Is it really valuable to ask a patient about suicide when that patient is a 16-year-old teenager who presented to the emergency department for minor scrapes and bruises sustained while skateboarding? Should all patients with “do not resuscitate” orders receive a mandatory, comprehensive suicide risk assessment? In 2010, general hospitals in our organization enlisted our Perfect Depression Care team to help them develop a meaningful approach to NPSG 15.01.01, and so Perfect Depression Care spread to general hospitals began.
The goal of NPSG 15.01.01 is “to identify patients at risk for suicide.” To accomplish this goal, hospital care teams need simple, efficient, evidence-based tools for identifying such patients and responding appropriately to the identified risk. In a general hospital setting, implementing targeted suicide risk assessments is simply not feasible. Assessing every single hospitalized patient for suicide risk seems clinically unnecessary, if not wasteful, and yet the processes needed to identify reliably which patients ought to be assessed end up taking far longer than simply screening everybody. With these considerations in mind, our Perfect Depression Care team took a different approach.
The DAPS Tool
We developed a simple and easy tool to screen, not for suicide risk specifically, but for common psychiatric conditions associated with increased risk of suicide. The Depression, Anxiety, Polysubstance Use, and Suicide screen (DAPS) [15] consists of 7 questions coming from 5 individual evidence-based screening measures: the PHQ-2 for depression, the GAD-2 for anxiety, question 9 from the PHQ-9 for suicidal ideation, the SASQ for problem alcohol use, and a single drug use question for substance use. Each of these questionnaires has been validated as a sensitive screening measure for the psychiatric condition of interest (eg, major depression, generalized anxiety, current problem drinking). Some of them have been validated specifically in general medical settings or among general medical patient populations. Moreover, each questionnaire is valid whether clinician-administered or self-completed. Some have also been validated in languages other than English.
The DAPS tool bundles together these separate screening measures into one easy to use and efficient tool. As a bundle, the DAPS tool offers 3 major advantages over traditional screening tools. First, the tool takes a broader approach to suicide risk with the aim of increasing utility. Suicide is a statistically rare event, especially in general medical settings. On the other hand, psychiatric conditions that themselves increase people’s risk of suicide are quite common, particularly in hospital settings. Rather than screening exclusively for suicidal thoughts and behavior, the DAPS tool screens for psychiatric conditions associated with an increased risk of suicide that are common in general medical settings. This approach to suicide screening is novel. It allows for the recognition of higher number of patients who may benefit from behavioral health interventions, whether or not they are “actively suicidal” at that moment. By not including extensive assessments of numerous suicide risk factors, the DAPS tool offers practical utility without losing much specificity. After all, persons in general hospital settings who at acutely increased risk of suicide (eg, a person admitted to the hospital following a suicide attempt via overdose) are already being identified.
The second advantage of the DAPS tool is that the information it obtains is actionable. Suicide screening tools, whether brief or comprehensive, are not immediately predictive and arrive at essentially the same conclusion—the person screened is deemed to fall into some risk stratification (eg, high, medium, low risk; acute vs non-acute risk). In general hospital settings, the responses to these stratifications are limited (eg, order a sitter, call a psychiatry consultation) and not specific to the level of risk. Furthermore, persons with psychiatric disorders may be at increased risk of suicide even if they deny having suicidal thoughts. The DAPS tool allows for the recognition of these persons, thus identifying opportunities for intervention. For example, a person who screens positive on the PHQ-2 portion of the DAPS but who denies having recent suicidal thoughts or behavior may not benefit from an immediate safety measure (eg, ordering a sitter) but may benefit from an evaluation and, if indicated, treatment for depression. Treating that person’s depression would decrease the longitudinal risk of suicide. If another person screens negative on the PHQ-2 but positive on the SASQ, then that person may benefit most from interventions targeting problem alcohol use, such as the initiation of a CIWA protocol in order to prevent the emergence of alcohol withdrawal during the hospitalization, but not necessarily from depression treatment.
The third main advantage of the DAPS tool is its ease of use. There are a limited number of psychiatrists and other mental health care workers in general hospitals, and that number is not adequate to have all psychiatric screens and assessments in performed by a specialist. The DAPS tool consists of scripted questions that any health care provider can read and follow. This type of instruction may be especially beneficial to health care providers who are unsure or uncomfortable about how to screen patients for suicide or psychiatric disorders. The DAPS tool provides these clinicians with language they can use comfortably when talking with patients. Alternatively, patients themselves can complete the DAPS questions, which frees up valuable time for providers to deliver other types of care. During a pilot project at one of our general hospitals, 20 general floor nurses were asked to implement the DAPS with their patients after receiving only a very brief set of instructions. On average, it took a nurse less than 4 minutes to complete the DAPS. Ninety percent of the nurses stated the DAPS tool would take “less time” or “no additional time” compared with the behavioral health questions in the current nursing admission assessment they were required to complete on every patient. Eighty-five percent found the tool “easy” or “very easy” to use.
At the time of publication of this article, one of our general hospitals is set to roll out DAPS screening hospital wide with the goal of prospectively identifying patients who might benefit from some form of behavioral health intervention and thus reducing length of stay. Another of our general hospitals is already using the DAPS to reduce hospital readmissions [15]. What started out as an initiative simply to meet a regulatory requirement turned into a novel and efficient means to bring mental health care services to hospitalized patients.
Lessons Learned
Our goal in the Perfect Depression Care initiative was to eliminate suicide, and we have come remarkably close to achieving that goal. Our determination to strive for perfection rather than incremental goals had a powerful effect on our results. To move to a different order of performance required us to challenge our most basic assumptions and required new learning and new behavior.
This social aspect of our improvement work was fundamental to every effort made to spread Perfect Depression Care outside of the specialty behavioral health care setting. Indeed, the diffusion of all innovation occurs within a social context [16]. Ideas do not spread by themselves—they are spread from one person (the messenger) to another (the adopter). Successful spread, therefore, depends in large part on the communication between messenger and adopter.
Implementing Perfect Depression Care within BHS involved like-minded messengers and adopters from the same department, whereas spreading the initiative to the general medical setting involved messengers from one specialty and adopters from another. The nature of such a social system demands that the goals of the messenger be aligned with the incentives of the adopter. In health service organizations, such alignment requires effective leadership, not just local champions [17]. For example, spreading the initiative to the primary care setting really only became possible when our departmental leaders made a public promise to the leaders of primary care that BHS would see any patient referred from primary care on the same day of referral with no questions asked. And while it is true that operationalizing that promise was a more arduous task than articulating it, the promise itself is what created a social space within which the innovation could diffuse.
Even if leaders are successful at aligning the messenger’s goals and the adopter’s incentives, spread still must actually occur locally between 2 people. This social context means that a “good” idea in the mind of the messenger must be a “better” idea in the mind of the adopter. In other words, an idea or innovation is more likely to be adopted if it is better than the status quo [18]. And it is the adopter’s definition of “better” that matters. For example, our organization’s primary care clinics agreed that improving their depression care was a good idea. However, specific interventions were not adopted (or adoptable) until they became a way to make daily life easier for the front-line clinic staff (eg, by facilitating more efficient referrals to BHS). Furthermore, because daily life in each clinic was a little bit different, the specific interventions adopted were allowed to vary. Similarly, in the general hospital setting, DAPS screening was nothing more than a good idea until the nurses learned that it took less time and yielded more actionable results than the long list of behavioral health screening questions they were currently required to complete on every patient being admitted. When replacing those questions with the DAPS screen saved time and added value, the DAPS became better than the status quo, a tipping point was reached, and spread took place.
Future Spread
The 2 examples of Perfect Depression Care Spread described herein are testaments to the social traction of “zero suicides.” Importantly, the success of each effort has hinged on its creative, practical approach to suicide prevention, even though there is scant scientific evidence to support suicide prevention initiatives in general medical settings [19].
As it turns out, there is also little scientific knowledge about how innovations in health service organizations are successfully sustained [16]. It is our hope that the 15 years of Perfect Depression Care shed some light on this question, and that the initiative can continue to be sustained in today’s turbulent and increasingly austere health care environment. We are confident that we will keep improving as long as we keep learning.
In addition, we find tremendous inspiration in the many others who are learning and improving with us. In 2012, for instance, the US Surgeon General promoted the adoption of “zero suicides” as a national strategic objective [1]. And in 2015, the Deputy Prime Minister of the United Kingdom called for the adoption of “zero suicides” across the entire National Health Service [20]. As the Perfect Depression Care team continues to grow, the pursuit of perfection becomes even more stirring.
Acknowledgment: The author acknowledges Brian K. Ahmedani, PhD, Charles E. Coffey, MD, MS, C. Edward Coffey, MD, Terri Robertson, PhD, and the entire Perfect Depression Care team.
Corresponding author: M. Justin Coffey, MD, The Menninger Clinic, 12301 S. Main St., Houston, TX 77035, [email protected].
Financial disclosures: None.
1. U.S. Department of Health and Human Services (HHS) Office of the Surgeon General and National Action Alliance for Suicide Prevention. 2012 National Strategy for Suicide Prevention: goals and objectives for action. Washington, DC: HHS; 2012.
2. Druss BG, Walker ER. Mental disorders and medical comorbidity: research synthesis report no. 21. Robert Wood Johnson Foundation 2011.
3. Committee on Quality Health Care in America, Institute of Medicine. Crossing the Quality Chasm. Washington, DC: National Academy Press; 2001.
4. Coffey CE, Coffey MJ, Ahmedani BK. An update on Perfect Depression Care. Psychiatric Services 2013;64:396.
5. Robert Wood Johnson Foundation. Pursuing Perfection: Raising the bar in health care performance. Robert Wood Johnson Foundation; 2014.
6. Marx D. Patient safety and the “just culture”: a primer for health care executives. New York: Columbia University; 2001.
7. Coleman K, Austin BT, Brach C, Wagner EH. Evidence on the chronic care model in the new millennium. Health Aff 2009;28:75–85.
8. Coffey CE. Building a system of perfect depression care in behavioral health. Jt Comm J Qual Patient Saf 2007;33:193–9.
9. Hampton T. Depression care effort brings dramatic drop in large HMO population’s suicide rate. JAMA 2010;303: 1903–5.
10. Unützer J, Powers D, Katon W, Langston C. From establishing an evidence-based practice to implementation in real-world settings: IMPACT as a case study. Psychiatr Clin North Am 2005;28:1079–92.
11. Melek SP, Norris DT, Paulus J. Economic impact of integrated medical-behavioral healthcare: implications for psychiatry. Milliman; 2014.
12. Schmitt MR, Miller MJ, Harrison DL, Touchet BK. Relationship of depression screening and physician office visit duration in a national sample. Psychiatr Serv 2010;61:1126–31.
13. mhGAP intervention guide for mental, neurological, and substance use disorders in non-specialized health settings: Mental Health Gap Action Programme (mhGAP). World Health Organization; 2010.
14. National Patient Safety Goals 2008. The Joint Commission. Oakbrook Terrace, IL.
15. Coffey CE, Johns J, Veliz S, Coffey MJ. The DAPS tool: an actionable screen for psychiatric risk factors for rehospitalization. J Hosp Med 2012;7(suppl 2):S100–101.
16. Greenhalgh T, Robert G, Macfarlane F, et al. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q 2004;82:581–629.
17. Berwick DM. Disseminating innovations in health care. JAMA 2003;289:1969–75.
18. Rogers EM. Diffusion of innovations. 4th ed. New York: The Free Press; 1995.
19. LeFevre MF. Screening for suicide risk in adolescents, adults, and older adults in primary care: US Preventive Services Task Force Recommendation Statement. Ann Intern Med 2014;160:719–26.
20. Clegg N. Speech at mental health conference. Available at www.gov.uk/government/speeches/nick-clegg-at-mental-health-conference.
Cost Drivers Associated with Clostridium difficile-Associated Diarrhea in a Hospital Setting
From HealthCore, Wilmington, DE, and Cubist Pharmaceuticals, San Diego, CA.
Abstract
- Objectives: To describe trends in inpatient resource utilization and potential cost drivers of Clostridium difficile-associated diarrhea (CDAD) treated in the hospital.
- Methods: Retrospective medical record review included 500 patients with ≥1 inpatient medical claim diagnosis of CDAD (ICD-9-CM: 008.45) between 01/01/2005-10/31/2010. Information was collected on patient demographics, admission diagnoses, laboratory data, and CDAD-related characteristics and discharge. Hospital costs were evaluated for the entire inpatient episode and prorated for the duration of the CDAD episode (ie, CDAD diagnosis date to diarrhea resolution/discharge date).
- Results: The cohort was mostly female (62%) and Caucasian (72%), with a mean (SD) age of 66 (±17.6) years. 60% had a diagnosis of CDAD or presence of diarrhea at admission. CDAD diagnosis was confirmed by laboratory test in 92% of patients. Approximately 44% had mild CDAD and 35% had severe CDAD. Following CDAD diagnosis, approximately 53% of patients were isolated for ≥1 day, and 12% were transferred to the ICU for a median (Q1–Q3) length of stay of 8 (5–15) days. Two-thirds received a gastrointestinal or infectious disease consultation. Median time from CDAD diagnosis to discharge was 6 (4–9) days: 5.5 (4–8) days for patients admitted with CDAD and 6.5 (4–10) days for those with hospital-acquired CDAD. The mean and median costs (2011 USD) for CDAD-associated hospitalization were $35,621 and $13,153, respectively.
- Conclusion: Patients with CDAD utilize numerous expensive resources during hospitalization including laboratory tests, isolation, prolonged ICU stay, and specialist consultations.
Clostridium difficile, classified as an urgent public health threat by the Centers for Disease Control and Prevention (CDC), causes approximately 250,000 hospitalizations and an estimated 14,000 deaths per year in the United States [1]. An estimated 15% to 25% of patients with C. difficile-associated diarrhea (CDAD) will experience at least 1 recurrence [2-4], frequently requiring rehospitalization [5]. The high incidence of primary and recurrent infections contributes to a substantial burden associated with CDAD in terms of extended and repeat hospital stays [6,7].
Conservative estimates of the direct annual costs of CDAD in the United States over the past 15 years range from $1.1 billion [8] to $3.2 billion, with an average cost per stay of $10,212 for patients hospitalized with a principal diagnosis of CDAD or a CDAD-related symptom [5]. O’Brien et al estimated that costs associated with rehospitalizations accounted for 11% of overall CDAD-related hospital costs; when considering all CDAD-related hospitalizations, including both initial and subsequent rehospitalizations for recurrent infection and not accounting for post-acute or outpatient care, the 2-year cumulative cost was estimated to be $51.2 million. While studies have yielded varying assessments of the actual CDAD burden [5–10], they all suggest that CDAD burden is considerable and that extended hospital stays are the major component of CDAD-associated costs [9,10]. In a claims-based study by Quimbo et al [11], when multiple and diverse cohorts of CDAD patients at elevated risk for recurrence were matched with patients with similar underlying at-risk condition(s) but no CDAD, the CDAD at-risk groups had an incremental length of stay (LOS) per hospitalization ranging from approximately 3 to 18 days and an incremental cost burden ranging from a mean of $11,179 to $115,632 (2011 USD) per stay.
While it is recognized that CDAD carries significant cost burden and is driven by LOS, current literature is lacking regarding the characteristics of these hospital stays. Building on the Quimbo et al study, the current study was designed to probe further into the nature of the burden (ie, resource use) incurred during the course of CDAD hospitalizations. As such, the objective of this study was to identify the common trends in hospital-related resource utilization and describe the potential drivers that affect the cost burden of CDAD using hospital medical record data.
Methods
Population
Patients were selected for this retrospective medical record review from the HealthCore Integrated Research Database (HealthCore, Wilmington, DE). The database contains a broad, clinically rich, and geographically diverse spectrum of longitudinal claims information from one of the largest commercially insured populations in the United States, representing 48 million lives. We identified 21,177 adult (≥ 18 years) patients with at least 1 inpatient claim with an International Classification of Diseases, 9th Edition, Clinical Modification (ICD-9-CM) diagnosis code for C. difficile infection (CDI; 008.45) between 1 January 2005 and 31 October 2010 (intake period). All patients had at least 12 months of continuous medical and pharmacy health plan eligibility prior to the incident CDAD-associated hospitalization within the database. Additional details regarding this cohort identification have been published previously [11]. The study was undertaken in accordance with Health Insurance Portability and Accountability Act (HIPAA) guidelines, and the necessary central institutional review board approval was obtained prior to medical record identification and abstraction.
Sampling Strategy
Medical Record Abstraction
During the record abstraction process, information was collected on patients’ race/ethnicity, body mass index (BMI), admission diagnosis and other conditions, point of entry and prior location, body temperature and laboratory data (eg, creatinine and albumin values, white blood cell [WBC] count), diarrhea and stool characteristics, CDAD diagnosis date, CDAD-specific characteristics, severity, complications, and related tests/procedures, CDAD treatments (eg, dose, duration, and formulation of medications), hospital LOS, including stays in the intensive care unit (ICU) or cardiac care unit (CCU) following CDAD diagnosis, consultations provided by gastrointestinal, infectious disease, intensivist, or surgical specialists, and the discharge summary on disposition, CDAD status, and medications prescribed. Trained nurses or pharmacists used standardized data collection forms to collect information from the medical records, and inter-rater reliability testing with a 0.9 cutoff was required to confirm accuracy. To ensure consistency, the first 20 abstracted records were re-abstracted by the research team as a pilot test. Last, quality checks were implemented throughout the abstraction process to identify inconsistencies and data entry errors, including coding errors and atypical or unrealistic data entry patterns (eg, identical values for a particular data field entered on multiple records, implausible or erratic inputs, or a high percentage of missing data points). Missing data were not imputed.
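The kinds of automated checks described above lend themselves to a simple scripted pass over the abstracted data. The sketch below is illustrative only; the function name, field names, and thresholds are assumptions and are not taken from the study's abstraction tooling.

```python
import pandas as pd

def run_quality_checks(df: pd.DataFrame, max_missing_pct: float = 0.30) -> dict:
    """Flag abstraction fields with suspicious patterns (illustrative sketch).

    Mirrors the checks described in the text: a single value repeated across
    many records, and fields with a high percentage of missing data. The
    thresholds are assumptions, not values used in the study.
    """
    flags = {"repeated_values": [], "high_missing": []}
    for col in df.columns:
        non_null = df[col].dropna()
        # Flag a field if one value accounts for >90% of non-missing entries
        if len(non_null) > 0 and non_null.value_counts(normalize=True).iloc[0] > 0.90:
            flags["repeated_values"].append(col)
        # Flag a field if the share of missing entries exceeds the cutoff
        if df[col].isna().mean() > max_missing_pct:
            flags["high_missing"].append(col)
    return flags

# Hypothetical example: one constant field and one mostly missing field
records = pd.DataFrame({
    "wbc_count": [8200, 8200, 8200, 8200, 8200],
    "albumin": [None, None, None, 3.1, None],
})
print(run_quality_checks(records))
```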
Study Definitions
Diarrhea was defined as 3 or more unformed (including bloody, watery, thin, loose, soft, and/or unformed stool) bowel movements per day. CDAD severity was classified as mild (4–5 unformed bowel movements per day or WBC ≤ 12,000/mm3), moderate (6–9 unformed bowel movements per day or WBC between 12,001/mm3 and 15,000/mm3), or severe (≥ 10 unformed bowel movements per day or WBC ≥ 15,001/mm3) [12,13]. Diarrhea was considered resolved when the patient had no more than 3 unformed stools per day for 2 consecutive days, with resolution maintained through the completion of treatment and no additional therapy required for CDAD as of the second day after the end of the course of therapy [2,14]. A CDAD episode was defined as the duration from the date of CDAD diagnosis or confirmation (whichever occurred first) to the date of diarrhea resolution (where documented) or the discharge date.
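For illustration, the severity rule above can be expressed as a small classification function. The sketch below is a minimal interpretation that assigns the higher severity level when the bowel-movement and WBC criteria point to different levels; the function name and that tie-breaking choice are assumptions, not details reported by the authors.

```python
def classify_cdad_severity(unformed_bm_per_day: int, wbc_per_mm3: int) -> str:
    """Classify CDAD severity using the bowel-movement/WBC rule described above.

    The case is assigned the highest level triggered by either criterion; this
    tie-breaking choice is an assumption, since the text does not specify how
    conflicting criteria were reconciled.
    """
    if unformed_bm_per_day >= 10 or wbc_per_mm3 >= 15_001:
        return "severe"
    if 6 <= unformed_bm_per_day <= 9 or 12_001 <= wbc_per_mm3 <= 15_000:
        return "moderate"
    if 4 <= unformed_bm_per_day <= 5 or wbc_per_mm3 <= 12_000:
        return "mild"
    return "unclassified"

# Example: 7 unformed stools per day with a WBC of 13,500/mm3 maps to "moderate"
print(classify_cdad_severity(7, 13_500))
```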
Cost Measures
The total hospital health plan–paid costs for the entire inpatient episode (including treatment costs, diagnostics, services provided, etc.) were estimated using the medical claims in the database pertaining to the hospitalization from which the medical records were abstracted. The proportionate amount for the duration of the CDAD episode (from CDAD diagnosis to the diarrhea resolution date, or to the discharge date in cases where the resolution date could not be ascertained) was then calculated to estimate the average CDAD-associated in-hospital costs.
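As a worked illustration of the proration step, the sketch below scales the total claims-paid cost by the fraction of the stay occupied by the CDAD episode. This linear day-count proration is one plausible reading of "proportionate amount" and is an assumption, not a method confirmed by the authors; the dates and amounts in the example are hypothetical.

```python
from datetime import date
from typing import Optional

def prorate_cdad_cost(total_paid_cost: float,
                      admission: date,
                      discharge: date,
                      cdad_diagnosis: date,
                      diarrhea_resolution: Optional[date]) -> float:
    """Scale the total hospitalization cost by the share of days in the CDAD episode.

    The episode runs from CDAD diagnosis to diarrhea resolution, or to discharge
    when no resolution date is documented (mirroring the definition above).
    The day-count convention and linear proration are assumptions.
    """
    episode_end = diarrhea_resolution or discharge
    episode_days = max((episode_end - cdad_diagnosis).days, 1)
    total_days = max((discharge - admission).days, 1)
    return total_paid_cost * episode_days / total_days

# Hypothetical example: a $30,000 stay of 10 days with a 6-day CDAD episode -> $18,000
print(prorate_cdad_cost(30_000, date(2010, 3, 1), date(2010, 3, 11),
                        date(2010, 3, 5), date(2010, 3, 11)))
```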
Analysis
Means (± standard deviation [SD]) and medians (interquartile range, Q1 to Q3) were calculated for continuous data, and relative frequencies were calculated for categorical data. This analysis was descriptive in nature; hence, no statistical tests of significance were conducted.
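A minimal sketch of the descriptive summaries named above, using pandas; the dataset and column names are hypothetical.

```python
import pandas as pd

# Hypothetical abstracted dataset: one row per patient
df = pd.DataFrame({
    "age": [72, 55, 66, 81, 60],
    "cdad_episode_days": [6, 4, 9, 15, 5],
    "severity": ["mild", "severe", "mild", "moderate", "mild"],
})

# Continuous variables: mean (SD) and median (Q1-Q3)
for col in ["age", "cdad_episode_days"]:
    q1, q3 = df[col].quantile([0.25, 0.75])
    print(f"{col}: mean {df[col].mean():.1f} (SD {df[col].std():.1f}), "
          f"median {df[col].median():.1f} (Q1-Q3 {q1:.1f}-{q3:.1f})")

# Categorical variables: relative frequencies
print(df["severity"].value_counts(normalize=True).round(2))
```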
Results
We had a 55.3% success rate in obtaining medical records from the contacted hospitals; refusal by the hospital to participate or provide consent was the most frequent reason for failure, accounting for 3 out of 4 unsuccessful requests. An additional 39.3% attrition was observed among the medical records received, with absence of a MAR form (23.9%) and absence of a confirmatory CDAD diagnosis or note (9.1%) being the most frequent reasons for discarding an available record prior to abstraction (Figure).
Patient Characteristics
CDAD Characteristics and Complications
Using a derived definition of severity, most CDAD cases were classified as either mild (approximately 44%) or severe (35%).
CDAD-Related Resource Utilization
Following CDAD diagnosis, more than half of the study patients were isolated for 1 or more days. While the majority of patients with CDAD (74.0%) stayed in a general hospital room, 12.4% stayed in the ICU for a mean duration of 12.1 (± 12.3) days (Table 3). Half of these ICU patients required
About one-third of patients consulted a gastrointestinal (GI) or infectious disease (ID) specialist at least once. Among these patients, assuming that a patient would have follow-up visits (formal or informal) at least once a day after the initial specialist consultation for the remainder of the CDAD episode, we estimate an average of 8.7 (± 15.6) GI and 11.6 (± 19.4) ID specialist visits during the CDAD episode.
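The visit estimates above follow directly from the stated assumption of one follow-up contact per day after the initial consultation. The sketch below shows that arithmetic; the function name and dates are hypothetical.

```python
from datetime import date

def estimate_specialist_visits(first_consult: date, episode_end: date) -> int:
    """Estimate specialist visits as the initial consultation plus one assumed
    follow-up contact (formal or informal) per day until the CDAD episode ends.

    This mirrors the estimation assumption stated in the text rather than
    chart-level visit counts.
    """
    follow_up_days = max((episode_end - first_consult).days, 0)
    return 1 + follow_up_days

# Hypothetical example: first GI consult on day 2 of an episode ending on day 9 -> 8 visits
print(estimate_specialist_visits(date(2010, 6, 2), date(2010, 6, 9)))
```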
Nearly all patients had their CDAD diagnosis confirmed by laboratory tests. CDAD virulence was identified as toxin A and/or toxin B in 47.6% of the samples. However, nearly three-fifths of patients also underwent 1 or more nondiagnostic tests including endoscopy, colonoscopy, computed axial tomography (CAT), or magnetic resonance imaging (MRI) scans, sigmoidoscopy, and/or other obstructive series tests during the CDAD episode.
CDAD Treatment
CDAD at Discharge
Hospitalization Costs
Based on claims data, the mean (±SD) and median (Q1–Q3) plan costs for the duration of a CDAD-associated hospitalization (2011 USD) for these 500 patients were found to be $35,621 (± $100,502) and $13,153 ($8,209–$26,893), respectively.
Discussion
While multiple studies have documented the considerable economic burden associated with CDAD [5–10], this study was the first to our knowledge to evaluate the specific hospital resources that are used during an extended hospital stay for CDAD. This real-world analysis, in conjunction with the Quimbo et al claims analysis, demonstrated the significant burden associated with CDAD in terms of both fixed costs (eg, hospital stay) as well as the variable components that drive these expenditures (eg, consultations, ICU stay).
The mean ($35,621) and median ($13,153) total costs associated with the CDAD segment of the hospitalization, as measured via claims, were quite high despite the greater prevalence of mild rather than severe CDAD and the fact that most patients required only a general hospital room stay. Both cost measures were well above the mean US general hospitalization cost of $11,666 and the median cost of $7334 derived from Healthcare Cost and Utilization Project data [15]. However, the mean cost of hospitalization reported in the current study falls within the range of previously reported costs for CDAD-associated hospitalizations [5,8,10]. While the mean cost may have been disproportionately inflated by a few extreme cases, the median CDAD-associated hospitalization cost was nearly twice the median cost of an average general hospital stay in the US [15]. Our finding that these elevated costs were observed even among patients with mild CDAD, and their relative magnitude compared with average hospitalization costs (approximately 3-fold higher), were also consistent with the literature. For instance, Pakyz and colleagues reported that relative to patients without CDAD, hospital costs were tripled for patients with low-severity CDAD and 10% higher for those with more severe CDAD, presumably because CDAD resulted in costly complications that prolonged what would have otherwise been a short, simple hospital stay [10].
Type of hospital room could also be an important driver of cost. While most patients stayed in general hospital rooms, more than half were isolated for at least a day, and 12% of patients required nearly 2 weeks of intensive care. Taken together, 26% of patients in the current study were required to stay in a special care unit or a non–general hospital room for 5.5 to 12.2 days. This is consistent with the 28% of patients with CDAD who required a stay in a special care unit as previously reported by O’Brien et al [5]. Additionally, previous research from Canadian health care data has shown that a single ICU stay costs an average of $7000 more per patient per day than a general hospital room (1992 Canadian dollars), or $9589 (2013 USD, calculated using historical exchange rate data and adjusted for inflation) [16]. However, despite this additional cost and resource burden, it appears that overall only 53.4% of all patients received care within an isolated setting as guidelines recommend.
Repeated specialist visits, procedures, and multiple tests (concomitant diagnostic enzyme immunoassay [EIA] and nondiagnostic tests) potentially added to health care resource utilization and costs, along with the extra resources associated with specialized hospital care. We found that roughly one-third of patients consulted a specialist, although we did not distinguish between ‘formal’ and ‘informal’ consultations. Numerous studies published over the past 2 decades have demonstrated increased costs and resource utilization associated with specialist consultations [17–21]. Although the focused knowledge and experience of specialists may reduce morbidity and mortality [18,21], specialists are more likely than generalists to order more diagnostic tests, perform more procedures, and keep patients hospitalized longer and in ICUs, all of which contribute to higher costs without necessarily leading to improved health outcomes [21].
Limitations
One major limitation of this study was the inability to assess the individual costs of the resources used for each patient, either through the medical charts or via claims. Additionally, the burden of CDAD was found to continue beyond the hospital stay, with documented evidence of persisting infection in 84% of patients at the point of discharge. Since the medical records obtained were limited to a single hospitalization and a single place of service, the data capture of an entire CDAD episode remains potentially incomplete for patients who had recurrences or who visited multiple sites of care in addition to the hospital (ie, emergency department or outpatient facility). The transition to outpatient care is often multifaceted and challenging for patients, especially those who are elderly and have multiple underlying conditions [18]. Access to care becomes more difficult, and patients become wholly responsible for taking their medication as prescribed and following other post-discharge treatment strategies. Furthermore, no differentiation was made between patients with a primary versus a secondary CDAD diagnosis.
Another limitation is that the costs of hospitalization were calculated from claims and as such do not include patient-paid costs (eg, deductibles) or indirect costs (eg, lost work, lost productivity, or caregiver costs) due to CDAD. This study therefore likely underestimates the true costs associated with CDAD. Finally, the patients included in this analysis were all members of large commercial health plans in the US and, as such, were generally working and relatively healthy. Therefore, these results may not be generalizable to patients with other types of health insurance or no insurance, or to those living outside of the United States.
It is important to note that the trends and drivers described in this study are “potential” influencers contributing to the burden of CDAD. Given that this study is descriptive in nature, formal analyses aimed at confirming these factors as “drivers” should be conducted in the future. CDAD-related hospitalizations have previously been shown to be associated with increased inpatient LOS and a substantial economic burden. Our study demonstrates that the CDAD-associated cost burden in hospital settings may be driven by the use of numerous high-cost hospital resources, including prolonged ICU stays, isolation, frequent GI and ID consultations, CDAD-related nondiagnostic tests/procedures, and symptomatic CDAD treatment.
Acknowledgments: The authors acknowledge Cheryl Jones for her editorial assistance in preparing this manuscript.
Corresponding author: Swetha Rao Palli, CTI Clinical Trial and Consulting, 1775 Lexington Ave, Ste. 200, Cincinnati, OH 45209, [email protected]
Funding/support: Funding for this study was provided by Cubist Pharmaceuticals.
Financial disclosures: Ms. Palli and Mr. Quimbo are former and current employees of HealthCore, respectively. HealthCore is an independent research organization that received funding from Cubist Pharmaceuticals for the conduct of this study. Dr. Broderick is an employee of Cubist Pharmaceuticals. Ms. Strauss was an employee of Optimer Pharmaceuticals during the time the study was carried out.
1. Centers for Disease Control and Prevention. Antibiotic resistance threats in the United States, 2013. Available at: www.cdc.gov/drugresistance/threat-report-2013/pdf/ar-threats-2013-508.pdf. Accessed March 6, 2013.
2. Louie TJ, Miller MA, Mullane KM, et al. Fidaxomicin versus vancomycin for Clostridium difficile infection. N Engl J Med 2011;364:422–31.
3. Lowy I, Molrine DC, Leav BA, et al. Treatment with monoclonal antibodies against Clostridium difficile toxins. N Engl J Med 2010;362:197–205.
4. Bouza E, Dryden M, Mohammed R, et al. Results of a phase III trial comparing tolevamer, vancomycin and metronidazole in patients with Clostridium difficile-associated diarrhoea [ECCMID abstract O464]. Clin Microbiol Infect 2008;14(Suppl s7):S103–4.
5. O’Brien JA, Betsy JL, Caro J, Davidson DM. The emerging infectious challenge of Clostridium difficile-associated disease in Massachusetts hospitals: clinical and economic consequences. Infect Control Hosp Epidemiol 2007;28:1219–27.
6. Dubberke ER, Wertheimer AI. Review of current literature on the economic burden of Clostridium difficile infection. Infect Control Hosp Epidemiol 2009;30:57–66.
7. Ghantoji SS, Sail K, Lairson DR, et al. Economic healthcare costs of Clostridium difficile infection: a systematic review. J Hosp Infect 2010;74:309–18.
8. Kyne L, Hamel MB, Polavaram R, Kelly CP. Health care costs and mortality associated with nosocomial diarrhea due to Clostridium difficile. Clin Infect Dis 2002;34:346–53.
9. Forster AJ, Taljaard M, Oake N, et al. The effect of hospital-acquired infection with Clostridium difficile on length of stay in hospital. CMAJ 2012;184:37–42.
10. Pakyz A, Carroll NV, Harpe SE, et al. Economic impact of Clostridium difficile infection in a multihospital cohort of academic health centers. Pharmacotherapy 2011;31:546–51.
11. Quimbo RA, Palli SR, Singer J, et al. Burden of Clostridium difficile-associated diarrhea among hospitalized patients at high risk of recurrent infection. J Clin Outcomes Manag 2013;20:544–54.
12. Golan Y, Mullane KM, Miller MA, et al. Low recurrence rate among patients with C. difficile infection treated with fidaxomicin. Poster presented at: 49th Annual Interscience Conference on Antimicrobial Agents and Chemotherapy; 12–15 Sep 2009; San Francisco, CA.
13. Lewis SJ, Heaton KW. Stool form scale as a useful guide to intestinal transit time. Scand J Gastroenterol 1997;32:920–4.
14. Cornely OA, Crook DW, Esposito R, et al. Fidaxomicin versus vancomycin for infection with Clostridium difficile in Europe, Canada, and the USA: a double-blind, non-inferiority, randomised controlled trial. Lancet Infect Dis 2012;12:281–9.
15. Palli SR, Strauss M, Quimbo RA, et al. Cost drivers associated with Clostridium-difficile infection in a hospital setting. Poster presented at American Society of Health System Pharmacists Midyear Clinical Meeting; December 2012; Las Vegas, NV.
16. Noseworthy TW, Konopad E, Shustack A, et al. Cost accounting of adult intensive care: methods and human and capital inputs. Crit Care Med 1996;24:1168–72.
17. Classen DC, Burke JP, Wenzel RP. Infectious diseases consultation: impact on outcomes for hospitalized patients and results of a preliminary study. Clin Infect Dis 1997;24:468–70.
18. Petrak RM, Sexton DJ, Butera ML, et al. The value of an infectious diseases specialist. Clin Infect Dis 2003;36:1013–7.
19. Sellier E, Pavese P, Gennai S, et al. Factors and outcomes associated with physicians’ adherence to recommendations of infectious disease consultations for patients. J Antimicrob Chemother 2010;65:156–62.
20. Jollis JG, DeLong ER, Peterson ED, et al. Outcome of acute myocardial infarction according to the specialty of the admitting physician. N Engl J Med 1996;335:1880–7.
21. Harrold LR, Field TS, Gurwitz JH. Knowledge, patterns of care, and outcomes of care for generalists and specialists. J Gen Intern Med 1999;14:499–511.
From HealthCore, Wilmington, DE, and Cubist Pharma-ceuticals, San Diego, CA.
Abstract
- Objectives: To describe trends in inpatient resource utilization and potential cost drivers of Clostridium difficile-associated diarrhea (CDAD) treated in the hospital.
- Methods: Retrospective medical record review included 500 patients with ≥1 inpatient medical claim diagnosis of CDAD (ICD-9-CM: 008.45) between 01/01/2005-10/31/2010. Information was collected on patient demographics, admission diagnoses, laboratory data, and CDAD-related characteristics and discharge. Hospital costs were evaluated for the entire inpatient episode and prorated for the duration of the CDAD episode (ie, CDAD diagnosis date to diarrhea resolution/discharge date).
- Results: The cohort was mostly female (62%), Caucasian (72%), with mean (SD) age 66 (±17.6) years. 60% had diagnosis of CDAD or presence of diarrhea at admission. CDAD diagnosis was confirmed with laboratory test in 92% of patients. ~44% had mild CDAD, 35% had severe CDAD. Following CDAD diagnosis, approximately 53% of patients were isolated for ≥1 days, 12% transferred to the ICU for a median (Q1–Q3) length of stay of 8 (5–15) days. Two-thirds received gastrointestinal or infectious disease consult. Median time from CDAD diagnosis to discharge was 6 (4–9) days; 5.5 (4–8) days for patients admitted with CDAD, 6.5 (4–10) days for those with hospital-acquired CDAD. The mean and median costs (2011 USD) for CDAD-associated hospitalization were $35,621 and $13,153, respectively.
- Conclusion: Patients with CDAD utilize numerous expensive resources during hospitalization including laboratory tests, isolation, prolonged ICU stay, and specialist consultations.
Clostridium difficile, classified as an urgent public health threat by the Centers for Disease Control and Prevention (CDC), causes approximately 250,000 hospitalizations and an estimated 14,000 deaths per year in the United States [1]. An estimated 15% to 25% of patients with C. difficile-associated diarrhea (CDAD) will experience at least 1 recurrence [2-4], frequently requiring rehospitalization [5]. The high incidence of primary and recurrent infections contributes to a substantial burden associated with CDAD in terms of extended and repeat hospital stays [6,7].
Conservative estimates of the direct annual costs of CDAD in the United States over the past 15 years range from $1.1 billion [8] to $3.2 billion, with an average cost per stay of $10,212 for patients hospitalized with a principal diagnosis of CDAD or a CDAD-related symptom [5]. O’Brien et al estimated that costs associated with rehospitalizations accounted for 11% of overall CDAD-related hospital costs;when considering all CDAD-related hospitalizations, including both initial and subsequent rehospitalizations for recurrent infection and not accounting for post-acute or outpatient care, the 2-year cumulative cost was estimated to be $51.2 million. While studies have yielded varying assessments of the actual CDAD burden [5–10], they all suggest that CDAD burden is considerable and that extended hospital stays are the major component of CDAD-associated costs [9,10]. In a claims-based study by Quimbo et al [11], when multiple and diverse cohorts of CDAD patients at elevated risk for recurrence were matched with patients with similar underlying at-risk condition(s) but no CDAD, the CDAD at-risk groups had an incremental LOS per hospitalization ranging from approximately 3 to 18 days and an incremental cost burden ranging from a mean of $11,179 to $115,632 (2011 USD) per stay.
While it is recognized that CDAD carries significant cost burden and is driven by LOS, current literature is lacking regarding the characteristics of these hospital stays. Building on the Quimbo et al study, the current study was designed to probe further into the nature of the burden (ie, resource use) incurred during the course of CDAD hospitalizations. As such, the objective of this study was to identify the common trends in hospital-related resource utilization and describe the potential drivers that affect the cost burden of CDAD using hospital medical record data.
Methods
Population
Patients were selected for this retrospective medical record review from the HealthCore Integrated Research Database (HealthCore, Wilmington, DE). The database contains a broad, clinically rich and geographically diverse spectrum of longitudinal claims information from one of the largest commercially insured populations in the United States, representing 48 million lives. We identified 21,177 adult (≥ 18 years) patients with at least 1 inpatient claim with an International Classification of Diseases, 9th Edition, Clinical Modification (ICD-9-CM) diagnosis code for C. difficile infection (CDI; 008.45) between 1 January 2005 and 31 October 2010 (intake period). All patients had at least 12 months of prior and continuous medical and pharmacy health plan eligibility prior to the incident CDAD-associated hospitalization within the database. Additional details regarding this cohort identification has been published previously [11]. The study was undertaken in accordance with Health Insurance Portability and Accountability Act (HIPAA) guidelines and the necessary central institutional review board approval was obtained prior to medical record identification and abstraction.
Sampling Strategy
Medical Record Abstraction
During the record abstraction process, information was collected on patients’ race/ethnicity, body mass index (BMI), admission diagnosis and other conditions, point of entry and prior location, body temperature and laboratory data (eg, creatinine and albumin values, white blood cell [WBC] count), diarrhea and stool characteristics, CDAD diagnosis date, CDAD-specific characteristics, severity, complications, and related tests/procedures, CDAD treatments (eg, dose, duration, and formulation of medications), hospital LOS, including stays in the intensive care unit (ICU), cardiac care unit (CCU) following CDAD diagnosis; consultations provided by gastrointestinal, infectious disease, intensivists, or surgery care specialists, and discharge summary on disposition, CDAD status, and medications prescribed. Standardized data collection forms were used by trained nurses or pharmacists to collect information from the medical records and inter-rater reliability testing with a 0.9 cutoff was required to confirm accuracy. To ensure consistency, a pilot test of the first 20 abstracted records were re-abstracted by the research team. Last, quality checks were implemented throughout the abstraction process to identify any inconsistencies or data entry errors including coding errors and atypical, unrealistic data entry patterns (eg, identical values for a particular data field entered on multiple records; implausible or erratic inputs; or a high percentage of missing data points). Missing data were not imputed.
Study Definitions
Diarrhea was defined as 3 or more unformed (includes bloody, watery, thin, loose, soft, and/or unformed stool) bowel movements per day.CDAD severity was classified as mild (4–5 unformed bowel movements per day or WBC ≤ 2000/mm3); moderate (6–9 unformed bowel movements per day or WBC between 12,001/mm3 and 15,000/mm3); or severe (≥10 unformed bowel movements per day or WBC ≥15,001/mm3) [12,13]. Diarrhea was considered to be resolved when the patient had no more than 3 unformed stools for 2 consecutive days and lasting until treatment was completed, with no additional therapy required for CDAD as of the second day after the end of the course of therapy [2,14].CDAD episode was defined as the duration from the date of the CDAD diagnosis or confirmation (whichever occurred first), to the date of diarrhea resolution (where documented) or discharge date.
Cost Measures
The total hospital health plan paid costs for the entire inpatient episode (includes treatment costs, diagnostics, services provided, etc.) were estimated using medical claims present in the database and pertaining to the hospitalization from where medical records were abstracted. Then the proportionate amount for the duration of the CDAD episode (from CDAD diagnosis to the diarrhea resolution date or the discharge date in cases where the resolution date could not be ascertained) was calculated to estimate the average CDAD associated in-hospital costs.
Analysis
Means (± standard deviation [SD]), medians (interquartile range Q1 to Q3), and relative frequencies were calculated for continuous and categorical data, respectively. This analysis was descriptive in nature; hence, no statistical tests to determine significance were conducted.
Results
We had a 55.3% success rate in obtaining the medical records from the contacted hospitals with refusal to participate/consent by the hospital in question being the most frequent reason for failure in 3 out of 4 cases. An additional attrition of 39.3% was observed among the medical records received, with absence of a MAR form (23.9%) and confirmatory CDAD diagnosis or note (9.1%) being the most frequent criteria for discarding an available record prior to abstraction (Figure).
Patient Characteristics
CDAD Characteristics and Complications
Using a derived definition of severity, most CDAD cases were classified either as
CDAD-Related Resource Utilization
Following CDAD diagnosis, more than half of the study patients were isolated for 1 or more days. While the majority of patients with CDAD (74.0%) stayed in a general hospital room, 12.4% stayed in the ICU for a mean duration of 12.1 (± 12.3) days (Table 3). Half of these ICU patients required
About one-third of patients consulted a gastrointestinal or infectious disease specialist at least once. Among these patients, assuming that a patient following an initial specialist consultation would have follow-up visits at least once a day (formal or informal) for the remainder of the CDAD episode, we estimate that there were an average of 8.7 (± 15.6) and 11.6 (± 19.4) GI or ID specialist visits respectively during the CDAD episode.
Nearly all patients had their CDAD diagnosis confirmed by laboratory tests. CDAD virulence was identified as toxin A and/or toxin B in 47.6% of the samples. However, nearly three-fifths of patients also underwent 1 or more nondiagnostic tests including endoscopy, colonoscopy, computed axial tomography (CAT), or magnetic resonance imaging (MRI) scans, sigmoidoscopy, and/or other obstructive series tests during the CDAD episode.
CDAD Treatment
CDAD at Discharge
Hospitalization Costs
Based on claims data, the mean (±SD) and median (Q1–Q3) plan costs for the duration of a CDAD-associated hospitalization (2011 USD) for these 500 patients were found to be $35,621 (± $100,502) and $13,153 ($8,209–$26,893), respectively.
Discussion
While multiple studies have documented the considerable economic burden associated with CDAD [5–10], this study was the first to our knowledge to evaluate the specific hospital resources that are used during an extended hospital stay for CDAD. This real-world analysis, in conjunction with the Quimbo et al claims analysis, demonstrated the significant burden associated with CDAD in terms of both fixed costs (eg, hospital stay) as well as the variable components that drive these expenditures (eg, consultations, ICU stay).
The mean ($35,621) and median ($13,153) total costs associated with the CDAD segment of the hospitalization, as measured via the claims, were quite high despite a greater prevalence of mild CDAD rather than severe infection, and required only a general hospital room stay. Both of the above CDAD hospital cost measures were well above the mean US general hospitalization cost of $11,666 and the median cost of $7334 measured from Healthcare Cost and Utilization Project data [15]. However, the mean cost of hospitalization reported in the current study falls within the range of previously reported costs for CDAD-associated hospitalizations [5,8,10]. While the mean cost may have been disproportionately inflated by a few extreme cases, the median CDAD-associated hospitalization cost was nearly twice the median cost of an average general hospital stay in the US [15]. Our finding that these elevated costs were observed among patients with mild CDAD and its relative magnitude compared with the average hospitalization costs (approximately 3-fold higher) were also consistent with the literature. For instance, Pakyz and colleagues reported that relative to patients without CDAD, hospital costs were tripled for patients with low-severity CDAD and 10% higher for those with more severe CDAD, presumably because CDAD resulted in costly complications that prolonged what would have otherwise been a short, simple hospital stay [10].
Type of hospital room could also be an important driver of cost. While most patients stayed in general hospital rooms, more than half were isolated for at least a day, and 12% of patients required nearly 2 weeks of intensive care. Taken together, 26% of patients in the current study were required to stay in a special care unit or a non–general hospital room for 5.5 to 12.2 days. This is consistent with the 28% of patients with CDAD that required stay on a special care unit previously reported by O’Brien et al [5].Additionally, previous research from Canadian health care data has shown that a single ICU stay costs an average of $7000 more per patient per day than a general hospital room (1992 Canadian dollars) or $9589 (2013 USD calculated using historical exchange rate data and adjusted for inflation) [16].However, despite this additional cost and resource burden, it appears that overall only 53.4% of all patients received care within an isolated setting as guidelines recommended.
Repeated specialist visits, procedures and multiple testing (concomitant diagnostic EIA and nondiagnostic tests) potentially added to the health care resource utilization and costs, along with the extra resources associated with specialized hospital care. We found that roughly one-third of patients consulted a specialist, although we did not distinguish between ‘formal’ and ‘informal’ consultations. Numerous studies published over the past 2 decades have demonstrated increased costs and resource utilization associated with specialist consultations [17–21]. Although the focused knowledge and experience of specialists may reduce morbidity and mortality [18,21], specialists are more likely than generalists to order more diagnostic tests, perform more procedures, and keep patients hospitalized longer and in ICUs, all of which contribute to higher costs without necessarily leading to improved health outcomes [21].
Limitations
One major limitation of this study was the inability to assess the individual costs of the resources used for each individual patient either through the medical charts or via claims. Additionally, the burden of CDAD was found to continue beyond the hospital stay, with documented evidence of persisting infection in 84% of patients at the point of discharge. Since the medical records obtained were limited to a single hospitalization and a single place of service, the data capture of an entire CDAD episode remains potentially incomplete for a number of patients who had recurrences or who had visited multiple sites of care in addition to the hospital (ie, emergency department or outpatient facility). The transition to outpatient care is often multifaceted and challenging for patients, especially those who are elderly and have multiple underlying conditions [18]. Access to care become more difficult, and patients become wholly responsible for taking their medication as prescribed and following other post-discharge treatment stratagems. Furthermore, no differentiation was made between patients having a primary versus secondary CDAD diagnosis.
Another limitation is that the costs of the hospitalization was calculated from claims and as such do not include either patient paid costs (eg, deductible) or indirect costs (eg, lost work or productivity or caregiver costs) due to CDAD. This study likely underestimates the true costs associated with CDAD. Finally, the patients included in this analysis were all members of large commercial health plans in the US and who are also working and relatively healthy. Therefore, these results may not be generalizable to patients with other types of health insurance or no insurance or to those living outside of the United States.
It is important to note that the trends and drivers described in this study are “potential” influencers contributing to the burden of CDAD. Given that this study is descriptive in nature, formal analyses aimed at confirming these factors as “drivers” should be conducted in future. CDAD-related hospitalizations have previously been shown to be associated with increased inpatient LOS and a substantial economic burden. Our study demonstrates that the CDAD-associated cost burden in hospital settings may be driven by the use of numerous high-cost hospital resources including prolonged ICU stays, isolation, frequent GI and ID consultations, CDAD-related non-diagnostic tests/procedures, and symptomatic CDAD treatment.
Acknowledgments: The authors acknowledge Cheryl Jones for her editorial assistance in preparing this manuscript.
Corresponding author: Swetha Rao Palli, CTI Clinical Trial and Consulting, 1775 Lexington Ave, Ste. 200, Cincinnati, OH 45209, [email protected]
Funding/support: Funding for this study was provided Cubist Pharmaceuticals.
Financial disclosures: Ms. Palli and Mr. Quimbo are former and current employees of HealthCore, respectively. HealthCore is an independent research organization that received funding from Cubist Pharmaceuticals for the conduct of this study. Dr. Broderick is an employee of Cubist Pharmaceuticals. Ms. Strauss was an employee of Optimer Pharmaceuticals during the time the study was carried out.
From HealthCore, Wilmington, DE, and Cubist Pharma-ceuticals, San Diego, CA.
Abstract
- Objectives: To describe trends in inpatient resource utilization and potential cost drivers of Clostridium difficile-associated diarrhea (CDAD) treated in the hospital.
- Methods: Retrospective medical record review included 500 patients with ≥1 inpatient medical claim diagnosis of CDAD (ICD-9-CM: 008.45) between 01/01/2005-10/31/2010. Information was collected on patient demographics, admission diagnoses, laboratory data, and CDAD-related characteristics and discharge. Hospital costs were evaluated for the entire inpatient episode and prorated for the duration of the CDAD episode (ie, CDAD diagnosis date to diarrhea resolution/discharge date).
- Results: The cohort was mostly female (62%), Caucasian (72%), with mean (SD) age 66 (±17.6) years. 60% had diagnosis of CDAD or presence of diarrhea at admission. CDAD diagnosis was confirmed with laboratory test in 92% of patients. ~44% had mild CDAD, 35% had severe CDAD. Following CDAD diagnosis, approximately 53% of patients were isolated for ≥1 days, 12% transferred to the ICU for a median (Q1–Q3) length of stay of 8 (5–15) days. Two-thirds received gastrointestinal or infectious disease consult. Median time from CDAD diagnosis to discharge was 6 (4–9) days; 5.5 (4–8) days for patients admitted with CDAD, 6.5 (4–10) days for those with hospital-acquired CDAD. The mean and median costs (2011 USD) for CDAD-associated hospitalization were $35,621 and $13,153, respectively.
- Conclusion: Patients with CDAD utilize numerous expensive resources during hospitalization including laboratory tests, isolation, prolonged ICU stay, and specialist consultations.
Clostridium difficile, classified as an urgent public health threat by the Centers for Disease Control and Prevention (CDC), causes approximately 250,000 hospitalizations and an estimated 14,000 deaths per year in the United States [1]. An estimated 15% to 25% of patients with C. difficile-associated diarrhea (CDAD) will experience at least 1 recurrence [2-4], frequently requiring rehospitalization [5]. The high incidence of primary and recurrent infections contributes to a substantial burden associated with CDAD in terms of extended and repeat hospital stays [6,7].
Conservative estimates of the direct annual costs of CDAD in the United States over the past 15 years range from $1.1 billion [8] to $3.2 billion, with an average cost per stay of $10,212 for patients hospitalized with a principal diagnosis of CDAD or a CDAD-related symptom [5]. O’Brien et al estimated that costs associated with rehospitalizations accounted for 11% of overall CDAD-related hospital costs;when considering all CDAD-related hospitalizations, including both initial and subsequent rehospitalizations for recurrent infection and not accounting for post-acute or outpatient care, the 2-year cumulative cost was estimated to be $51.2 million. While studies have yielded varying assessments of the actual CDAD burden [5–10], they all suggest that CDAD burden is considerable and that extended hospital stays are the major component of CDAD-associated costs [9,10]. In a claims-based study by Quimbo et al [11], when multiple and diverse cohorts of CDAD patients at elevated risk for recurrence were matched with patients with similar underlying at-risk condition(s) but no CDAD, the CDAD at-risk groups had an incremental LOS per hospitalization ranging from approximately 3 to 18 days and an incremental cost burden ranging from a mean of $11,179 to $115,632 (2011 USD) per stay.
While it is recognized that CDAD carries significant cost burden and is driven by LOS, current literature is lacking regarding the characteristics of these hospital stays. Building on the Quimbo et al study, the current study was designed to probe further into the nature of the burden (ie, resource use) incurred during the course of CDAD hospitalizations. As such, the objective of this study was to identify the common trends in hospital-related resource utilization and describe the potential drivers that affect the cost burden of CDAD using hospital medical record data.
Methods
Population
Patients were selected for this retrospective medical record review from the HealthCore Integrated Research Database (HealthCore, Wilmington, DE). The database contains a broad, clinically rich and geographically diverse spectrum of longitudinal claims information from one of the largest commercially insured populations in the United States, representing 48 million lives. We identified 21,177 adult (≥ 18 years) patients with at least 1 inpatient claim with an International Classification of Diseases, 9th Edition, Clinical Modification (ICD-9-CM) diagnosis code for C. difficile infection (CDI; 008.45) between 1 January 2005 and 31 October 2010 (intake period). All patients had at least 12 months of prior and continuous medical and pharmacy health plan eligibility prior to the incident CDAD-associated hospitalization within the database. Additional details regarding this cohort identification has been published previously [11]. The study was undertaken in accordance with Health Insurance Portability and Accountability Act (HIPAA) guidelines and the necessary central institutional review board approval was obtained prior to medical record identification and abstraction.
Sampling Strategy
Medical Record Abstraction
During the record abstraction process, information was collected on patients’ race/ethnicity, body mass index (BMI), admission diagnosis and other conditions, point of entry and prior location, body temperature and laboratory data (eg, creatinine and albumin values, white blood cell [WBC] count), diarrhea and stool characteristics, CDAD diagnosis date, CDAD-specific characteristics, severity, complications, and related tests/procedures, CDAD treatments (eg, dose, duration, and formulation of medications), hospital LOS, including stays in the intensive care unit (ICU), cardiac care unit (CCU) following CDAD diagnosis; consultations provided by gastrointestinal, infectious disease, intensivists, or surgery care specialists, and discharge summary on disposition, CDAD status, and medications prescribed. Standardized data collection forms were used by trained nurses or pharmacists to collect information from the medical records and inter-rater reliability testing with a 0.9 cutoff was required to confirm accuracy. To ensure consistency, a pilot test of the first 20 abstracted records were re-abstracted by the research team. Last, quality checks were implemented throughout the abstraction process to identify any inconsistencies or data entry errors including coding errors and atypical, unrealistic data entry patterns (eg, identical values for a particular data field entered on multiple records; implausible or erratic inputs; or a high percentage of missing data points). Missing data were not imputed.
Study Definitions
Diarrhea was defined as 3 or more unformed (includes bloody, watery, thin, loose, soft, and/or unformed stool) bowel movements per day.CDAD severity was classified as mild (4–5 unformed bowel movements per day or WBC ≤ 2000/mm3); moderate (6–9 unformed bowel movements per day or WBC between 12,001/mm3 and 15,000/mm3); or severe (≥10 unformed bowel movements per day or WBC ≥15,001/mm3) [12,13]. Diarrhea was considered to be resolved when the patient had no more than 3 unformed stools for 2 consecutive days and lasting until treatment was completed, with no additional therapy required for CDAD as of the second day after the end of the course of therapy [2,14].CDAD episode was defined as the duration from the date of the CDAD diagnosis or confirmation (whichever occurred first), to the date of diarrhea resolution (where documented) or discharge date.
Cost Measures
The total hospital health plan paid costs for the entire inpatient episode (includes treatment costs, diagnostics, services provided, etc.) were estimated using medical claims present in the database and pertaining to the hospitalization from where medical records were abstracted. Then the proportionate amount for the duration of the CDAD episode (from CDAD diagnosis to the diarrhea resolution date or the discharge date in cases where the resolution date could not be ascertained) was calculated to estimate the average CDAD associated in-hospital costs.
Analysis
Means (± standard deviation [SD]), medians (interquartile range Q1 to Q3), and relative frequencies were calculated for continuous and categorical data, respectively. This analysis was descriptive in nature; hence, no statistical tests to determine significance were conducted.
Results
We had a 55.3% success rate in obtaining the medical records from the contacted hospitals with refusal to participate/consent by the hospital in question being the most frequent reason for failure in 3 out of 4 cases. An additional attrition of 39.3% was observed among the medical records received, with absence of a MAR form (23.9%) and confirmatory CDAD diagnosis or note (9.1%) being the most frequent criteria for discarding an available record prior to abstraction (Figure).
Patient Characteristics
CDAD Characteristics and Complications
Using a derived definition of severity, most CDAD cases were classified either as
CDAD-Related Resource Utilization
Following CDAD diagnosis, more than half of the study patients were isolated for 1 or more days. While the majority of patients with CDAD (74.0%) stayed in a general hospital room, 12.4% stayed in the ICU for a mean duration of 12.1 (± 12.3) days (Table 3). Half of these ICU patients required
About one-third of patients consulted a gastrointestinal or infectious disease specialist at least once. Among these patients, assuming that a patient following an initial specialist consultation would have follow-up visits at least once a day (formal or informal) for the remainder of the CDAD episode, we estimate that there were an average of 8.7 (± 15.6) and 11.6 (± 19.4) GI or ID specialist visits respectively during the CDAD episode.
Nearly all patients had their CDAD diagnosis confirmed by laboratory tests; toxin A and/or toxin B was identified in 47.6% of the samples. However, nearly three-fifths of patients also underwent 1 or more nondiagnostic tests, including endoscopy, colonoscopy, computed axial tomography (CAT) or magnetic resonance imaging (MRI) scans, sigmoidoscopy, and/or other obstructive series tests, during the CDAD episode.
CDAD Treatment
CDAD at Discharge
Hospitalization Costs
Based on claims data, the mean (±SD) and median (Q1–Q3) plan costs for the duration of a CDAD-associated hospitalization (2011 USD) for these 500 patients were found to be $35,621 (± $100,502) and $13,153 ($8,209–$26,893), respectively.
Discussion
While multiple studies have documented the considerable economic burden associated with CDAD [5–10], this study was, to our knowledge, the first to evaluate the specific hospital resources used during an extended hospital stay for CDAD. This real-world analysis, in conjunction with the Quimbo et al claims analysis [11], demonstrated the significant burden associated with CDAD in terms of both fixed costs (eg, hospital stay) and the variable components that drive these expenditures (eg, consultations, ICU stay).
The mean ($35,621) and median ($13,153) total costs associated with the CDAD segment of the hospitalization, as measured from the claims, were quite high even though mild CDAD was more prevalent than severe infection and most patients required only a general hospital room stay. Both of these CDAD hospital cost measures were well above the mean US general hospitalization cost of $11,666 and the median cost of $7,334 derived from Healthcare Cost and Utilization Project data [15]. However, the mean cost of hospitalization reported in the current study falls within the range of previously reported costs for CDAD-associated hospitalizations [5,8,10]. While the mean cost may have been disproportionately inflated by a few extreme cases, the median CDAD-associated hospitalization cost was nearly twice the median cost of an average general hospital stay in the US [15]. Our finding that elevated costs were observed even among patients with mild CDAD, and the relative magnitude of these costs compared with average hospitalization costs (approximately 3-fold higher), were also consistent with the literature. For instance, Pakyz and colleagues reported that, relative to patients without CDAD, hospital costs were tripled for patients with low-severity CDAD and 10% higher for those with more severe CDAD, presumably because CDAD resulted in costly complications that prolonged what would otherwise have been a short, simple hospital stay [10].
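The magnitudes cited above can be checked with a quick calculation using the figures as reported in the text and in reference 15.

```python
mean_cdad, median_cdad = 35_621, 13_153          # CDAD-associated costs (2011 USD)
mean_us_stay, median_us_stay = 11_666, 7_334     # general US hospitalization costs [15]

print(f"mean ratio:   {mean_cdad / mean_us_stay:.2f}x")     # ~3.05x, ie, roughly 3-fold higher
print(f"median ratio: {median_cdad / median_us_stay:.2f}x")  # ~1.79x, ie, nearly twice as high
```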
Type of hospital room could also be an important driver of cost. While most patients stayed in general hospital rooms, more than half were isolated for at least a day, and 12% of patients required nearly 2 weeks of intensive care. Taken together, 26% of patients in the current study were required to stay in a special care unit or a non–general hospital room for 5.5 to 12.2 days. This is consistent with the 28% of patients with CDAD who required a stay in a special care unit as previously reported by O’Brien et al [5]. Additionally, previous research from Canadian health care data has shown that a single ICU day costs an average of $7,000 more per patient than a general hospital room (1992 Canadian dollars), or $9,589 (2013 USD, calculated using historical exchange rate data and adjusted for inflation) [16]. However, despite this additional cost and resource burden, overall only 53.4% of all patients received care within an isolated setting as guidelines recommend.
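The conversion of the 1992 Canadian-dollar figure to 2013 US dollars involves two steps: exchange to USD, then adjustment for US inflation. The exchange rate and CPI values below are approximate assumptions used only to illustrate how a figure in the neighborhood of $9,589 is obtained; they are not the factors used by the cited source.

```python
cad_1992 = 7_000                           # reported incremental ICU cost per patient-day (1992 CAD)
cad_to_usd_1992 = 0.83                     # approximate 1992 CAD->USD exchange rate (assumption)
us_cpi_1992, us_cpi_2013 = 140.3, 233.0    # approximate US CPI values (assumption)

usd_2013 = cad_1992 * cad_to_usd_1992 * (us_cpi_2013 / us_cpi_1992)
print(f"${usd_2013:,.0f}")  # roughly $9,600, in line with the cited $9,589
```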
Repeated specialist visits, procedures, and multiple tests (concomitant diagnostic enzyme immunoassays [EIA] and nondiagnostic tests) potentially added to health care resource utilization and costs, along with the extra resources associated with specialized hospital care. We found that roughly one-third of patients consulted a specialist, although we did not distinguish between ‘formal’ and ‘informal’ consultations. Numerous studies published over the past 2 decades have demonstrated increased costs and resource utilization associated with specialist consultations [17–21]. Although the focused knowledge and experience of specialists may reduce morbidity and mortality [18,21], specialists are more likely than generalists to order more diagnostic tests, perform more procedures, and keep patients hospitalized longer and in ICUs, all of which contribute to higher costs without necessarily leading to improved health outcomes [21].
Limitations
One major limitation of this study was the inability to assess, through either the medical charts or the claims, the individual costs of the resources used for each patient. Additionally, the burden of CDAD was found to continue beyond the hospital stay, with documented evidence of persisting infection in 84% of patients at the point of discharge. Because the medical records obtained were limited to a single hospitalization and a single place of service, capture of the entire CDAD episode remains potentially incomplete for patients who had recurrences or who visited multiple sites of care in addition to the hospital (ie, emergency department or outpatient facility). The transition to outpatient care is often multifaceted and challenging for patients, especially those who are elderly and have multiple underlying conditions [18]. Access to care becomes more difficult, and patients become wholly responsible for taking their medication as prescribed and following other post-discharge treatment strategies. Furthermore, no differentiation was made between patients having a primary versus secondary CDAD diagnosis.
Another limitation is that the costs of the hospitalization were calculated from claims and as such do not include either patient-paid costs (eg, deductibles) or indirect costs (eg, lost work or productivity, caregiver costs) due to CDAD; as a result, this study likely underestimates the true costs associated with CDAD. Finally, the patients included in this analysis were all members of large commercial health plans in the US and were relatively healthy, working adults. Therefore, these results may not be generalizable to patients with other types of health insurance or no insurance, or to those living outside of the United States.
It is important to note that the trends and drivers described in this study are “potential” influencers contributing to the burden of CDAD. Given that this study is descriptive in nature, formal analyses aimed at confirming these factors as “drivers” should be conducted in the future. CDAD-related hospitalizations have previously been shown to be associated with increased inpatient LOS and a substantial economic burden. Our study demonstrates that the CDAD-associated cost burden in hospital settings may be driven by the use of numerous high-cost hospital resources, including prolonged ICU stays, isolation, frequent GI and ID consultations, CDAD-related nondiagnostic tests/procedures, and symptomatic CDAD treatment.
Acknowledgments: The authors acknowledge Cheryl Jones for her editorial assistance in preparing this manuscript.
Corresponding author: Swetha Rao Palli, CTI Clinical Trial and Consulting, 1775 Lexington Ave, Ste. 200, Cincinnati, OH 45209, [email protected]
Funding/support: Funding for this study was provided by Cubist Pharmaceuticals.
Financial disclosures: Ms. Palli and Mr. Quimbo are former and current employees of HealthCore, respectively. HealthCore is an independent research organization that received funding from Cubist Pharmaceuticals for the conduct of this study. Dr. Broderick is an employee of Cubist Pharmaceuticals. Ms. Strauss was an employee of Optimer Pharmaceuticals during the time the study was carried out.
1. Centers for Disease Control and Prevention. Antibiotic resistance threats in the United States, 2013. Available at: www.cdc.gov/drugresistance/threat-report-2013/pdf/ar-threats-2013-508.pdf. Accessed March 6, 2013.
2. Louie TJ, Miller MA, Mullane KM, et al. Fidaxomicin versus vancomycin for Clostridium difficile infection. N Engl J Med 2011;364:422–31.
3. Lowy I, Molrine DC, Leav BA, et al. Treatment with monoclonal antibodies against Clostridium difficile toxins. N Engl J Med 2010;362:197–205.
4. Bouza E, Dryden M, Mohammed R, et al. Results of a phase III trial comparing tolevamer, vancomycin and metronidazole in patients with Clostridium difficile-associated diarrhoea [ECCMID abstract O464]. Clin Microbiol Infect 2008;14(Suppl s7):S103–4.
5. O’Brien JA, Betsy JL, Caro J, Davidson DM. The emerging infectious challenge of Clostridium difficile-associated disease in Massachusetts hospitals: clinical and economic consequences. Infect Control Hosp Epidemiol 2007;28:1219–27.
6. Dubberke ER, Wertheimer AI. Review of current literature on the economic burden of Clostridium difficile infection. Infect Control Hosp Epidemiol 2009;30:57–66.
7. Ghantoji SS, Sail K, Lairson DR, et al. Economic healthcare costs of Clostridium difficile infection: a systematic review. J Hosp Infect 2010;74:309–18.
8. Kyne L, Hamel MB, Polavaram R, Kelly CP. Health care costs and mortality associated with nosocomial diarrhea due to Clostridium difficile. Clin Infect Dis 2002;34:346–53.
9. Forster AJ, Taljaard M, Oake N, et al. The effect of hospital-acquired infection with Clostridium difficile on length of stay in hospital. CMAJ 2012;184:37–42.
10. Pakyz A, Carroll NV, Harpe SE, et al. Economic impact of Clostridium difficile infection in a multihospital cohort of academic health centers. Pharmacotherapy 2011;31:546–51.
11. Quimbo RA, Palli SR, Singer J, et al. Burden of Clostridium difficile-associated diarrhea among hospitalized patients at high risk of recurrent infection. J Clin Outcomes Manag 2013;20:544–54.
12. Golan Y, Mullane KM, Miller MA, et al. Low recurrence rate among patients with C. difficile infection treated with fidaxomicin. Poster presented at: 49th Annual Interscience Conference on Antimicrobial Agents and Chemotherapy; 12–15 Sep 2009; San Francisco, CA.
13. Lewis SJ, Heaton KW. Stool form scale as a useful guide to intestinal transit time. Scand J Gastroenterol 1997;32:920–4.
14. Cornely OA, Crook DW, Esposito R, et al. Fidaxomicin versus vancomycin for infection with Clostridium difficile in Europe, Canada, and the USA: a double-blind, non-inferiority, randomised controlled trial. Lancet Infect Dis 2012;12:281–9.
15. Palli SR, Strauss M, Quimbo RA, et al. Cost drivers associated with Clostridium difficile infection in a hospital setting. Poster presented at: American Society of Health-System Pharmacists Midyear Clinical Meeting; December 2012; Las Vegas, NV.
16. Noseworthy TW, Konopad E, Shustack A, et al. Cost accounting of adult intensive care: methods and human and capital inputs. Crit Care Med 1996;24:1168–72.
17. Classen DC, Burke JP, Wenzel RP. Infectious diseases consultation: impact on outcomes for hospitalized patients and results of a preliminary study. Clin Infect Dis 1997;24:468–70.
18. Petrak RM, Sexton DJ, Butera ML, et al. The value of an infectious diseases specialist. Clin Infect Dis 2003;36:1013–7.
19. Sellier E, Pavese P, Gennai S, et al. Factors and outcomes associated with physicians’ adherence to recommendations of infectious disease consultations for patients. J Antimicrob Chemother 2010;65:156–62.
20. Jollis JG, DeLong ER, Peterson ED, et al. Outcome of acute myocardial infarction according to the specialty of the admitting physician. N Engl J Med 1996;335:1880–7.
21. Harrold LR, Field TS, Gurwitz JH. Knowledge, patterns of care, and outcomes of care for generalists and specialists. J Gen Intern Med 1999;14:499–511.
Newer antifungals shorten tinea pedis treatment duration, promote adherence
MIAMI BEACH – Two new antifungal agents on the market – luliconazole and naftifine – each have something unique to offer when it comes to treating tinea pedis, according to Dr. Boni E. Elewski.
Luliconazole is an azole drug, meaning it is broad spectrum and kills dermatophytes, yeast, and molds. Also, like all azoles, it has some antibacterial activity, she said at the South Beach Symposium.
Naftifine is an allylamine drug, and is mainly an antidermatophyte agent – albeit a “very, very potent antidermatophyte” – with no antibacterial activity, she said.
Both are approved for once-daily use for 2 weeks, and that’s good because the short treatment duration improves adherence to the regimen, especially compared with other drugs that require 4-6 weeks of treatment to eradicate the problem, noted Dr. Elewski, professor of dermatology and director of clinical trials research at the University of Alabama at Birmingham.
Both drugs also stay in the skin and continue working after treatment stops.
Making the choice regarding which drug or class of drugs to use depends on the patient’s symptoms.
“First of all, tinea pedis may not be obvious. People don’t often tell you, ‘This is what I have – it’s tinea pedis,’ ” she said.
Keep in mind that tinea pedis and onychomycosis are related. If you have a patient who you think has onychomycosis, look at the bottom of their foot, she advised.
“If they don’t have tinea pedis, they probably don’t have onychomycosis unless they’ve had tinea pedis recently and got rid of it,” she said.
Also, look for collarettes of scale, which may be very subtle and may look like “tiny little circular pieces of scale on the medial or lateral foot.”
“If you are not sure, just keep looking harder because you might see it,” Dr. Elewski said.
Interdigital tinea pedis will be a little more obvious, with scaling and crusting between the toes, as well as maceration and oozing in many cases.
When the toe web is oozing, you’re likely dealing with intertrigo, she said.
In such cases, an azole cream is the better treatment choice, because azoles will kill Candida, bacteria, and dermatophytes that are there, she said.
“So when I have a moist macerated space, I like an azole. If you have a dry scaly process – with or without the collarettes – you’re probably better with an allylamine, particularly if you use a keratolytic with it, something that has urea or a lactic acid,” she said.
Dr. Elewski is a consultant for Valeant Pharmaceuticals International and a contracted researcher for Anacor Pharmaceuticals.
AT THE SOUTH BEACH SYMPOSIUM
Acute renal failure biggest short-term risk in I-EVAR explantation
SCOTTSDALE, ARIZ. – Acute renal failure occurred postoperatively in one-third of patients who underwent endograft explantation after endovascular abdominal aortic aneurysm repair (EVAR), according to the results of a small retrospective study.
The perioperative infected EVAR (I-EVAR) mortality across the study’s 36 patient records (83% male patients, average age 69 years), culled from four surgery centers’ data from 1997 to 2014, was 8%. The overall mortality was 25%, according to Dr. Victor J. Davila of Mayo Clinic Arizona, Phoenix, and his colleagues. Dr. Davila presented the findings at the Southern Association for Vascular Surgery annual meeting.
“These data show that I-EVAR explantation can be performed safely, with acceptable morbidity and mortality,” said Dr. Davila, who noted that while acceptable, the rates were still high, particularly for acute renal failure.
“We did not find any difference between the patients who developed renal failure and the type of graft, whether or not there was suprarenal fixation, and an incidence of postoperative acute renal failure,” Dr. Davila said. “However, because acute renal failure is multifactorial, we need to minimize aortic clamp time, as well as minimize the aortic intimal disruption around the renal arteries.”
Three deaths occurred within 30 days post operation, all from anastomotic dehiscence. Additional short-term morbidities included respiratory failure that required tracheostomy in three patients, and bleeding and sepsis in two patients each. Six patients required re-exploration because of infected hematoma, lymphatic leak, small-bowel perforation, open abdomen at initial operation, and anastomotic bleeding. Six more deaths occurred at a mean follow-up of 402 days. One death was attributable to a ruptured aneurysm, another to a progressive inflammatory illness, and four deaths were of indeterminate cause.
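The reported mortality percentages are consistent with these death counts and the 36-patient cohort, as the quick arithmetic check below shows.

```python
n_patients = 36
deaths_30_day = 3   # deaths within 30 days of operation
deaths_later = 6    # additional deaths during follow-up

print(f"perioperative mortality: {100 * deaths_30_day / n_patients:.0f}%")                   # ~8%
print(f"overall mortality:       {100 * (deaths_30_day + deaths_later) / n_patients:.0f}%")  # 25%
```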
Only three of the explantations reviewed by Dr. Davila and his colleagues were considered emergent. The rest (92%) were either elective or urgent. Infected patients tended to present with leukocytosis (63%), pain (58%), and fever (56%), usually about 65 days prior to explantation. The average time between EVAR and presentation with infection was 589 days.
Although most underwent total graft excision, two patients underwent partial excision, including one with a distal iliac limb infection that showed no sign of infection within the main portion of the endograft. Nearly three-quarters of patients had in situ reconstruction.
While nearly a third of patients had positive preoperative blood cultures indicating infection, 81% of intraoperative cultures taken from the explanted graft, aneurysm wall, or sac contents indicated infection.
Gram-positive Staphylococcus and Streptococcus were the most common organisms found in cultures (33% and 17%, respectively), although anaerobes were found in a third of patients, gram-negative organisms in a quarter, and fungal infections in 14%. A majority (58%) of patients received long-term suppressive antibiotic therapy.
Surgeons should reserve the option to keep a graft in situ only in infected EVAR patients who likely would not survive surgical explantation and reconstruction, Dr. Davila said. “Although I believe [medical management] is an alternative, the best course of action is to remove the endograft.”
AT THE SAVS ANNUAL MEETING
Key clinical point: Minimizing cross-clamp time may reduce the rate of acute renal failure 30 days post op in infected EVAR explantation patients.
Major finding: One-third of I-EVAR patients had postoperative acute renal failure; perioperative mortality in I-EVAR was 8%, and overall mortality was 25%.
Data source: Retrospective analysis of 36 patients with infected EVAR explants performed between 1997 and 2014 across four surgical centers.
Disclosures: Dr. Davila reported he had no relevant disclosures.
Achieving pregnancy after gynecological cancer
Gynecological cancer in a woman of reproductive age is devastating news. Many women facing cancer treatment are interested in maintaining fertility. Fortunately, fertility-sparing treatment options are increasingly available and successful pregnancies have been reported.
These pregnancies present unique challenges to optimizing care of the mother and the fetus. In this article, we review the literature on pregnancies after successful treatment of ovarian, cervical, and endometrial cancer, and gestational trophoblastic disease.
Ovarian cancer
For young women diagnosed with ovarian cancer, the question of fertility preservation is often paramount. The American Society for Reproductive Medicine and the American Society of Clinical Oncology have published guidelines endorsing embryo and oocyte cryopreservation as viable strategies for maintaining fertility (J. Clin. Oncol. 2013;31:2500-10/ Fertil. Steril. 2013;100:1224-31).
Particularly with non–epithelial cell (germ cell) and borderline tumors, innovations in cryopreservation have become more widely available. Cryopreservation of immature oocytes in young girls is still considered investigational and should be undertaken as part of a research protocol. In a study of 62 women with epithelial ovarian cancer who underwent oocyte cryopreservation, there were 19 conceptions and 22 deliveries – all at term with no anomalies (Gynecol. Oncol. 2008;110:345-53).
However, pregnancies resulting from in vitro fertilization are at increased risk for anomalies and a targeted ultrasound and fetal echocardiogram are recommended.
Cervical cancer
In the United States, 43% of women diagnosed with cervical cancer are under age 45. For women with early-stage cancer with radiographically negative lymph nodes, tumors less than 2 cm, and no deep stromal invasion, fertility-sparing procedures include radical trachelectomy and simple vaginal trachelectomy.
Trachelectomy for appropriately selected patients is safe with recurrence rates of 2%-3% and death rates of 2%-5%. While experimental, for women with bulky disease (greater than 2 cm), neoadjuvant chemotherapy and subsequent trachelectomy has been reported (Gynecol. Oncol. 2014;135:213-6). While there is no consensus, most experts recommend 6 months to 1 year after surgery to attempt conception.
Conception rates after trachelectomy are promising, with 60%-80% of women able to conceive. Approximately 10%-15% of these women will experience cervical stenosis, often attributed to the cerclage, resulting in menstrual or fertility issues (Gynecol. Oncol. 2005;99:S152-6/ Gynecol. Oncol. 2013;131:77-82). Placement of an intrauterine cannula (Smith sleeve) at the time of trachelectomy decreases the rate of stenosis (Gynecol. Oncol. 2012;124:276-80).
Pregnancy outcomes in several case series after trachelectomy have demonstrated a rate of first trimester loss of 13%-20%, second trimester loss of 5%-8%, and preterm delivery of 27%-51%, mostly secondary to preterm premature rupture of membranes (PPROM) and/or chorioamnionitis. Both preterm deliveries and midtrimester losses are thought to be secondary to cervical insufficiency, decreased cervical mucus, and ascending infection.
Women who have undergone fertility-sparing treatment for cervical cancer should be counseled about the challenges of pregnancy, including decreased fertility, risk of early and late miscarriage, and preterm delivery. Practitioners should consider cervical length surveillance, especially for those without a cerclage, and vaginal progesterone. The potential utility of preemptive antibiotics in this population is unclear, though early treatment of urinary or genital tract infections is prudent.
Endometrial cancer
As a consequence of the obesity epidemic, younger women are being diagnosed with endometrial hyperplasia and cancer. Approximately 25% of early stage endometrial cancers are diagnosed in premenopausal women, and 5% in women under age 40.
While hysterectomy is standard, fertility-sparing treatment with progestin for well-differentiated grade 1 stage 1A endometrial cancer has been successful and is not associated with any increase in disease progression and/or death (Obstet. Gynecol. 2013; 121:136-42).
Nearly two-thirds of the successfully treated women will require fertility medications and/or assisted reproductive technology (ART). Among those who conceive, 25% will miscarry. Following childbearing, definitive hysterectomy is recommended given the high recurrence rate (Gynecol. Oncol. 2014;133:229-33).
Gestational trophoblastic disease
Women with a history of complete and partial molar pregnancies and persistent gestational trophoblastic neoplasia (GTN) often pursue subsequent pregnancy. In a large cohort of more than 2,400 pregnancies after GTN, pregnancy outcomes were similar to those of the general population (J. Reprod. Med. 2014;59:188-94).
Among women with a history of a complete or partial mole, 1.7% had a subsequent pregnancy complicated by another molar pregnancy. Women who received chemotherapy for GTN may have a slightly higher risk of stillbirth (1.3%) and higher rates of anxiety in subsequent pregnancies (BJOG 2003;110:560-6).
Young women experiencing gynecologic malignancies are often concerned about the safety of pregnancy. In appropriately selected patients, fertility preservation is safe and pregnancy outcomes overall are favorable, although women should be counseled regarding reduced fertility, the need for ART, and the risks of prematurity and stillbirth.
Pregnant women with a history of cancer or gestational trophoblastic disease are also at high risk for depression and anxiety. Women with a personal history of gynecologic cancer or GTD should be followed by a multidisciplinary team that can address the obstetrical, oncological, and psychological aspects of pregnancy.
Dr. Smid is a second-year fellow in maternal-fetal medicine at the University of North Carolina at Chapel Hill. Dr. Ivester is an associate professor of maternal-fetal medicine and an associate professor of maternal child health at UNC-Chapel Hill. The authors reported having no financial disclosures.
Group uses gene editing to fight lymphoma
[Photo: Aubrey, Gemma Kelly, and Marco Herold; courtesy of the Walter and Eliza Hall Institute]
The gene-editing technique CRISPR/Cas9 can be used to target and kill lymphoma cells with high accuracy, according to preclinical research published in Cell Reports.
Using a lentiviral CRISPR/Cas9 platform, researchers were able to kill human Burkitt lymphoma cells by locating and deleting MCL-1, a gene known to be essential for cancer cell survival.
These results suggest the technology could be used as a direct treatment for diseases arising from genetic errors.
“Our study showed that the CRISPR technology can directly kill cancer cells by targeting factors that are essential for their survival and growth,” said study author Brandon Aubrey, a PhD candidate at the Walter and Eliza Hall Institute of Medical Research in Parkville, Victoria, Australia.
Aubrey and his colleagues said they engineered a lentiviral vector platform that allows for efficient cell transduction and subsequent inducible expression of small guide RNAs with concomitant constitutive expression of Cas9.
After finding they could use this system to knock out the pro-apoptotic BH3-only protein BIM in human and mouse cell lines, the team wanted to determine if it could target genes that are essential for sustained cell growth.
They then used the technique to delete MCL-1 in human Burkitt lymphoma cells and observed a “very high frequency” of cell killing.
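Target-site selection for an approach like this typically involves scanning the gene of interest for 20-nt protospacers adjacent to an NGG PAM. The sketch below illustrates that generic search on a placeholder sequence; it is not the study's vector design or the actual MCL-1 guide, and a real workflow would also scan the reverse strand and score candidates for off-target risk.

```python
import re

def find_cas9_protospacers(sequence: str) -> list[tuple[int, str, str]]:
    """Return (start_position, 20-nt protospacer, PAM) for every NGG PAM on the given strand."""
    sequence = sequence.upper()
    hits = []
    for m in re.finditer(r"(?=([ACGT]GG))", sequence):  # lookahead catches overlapping PAMs
        pam_start = m.start(1)
        if pam_start >= 20:  # need a full 20-nt protospacer upstream of the PAM
            hits.append((pam_start - 20, sequence[pam_start - 20:pam_start], m.group(1)))
    return hits

# Placeholder sequence for illustration only.
demo = "ATGCGTACCGTTAGCTAGCTAGGCTTACGGATCCGATCGTAGCTAGCATGG"
for pos, protospacer, pam in find_cas9_protospacers(demo):
    print(pos, protospacer, pam)
```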
The researchers also used their system to produce hematopoietic-cell-restricted TRP53-knockout mice. Along with mutations that caused loss of the TRP53 protein, the team found they had generated novel mutant TRP53 proteins that could promote lymphoma development.
“[W]e showed, for the first time, that it is possible for CRISPR technology to be used in cancer therapy,” said Marco Herold, PhD, of the Walter and Eliza Hall Institute.
“In addition to its very exciting potential for disease treatment, we have shown that it has the potential to identify novel mutations in cancer-causing genes and genes that ‘suppress’ cancer development, which will help us to identify how they initiate or accelerate the development of cancer.”
Genotyping can help predict bleeding risk with warfarin
An analysis of data from the ENGAGE AF-TIMI 48 trial has shown that patients with a genetic sensitivity to warfarin had higher rates of bleeding during the first several months of treatment and benefitted from treatment with a different anticoagulant.
The research, published in The Lancet, suggests that using genetic analyses to identify patients who are most at risk of bleeding with warfarin could offer safety benefits, particularly in the first 90 days of treatment.
“We were able to look at patients from around the world who were being treated with warfarin and found that certain genetic variants make a difference for an individual’s risk for bleeding,” said study author Jessica L. Mega, MD, of Brigham and Women’s Hospital in Boston, Massachusetts.
“For these patients who are sensitive or highly sensitive responders based on genetics, we observed a higher risk of bleeding in the first several months with warfarin, and consequently, a big reduction in bleeding when treated with the drug edoxaban instead of warfarin.”
The FDA label for warfarin notes that genetic variants in 2 genes—CYP2C9 and VKORC1—can assist in determining the right warfarin dosage. But a conclusive link between variation in these genes and bleeding has been debated.
By leveraging data from the ENGAGE AF-TIMI 48 trial—in which patients with atrial fibrillation received warfarin or 2 different doses of edoxaban—investigators were able to observe connections between genetic differences and patient outcomes.
A subgroup of patients was genotyped for variants in CYP2C9 and VKORC1, and the results were used to identify normal responders, sensitive responders, and highly sensitive responders to warfarin.
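The report does not spell out the exact genotype-to-category rules, so the following sketch is only a hypothetical illustration of how such a classification could be coded; the gene names come from the study, but the allele counts and cutoffs are assumed for the example rather than taken from the trial.

```python
# Hypothetical sketch -- cutoffs are illustrative, NOT the trial's actual rules.
# Assumes each CYP2C9 reduced-function allele (*2 or *3) and each VKORC1 -1639 A
# allele increases warfarin sensitivity, and that the combined count determines
# the responder category.

def classify_warfarin_responder(cyp2c9_variant_alleles: int, vkorc1_a_alleles: int) -> str:
    """Return 'normal', 'sensitive', or 'highly sensitive' (assumed cutoffs)."""
    variant_count = cyp2c9_variant_alleles + vkorc1_a_alleles
    if variant_count == 0:
        return "normal"
    if variant_count <= 2:
        return "sensitive"
    return "highly sensitive"

# Example: heterozygous CYP2C9*3 plus homozygous VKORC1 -1639 A
print(classify_warfarin_responder(cyp2c9_variant_alleles=1, vkorc1_a_alleles=2))
# -> highly sensitive
```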
Dr Mega and her colleagues looked at data from 14,348 patients. Of the 4833 patients taking warfarin, 61.7% were normal responders, 35.4% were sensitive responders, and 2.9% were highly sensitive responders.
In the first 90 days of treatment, normal responders were over-anticoagulated a median of 2.2% of the time, compared to 8.4% of the time for sensitive responders, and 18.3% of the time for highly sensitive responders (P for trend <0.0001).
Both sensitive and highly sensitive responders also had an increased risk of bleeding in the first 90 days when compared to normal responders. The hazard ratios were 1.31 for sensitive responders (P=0.0179) and 2.66 for highly sensitive responders (P<0.0001).
As a result, during the first 90 days, edoxaban reduced bleeding relative to warfarin in sensitive and highly sensitive responders.
Compared with warfarin, both the higher and lower doses of edoxaban reduced bleeding more in sensitive and highly sensitive responders than in normal responders (P for interaction = 0.0066 and 0.0036 for the higher and lower doses, respectively).
“These findings demonstrate the power of genetics in personalizing medicine and tailoring specific therapies for our patients,” said Marc S. Sabatine, MD, also of Brigham and Women’s Hospital.
The ENGAGE AF-TIMI 48 trial was supported by research grants from Daiichi Sankyo, makers of edoxaban.
iPSCs reveal new insight into Fanconi anemia
Image by James Thompson
Induced pluripotent stem cells (iPSCs) may help elucidate the pathogenesis of bone marrow failure (BMF) in Fanconi anemia (FA), researchers say.
They generated iPSCs from FA patients and found evidence suggesting that hematopoietic consequences originate at the earliest hematopoietic stage.
Specifically, hemoangiogenic progenitor cells (HAPCs) from FA-iPSCs produced significantly fewer hematopoietic and endothelial cells than controls.
“Although various consequences in hematopoietic stem cells have been attributed to FA-BMF, its cause is still unknown,” said study author Megumu K. Saito, MD, PhD, of Kyoto University in Japan.
“To address the issue, our team established iPSCs from 2 FA patients who have the FANCA gene mutation that is typical in FA. We were then able to obtain fetal-type immature blood cells [KDR+ CD34+ HAPCs] from these iPSCs.”
The researchers assessed differentiation in the FA-iPSC-derived HAPCs (FA-HAPCs) and found they produced significantly fewer CD34+ CD45+ hematopoietic precursors—and later, myeloid and erythroid lineage hematopoietic cells—than control cells. Likewise, FA-HAPCs produced fewer CD31+ endothelial cells than controls.
Cell cycle distribution in FA-HAPCs was comparable to that of controls, and FA-HAPCs were not apoptotic. This, according to the researchers, suggests a defect in FA-HAPCs’ ability to differentiate into hematopoietic and endothelial cells.
Further study of FA-HAPCs revealed significant downregulation of transcription factors that are critical for hematopoietic differentiation. This suggests the FA pathway might be involved in maintaining the transcriptional network critical for determining the differentiation propensity of HAPCs, the researchers said.
They also identified 227 genes that were significantly upregulated and 396 genes that were significantly downregulated in FA-HAPCs. The downregulated genes included those associated with mesodermal differentiation, vascular formation, and hematopoiesis.
“These data indicate that the hematopoietic consequences in FA patients originate from the earliest hematopoietic stage and highlight the potential usefulness of iPSC technology for explaining how FA-BMF occurs,” Dr Saito said.
“Since conducting a comprehensive analysis of patient-derived affected stem cells is not feasible without iPSC technology, the technology provides an unprecedented opportunity to gain further insight into this disease.”
Dr Saito and colleagues described this research in STEM CELLS Translational Medicine.
News reports on stem cell research often unrealistic, team says
Media coverage of translational stem cell research might generate unrealistic expectations, according to a pair of researchers.
The team analyzed reports on stem cell research published in major daily newspapers in Canada, the US, and the UK between 2010 and 2013.
They found that most reports were highly optimistic about the future of stem cell therapies and indicated that therapies would be available for clinical use within 5 to 10 years or sooner.
The researchers said that, as spokespeople, scientists need to be mindful of managing public expectations.
“As the dominant voice in respect to timelines for stem cell therapies, the scientists quoted in these stories need to be more aware of the importance of communicating realistic timelines to the press,” said Kalina Kamenova, PhD, of the University of Alberta in Edmonton, Canada.
Dr Kamenova conducted this research with Timothy Caulfield, also of the University of Alberta, and the pair disclosed their results in Science Translational Medicine.
The researchers examined 307 news reports covering translational research on stem cells, including human embryonic stem cells (21.5%), induced pluripotent stem cells (12.1%), cord blood stem cells (2.9%), other tissue-specific stem cells such as bone marrow or mesenchymal stem cells (23.8%), multiple types of stem cells (18.9%), and stem cells of an unspecified type (20.8%).
The team assessed perspectives on the future of stem cell therapies and found that 57.7% of news reports were optimistic, 10.4% were pessimistic, and 31.9% were neutral.
In addition, 69% of all news stories citing timelines predicted that stem cell therapies would be available within 5 to 10 years or sooner.
“The approval process for new treatments is long and complicated, and only a few of all drugs that enter preclinical testing are approved for human clinical trials,” Dr Kamenova pointed out. “It takes, on average, 12 years to get a new drug from the lab to the market and [an] additional 11 to 14 years of post-market surveillance.”
“Our findings showed that many scientists have often provided, either by implication or direct quotes, authoritative statements regarding unrealistic timelines for stem cell therapies,” Caulfield added.
“[M]edia hype can foster unrealistic public expectations about clinical translation and increased patient demand for unproven stem cell therapies. Care needs to be taken by the media and the research community so that advances in research and therapy are portrayed in a realistic manner.”