How Valid Is the “Healthy Obese” Phenotype for Older Women?
Study Overview
Objective. To determine whether having a body mass index (BMI) in the obese range (≥ 30 kg/m2) as an older adult woman is associated with changes in late-age survival and morbidity.
Design. Observational cohort study.
Setting and participants. This study relied upon data collected as part of the Women’s Health Initiative (WHI), an observational study and clinical trial focusing on the health of postmenopausal women aged 50–79 years at enrollment. For the purposes of the WHI, women were recruited from centers across the United States between 1993 and 1998 and could participate in several intervention studies (hormone replacement therapy, low-fat diet, calcium/vitamin D supplementation) or an observational study [1].
For this paper, the authors utilized data from those WHI participants who, based on their age at enrollment, could have reached age 85 years by September of 2012. The authors excluded women who did not provide follow-up health information within 18 months of their 85th birthdays or who reported mobility disabilities at their baseline data collection. This resulted in a total of 36,611 women for analysis.
There were a number of baseline measures collected on the study participants. Via written survey, participants self-reported their race and ethnicity, hormone use status, smoking status, alcohol consumption, physical activity level, depressive symptoms, and a number of demographic characteristics. Study personnel objectively measured height and weight to calculate baseline BMI and also measured waist circumference (WC, in cm).
The primary exposure measure was BMI category at trial entry: underweight (< 18.5 kg/m2), healthy weight (18.5–24.9 kg/m2), overweight (25.0–29.9 kg/m2), obese class I (30.0–34.9 kg/m2), obese class II (35.0–39.9 kg/m2), or obese class III (≥ 40 kg/m2), using standard accepted cut-points except for Asian/Pacific Islander participants, for whom slightly lower World Health Organization (WHO) cut-points were used to account for typical body habitus and disease risk in that population. BMI changes over study follow-up were not included in the exposure measure. WC (dichotomized at 88 cm) was also used as an exposure measure.
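Because the exposure definition is a simple cut-point lookup, it can be made concrete in a few lines. The Python sketch below is illustrative only, not the study's analysis code; it applies the standard cut-points, and since the lower WHO cut-points used for Asian/Pacific Islander participants are not enumerated in this review, they are left as a noted assumption.

```python
def bmi_category(bmi: float) -> str:
    """Classify baseline BMI (kg/m^2) using the standard cut-points
    described above. The study applied slightly lower WHO cut-points
    for Asian/Pacific Islander participants (values not given here)."""
    if bmi < 18.5:
        return "underweight"
    if bmi < 25.0:
        return "healthy weight"
    if bmi < 30.0:
        return "overweight"
    if bmi < 35.0:
        return "obese class I"
    if bmi < 40.0:
        return "obese class II"
    return "obese class III"

def wc_category(wc_cm: float) -> str:
    """Dichotomize waist circumference at 88 cm, as in the study."""
    return "WC >= 88 cm" if wc_cm >= 88.0 else "WC < 88 cm"

# Example: a participant with BMI 31.2 kg/m^2 and WC 91 cm
print(bmi_category(31.2), "|", wc_category(91.0))  # obese class I | WC >= 88 cm
```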
Main outcome measures. Disease-free survival status during the follow-up period. In the year in which participants would have reached their 85th birthday, they were classified as having survived or died. Survival status was ascertained by hospital record review, autopsy reports, death certificates, and review of the National Death Index. Survivors were sub-grouped according to type of survival into 1 of the following categories: (1) no incident disease and no mobility disability (healthy), (2) baseline disease present but no incident disease or mobility disability during follow-up (prevalent disease), (3) incident disease but no mobility disability during follow-up (incident disease), and (4) incident mobility disability with or without incident disease (disabled).
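The five-way outcome assignment amounts to a short decision cascade. The sketch below is one schematic reading of the definitions above, not the authors' code; the boolean flag names are hypothetical.

```python
def outcome_category(survived_to_85: bool,
                     prevalent_disease: bool,
                     incident_disease: bool,
                     incident_disability: bool) -> str:
    """Map follow-up status to the five outcome categories described above."""
    if not survived_to_85:
        return "died"
    if incident_disability:      # with or without incident disease
        return "disabled"
    if incident_disease:
        return "incident disease"
    if prevalent_disease:
        return "prevalent disease"
    return "healthy"

# Example: a survivor with baseline disease but no new disease or disability
print(outcome_category(True, True, False, False))  # prevalent disease
```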
Diseases of interest (prevalent and incident) included coronary and cerebrovascular disease, cancer, diabetes and hip fracture—the conditions the investigators felt most increased risk of death or morbidity and mobility disability in this population of aging women. Baseline disease status was defined using self-report, but incident disease in follow-up was more rigorously defined using self-report plus medical record review, except for incident diabetes, which required only self-report of diagnosis plus report of new oral hypoglycemic or insulin use.
Because the outcome of interest (survival status) had 5 possible categories, multinomial logistic regression was used as the analytic technique, with baseline BMI category and WC categories as predictors. The authors adjusted for baseline characteristics including age, race/ethnicity, study arm (intervention or observational for WHI), educational level, marital status, smoking status, ethanol use, self-reported physical activity and depression symptoms. Because of the possibly interrelated predictors (BMI and WC), the authors built BMI models with and without WC, and when WC was the primary predictor they adjusted for a participant’s BMI in order to try to isolate the impact of central adiposity. Additionally, they performed the analyses stratified by race and ethnicity as well as by smoking status.
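For readers who want to see the analytic technique in miniature, here is a hedged sketch of a multinomial logistic regression on synthetic data using statsmodels. The covariate set is deliberately truncated and all variable names are invented; the point is only that exponentiating the fitted coefficients yields odds ratios of the kind reported below.

```python
# Minimal sketch of multinomial logistic regression on synthetic data.
# The actual study adjusted for many more baseline characteristics.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "bmi_cat": rng.integers(0, 6, n),   # 0 = healthy weight (reference) ... 5 = obese III
    "wc_ge_88": rng.integers(0, 2, n),  # waist circumference >= 88 cm
    "age": rng.normal(72.4, 3.0, n),
    "outcome": rng.integers(0, 5, n),   # 0 = healthy (reference) ... 4 = died
})

# Indicator variables for BMI category, plus WC and age covariates
dummies = pd.get_dummies(df["bmi_cat"], prefix="bmi", drop_first=True)
X = sm.add_constant(dummies.join(df[["wc_ge_88", "age"]]).astype(float))

fit = sm.MNLogit(df["outcome"], X).fit(disp=False)
print(np.exp(fit.params))  # odds ratios for each outcome vs. the reference category
```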
Results. The mean (SD) baseline age of participants was 72.4 (3) years, and the vast majority (88.5%) self-identified as non-Hispanic white. At the end of the follow-up period, of the initial 36,611 participants, 9079 (24.8%) had died, 6702 (18.3%) had become disabled, 8512 (23.2%) had developed incident disease without disability, 5366 (14.6%) had prevalent but no incident disease, and 6952 (18.9%) were categorized as healthy. A number of potentially confounding baseline characteristics differed between the survival categories. Importantly, race was associated with survival status: non-Hispanic white women were more likely to be in the “healthy” category at follow-up than women from other racial/ethnic groups. Baseline smokers and women with less than a high school education were both less likely to live to 85 years.
In models adjusting for baseline covariates, with BMI category as the primary predictor, women with an obese baseline BMI had significantly increased odds of not living to 85 years of age relative to women in the healthy baseline BMI category, with odds of death rising across higher baseline BMI levels (class I obesity odds ratio [OR] 1.72 [95% CI 1.55–1.92], class II obesity OR 3.28 [95% CI 2.69–4.01], class III obesity OR 3.48 [95% CI 2.52–4.80]). Among survivors, baseline obesity was also associated with greater odds of developing incident disease relative to healthy weight women (class I obesity OR 1.65 [95% CI 1.48–1.84], class II obesity OR 2.44 [95% CI 2.02–2.96], class III obesity OR 1.73 [95% CI 1.21–2.46]). There was a striking relationship between baseline obesity and the odds of incident disability during follow-up (class I obesity OR 3.22 [95% CI 2.87–3.61], class II obesity OR 6.62 [95% CI 5.41–8.09], class III obesity OR 6.65 [95% CI 4.80–9.21]).
Women who were overweight at baseline also displayed statistically significant but more modestly increased odds of incident disease, mobility disability, and death relative to their normal-weight counterparts. Importantly, even in multivariable models, being underweight at baseline was also associated with significantly increased odds of death before age 85 relative to healthy weight individuals (OR 2.09 [95% CI 1.54–2.85]) but not with increased odds of incident disease or disability.
When WC status was adjusted for in the “BMI-outcome” models, the odds of death, disability, and incident disease were attenuated for obese women but remained elevated, particularly for women with class II or III obesity. When WC was examined as a primary predictor in multivariable models (adjusted for BMI category), those women with baseline WC ≥ 88 cm experienced increased odds of incident disease (OR 1.47 [95% CI 1.33–1.62]), mobility disability (OR 1.64 [95% CI 1.49–1.84]) and death (OR 1.83 [95% CI 1.66–2.03]) compared to women with smaller baseline WC.
When participants were stratified by race/ethnicity, the associations between baseline obesity and increased odds of incident disease/disability persisted for non-Hispanic white and black/African-American participants. Hispanic/Latina participants who were obese at baseline, however, did not have significantly increased odds of death before 85 years relative to healthy weight counterparts, although far fewer of these women were represented in the cohort (n = 600). Asian/Pacific Islander (API) participants (n = 781), the majority of whom were in the healthy weight range at baseline (57%), showed a somewhat different pattern. Odds ratios for incident disease and death among obese API women were not significantly elevated relative to healthy weight women (although the numbers in these groups were relatively small); however, the odds of incident disability were significantly elevated among API women who were obese at baseline (OR 4.95 [95% CI 1.51–16.23]).
Conclusion. Compared to older women with a healthy BMI, obese women and those with increased abdominal circumference had a lower chance of surviving to age 85 years. Those who did survive were more likely to develop incident disease and/or disability than their healthy weight counterparts.
Commentary
The prevalence of obesity has risen substantially over the past several decades, and few demographic groups have found themselves spared from the epidemic [2]. Although much focus is placed on obesity incidence and prevalence among children and young adults, adults over age 60, a growing segment of the US population, are heavily impacted by the rising rates of obesity as well, with 42% of women and 37% of men in this group characterized as obese in 2010 [2]. This trend has potentially major implications for policy makers who are tasked with cutting the cost of programs such as Medicare.
Obesity has only recently been recognized as a disease by the American Medical Association, and yet it has long been associated with costly and debilitating chronic conditions such as type 2 diabetes, hypertension, sleep apnea, and degenerative joint disease [3]. Despite this, several epidemiologic studies have suggested an “obesity paradox”: older adults who are mildly obese have mortality rates similar to normal weight adults, and those who are overweight appear to have lower mortality [4]. These papers have generated controversy among obesity researchers and epidemiologists, who have grappled with the following question: How is it possible that overweight and obesity, while clearly linked to so many chronic conditions that increase mortality and morbidity, might be a good thing? Is there such a thing as a “healthy level of obesity,” or can you be “fit and fat”? In the midst of these discussions, and the media storm that inevitably surrounds them, patients are confronted with confusing mixed messages, possibly making them less likely to attempt to maintain a healthy body weight. Unfortunately, as many prior authors have asserted, most of the epidemiologic studies that assert this protective effect of overweight and obesity have not accounted for potentially important confounders of the weight category–mortality relationship, such as smoking status [5]. Among older adults, a substantial fraction of those in the normal weight category are at a so-called healthy BMI for very unhealthy reasons, such as cigarette smoking, cancer, or other chronic conditions (ie, they were heavier but lost weight due to underlying illness). Including these sick (but nominally “healthy weight”) people alongside those who are truly healthy and in a healthy BMI range muddies the picture and does not effectively isolate the impact of weight status on morbidity and mortality.
This cohort study by Rillamas-Sun et al makes an important contribution to the discussion by relying on a very large and comprehensive dataset, with an impressive follow-up period of nearly 2 decades, to more fully isolate the relationship between BMI category and survival for postmenopausal women. By adjusting for important potential confounders such as baseline smoking status, alcohol use, chronic disease status, and a number of sociodemographic factors, and by separating out the chronically ill patients from the beginning, the investigators reached conclusions that seem to align better with all that we know about the increased health risks conferred by obesity. They found that postmenopausal women who were obese but without prevalent disease at baseline had increased odds of death before age 85, as well as increased odds of incident chronic disease (such as cardiovascular disease or diabetes) and increased odds of incident disability relative to postmenopausal women starting out in a healthy BMI range. Degree of obesity seemed to matter as well; those with class II and III obesity had significantly increased odds of developing mobility impairment, in particular, relative to normal weight women. This is particularly important when viewed through the lens of caring for an aging population: those who have significant mobility impairment will have a much harder time caring for themselves as they age. Furthermore, they found that overweight women also faced slightly increased odds of these outcomes relative to normal weight women. Abdominal adiposity, in particular, appeared to confer risk of death and disease, as elevated odds of mortality and incident disease or disability persisted in women with waist circumference ≥ 88 cm even after adjusting for BMI. As has been suggested by prior research on this topic, this study also supported the finding that being underweight increases one's odds of death; however, there was no increased incidence of disease or mobility disability for underweight women (relative to those starting at a healthy weight).
The authors of the study made a wise decision in separating women with baseline chronic illness from those who had not yet been diagnosed with diabetes, cardiovascular disease or other chronic condition at baseline. As is pointed out in an editorial accompanying this study [6], this creates a scenario where the exposure (obesity) clearly predates the outcome (chronic illness), helping to avoid contamination of risk estimates by reverse causation (ie, is chronic illness leading to increased obesity, with the downstream increase in mortality actually due to the chronic illness?).
Despite the clear strengths of the study, there are several important limitations that must be acknowledged in interpreting the results. The most obvious is that BMI status was only measured at baseline. There is no way of knowing either what a participant's weight trajectory had been in her younger years or what happened to her BMI during the study follow-up period, both of which could certainly impact her risk of morbidity or mortality. Given a follow-up period of nearly 20 years, it is possible that there was crossover between BMI (exposure) categories after baseline assignment. Furthermore, the study does not address the very important question of how an intervention to promote weight loss in older women might impact morbidity and mortality; it is possible that encouraging weight loss in this population may in fact worsen health outcomes for some patients [6].
The generalizability of the study may be somewhat limited. The study population represented a group of women who were likely relatively healthy and motivated, having self-selected to participate in the WHI; thus, they could have been healthier than groups studied in previous population-based samples. Furthermore, the study results may not generalize to men; however, other similar cohort studies with male participants have reached similar conclusions [7].
Applications for Clinical Practice
To promote longevity and maintenance of independence in our growing population of postmenopausal women, it is important that physicians continue to educate and assist their patients in maintaining a healthy weight as they age. Although the impact of intentional weight loss in obese older women is not addressed by this paper, it does support the idea that obese postmenopausal women are at higher risk of disability and of death before age 85 years. Therefore, for these patients, physicians should take particular care to reinforce healthy lifestyle choices such as good nutrition and regular physical activity.
—Kristina Lewis, MD, MPH
1. Design of the Women’s Health Initiative clinical trial and observational study. The Women’s Health Initiative Study Group. Control Clin Trials 1998;19:61–109.
2. Flegal KM, Carroll MD, Kit BK, Ogden CL. Prevalence of obesity and trends in the distribution of body mass index among US adults, 1999-2010. JAMA 2012;307:491–7.
3. Must A, Spadano J, Coakley EH, et al. The disease burden associated with overweight and obesity. JAMA 1999;282:1523–9.
4. Flegal KM, Kit BK, Orpana H, Graubard BI. Association of all-cause mortality with overweight and obesity using standard body mass index categories: a systematic review and meta-analysis. JAMA 2013;309:71–82.
5. Jackson CL, Stampfer MJ. Maintaining a healthy body weight is paramount. JAMA Intern Med 2014;174:23–4.
6. Dixon JB, Egger GJ, Finkelstein EA, et al. ‘Obesity Paradox’ misunderstands the biology of optimal weight throughout the life cycle. Int J Obesity 2014.
7. Reed DM, Foley DJ, White LR, et al. Predictors of healthy aging in men with high life expectancies. Am J Public Health 1998;88:1463–8.
Finding the Optimum in the Use of Elective Percutaneous Coronary Intervention
From the VA Eastern Colorado Health Care System, University of Colorado School of Medicine, and the Colorado Cardiovascular Outcomes Research Group, Denver and Aurora, CO.
Abstract
- Objective: To review the use of elective percutaneous coronary intervention (PCI), evaluate what is currently known about elective PCI in the context of appropriate use criteria, and offer insight into next steps to optimize the use of elective PCI to achieve high-quality care.
- Methods: Review of the scientific literature, appropriate use criteria, and professional society guidelines relevant to elective PCI.
- Results: Recent studies have demonstrated as many as 1 in 6 elective PCIs are inappropriate as determined by appropriate use criteria. These inappropriate PCIs are not anticipated to benefit patients and result in unnecessary patient risk and cost. While these studies are consistent with regard to overuse of elective PCI, less is known about potential underuse of PCI for elective indications. We lack health status data on populations of ischemic heart disease patients to inform PCI underuse that may contribute to patient symptom burden, functional status, and quality of life. Optimal use of PCI will be attained with longitudinal capture of patient-reported health status, study of factors contributing to overuse and underuse, refinement of the appropriate use criteria with particular focus on patient-centered measures, and incorporation of patient preference and shared decision making into appropriateness evaluation tools.
- Conclusion: The use of elective PCI is less than optimal in current clinical practice. Continued effort is needed to ensure elective PCI is targeted to patients with anticipated benefit and use of the procedure is aligned with patient preferences.
Providing the right care to the right patient at the right time is essential to the practice of high-quality care. Reducing overuse of health care services is part of this equation, and initiatives to reduce inappropriate use and to encourage physicians and patients to “choose wisely” have been introduced [1]. One procedure that is being examined with a focus on appropriateness is percutaneous coronary intervention (PCI). This procedure is common (nearly 1 million inpatient PCI procedures performed in 2010), presents risks to the patient, and is expensive (attributable cost approximately $10 billion in 2010) [2,3]. While the clinical benefit of PCI in acute settings such as ST-segment elevation myocardial infarction is well established [4], the benefit of PCI in nonacute (elective) settings is less robust [5–7]. Prior studies have demonstrated PCI for stable ischemic heart disease does not result in mortality benefit [6]. Furthermore, PCI as an initial strategy for symptom relief of stable angina may offer little benefit relative to medications alone [5]. Given that PCI is common, costly, and associated with both short- and long-term risks [8,9], ensuring this therapy is provided to the right patient at the right time is important.
In 2009, appropriate use criteria (AUC) were developed by 6 professional organizations to support the rational and judicious use of PCI [10]; a focused update was published in 2012 [11]. In this review, we discuss the recommendations for appropriate use and their application and offer thoughts on next steps to optimize the use of elective PCI as part of high-quality care.
Variation in the Use of PCI
Significant public attention has been focused on the issue of overuse after lay press investigations into community practice patterns. In particular, a case study presented in the New York Times highlighted the community of Elyria, Ohio, which was found to have a PCI rate that was 4 times the national average [16]. This investigation sparked public debate and further focused attention on the issue of overuse of elective PCI. Conversely, others have pointed to data that suggest underuse of coronary procedural care, particularly among women and racial and ethnic minorities [17–22].
Appropriate Use Criteria
Development Methodology
AUC for PCI, which were developed through the collaborative efforts of 6 major cardiovascular professional organizations, are intended to support the effective, efficient, and equitable use of PCI [10,11]. They were developed in response to a growing need to support rational use of cardiovascular procedures as part of high-quality care. The methods of development for the AUC have been described in detail in the criteria publications [10,11]. We briefly review these methods here.
Panel members first individually assigned each clinical scenario a rating from 1 (least appropriate) to 9 (most appropriate). This was followed by an in-person meeting in which technical panel members discussed scenarios for which there was wide variation in the ratings. After this meeting, panel members again rated each scenario from 1 to 9. After this second round, the median of the pooled ratings determined the appropriateness classification for each scenario: scenarios with median values of 1–3 were classified as “inappropriate,” 4–6 as “uncertain,” and 7–9 as “appropriate.” A rating of “appropriate” represented clinical scenarios in which the indication is considered generally acceptable and likely to improve health outcomes or survival. A rating of “uncertain” represented clinical scenarios in which the indication may be reasonable but more research is needed to clarify the relative benefits and risks of PCI in that setting. Finally, a rating of “inappropriate” represented clinical scenarios in which the indication is not generally acceptable because it is unlikely to improve health outcomes or survival.
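The final classification step reduces to taking the median of the second-round ratings and bucketing it. A minimal sketch of that step follows; the example ratings are hypothetical.

```python
from statistics import median

def classify_scenario(ratings: list[int]) -> str:
    """Bucket the median of second-round panel ratings (each 1-9)
    into the three appropriateness categories described above."""
    m = median(ratings)
    if m <= 3:
        return "inappropriate"
    if m <= 6:
        return "uncertain"
    return "appropriate"

# Hypothetical second-round ratings from a technical panel
print(classify_scenario([2, 3, 3, 2, 4, 3, 2, 3, 3, 2, 3, 4, 3, 2, 3]))  # inappropriate
```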
The approach used for AUC development appears to be valid, as Class III indications for PCI in the ACC/AHA clinical guideline [24] (Class III = PCI should NOT be performed since it is not helpful and may be harmful) and AUC scenarios rated as inappropriate are in 100% agreement (personal communication, Ralph Brindis, past president of the American College of Cardiology).
Application
It is important to remember that the AUC are intended to aid in patient selection and are not absolute. Unique clinical factors and patient preference cannot feasibly be captured by the AUC scenarios. It should also be noted that the intent of the AUC is not to be punitive but rather to identify and assess variation in practice patterns. To reflect this intent, the terminology applied to appropriateness ratings has recently changed. Clinical scenarios previously classified as “inappropriate” are now termed “rarely appropriate” and clinical scenarios classified as “uncertain” are now termed “may be appropriate.”
Although the AUC were developed to help evaluate practice patterns of care delivery and serve as guides for clinical decision making, they were not intended to serve as mandates for or against treatment in individual patients or to be tied to reimbursement for individual patients. Despite this, health care organizations and payors have used other AUC documents, specifically those for cardiovascular imaging, in incentive pay and prior authorization programs [23]. Use of the AUC in this manner may still be reasonable if application and measurement are at the level of the practice rather than the individual patient, but much remains to be understood about the implications of applying AUC in reimbursement decisions.
Refinement
The AUC for PCI are designed to be dynamic and continually updated. As additional evidence becomes available regarding the efficacy of PCI in specific clinical scenarios, there will be ongoing efforts to update the AUC to reflect this new evidence. This is highlighted by the first update to the AUC occurring less than 3 years after the original publication date [11].
In addition to perpetual review of the data used to inform scenario ratings, there are opportunities to improve measurement of the clinical variables that are considered in rating PCI appropriateness (eg, clinical presentation, symptom severity, ischemia severity, extent of medical therapy, extent of anatomic disease). For example, in the current AUC, symptom severity depends on clinician assessment using the Canadian Cardiovascular Society Classification [25]. Moving toward a patient-centered assessment of symptom severity, such as with a validated instrument like the Seattle Angina Questionnaire, would ensure that the AUC more closely reflect the patient-perceived symptom burden. Further, the use of a patient-centered instrument would reduce the possibility of physician manipulation of symptom severity to influence the apparent appropriateness of PCI. There are similar opportunities to improve reporting of noninvasive stress test data, such as through standardized reporting of ischemic risk. Finally, the use of physiologic assessments of stenosis severity (eg, fractional flow reserve) and quantitative coronary angiography to standardize interpretations of diagnostic angiography may further optimize the assessment of PCI appropriateness.
Application of the Appropriate Use Criteria in Clinical Practice—Study Results
Application of the AUC to clinical practice has highlighted potential overuse of PCI (Table). The first report came from applying the AUC to the National Cardiovascular Data Registry (NCDR) CathPCI Registry [26]. In this study of more than 500,000 PCIs from over 1000 facilities across the country, the authors found that PCIs performed in the acute setting (STEMI, NSTEMI, and high-risk unstable angina) were almost uniformly classified as appropriate. However, for nonacute (elective) PCI, application of the AUC resulted in the classification of 50% as appropriate, 38% as uncertain, and 12% as inappropriate. The majority of patients who received inappropriate PCI had a low-risk stress test (72%) or were asymptomatic (54%). Additionally, 96% of patients who received PCI classified as inappropriate had not been given a trial of adequate anti-anginal therapy. This analysis was supported by subsequent analyses of 2 other state-specific registries (New York and Washington), which found similar rates of PCI for nonacute indications rated as inappropriate [27,28]. Additionally, all 3 studies showed wide facility-level variation in the percentage of appropriate and inappropriate PCI for elective indications.
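As an illustration of the facility-level measurement these registry studies report, the following hypothetical Python sketch (invented facility identifiers and records, not the registries' actual analytic code) tallies the proportion of elective PCIs rated inappropriate at each facility; applied to a full registry extract, rates computed this way would expose the facility-level variation described above.

```python
from collections import defaultdict

# Hypothetical per-procedure registry records: (facility_id, auc_rating)
procedures = [
    ("A", "appropriate"), ("A", "uncertain"), ("A", "inappropriate"),
    ("B", "appropriate"), ("B", "appropriate"), ("B", "appropriate"),
    ("C", "inappropriate"), ("C", "inappropriate"), ("C", "appropriate"),
]

# Tally total elective PCIs and those rated inappropriate per facility
counts = defaultdict(lambda: {"total": 0, "inappropriate": 0})
for facility, rating in procedures:
    counts[facility]["total"] += 1
    if rating == "inappropriate":
        counts[facility]["inappropriate"] += 1

# Report the facility-level rate of inappropriate elective PCI
for facility, c in sorted(counts.items()):
    rate = 100 * c["inappropriate"] / c["total"]
    print(f"Facility {facility}: {rate:.0f}% of elective PCIs rated inappropriate")
```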
These studies also highlight a gap in preprocedural care. The anticipated benefit of elective PCI is related to patient symptom burden, adequacy of anti-anginal therapy, and ischemic risk as determined by noninvasive stress testing. However, 30% to 50% of patients undergo elective PCI without evidence of preprocedural stress testing. Attempts are being made to address this gap with the recent release of PCI performance measures [29]. These performance measures, intended for cardiac catheterization labs, include comprehensive documentation of the indication for PCI, which is central to determination of appropriateness. This is the first time a procedural indication has been integrated into a cardiology performance measure.
As documentation of procedural indication and appropriateness has become part and parcel of assessing quality of care, concerns about “gaming” have become more pertinent. Providers who perform PCI could potentially enhance the appearance of appropriateness by overstating the clinical symptom burden or stress test findings. The incorporation of validated, patient-centered health status questionnaires, along with data audit programs, has been proposed as a safeguard against this type of abuse. Addressing quality gaps in preprocedural assessment and documentation is critical to optimizing use of elective PCI [28].
The apparent overuse of PCI for elective indications may be a reflection of our fragmented, fee-for-service health care delivery system. However, recent studies challenge this assumption. In a Canadian study, Ko et al found that 18% of elective PCIs were classified as inappropriate, a proportion similar to what had been found previously in the United States [30]. In a US study of Medicare beneficiaries, Matlock and colleagues observed fourfold regional variation in the use of elective coronary angiography and PCI among both Medicare fee-for-service and capitated Medicare Advantage beneficiaries [31]. Collectively, these studies suggest barriers to optimal patient selection for invasive coronary procedures in both capitated and fee-for-service health care systems. Without addressing the factors that drive variation even in the absence of fee-for-service incentives, efforts to improve integration and reduce fee-for-service reimbursement may be inadequate to optimize PCI use.
Evaluating Underuse
While potential underuse of PCI has been described for acute indications [17–22], studying underuse of PCI for elective indications is more challenging. Population data on the effect of underuse of elective PCI on patient symptom burden, functional status, and quality of life are lacking.
A population-based study from Australia highlights the potential importance of underuse in the care of patients with stable coronary disease. This study assessed symptom burden among patients with chronic stable angina using the Seattle Angina Questionnaire and included patients cared for by 207 primary care practitioners [32]. The authors noted considerable variation in patient symptom burden between practices: 14% of practices had no patients with more than 1 episode of angina per week, while 18% of practices had more than half of enrolled patients with at least 1 episode of angina per week. The authors postulated that this variability may be due to differences among providers in the identification and management of angina, including the use of PCI to minimize symptom burden.
In the Ko study mentioned earlier, the AUC were also used to examine potential underuse of coronary revascularization procedures. The investigators analyzed the association between AUC ratings and outcomes in patients undergoing diagnostic coronary angiography [30]. Of patients considered “appropriate” for revascularization following completion of diagnostic angiography, only 69% underwent revascularization. However, the clinical factors that influence the decision to proceed with revascularization may not be fully captured in this study. Thus, the true degree of underuse of PCI remains elusive.
In summary, the relative lack of data that would allow for the assessment of underuse of elective PCI is an important quality concern. Health systems should work to systematically capture patient-reported health status, including symptom burden data, to identify inadequate symptom control and potential underuse of procedural care for coronary artery disease.
Facilitating Optimal Use
In current practice, the AUC hold promise to minimize the overuse of elective PCI. Realizing this promise likely involves addressing processes occurring upstream of the cardiac catheterization lab, including employing systems to ensure that procedures are avoided in patients who are unlikely to benefit (eg, those who are asymptomatic or have a low ischemic burden) (Figure 3) [33]. Studying hospitals that already have low rates of inappropriate PCI may inform the design and dissemination of strategies that will help improve patient selection at hospitals with higher rates. Although professional organizations have developed tools intended to facilitate appropriateness evaluation at the point of care [34], the use of these tools is likely to be sporadic without greater integration into the health care delivery system. Further, these applications are currently limited to determination of the appropriateness of PCI after completion of the diagnostic coronary angiogram. Identifying processes prior to catheterization that contribute to PCI appropriateness may also streamline appropriate ad hoc PCI by mitigating the need to reassess appropriateness after the diagnostic angiogram.
Significant barriers exist to the application of the AUC for determination of procedural underuse. As described above, we lack adequate data to ascertain gaps in symptom management that could be mitigated by proper use of PCI. Further study of symptom burden in populations of patients with coronary artery disease is needed. This may help in the identification of patient populations whose symptom burden may warrant consideration of invasive coronary procedures, including coronary angiography and PCI.
Finally, it is important to note that the AUC are based on technical considerations, ie, practice guidelines and trial evidence; they do not take patient preferences into consideration. For example, PCI can be technically appropriate for the clinical scenario but inappropriate for the individual if the procedure is not desired by the patient. Similarly, a procedure may be of uncertain benefit yet appropriate if the patient desires more aggressive procedural care and fully understands the risks and benefits. Currently, we fail to convey this information to patients, as evidenced by patients’ overestimation of the benefits of PCI [34]. As we continue to work toward optimal use of PCI, we must not only address the technical appropriateness of care but also incorporate patient preferences through a robust process of shared decision making.
Corresponding author: Preston M. Schneider, MD, VA Eastern Colorado Health Care System, Cardiology Section (111B), 1055 Clermont St., Denver, CO 80220, [email protected].
Funding/support: Dr. Schneider is supported by a T32 training grant from the National Institutes of Health (5T32HL007822-15). Dr. Bradley is supported by a Career Development Award (HSR&D-CDA2 10-199) from VA Health Services Research & Development.
Financial disclosures: None.
1. Cassel CK, Guest JA. Choosing wisely: helping physicians and patients make smart decisions about their care. JAMA 2012;307:1801–2.
2. Go AS, Mozaffarian D, Roger VL, et al. Heart disease and stroke statistics—2013 update: a report from the American Heart Association. Circulation 2013;127:e6–e245.
3. HCUPnet: A tool for identifying, tracking, and analyzing national hospital statistics. Accessed 22 Oct 2013 at http://hcupnet.ahrq.gov/HCUPnet.jsp?Parms=H4sIAAAAAAAAABXBMQ6AIBAEwC9JAg.gsLAhRvjAnnuXgGihFb9XZwYe3EhLdpN2h2aIcsnQLCp9jQVbLDN3ksqDnSeqVXzNfIAP9mtmLy0rZhdIAAAA83D0C2BCAE02DD1508408B2C5C094F1ADF6E788C&JS=Y.
4. Keeley EC, Boura JA, Grines CL. Primary angioplasty versus intravenous thrombolytic therapy for acute myocardial infarction: a quantitative review of 23 randomised trials. Lancet 2003;361:13–20.
5. Boden WE, O’Rourke RA, Teo KK, et al. Optimal medical therapy with or without PCI for stable coronary disease. N Engl J Med 2007;356:1503–16.
6. Boden WE, O’Rourke RA, Teo KK, et al. Impact of optimal medical therapy with or without percutaneous coronary intervention on long-term cardiovascular end points in patients with stable coronary artery disease (from the COURAGE Trial). Am J Cardiol 2009;104:1–4.
7. Stergiopoulos K, Brown DL. Initial coronary stent implantation with medical therapy vs medical therapy alone for stable coronary artery disease: Meta-analysis of randomized controlled trials. Arch Intern Med 2012;172:312–9.
8. McCullough PA, Adam A, Becker CR, et al. Epidemiology and prognostic implications of contrast-induced nephropathy. Am J Cardiol 2006;98:5K–13K.
9. Roe MT, Messenger JC, Weintraub WS, et al. Treatments, trends, and outcomes of acute myocardial infarction and percutaneous coronary intervention. J Am Coll Cardiol 2010;56:254–63.
10. Patel MR, Dehmer GJ, Hirshfeld JW, et al. ACCF/SCAI/STS/AATS/AHA/ASNC 2009 Appropriateness Criteria for Coronary Revascularization: A Report by the American College of Cardiology Foundation Appropriateness Criteria Task Force, Society for Cardiovascular Angiography and Interventions, Society of Thoracic Surgeons, American Association for Thoracic Surgery, American Heart Association, and the American Society of Nuclear Cardiology Endorsed by the American Society of Echocardiography, the Heart Failure Society of America, and the Society of Cardiovascular Computed Tomography. J Am Coll Cardiol 2009;53:530–53.
11. Patel MR, Dehmer GJ, Hirshfeld JW, et al. ACCF/SCAI/STS/AATS/AHA/ASNC/HFSA/SCCT 2012 Appropriate Use Criteria for Coronary Revascularization Focused Update: A Report of the American College of Cardiology Foundation Appropriate Use Criteria Task Force, Society for Cardiovascular Angiography and Interventions, Society of Thoracic Surgeons, American Association for Thoracic Surgery, American Heart Association, American Society of Nuclear Cardiology, and the Society of Cardiovascular Computed Tomography. J Am Coll Cardiol 2012;59:857–81.
12. Dartmouth Atlas of Health Care. Accessed 8 Jan 2014 at www.dartmouthatlas.org.
13. Dartmouth Atlas of Health Care: Studies of surgical variation. Cardiac surgery report. 2005. Accessed 8 Jan 2014 at www.dartmouthatlas.org/publications/reports.aspx.
14. Fisher ES, Wennberg DE, Stukel TA, et al. The implications of regional variations in Medicare spending. Part 1: the content, quality, and accessibility of care. Ann Intern Med 2003;138:273–87.
15. Fisher ES, Wennberg DE, Stukel TA, et al. The implications of regional variations in Medicare spending. Part 2: health outcomes and satisfaction with care. Ann Intern Med 2003;138:288–98.
16. Abelson R. Heart procedure is off the charts in an Ohio city. New York Times 2006. Accessed 23 Apr 2013 at www.nytimes.com/2006/08/18/business/18stent.html.
17. Akhter N, Milford-Beland S, Roe MT, et al. Gender differences among patients with acute coronary syndromes undergoing percutaneous coronary intervention in the American College of Cardiology-National Cardiovascular Data Registry (ACC-NCDR). Am Heart J 2009;157:141–8.
18. Blomkalns AL, Chen AY, Hochman JS, et al. Gender disparities in the diagnosis and treatment of non–ST-segment elevation acute coronary syndromes: large-scale observations from the CRUSADE (Can Rapid Risk Stratification of Unstable Angina Patients Suppress Adverse Outcomes With Early Implementation of the American College of Cardiology/American Heart Association Guidelines) National Quality Improvement Initiative. J Am Coll Cardiol 2005;45:832–7.
19. Daly C, Clemens F, Lopez Sendon JL, et al. Gender differences in the management and clinical outcome of stable angina. Circulation 2006;113:490–8.
20. Groeneveld PW, Heidenreich PA, Garber AM. Racial disparity in cardiac procedures and mortality among long-term survivors of cardiac arrest. Circulation 2003;108:286–91.
21. Hannan EL, Zhong Y, Walford G, et al. Underutilization of percutaneous coronary intervention for ST-elevation myocardial infarction in Medicaid patients relative to private insurance patients. J Intervent Cardiol 2013;26:470–81.
22. Sonel AF, Good CB, Mulgund J, et al. Racial variations in treatment and outcomes of black and white patients with high-risk non–ST-elevation acute coronary syndromes: insights from CRUSADE (Can Rapid Risk Stratification of Unstable Angina Patients Suppress Adverse Outcomes With Early Implementation of the ACC/AHA Guidelines?). Circulation 2005;111:1225–32.
23. Patel MR, Spertus JA, Brindis RG, et al. ACCF proposed method for evaluating the appropriateness of cardiovascular imaging. J Am Coll Cardiol 2005;46:1606–13.
24. Levine GN, Bates ER, Blankenship JC, et al. 2011 ACCF/AHA/SCAI Guideline for percutaneous coronary intervention: executive summary: a report of the American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines and the Society for Cardiovascular Angiography and Interventions. Circulation 2011;124:2574–609.
25. Campeau L. Letter: Grading of angina pectoris. Circulation 1976;54:522–3.
26. Chan PS, Patel MR, Klein LW, et al. Appropriateness of percutaneous coronary intervention. JAMA 2011;306:53–61.
27. Hannan EL, Cozzens K, Samadashvili Z, et al. Appropriateness of coronary revascularization for patients without acute coronary syndromes. J Am Coll Cardiol 2012;59:1870–6.
28. Bradley SM, Maynard C, Bryson CL. Appropriateness of percutaneous coronary interventions in Washington State. Circ Cardiovasc Qual Outcomes 2012;5:445–53.
29. Nallamothu BK, Tommaso CL, Anderson HV, et al. ACC/AHA/SCAI/AMA–Convened PCPI/NCQA 2013 Performance measures for adults undergoing percutaneous coronary intervention. A report of the American College of Cardiology/American Heart Association Task Force on Performance Measures, the Society for Cardiovascular Angiography and Interventions, the American Medical Association–Convened Physician Consortium for Performance Improvement, and the National Committee for Quality Assurance. J Am Coll Cardiol 2014;63:722–45.
30. Ko DT, Guo H, Wijeysundera HC, et al. Assessing the association of appropriateness of coronary revascularization and clinical outcomes for patients with stable coronary artery disease. J Am Coll Cardiol 2012;60:1876–84.
31. Matlock DD, Groeneveld PW, Sidney S, et al. Geographic variation in cardiovascular procedure use among Medicare fee-for-service vs Medicare Advantage beneficiaries. JAMA 2013;310:155–62.
32. Beltrame JF, Weekes AJ, Morgan C, et al. The prevalence of weekly angina among patients with chronic stable angina in primary care practices: the Coronary Artery Disease in General Practice (CADENCE) study. Arch Intern Med 2009;169:1491–9.
33. Bradley SM, Spertus JA, Nallamothu BK, et al. The association between patient selection for diagnostic coronary angiography and hospital-level PCI appropriateness: Insights from the NCDR. Circ Cardiovasc Qual Outcomes 2013;6:A1. Accessed 20 Nov 2013 at http://circoutcomes.ahajournals.org/cgi/content/short/6/3_MeetingAbstracts/A1?rss=1.
34. Lee J, Chuu K, Spertus J, et al. Patients overestimate the potential benefits of elective percutaneous coronary intervention. Mo Med 2012;109:79.
From the VA Eastern Colorado Health Care System, University of Colorado School of Medicine, and the Colorado Cardiovascular Outcomes Research Group, Denver and Aurora, CO.
Abstract
- Objective: To review the use of elective percutaneous coronary intervention (PCI), evaluate what is currently known about elective PCI in the context of appropriate use criteria, and offer insight into next steps to optimize the use of elective PCI to achieve high-quality care.
- Methods: Review of the scientific literature, appropriate use criteria, and professional society guidelines relevant to elective PCI.
- Results: Recent studies have demonstrated as many as 1 in 6 elective PCIs are inappropriate as determined by appropriate use criteria. These inappropriate PCIs are not anticipated to benefit patients and result in unnecessary patient risk and cost. While these studies are consistent with regard to overuse of elective PCI, less is known about potential underuse of PCI for elective indications. We lack health status data on populations of ischemic heart disease patients to inform PCI underuse that may contribute to patient symptom burden, functional status, and quality of life. Optimal use of PCI will be attained with longitudinal capture of patient-reported health status, study of factors contributing to overuse and underuse, refinement of the appropriate use criteria with particular focus on patient-centered measures, and incorporation of patient preference and shared decision making into appropriateness evaluation tools.
- Conclusion: The use of elective PCI is less than optimal in current clinical practice. Continued effort is needed to ensure elective PCI is targeted to patients with anticipated benefit and use of the procedure is aligned with patient preferences.
Providing the right care to the right patient at the right time is essential to the practice of high-quality care. Reducing overuse of health care services is part of this equation, and initiatives to reduce inappropriate use and to encourage physicians and patients to “choose wisely” have been introduced [1]. One procedure that is being examined with a focus on appropriateness is percutaneous coronary intervention (PCI). This procedure is common (nearly 1 million inpatient PCI procedures performed in 2010), presents risks to the patient, and is expensive (attributable cost approximately $10 billion in 2010) [2,3]. While the clinical benefit of PCI in acute settings such as ST-segment elevation myocardial infarction is well established [4], the benefit of PCI in nonacute (elective) settings is less robust [5–7]. Prior studies have demonstrated PCI for stable ischemic heart disease does not result in mortality benefit [6]. Furthermore, PCI as an initial strategy for symptom relief of stable angina may offer little benefit relative to medications alone [5]. Given that PCI is common, costly, and associated with both short- and long-term risks [8,9], ensuring this therapy is provided to the right patient at the right time is important.
In 2009, appropriate use criteria (AUC) were developed by 6 professional organizations to support the rational and judicious use of PCI [10]; a focused update was published in 2012 [11]. In this review, we discuss the recommendations for appropriate use and their application and offer thoughts on next steps to optimize the use of elective PCI as part of high-quality care.
Variation in the Use of PCI
Additionally, significant public attention has been focused on the issue of overuse after lay press investigations into community practice patterns. In particular, a case study presented in the New York Times highlighted the community of Elyria, Ohio, which was found to have a PCI rate that was 4 times the national average [16]. This investigation sparked public debate and further focused attention on the issue of overuse of elective PCI. Conversely, others have pointed to data that suggest underuse of coronary procedural care, particularly among women and racial and ethnic minorities [17–22].
Appropriate Use Criteria
Development Methodology
AUC for PCI, which were developed through the collaborative efforts of 6 major cardiovascular professional organizations, are intended to support the effective, efficient, and equitable use of PCI [10,11]. They were developed in response to a growing need to support rational use of cardiovascular procedures as part of high-quality care. The methods of development for the AUC have been described in detail in the criteria publications [10,11]. We briefly review these methods here.
Panel members first individually assigned ratings to each clinical scenario that ranged from 1 (least appropriate) to 9 (most appropriate). This was followed by an in-person meeting in which technical panel members discussed scenarios for which there was wide variation in appropriateness assessment. After this meeting, technical panel members again assigned ratings for each scenario from 1 to 9. After this second round, the median values for the pooled ratings were used as the appropriateness classification for each scenario. Scenarios with median values of 1–3 were classified as “inappropriate,” 4–6 as “uncertain,” and 7–9 as “appropriate.” A rating of “appropriate” represented clinical scenarios in which the indication is considered generally acceptable and likely to improve health outcomes or survival. A rating of “uncertain” represented clinical scenarios where the indication may be reasonable but more research is necessary to further understand the relative benefits and risks of PCI in this setting. Finally, a rating of “inappropriate” represented clinical scenarios in which the indication is not generally acceptable as it is unlikely to improve health outcomes or survival.
The approach used for AUC development appears to be valid, as Class III indications for PCI in the ACC/AHA clinical guideline [24] (Class III = PCI should NOT be performed since it is not helpful and may be harmful) and AUC scenarios rated as inappropriate are in 100% agreement (personal communication, Ralph Brindis, past president of the American College of Cardiology).
Application
It is important to remember that the AUC are intended to aid in patient selection and are not absolute. Unique clinical factors and patient preference cannot feasibly be captured by the AUC scenarios. It should also be noted that the intent of the AUC is not to be punitive but rather to identify and assess variation in practice patterns. To reflect this intent, the terminology applied to appropriateness ratings has recently changed. Clinical scenarios previously classified as “inappropriate” are now termed “rarely appropriate” and clinical scenarios classified as “uncertain” are now termed “may be appropriate.”
Although the AUC were developed to help evaluate practice patterns of care delivery and serve as guides for clinical decision making, they were not intended to serve as mandates for or against treatment in individual patients or to be tied to reimbursement for individual patients. Despite this, health care organizations and payors have used other AUC documents for incentive pay and prior authorization programs, specifically for cardiovascular imaging [25]. Use of the AUC in this manner may still be reasonable if application and measurement is at the level of the practice, rather than the individual patient, but much remains to be understood about the implications of applying AUC in reimbursement
decisions.
Refinement
The AUC for PCI are designed to be dynamic and continually updated. As additional evidence becomes available regarding the efficacy of PCI in specific clinical scenarios, there will be ongoing efforts to update the AUC to reflect this new evidence. This is highlighted by the first update to the AUC occurring less than 3 years after the original publication date [11].
In addition to perpetual review of the data used to inform scenario ratings, there are opportunities to improve measurement of the clinical variables that are considered in rating PCI appropriateness (eg, clinical presentation, symptom severity, ischemia severity, extent of medical therapy, extent of anatomic disease). For example, in the current AUC, symptom severity is dependent on clinician assessment using the Canadian Cardiovascular Society Classification [25]. Moving toward a patient-centered assessment of symptom severity would ensure that the AUC more closely reflect the patient-perceived symptom burden. Further, the use of a patient-centered instrument would reduce the possibility of physician manipulation of symptom severity to influence the apparent appropriateness of PCI. There are similar opportunities to improve reporting of noninvasive stress test data, such as through standardized reporting of ischemic risk. Finally, the use of physiologic assessments of stenosis severity (eg, fractional flow reserve) and quantitative coronary angiography to standardize interpretations of diagnostic angiography may further optimize the assessment of PCI appropriateness.
Application of the Appropriate Use Criteria in Clinical Practice—Study Results
Application of the AUC to clinical practice has highlighted potential overuse of PCI (Table). The first report came from applying the AUC to the National Cardiovascular Data Registry (NCDR) CathPCI Registry [26]. In this study of more than 500,000 PCIs from over 1000 facilities across the country, the authors found that PCIs performed in the acute setting (STEMI, NSTEMI, and high-risk unstable angina) were almost uniformly classified as appropriate. However, for nonacute (elective) PCI, application of the AUC resulted in the classification of 50% as appropriate, 38% as uncertain, and 12% as inappropriate. The majority of patients who received inappropriate PCI had a low-risk stress test (72%) or were asymptomatic (54%). Additionally, 96% of patients who received PCI classified as inappropriate had not been given a trial of adequate anti-anginal therapy. This analysis was supported by subsequent analyses of 2 other state-specific registries (New York and Washington), which found similar rates of PCI for nonacute indications rated as inappropriate [27,28]. Additionally, all 3 studies showed wide facility-level variation in the percentage of appropriate and inappropriate PCI for elective indications.
These studies also highlight a gap in preprocedural care. The anticipated benefit of elective PCI is related to patient symptom burden, adequacy of anti-anginal therapy, and ischemic risk as determined by noninvasive stress testing. However, 30% to 50% of patients undergo elective PCI without evidence of preprocedural stress testing. Attempts are being made to address this gap with the recent release of PCI performance measures [29]. These performance measures, intended for cardiac catheterization labs, include comprehensive documentation of the indication for PCI, which is central to determination of appropriateness. This integration of procedural indication into a performance measure marks the first such occurrence in cardiology.
As documentation of procedural indication and appropriateness have become part and parcel of assessing quality of care, concerns about “gaming” have become more pertinent. Providers who perform PCI could potentially enhance the appearance of appropriateness by overstating the clinical symptom burden or stress test findings. The incorporation of validated, patient-centered health status questionnaires along with data audit programs have been proposed as measures to prevent this type of abuse. Addressing quality gaps in preprocedural assessment and documentation is critical to optimizing use of elective PCI [28].
The apparent overuse of PCI for elective indications may be a reflection of our fragmented, fee-for-service health care delivery system. However, recent studies challenge these assumptions. In a Canadian study, Ko et al found that 18% of elective PCIs were classified as inappropriate, a proportion similar to what had been found previously in the United States [30]. In a US study of Medicare beneficiaries, Matlock and colleagues observed a fourfold regional variation in use of elective coronary angiography and PCI in both Medicare fee-for-service and capitated Medicare Advantage beneficiaries [31]. Collectively, these studies suggest barriers to optimal patient selection for invasive coronary procedures in both capitated and fee-for-service health care systems. Without addressing factors that contribute to variation in the absence of fee-for-service incentives, efforts to improve integration and reduce fee-for-service reimbursement may be inadequate to optimize PCI use.
Evaluating Underuse
While potential underuse of PCI has been described for acute indications [17–22], study of underuse of PCI for elective indications is more challenging. Population data on the effect of underuse of elective PCI on patient symptom burden, functional status, and quality of life is lacking.
A population-based study from Australia highlights the potential importance of underuse in the care of patients with stable coronary disease. This study assessed symptom burden among patients with chronic stable angina using the Seattle Angina Questionnaire and included patients cared for by 207 primary care practitioners [32]. The authors noted that there was considerable variation in patient symptom burden between practices, with 14% of practices having no patients with more than 1 episode of angina per week and 18% of clinics having more than half of enrolled patients with at least 1 episode of angina per week. The authors postulate that this variability may be due to differences among providers in the identification and management of angina, including using PCI to minimize symptom burden.
In the Ko study mentioned earlier, the AUC was used to examine potential underuse of coronary revascularization procedures. In this study, they analyzed the association between AUC ratings and outcomes in patients undergoing diagnostic coronary angiography [30]. Of patients considered “appropriate” for revascularization following completion of diagnostic angiography, only 69% underwent revascularization. However, the clinical aspects that influence the decision to proceed with revascularization may not be fully captured in this study. Thus, the true degree of underuse of PCI remains elusive.
In summary, the relative lack of data that would allow for the assessment of underuse of elective PCI is an important quality concern. Health systems should work to systematically capture patient-reported health status, including symptom burden data, to identify inadequate symptom control and potential underuse of procedural care for CAD.
Facilitating Optimal Use
In current practice, the AUC hold promise to minimize the overuse of elective PCI. This likely involves addressing processes occurring upstream of the cardiac catheterization lab, including employing systems to ensure that procedures are avoided in patients who are unlikely to benefit (eg, asymptomatic, low ischemic burden) (Figure 3) [33]. Studying hospitals that already have low rates of inappropriate PCI may inform the design and dissemination of strategies that will help improve patient selection at hospitals with higher rates. Although professional organizations have developed tools intended to facilitate appropriateness evaluation at the point-of-care [34], the use of these tools are likely to be sporadic without greater integration into the health care delivery system. Further, these applications are currently limited to determination of appropriateness of PCI after completion of the diagnostic coronary angiogram. Identifying processes prior to catheterization that contribute to PCI appropriateness may also streamline appropriate ad hoc PCI, as the need to reassess appropriateness after the diagnostic angiogram may be mitigated.
Significant barriers exist to the application of the AUC for determination of procedural underuse. As described above, we lack adequate data to ascertain gaps in symptom management that could be mitigated by proper use of PCI. Further study of symptom burden in populations of patients with coronary artery disease is needed. This may help in the identification of patient populations whose symptom burden may warrant consideration of invasive coronary procedures, including coronary angiography and PCI.
Finally, it is important to note that the AUC are based on technical considerations, ie, practice guidelines and trial evidence. They do not take into consideration patient preferences. For example, PCI can be technically appropriate for the scenario but inappropriate for the individual if the procedure is not desired by the patient. Similarly, a procedure may be of uncertain benefit but appropriate if the patient desires more aggressive procedural care and has a full understanding of the risks and benefits. Currently, we fail to convey this information to patients, as evidenced by patients’ overestimation of the benefits of PCI [34]. As we continue to work toward optimal use of PCI, we must not only address the technical appropriateness of care, but move toward incorporating patient preferences through a robust process of shared decision-making.
Corresponding author: Preston M. Schneider, MD, VA Eastern Colorado Health Care System, Cardiology Section (111B), 1055 Clermont St., Denver, CO 80220, [email protected].
Funding/support: Dr. Schneider is supported by a T32 training grant from the National Institutes of Health (5T32HL00
7822-15). Dr. Bradley is supported by a Career Development Award (HSR&D-CDA2 10-199) from VA Health Services Research & Development.
Financial disclosures: None.
From the VA Eastern Colorado Health Care System, University of Colorado School of Medicine, and the Colorado Cardiovascular Outcomes Research Group, Denver and Aurora, CO.
Abstract
- Objective: To review the use of elective percutaneous coronary intervention (PCI), evaluate what is currently known about elective PCI in the context of appropriate use criteria, and offer insight into next steps to optimize the use of elective PCI to achieve high-quality care.
- Methods: Review of the scientific literature, appropriate use criteria, and professional society guidelines relevant to elective PCI.
- Results: Recent studies have demonstrated as many as 1 in 6 elective PCIs are inappropriate as determined by appropriate use criteria. These inappropriate PCIs are not anticipated to benefit patients and result in unnecessary patient risk and cost. While these studies are consistent with regard to overuse of elective PCI, less is known about potential underuse of PCI for elective indications. We lack health status data on populations of ischemic heart disease patients to inform PCI underuse that may contribute to patient symptom burden, functional status, and quality of life. Optimal use of PCI will be attained with longitudinal capture of patient-reported health status, study of factors contributing to overuse and underuse, refinement of the appropriate use criteria with particular focus on patient-centered measures, and incorporation of patient preference and shared decision making into appropriateness evaluation tools.
- Conclusion: The use of elective PCI is less than optimal in current clinical practice. Continued effort is needed to ensure elective PCI is targeted to patients with anticipated benefit and use of the procedure is aligned with patient preferences.
Providing the right care to the right patient at the right time is essential to the practice of high-quality care. Reducing overuse of health care services is part of this equation, and initiatives to reduce inappropriate use and to encourage physicians and patients to “choose wisely” have been introduced [1]. One procedure that is being examined with a focus on appropriateness is percutaneous coronary intervention (PCI). This procedure is common (nearly 1 million inpatient PCI procedures performed in 2010), presents risks to the patient, and is expensive (attributable cost approximately $10 billion in 2010) [2,3]. While the clinical benefit of PCI in acute settings such as ST-segment elevation myocardial infarction is well established [4], the benefit of PCI in nonacute (elective) settings is less robust [5–7]. Prior studies have demonstrated PCI for stable ischemic heart disease does not result in mortality benefit [6]. Furthermore, PCI as an initial strategy for symptom relief of stable angina may offer little benefit relative to medications alone [5]. Given that PCI is common, costly, and associated with both short- and long-term risks [8,9], ensuring this therapy is provided to the right patient at the right time is important.
In 2009, appropriate use criteria (AUC) were developed by 6 professional organizations to support the rational and judicious use of PCI [10]; a focused update was published in 2012 [11]. In this review, we discuss the recommendations for appropriate use and their application and offer thoughts on next steps to optimize the use of elective PCI as part of high-quality care.
Variation in the Use of PCI
Additionally, significant public attention has been focused on the issue of overuse after lay press investigations into community practice patterns. In particular, a case study presented in the New York Times highlighted the community of Elyria, Ohio, which was found to have a PCI rate that was 4 times the national average [16]. This investigation sparked public debate and further focused attention on the issue of overuse of elective PCI. Conversely, others have pointed to data that suggest underuse of coronary procedural care, particularly among women and racial and ethnic minorities [17–22].
Appropriate Use Criteria
Development Methodology
AUC for PCI, which were developed through the collaborative efforts of 6 major cardiovascular professional organizations, are intended to support the effective, efficient, and equitable use of PCI [10,11]. They were developed in response to a growing need to support rational use of cardiovascular procedures as part of high-quality care. The methods of development for the AUC have been described in detail in the criteria publications [10,11]. We briefly review these methods here.
Panel members first individually assigned ratings to each clinical scenario that ranged from 1 (least appropriate) to 9 (most appropriate). This was followed by an in-person meeting in which technical panel members discussed scenarios for which there was wide variation in appropriateness assessment. After this meeting, technical panel members again assigned ratings for each scenario from 1 to 9. After this second round, the median values for the pooled ratings were used as the appropriateness classification for each scenario. Scenarios with median values of 1–3 were classified as “inappropriate,” 4–6 as “uncertain,” and 7–9 as “appropriate.” A rating of “appropriate” represented clinical scenarios in which the indication is considered generally acceptable and likely to improve health outcomes or survival. A rating of “uncertain” represented clinical scenarios where the indication may be reasonable but more research is necessary to further understand the relative benefits and risks of PCI in this setting. Finally, a rating of “inappropriate” represented clinical scenarios in which the indication is not generally acceptable as it is unlikely to improve health outcomes or survival.
The approach used for AUC development appears to be valid, as Class III indications for PCI in the ACC/AHA clinical guideline [24] (Class III = PCI should NOT be performed since it is not helpful and may be harmful) and AUC scenarios rated as inappropriate are in 100% agreement (personal communication, Ralph Brindis, past president of the American College of Cardiology).
Application
It is important to remember that the AUC are intended to aid in patient selection and are not absolute. Unique clinical factors and patient preference cannot feasibly be captured by the AUC scenarios. It should also be noted that the intent of the AUC is not to be punitive but rather to identify and assess variation in practice patterns. To reflect this intent, the terminology applied to appropriateness ratings has recently changed. Clinical scenarios previously classified as “inappropriate” are now termed “rarely appropriate” and clinical scenarios classified as “uncertain” are now termed “may be appropriate.”
Although the AUC were developed to help evaluate practice patterns of care delivery and serve as guides for clinical decision making, they were not intended to serve as mandates for or against treatment in individual patients or to be tied to reimbursement for individual patients. Despite this, health care organizations and payors have used other AUC documents for incentive pay and prior authorization programs, specifically for cardiovascular imaging [25]. Use of the AUC in this manner may still be reasonable if application and measurement is at the level of the practice, rather than the individual patient, but much remains to be understood about the implications of applying AUC in reimbursement
decisions.
Refinement
The AUC for PCI are designed to be dynamic and continually updated. As additional evidence becomes available regarding the efficacy of PCI in specific clinical scenarios, there will be ongoing efforts to update the AUC to reflect this new evidence. This is highlighted by the first update to the AUC occurring less than 3 years after the original publication date [11].
In addition to perpetual review of the data used to inform scenario ratings, there are opportunities to improve measurement of the clinical variables that are considered in rating PCI appropriateness (eg, clinical presentation, symptom severity, ischemia severity, extent of medical therapy, extent of anatomic disease). For example, in the current AUC, symptom severity is dependent on clinician assessment using the Canadian Cardiovascular Society Classification [25]. Moving toward a patient-centered assessment of symptom severity would ensure that the AUC more closely reflect the patient-perceived symptom burden. Further, the use of a patient-centered instrument would reduce the possibility of physician manipulation of symptom severity to influence the apparent appropriateness of PCI. There are similar opportunities to improve reporting of noninvasive stress test data, such as through standardized reporting of ischemic risk. Finally, the use of physiologic assessments of stenosis severity (eg, fractional flow reserve) and quantitative coronary angiography to standardize interpretations of diagnostic angiography may further optimize the assessment of PCI appropriateness.
Application of the Appropriate Use Criteria in Clinical Practice—Study Results
Application of the AUC to clinical practice has highlighted potential overuse of PCI (Table). The first report came from applying the AUC to the National Cardiovascular Data Registry (NCDR) CathPCI Registry [26]. In this study of more than 500,000 PCIs from over 1000 facilities across the country, the authors found that PCIs performed in the acute setting (STEMI, NSTEMI, and high-risk unstable angina) were almost uniformly classified as appropriate. However, for nonacute (elective) PCI, application of the AUC resulted in the classification of 50% as appropriate, 38% as uncertain, and 12% as inappropriate. The majority of patients who received inappropriate PCI had a low-risk stress test (72%) or were asymptomatic (54%). Additionally, 96% of patients who received PCI classified as inappropriate had not been given a trial of adequate anti-anginal therapy. This analysis was supported by subsequent analyses of 2 other state-specific registries (New York and Washington), which found similar rates of PCI for nonacute indications rated as inappropriate [27,28]. Additionally, all 3 studies showed wide facility-level variation in the percentage of appropriate and inappropriate PCI for elective indications.
These studies also highlight a gap in preprocedural care. The anticipated benefit of elective PCI is related to patient symptom burden, adequacy of anti-anginal therapy, and ischemic risk as determined by noninvasive stress testing. However, 30% to 50% of patients undergo elective PCI without evidence of preprocedural stress testing. Attempts are being made to address this gap with the recent release of PCI performance measures [29]. These performance measures, intended for cardiac catheterization labs, include comprehensive documentation of the indication for PCI, which is central to determination of appropriateness. This integration of procedural indication into a performance measure marks the first such occurrence in cardiology.
As documentation of procedural indication and appropriateness have become part and parcel of assessing quality of care, concerns about “gaming” have become more pertinent. Providers who perform PCI could potentially enhance the appearance of appropriateness by overstating the clinical symptom burden or stress test findings. The incorporation of validated, patient-centered health status questionnaires along with data audit programs have been proposed as measures to prevent this type of abuse. Addressing quality gaps in preprocedural assessment and documentation is critical to optimizing use of elective PCI [28].
The apparent overuse of PCI for elective indications may be a reflection of our fragmented, fee-for-service health care delivery system. However, recent studies challenge these assumptions. In a Canadian study, Ko et al found that 18% of elective PCIs were classified as inappropriate, a proportion similar to what had been found previously in the United States [30]. In a US study of Medicare beneficiaries, Matlock and colleagues observed a fourfold regional variation in use of elective coronary angiography and PCI in both Medicare fee-for-service and capitated Medicare Advantage beneficiaries [31]. Collectively, these studies suggest barriers to optimal patient selection for invasive coronary procedures in both capitated and fee-for-service health care systems. Without addressing factors that contribute to variation in the absence of fee-for-service incentives, efforts to improve integration and reduce fee-for-service reimbursement may be inadequate to optimize PCI use.
Evaluating Underuse
While potential underuse of PCI has been described for acute indications [17–22], study of underuse of PCI for elective indications is more challenging. Population data on the effect of underuse of elective PCI on patient symptom burden, functional status, and quality of life is lacking.
A population-based study from Australia highlights the potential importance of underuse in the care of patients with stable coronary disease. This study assessed symptom burden among patients with chronic stable angina using the Seattle Angina Questionnaire and included patients cared for by 207 primary care practitioners [32]. The authors noted that there was considerable variation in patient symptom burden between practices, with 14% of practices having no patients with more than 1 episode of angina per week and 18% of clinics having more than half of enrolled patients with at least 1 episode of angina per week. The authors postulate that this variability may be due to differences among providers in the identification and management of angina, including using PCI to minimize symptom burden.
In the Ko study mentioned earlier, the AUC was used to examine potential underuse of coronary revascularization procedures. In this study, they analyzed the association between AUC ratings and outcomes in patients undergoing diagnostic coronary angiography [30]. Of patients considered “appropriate” for revascularization following completion of diagnostic angiography, only 69% underwent revascularization. However, the clinical aspects that influence the decision to proceed with revascularization may not be fully captured in this study. Thus, the true degree of underuse of PCI remains elusive.
In summary, the relative lack of data that would allow for the assessment of underuse of elective PCI is an important quality concern. Health systems should work to systematically capture patient-reported health status, including symptom burden data, to identify inadequate symptom control and potential underuse of procedural care for CAD.
Facilitating Optimal Use
In current practice, the AUC hold promise to minimize the overuse of elective PCI. This likely involves addressing processes occurring upstream of the cardiac catheterization lab, including employing systems to ensure that procedures are avoided in patients who are unlikely to benefit (eg, asymptomatic, low ischemic burden) (Figure 3) [33]. Studying hospitals that already have low rates of inappropriate PCI may inform the design and dissemination of strategies that will help improve patient selection at hospitals with higher rates. Although professional organizations have developed tools intended to facilitate appropriateness evaluation at the point-of-care [34], the use of these tools are likely to be sporadic without greater integration into the health care delivery system. Further, these applications are currently limited to determination of appropriateness of PCI after completion of the diagnostic coronary angiogram. Identifying processes prior to catheterization that contribute to PCI appropriateness may also streamline appropriate ad hoc PCI, as the need to reassess appropriateness after the diagnostic angiogram may be mitigated.
Significant barriers exist to the application of the AUC for determination of procedural underuse. As described above, we lack adequate data to ascertain gaps in symptom management that could be mitigated by proper use of PCI. Further study of symptom burden in populations of patients with coronary artery disease is needed. This may help in the identification of patient populations whose symptom burden may warrant consideration of invasive coronary procedures, including coronary angiography and PCI.
Finally, it is important to note that the AUC are based on technical considerations, ie, practice guidelines and trial evidence. They do not take into consideration patient preferences. For example, PCI can be technically appropriate for the scenario but inappropriate for the individual if the procedure is not desired by the patient. Similarly, a procedure may be of uncertain benefit but appropriate if the patient desires more aggressive procedural care and has a full understanding of the risks and benefits. Currently, we fail to convey this information to patients, as evidenced by patients’ overestimation of the benefits of PCI [34]. As we continue to work toward optimal use of PCI, we must not only address the technical appropriateness of care, but move toward incorporating patient preferences through a robust process of shared decision-making.
Corresponding author: Preston M. Schneider, MD, VA Eastern Colorado Health Care System, Cardiology Section (111B), 1055 Clermont St., Denver, CO 80220, [email protected].
Funding/support: Dr. Schneider is supported by a T32 training grant from the National Institutes of Health (5T32HL00
7822-15). Dr. Bradley is supported by a Career Development Award (HSR&D-CDA2 10-199) from VA Health Services Research & Development.
Financial disclosures: None.
1. Cassel CK, Guest JA. Choosing wisely: helping physicians and patients make smart decisions about their care. JAMA 2012;307:1801–2.
2. Go AS, Mozaffarian D, Roger VL, et al. Heart disease and stroke statistics—2013 update: a report from the American Heart Association. Circulation 2013;127:e6–e245.
3. HCUPnet: A tool for identifying, tracking, and analyzing national hospital statistics. Accessed 22 Oct 2013 at http://hcupnet.ahrq.gov/HCUPnet.jsp?Parms=
H4sIAAAAAAAAABXBMQ6AIBAEwC9JAg.gsLAhRvjAnnuXgGihFb9XZwYe3EhLdpN2h2aIcsnQLCp9jQVbLDN3ksq
DnSeqVXzNfIAP9mtmLy0rZhdIAAAA83D0C2BCAE02DD1508408B2C5C094F1ADF6E788C&JS=Y.
4. Keeley EC, Boura JA, Grines CL. Primary angioplasty versus intravenous thrombolytic therapy for acute myocardial infarction: a quantitative review of 23 randomised trials. Lancet 2003;361:13–20.
5. Boden WE, O’Rourke RA, Teo KK, et al. Optimal medical therapy with or without PCI for stable coronary disease. N Engl J Med 2007;356:1503–16.
6. Boden WE, O’Rourke RA, Teo KK, et al. Impact of optimal medical therapy with or without percutaneous coronary intervention on long-term cardiovascular end points in patients with stable coronary artery disease (from the COURAGE Trial). Am J Cardiol 2009;104:1–4.
7. Stergiopoulos K, Brown DL. Initial coronary stent implantation with medical therapy vs medical therapy alone for stable coronary artery disease: Meta-analysis of randomized controlled trials. Arch Intern Med 2012;172:312–9.
8. McCullough PA, Adam A, Becker CR, et al. Epidemiology and prognostic implications of contrast-induced nephropathy. Contrast-Induc Nephrop Clin Insights Pract Guid Rep CIN Consens Work Panel 2006;98:5–13.
9. Roe MT, Messenger JC, Weintraub WS, et al. Treatments, trends, and outcomes of acute myocardial infarction and percutaneous coronary intervention. J Am Coll Cardiol 2010;56:254–63.
10. Patel MR, Dehmer GJ, Hirshfeld JW, et al. ACCF/SCAI/STS/AATS/AHA/ASNC 2009 Appropriateness Criteria for Coronary Revascularization: A Report by the American College of Cardiology Foundation Appropriateness Criteria Task Force, Society for Cardiovascular Angiography and Interventions, Society of Thoracic Surgeons, American Association for Thoracic Surgery, American Heart Association, and the American Society of Nuclear Cardiology Endorsed by the American Society of Echocardiography, the Heart Failure Society of America, and the Society of Cardiovascular Computed Tomography. J Am Coll Cardiol 2009;53:530–53.
11. Patel MR, Dehmer GJ, Hirshfeld JW, et al. ACCF/SCAI/STS/AATS/AHA/ASNC/HFSA/SCCT 2012 Appropriate Use Criteria for Coronary Revascularization Focused Update: A Report of the American College of Cardiology Foundation Appropriate Use Criteria Task Force, Society for Cardiovascular Angiography and Interventions, Society of Thoracic Surgeons, American Association for Thoracic Surgery, American Heart Association, American Society of Nuclear Cardiology, and the Society of Cardiovascular Computed Tomography. J Am Coll Cardiol 2012;59:857–81.
12. Dartmouth Atlas of Health Care. Accessed 8 Jan 2014 at www.dartmouthatlas.org.
13. Dartmouth Atlas of Health Care: Studies of surgical variation. Cardiac surgery report. 2005. Accessed 8 Jan 2014 at www.dartmouthatlas.org/publications/reports.aspx.
14. Fisher ES, Wennberg DE, Stukel TA, et al. The implications of regional variations in Medicare spending. Part 1: the content, quality, and accessibility of care. Ann Intern Med 2003;138:273–87.
15. Fisher ES, Wennberg DE, Stukel TA, et al. The implications of regional variations in Medicare spending. Part 2: health outcomes and satisfaction with care. Ann Intern Med 2003;138:288–98.
16. Abelson R. Heart procedure is off the charts in an Ohio city. New York Times 2006. Accessed 23 Apr 2013 at www.nytimes.com/2006/08/18/business/18stent.html.
17. Akhter N, Milford-Beland S, Roe MT, et al. Gender differences among patients with acute coronary syndromes undergoing percutaneous coronary intervention in the American College of Cardiology-National Cardiovascular Data Registry (ACC-NCDR). Am Heart J 2009;157:141–8.
18. Blomkalns AL, Chen AY, Hochman JS, et al. Gender disparities in the diagnosis and treatment of non–ST-segment elevation acute coronary syndromes: large-scale observations from the CRUSADE (Can Rapid Risk Stratification of Unstable Angina Patients Suppress Adverse Outcomes With Early Implementation of the American College of Cardiology/American Heart Association Guidelines) National Quality Improvement Initiative. J Am Coll Cardiol 2005;45:832–7.
19. Daly C, Clemens F, Lopez Sendon JL, et al. Gender differences in the management and clinical outcome of stable angina. Circulation 2006;113:490–8.
20. Groeneveld PW, Heidenreich PA, Garber AM. Racial disparity in cardiac procedures and mortality among long-term survivors of cardiac arrest. Circulation 2003;108:286–91.
21. Hannan EL, Zhong Y, Walford G, et al. Underutilization of percutaneous coronary intervention for ST-elevation myocardial infarction in Medicaid patients relative to private insurance patients. J Intervent Cardiol 2013;26:470–81.
22. Sonel AF, Good CB, Mulgund J, et al. Racial variations in treatment and outcomes of black and white patients with high-risk non–ST-elevation acute coronary syndromes: insights From CRUSADE (Can Rapid Risk Stratification of Unstable Angina Patients Suppress Adverse Outcomes With Early Implementation of the ACC/AHA Guidelines?). Circulation 2005;111:1225–32.
23. Patel MR, Spertus JA, Brindis RG, et al. ACCF proposed method for evaluating the appropriateness of cardiovascular imaging. J Am Coll Cardiol 2005;46:1606–13.
24. Levine GN, Bates ER, Blankenship JC, et al. 2011 ACCF/AHA/SCAI Guideline for percutaneous coronary intervention: executive summary: a report of the American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines and the Society for Cardiovascular Angiography and Interventions. Circulation 2011;124:2574–609.
25. Campeau L. Letter: Grading of angina pectoris. Circulation 1976;54:522–3.
26. Chan PS, Patel MR, Klein LW, et al. Appropriateness of percutaneous coronary intervention. JAMA 2011;306:53–61.
27. Hannan EL, Cozzens K, Samadashvili Z, et al. Appropriateness of coronary revascularization for patients without acute coronary syndromes. J Am Coll Cardiol 2012;59:1870–6.
28. Bradley SM, Maynard C, Bryson CL. Appropriateness of percutaneous coronary interventions in Washington State. Circ Cardiovasc Qual Outcomes 2012;5:445–53.
29. Nallamothu BK, Tommaso CL, Anderson HV, et al. ACC/AHA/SCAI/AMA–Convened PCPI/NCQA 2013 Performance measures for adults undergoing percutaneous coronary intervention. A report of the American College of Cardiology/American Heart Association Task Force on Performance Measures, the Society for Cardiovascular Angiography and Interventions, the American Medical Association–Convened Physician Consortium for Performance Improvement, and the National Committee for Quality Assurance. J Am Coll Cardiol 2014;63:722–45.
30. Ko DT, Guo H, Wijeysundera HC, et al. Assessing the association of appropriateness of coronary revascularization and clinical outcomes for patients with stable coronary artery disease. J Am Coll Cardiol 2012;60:1876–84.
31. Matlock DD, Groeneveld PW, Sidney S, et al. Geographic variation in cardiovascular procedure use among Medicare fee-for-service vs Medicare Advantage beneficiaries. JAMA 2013;310:155–62.
32. Beltrame JF, Weekes AJ, Morgan C, et al. The prevalence of weekly angina among patients with chronic stable angina in primary care practices: the Coronary Artery Disease in General Practice (CADENCE) study. Arch Intern Med 2009;169:1491–9.
33. Bradley SM, Spertus JA, Nallamothu BK, et al. The association between patient selection for diagnostic coronary angiography and hospital-level PCI appropriateness: Insights from the NCDR. Circ Cardiovasc Qual Outcomes 2013;6:A1. Accessed 20 Nov 2013 at http://circoutcomes.ahajournals.org/cgi/content/short/6/3_MeetingAbstracts/A1?rss=1.
34. Lee J, Chuu K, Spertus J, et al. Patients overestimate the potential benefits of elective percutaneous coronary intervention. Mo Med 2012;109:79.
Transition Readiness Assessment for Sickle Cell Patients: A Quality Improvement Project
From the St. Jude Children’s Research Hospital, Memphis, TN.
This article is the fourth in our Hemoglobinopathy Learning Collaborative series. See the related editorial by Oyeku et al in the February 2014 issue of JCOM. (—Ed.)
Abstract
- Objective: To describe the use of quality improvement (QI) methodology to implement an assessment tool to evaluate transition readiness in youth with sickle cell disease (SCD).
- Methods: Plan-Do-Study-Act (PDSA) cycles were run to evaluate the feasibility and effectiveness of a provider-based transition readiness assessment.
- Results: Seventy-two adolescents aged 17 years (53% male) were assessed for transition readiness from August 2011 to June 2013. Results indicated that it is feasible for a provider transition readiness assessment (PTRA) tool to be integrated into a transition program. The newly created PTRA tool can inform the level of preparedness of adolescents with SCD during planning for adult transition.
- Conclusion: The PTRA tool may be helpful for planning and preparation of youth with SCD to successfully transition to adult care.
Sickle cell disease (SCD) is one of the most common genetic disorders in the world and is caused by a mutation producing the abnormal sickle hemoglobin. Patients with SCD are living longer and transitioning from pediatric to adult providers. However, the transition years are associated with high mortality [1–4], risk for increased utilization of emergency care, and underutilization of care maintenance visits [5,6]. Successful transition from pediatric care to adult care is critical in ensuring care continuity and optimal health [7]. Barriers to successful transition include lack of preparation for transition [8,9]. To address this limitation, transition programs have been created to help foster transition preparation and readiness.
Often, chronological age determines when SCD programs transfer patients to adult care; however, age is an inadequate measure of readiness. To determine the appropriate time for transition and to individualize the subsequent preparation and planning prior to transfer, an assessment of transition readiness is needed. A number of checklists exist in the unpublished literature (eg, on institution and program websites), and a few empirically tested transition readiness measures have been developed through literature review, semi-structured interviews, and pilot testing in patient samples [10–13]. The Transition Readiness Assessment Questionnaire (TRAQ) and TRxANSITION scale are non-disease-specific measures that assess self-management and advocacy skills of youth with special health care needs; the TRAQ is self-report whereas the TRxANSITION scale is provider-administered [10,11]. Disease-specific measures have been developed for pediatric kidney transplant recipients [12] and adolescents with cystic fibrosis [13]. Studies using these measures suggest that transition readiness is associated with age, gender, disease type, increased adolescent responsibility/decreased parental involvement, and adherence [10–12].
For patients with SCD, there is no well-validated measure available to assess transition readiness [14]. Telfair and colleagues developed a sickle cell transfer questionnaire that focused on transition concerns and feelings and suggestions for transition intervention programming from the perspective of adolescents, their primary caregivers, and adults with SCD [15]. In addition, McPherson and colleagues examined SCD transition readiness in 4 areas: prior thought about transition, knowledge about steps to transition, interest in learning more about the transition process, and perceived importance of continuing care with a hematologist as an adult provider [8]. They found that adolescents in general were not prepared for transition but that readiness improved with age [8]. Overall, most readiness measures have involved patient self-report or parent proxy report. No current readiness assessment scales incorporate the provider’s assessment, which could help better define the most appropriate next steps in education and preparation for the upcoming transfer to adult care.
The St. Jude Children’s Research Hospital SCD Transition to Adult Care program was started in 2007 and is a companion program to the SCD teen clinic, serving 250 adolescents aged 12 to 18 years. The transition program curriculum addresses all aspects of the transition process. Based on the curriculum components, St. Jude developed and implemented a transition readiness assessment tool to be completed by providers in the SCD transition program. In this article, we describe our use of quality improvement (QI) methodology to evaluate the utility and impact of the newly created SCD transition readiness assessment tool.
Methods
Transition Program
The transition program is directed by a multidisciplinary team; disciplines represented on the team are medical (hematologist, genetic educator, physician assistant, and nurse coordinators), psychosocial (social workers), emotional/cognitive (psychologists), and academic (academic coordinator). In the program, adolescents with SCD and their families are introduced to the concept of transition to adult care at the age of 12. Every 6 months from 12 to 18 years of age, members of the team address relevant topics with patients to increase patients’ disease knowledge and improve their disease self-management skills. Some of the program components include training in completing a personal health record (PHR), genetic education, academic planning, and independent living skills.
Needs Assessment
Prior to initiation of the project, members of the transition program met monthly to informally discuss the progress of patients who were approaching the age of transition to adult care. We found that adolescents did not appear to be ready or well prepared for transition; for example, many were not aware of the familial and psychosocial issues that needed to be addressed prior to the transfer to adult care. We realized that these discussions needed to occur earlier to allow more time for preparation and transition planning by the patient, family, and medical team. In addition, members of the team each had differing perspectives and did not share the same information with regard to existing familial and psychosocial issues. The discussions were necessary to ensure all team members had pertinent information to make informed decisions about the patient's level of transition readiness. Finally, our criteria for readiness were not standardized or quantifiable. As a result, each patient discussion was lengthy, unstructured, and not very informative. In 2011, a core group from the transition team attended a Health Resources and Services Administration–sponsored Hemoglobinopathies Quality Improvement Workshop to receive training in QI processes. We decided to create a formal, quantitative, individualized assessment of patients' progress toward transition at age 17.
Readiness Assessment Tool
The emotional/cognitive domain checklist was developed by the pediatric psychologist and pediatric neuropsychologist. Because the psychology service sees patients referred by the medical team and is unable to see all patients coming to the hematology clinic, the emotional/cognitive checklist focuses on identifying previous utilization of psychological services, including psychotherapy and cognitive testing, and determining whether initiation of services is warranted. The academic domain checklist was developed by the academic coordinator, who serves as a liaison between the medical team and the school system. This checklist assesses whether the adolescent is meeting high school graduation requirements, able to verbalize an educational/job training plan, on track with future planning (eg, completed required testing), knowledgeable about community educational services, and able to self-advocate (eg, apply for SSI benefits).
Items within each domain have equal value (ie, each question on the checklist is worth 1 point), and the sum of points yields a quantifiable assessment of how well patients are performing in each area of their health. Assessment meetings occur monthly, at which eligible patients are discussed. Each domain is scored by the health care provider responsible for that domain (eg, the social worker completes the psychosocial domain, the academic coordinator completes the academic domain).
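To make the equal-weight scoring scheme concrete, here is a minimal sketch. The item wording and the exact item counts below are hypothetical placeholders based on the checklist descriptions above, not the program's actual instruments:

```python
# Minimal sketch of the equal-weight checklist scoring described in the text.
# Item labels are placeholders; each checklist is completed by the provider
# responsible for that domain.

checklists = {
    "academic": {
        "meeting graduation requirements": True,
        "can verbalize education/job training plan": True,
        "on track with future planning": False,
        "knows community educational services": False,
        "can self-advocate (eg, apply for SSI)": True,
    },
    "emotional/cognitive": {
        "prior psychotherapy utilization reviewed": True,
        "prior cognitive testing reviewed": False,
        "referral for new services considered": True,
    },
}

# Each item met is worth 1 point; the domain score is the sum of items met.
scores = {domain: sum(items.values()) for domain, items in checklists.items()}
for domain, score in scores.items():
    print(f"{domain}: {score}/{len(checklists[domain])}")
# academic: 3/5
# emotional/cognitive: 2/3
```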
PDSA Methodology
Cycle 1
The objective of the first cycle was to assess the feasibility and acceptability of the assessment tool. Patients were assessed during the month of their 17th birthday. Fourteen of 16 eligible patients (87.5%) were assessed: 1 patient was lost to follow-up, and 1 patient was inadvertently not included in the assessment due to an administrative error. Feedback from the first cycle revealed that some items on the emotional/cognitive domain checklist were not clearly defined, and there was some overlap with the psychosocial domain checklist. Additionally, some items were not readily assessed by psychology given the structure of psychology services at the institution: not all patients are seen by psychology, patients must be referred by the team, and appointments occur in the psychology clinic and are not well integrated with the hematology clinic visit.
Cycle 2
The second cycle addressed some of the problems identified during Cycle 1. The emotional/cognitive domain checklist was revised to reflect psychology clinic utilization (psychotherapy and testing), and a section was added where team members could indicate individualized action plans. Seventeen of 18 eligible patients (94.4%) were assessed: 1 patient was lost to follow-up. At the conclusion of this cycle, we found that several patients had not completed certain transition program components, such as genetic education or their PHR. Therefore, we decided to flag these gaps and create a Plan of Action (POA) to ensure completion of program components. The POA indicated which components were outstanding, when they would be completed, and when the team would discuss the patient again to track progress (eg, 6 months later).
Cycle 3
After a few months of using the assessment process, each member of the team provided feedback on their observations from the second cycle. The third PDSA cycle addressed some of the barriers identified in Cycle 2 by adding the POA and a timeline for reassessment. With this information, the nurse case manager was able to identify and contact families who had significant gaps in the learning curriculum. Additionally, services such as psychological testing were scheduled in a timely manner to address academic problems and to provide a rationale for accommodations and academic/vocational services before patients transferred care to the adult provider. With the number of assessed patients increasing, a reliable tracking system to monitor progress became essential. Thus, a transition database was created to document the domain scores, individualized plans of action, and other components of the transition program, such as medical literacy quiz scores, completion of pre-transfer visits to adult providers, and completion of the PHR. During this cycle, 20 of 22 eligible patients (90.9%) were assessed; 2 patients were lost to follow-up.
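As a rough illustration of the kind of record such a transition database might hold, the sketch below infers its fields from the components listed above; the field names are hypothetical, since the program's actual schema is not published:

```python
# Illustrative record structure for a transition-tracking database.
# Field names are hypothetical, inferred from the components the text
# says were documented (domain scores, POA, quiz scores, visits, PHR).

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TransitionRecord:
    patient_id: str
    domain_scores: dict[str, int] = field(default_factory=dict)  # eg, {"academic": 3}
    plan_of_action: list[str] = field(default_factory=list)      # outstanding components
    reassessment_due: Optional[str] = None                       # eg, "2013-07"
    medical_literacy_quiz: Optional[int] = None                  # quiz score, if taken
    pre_transfer_visit_done: bool = False                        # visit to adult provider
    phr_completed: bool = False                                  # personal health record

record = TransitionRecord(
    patient_id="example-001",
    domain_scores={"academic": 3, "emotional/cognitive": 2},
    plan_of_action=["genetic education", "complete PHR"],
    reassessment_due="2013-07",
)
print(record.plan_of_action)  # ['genetic education', 'complete PHR']
```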
Cycle 4
This cycle is currently underway and comprises monthly assessments of eligible 17-year-old patients with SCD. From January 2013 to May 2013, we assessed 100% of eligible patients (21/21). All information obtained through the assessment tool is added to the transition database. Future adjustments and modifications are planned for this tool as we continue to evaluate its impact and value.
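A quick check using only the counts reported above shows that the four cycles together account for the 72 assessed adolescents reported in the abstract:

```python
# Completion rates per PDSA cycle, using the counts reported in the text.
assessed = [14, 17, 20, 21]
eligible = [16, 18, 22, 21]

for i, (a, e) in enumerate(zip(assessed, eligible), start=1):
    print(f"Cycle {i}: {a}/{e} = {a/e:.1%}")
# Cycle 1: 14/16 = 87.5%
# Cycle 2: 17/18 = 94.4%
# Cycle 3: 20/22 = 90.9%
# Cycle 4: 21/21 = 100.0%

print(f"Overall: {sum(assessed)}/{sum(eligible)} = {sum(assessed)/sum(eligible):.1%}")
# Overall: 72/77 = 93.5%
```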
Discussion
The transition readiness assessment tool was developed to evaluate adolescent patients with SCD aged 17 years regarding their progress in the transition program and level of transition readiness. Most transition readiness measures available in the literature consider the patient and parent perspective but do not include the health care provider perspective or determine if the patient received the information necessary for successful transition. Our readiness assessment tool has been helpful in providing a structured and quantifiable means to identify at-risk patients and families prior to the transfer of care and revealing important gaps in transition planning. It also provides information in a timely manner about points of intervention to ensure patients receive adequate preparation and services (eg, psychological/neuropsychological testing). Additionally, monthly meetings are held during which the tool is scored and discussed, providing an opportunity for members of the transition team to examine patients’ progress toward transition readiness. Finally, completing an individualized tool in a multidisciplinary setting has the added benefit of encouraging increased staff collaboration and creating a venue for ongoing re-evaluation of the QI process.
We achieved our objective of completing the assessment tool for at least 80% of eligible patients throughout the cycles. The majority of our nonassessed patients were lost to follow-up and had not had a clinic visit in 2 to 3 years. Implementing the tool has provided us with an additional mechanism to verify transition eligibility and has afforded the transition program a systematic way to screen and track patients who are approaching the age of transition and who may not have been seen for an extended period of time. As with any large program following children with special health care and complex needs, the volume and complexity of patients may pose a challenge to the program; having an additional tracking system in place may therefore help mitigate losses to follow-up. In fact, since the implementation of the tool, our team has been able to contact families and in some cases has reinstated services. As a by-product of tool implementation, we have implemented new policies to prevent extended losses to follow-up and patient attrition.
Limitations
A limitation of the assessment tool is that it does not incorporate the perspectives of the other stakeholders (adolescents, parents, adult providers). Further, some of the items in our tool are measuring utilization of services and not specifically transition readiness. As with most transition readiness measures, our provider tool does not have established reliability and validity [14]. We plan to test for reliability and validity once enough data and patient outcomes have been collected. Additionally, because of the small number of patients who have transferred to adult care since implementation of the tool, we did not examine the association between readiness scores and clinical outcomes, such as fulfillment of first adult provider visit and hospital utilization following transition to adult care. As we continue to assess adolescent patients and track their progress following transition, we will be able to examine these associations with a larger group.
Future Plans
Since the implementation of the tool in our program, we have realized that we may need to start assessing patients at an earlier age, and perhaps multiple times throughout adolescence. Some of our patients have guardianship and conservatorship issues and require more time to discuss options with the family and to put in place the appropriate support and assistance prior to the transfer of care. Further, patients who have low adherence to clinic appointments are not receiving all elements of the transition program curriculum and in turn have fewer opportunities to prepare for transition. To address some of our current limitations, we plan to incorporate a patient and parent readiness assessment and examine the associations between the provider assessment and patient information such as medical literacy quizzes, clinic attendance, and fulfillment of the first adult provider visit. Assessment from all 3 perspectives (patient, parent, and provider) will offer a 360-degree view of perceived transition readiness, which should improve our ability to identify at-risk families and tailor transition planning to address barriers to care. In addition, our future plans include developing a mechanism to inform patients and families about the domain scores and action plans following the transition readiness meetings and incorporating scores into the electronic medical record. Finally, the readiness assessment tool has revealed some gaps in our transition educational curriculum. Most of our transition teaching involves providing information and evaluating whether it was retained, but we are not systematically assessing actual acquired transition skills. We are in the process of developing and implementing skill-based learning for activities such as calling to make or reschedule an appointment with an adult provider, arranging transportation, etc.
Conclusion
In conclusion, the provider transition readiness assessment has been a helpful tool for monitoring the progress of adolescents with SCD toward readiness for transition. The QI methodology and PDSA cycle approach have not only allowed for testing, development, and implementation of the tool, but are also allowing ongoing systematic refinement of our instrument. This approach highlighted the psychosocial challenges our families face as they move toward the transfer of care, in addition to the need for more individualized planning. The next important step is to evaluate the validity and reliability of the measure so we can better evaluate the impact of transition programming on the transfer from pediatric to adult care. We found the PDSA cycle approach to be a framework that can efficiently and systematically improve the quality of care of transitioning patients with SCD and their families.
Corresponding author: Jerlym Porter, PhD, MPH, St. Jude Children’s Research Hosp., 262 Danny Thomas Pl., Mail stop 740, Memphis, TN 38105, [email protected].
Funding/support: This work was supported in part by HRSA grant 6 U1EMC19331-03-02.
Financial disclosures: None.
1. Quinn CT, Rogers ZR, McCavit TL, Buchanan GR. Improved survival of children and adolescents with sickle cell disease. Blood 2010;115:3447–52.
2. Hassell KL. Population estimates of sickle cell disease in the U.S. Am J Prev Med 2010;38(4 Suppl):S512–S521.
3. Hamideh D, Alvarez O. Sickle cell disease related mortality in the United States (1999-2009). Pediatr Blood Cancer 2013;60:1482–6.
4. Lanzkron S, Carroll CP, Haywood C, Jr. Mortality rates and age at death from sickle cell disease: U.S., 1979-2005. Public Health Rep 2013;128:110–6.
5. Brousseau DC, Owens PL, Mosso AL, et al. Acute care utilization and rehospitalizations for sickle cell disease. JAMA 2010;303:1288–94.
6. Hemker BG, Brousseau DC, Yan K, et al. When children with sickle-cell disease become adults: lack of outpatient care leads to increased use of the emergency department. Am J Hematol 2011;86:863–5.
7. Jordan L, Swerdlow P, Coates TD. Systematic review of transition from adolescent to adult care in patients with sickle cell disease. J Pediatr Hematol Oncol 2013;35:165–9.
8. McPherson M, Thaniel L, Minniti CP. Transition of patients with sickle cell disease from pediatric to adult care: assessing patient readiness. Pediatr Blood Cancer 2009;52:838–41.
9. Lebensburger JD, Bemrich-Stolz CJ, Howard TH. Barriers in transition from pediatrics to adult medicine in sickle cell anemia. J Blood Med 2012;3:105–12.
10. Sawicki GS, Lukens-Bull K, Yin X, et al. Measuring the transition readiness of youth with special healthcare needs: validation of the TRAQ–Transition Readiness Assessment Questionnaire. J Pediatr Psychol 2011;36:160–71.
11. Ferris ME, Harward DH, Bickford K, et al. A clinical tool to measure the components of health-care transition from pediatric care to adult care: the UNC TR(x)ANSITION scale. Ren Fail 2012;34:744–53.
12. Gilleland J, Amaral S, Mee L, Blount R. Getting ready to leave: transition readiness in adolescent kidney transplant recipients. J Pediatr Psychol 2012;37:85–96.
13. Cappelli M, MacDonald NE, McGrath PJ. Assessment of readiness to transfer to adult care for adolescents with cystic fibrosis. Child Health Care 1989;18:218–24.
14. Stinson J, Kohut SA, Spiegel L, et al. A systematic review of transition readiness and transfer satisfaction measures for adolescents with chronic illness. Int J Adolesc Med Health 2013:1–16.
15. Telfair J, Myers J, Drezner S. Transfer as a component of the transition of adolescents with sickle cell disease to adult care: adolescent, adult, and parent perspectives. J Adolesc Health 1994;15:558–65.
16. Walley P, Gowland B. Completing the circle: from PD to PDSA. Int J Health Care Qual Assur Inc Leadersh Health Serv 2004;17:349–58.
From the St. Jude Children’s Research Hospital, Memphis, TN.
This article is the fourth in our Hemoglobinopathy Learning Collaborative series. See the related editorial by Oyeku et al in the February 2014 issue of JCOM. (—Ed.)
Abstract
- Objective: To describe the use of quality improvement (QI) methodology to implement an assessment tool to evaluate transition readiness in youth with sickle cell disease (SCD).
- Methods: Plan-Do-Study-Act (PDSA) cycles were run to evaluate the feasibility and effectiveness of a provider-based transition readiness assessment.
- Results: Seventy-two adolescents aged 17 years (53% male) were assessed for transition readiness from August 2011 to June 2013. Results indicated that it is feasible for a provider transition readiness assessment (PTRA) tool to be integrated into a transition program. The newly created PTRA tool can inform the level of preparedness of adolescents with SCD during planning for adult transition.
- Conclusion: The PTRA tool may be helpful for planning and preparation of youth with SCD to successfully transition to adult care.
Sickle cell disease (SCD) is one of the most common genetic disorders in the world and is caused by a mutation producing the abnormal sickle hemoglobin. Patients with SCD are living longer and transitioning from pediatric to adult providers. However, the transition years are associated with high mortality [1–4], risk for increased utilization of emergency care, and underutilization of care maintenance visits [5,6]. Successful transition from pediatric care to adult care is critical in ensuring care continuity and optimal health [7]. Barriers to successful transition include lack of preparation for transition [8,9]. To address this limitation, transition programs have been created to help foster transition preparation and readiness.
Often, chronological age determines when SCD programs transfer patients to adult care; however, age is an inadequate measure of readiness. To determine the appropriate time for transition and to individualize the subsequent preparation and planning prior to transfer, an assessment of transition readiness is needed. A number of checklists exist in the unpublished literature (eg, on institution and program websites), and a few empirically tested transition readiness measures have been developed through literature review, semi-structured interviews, and pilot testing in patient samples [10–13]. The Transition Readiness Assessment Questionnaire (TRAQ) and TRxANSITION scale are non-disease-specific measures that assess self-management and advocacy skills of youth with special health care needs; the TRAQ is self-report whereas the TRxANSITION scale is provider-administered [10,11]. Disease-specific measures have been developed for pediatric kidney transplant recipients [12] and adolescents with cystic fibrosis [13]. Studies using these measures suggest that transition readiness is associated with age, gender, disease type, increased adolescent responsibility/decreased parental involvement, and adherence [10–12].
For patients with SCD, there is no well-validated measure available to assess transition readiness [14]. Telfair and colleagues developed a sickle cell transfer questionnaire that focused on transition concerns and feelings and suggestions for transition intervention programming from the perspective of adolescents, their primary caregivers, and adults with SCD [15]. In addition, McPherson and colleagues examined SCD transition readiness in 4 areas: prior thought about transition, knowledge about steps to transition, interest in learning more about the transition process, and perceived importance of continuing care with a hematologist as an adult provider [8]. They found that adolescents in general were not prepared for transition but that readiness improved with age [8]. Overall, most readiness measures have involved patient self-report or parent proxy report. No current readiness assessment scales incorporate the provider’s assessment, which could help better define the most appropriate next steps in education and preparation for the upcoming transfer to adult care.
The St. Jude Children’s Research Hospital SCD Transition to Adult Care program was started in 2007 and is a companion program to the SCD teen clinic, serving 250 adolescents aged 12 to 18 years. The transition program curriculum addresses all aspects of the transition process. Based on the curriculum components, St. Jude developed and implemented a transition readiness assessment tool to be completed by providers in the SCD transition program. In this article, we describe our use of quality improvement (QI) methodology to evaluate the utility and impact of the newly created SCD transition readiness assessment tool.
Methods
Transition Program
The transition program is directed by a multidisciplinary team; disciplines represented on the team are medical (hematologist, genetic educator, physician assistant, and nurse coordinators), psychosocial (social workers), emotional/cognitive (psychologists), and academic (academic coordinator). In the program, adolescents with SCD and their families are introduced to the concept of transition to adult care at the age of 12. Every 6 months from 12 to 18 years of age, members of the team address relevant topics with patients to increase patients’ disease knowledge and improve their disease self-management skills. Some of the program components include training in completing a personal health record (PHR), genetic education, academic planning, and independent living skills.
Needs Assessment
Prior to initiation of the project, members of the transition program met monthly to informally discuss the progress of patients who were approaching the age of transition to adult care. We found that adolescents did not appear to be ready or well prepared for transition, including not being aware of the various familial and psychosocial issues that needed to be addressed prior to the transfer to adult care. We realized that these discussions needed to occur earlier to allow more time for preparation and transition planning of the patient, family, and medical team. In addition, members of the team each has differing perspectives and did not have the same information with regard to existing familial and psychosocial issues. The discussions were necessary to ensure all team members had pertinent information to make informed decisions about the patient’s level of transition readiness. Finally, our criteria for readiness were not standardized or quantifiable. As a result, each patient discussion was lengthy, not structured, and not very informative. In 2011, a core group from the transition team attended a Health Resources Services Administration–sponsored Hemoglobinopathies Quality Improvement Workshop to receive training in QI processes. We decided to create a formal, quantitative, individualized assessment of patients’ progress toward transition at age 17.
Readiness Assessment Tool
The emotional/cognitive domain checklist was developed by the pediatric psychologist and pediatric neuropsychologist. Because the psychology service is set up to see patients referred by the medical team and is unable to see all patients coming to hematology clinic, the emotional/cognitive checklist is based on identifying previous utilization of psychological services including psychotherapy and cognitive testing and determining whether initiation of services is warranted. The academic domain checklist was developed by the academic coordinator who serves as a liaison between the medical team and the school system. This checklist assesses whether the adolescent is meeting high school graduation requirements, able to verbalize an educational/job training plan, on track with future planning (eg, completed required testing), knowledgeable about community educational services, and able to self-advocate (eg, apply for SSI benefits).
Items within each domain have equal value (ie, each question on the checklist is worth 1 point) and the sum of points yields the quantifiable assessment of how well patients are performing in each area of their health. Assessment meetings occur monthly when eligible patients are discussed. Domains are evaluated by the health care provider responsible for his/her own domain (eg, social worker completes the psychosocial domain, the academic coordinator completes the academic domain, etc.).
PDSA Methodology
Cycle 1
The objective of the first cycle was to assess feasibility and acceptability of the assessment tool. Patients were assessed during the month of their 17th birthday. Fourteen out of 16 eligible patients (87.5%) were assessed: 1 patient was lost to follow-up, and 1 patient inadvertently was not included in the assessment due to an administrative error. Feedback from the first cycle revealed that some items on the emotional/cognitive domain checklist were not clearly defined, and there was some overlap with the psychosocial domain checklist. Additionally, some items were not readily assessed by psychology based on the structure of psychology services at the institution. Not all patients are seen by psychology; patients are referred to psychology by the team and appointments occur in the psychology clinic and were not well-integrated within the hematology clinic visit.
Cycle 2
The second cycle addressed some of the problems identified during Cycle 1. The emotional/cognitive domain checklist was revised to reflect psychology clinic utilization (psychotherapy and testing) and a section was added where team members could indicate individualized action plans. Seventeen patients out of 18 eligible patients were assessed (94.4%): 1 patient was lost to follow-up. At the conclusion of this cycle, we found that several patients had not completed certain transition program components, such as genetic education or their PHR. Therefore, we decided that we needed to indicate this and create a Plan of Action (POA) to ensure completion of program components. The POA indicated which components were outstanding, when these components would be completed, and when the team would discuss the patient again to track their progress with program components (eg, 6 months later).
Cycle 3
Following a few months using the assessment process, each member of the team provided feedback about their observations from the second cycle. The third cycle of the PDSA addressed some of the barriers identified in Cycle 2 by adding the POA and timeline for reassessment. With this information, the nurse case manager was able to identify and contact families who had significant gaps in the learning curriculum. Additionally, services such as psychological testing were scheduled in a timely manner to address academic problems and to provide rationale for accommodations and academic/vocational services before patients transferred care to the adult provider. With the number of assessed patients increasing, it was determined that a reliable tracking system to monitor progress was essential. Thus, a transition database was created to document the domain scores, individualized plan of action, and other components of the transition program, such as medical literacy quiz scores, completion of pre-transfer visits to adult providers, and completion of the PHR. During this cycle, 20 patients were assessed out of a total of 22 eligible patients (90.9%); 2 patients were lost to follow-up.
Cycle 4
This cycle is currently underway and comprises monthly assessments of eligible 17-year-old patients with SCD. From January 2013 to May 2013 we have assessed 100% of the eligible patients (21/21). All information obtained through the assessment tool is added to the transition database. Future adjustments and modifications are planned for this tool as we continue to evaluate its impact and value.
Discussion
The transition readiness assessment tool was developed to evaluate adolescent patients with SCD aged 17 years regarding their progress in the transition program and level of transition readiness. Most transition readiness measures available in the literature consider the patient and parent perspective but do not include the health care provider perspective or determine if the patient received the information necessary for successful transition. Our readiness assessment tool has been helpful in providing a structured and quantifiable means to identify at-risk patients and families prior to the transfer of care and revealing important gaps in transition planning. It also provides information in a timely manner about points of intervention to ensure patients receive adequate preparation and services (eg, psychological/neuropsychological testing). Additionally, monthly meetings are held during which the tool is scored and discussed, providing an opportunity for members of the transition team to examine patients’ progress toward transition readiness. Finally, completing an individualized tool in a multidisciplinary setting has the added benefit of encouraging increased staff collaboration and creating a venue for ongoing re-evaluation of the QI process.
We achieved our objective of completing the assessment tool for 80% of eligible patients throughout the cycles. The majority of our nonassessed patients was lost to follow-up and had not had a clinic visit in 2 to 3 years. Implementing the tool has provided us with an additional mechanism to verify transition eligibility and has afforded the transition program a systematic way to screen and track patients who are approaching the age of transition and who may have not been seen for an extended period of time. As with any large program following children with special health care and complex needs, the large volume of patients and their complexity may pose a challenge to the program, therefore having an additional tracking system in place may help mitigate possible losses to follow-up. In fact, since the implementation of tool, our team has been able to contact families and in some cases have reinstated services. As a by-product of tool implementation, we have implemented new policies to prevent extended losses to follow-up and patient attrition.
Limitations
A limitation of the assessment tool is that it does not incorporate the perspectives of the other stakeholders (adolescents, parents, adult providers). Further, some of the items in our tool are measuring utilization of services and not specifically transition readiness. As with most transition readiness measures, our provider tool does not have established reliability and validity [14]. We plan to test for reliability and validity once enough data and patient outcomes have been collected. Additionally, because of the small number of patients who have transferred to adult care since implementation of the tool, we did not examine the association between readiness scores and clinical outcomes, such as fulfillment of first adult provider visit and hospital utilization following transition to adult care. As we continue to assess adolescent patients and track their progress following transition, we will be able to examine these associations with a larger group.
Future Plans
Since the implementation of the tool in our program, we have realized that we may need to start assessing patients at an earlier age and perhaps multiple times throughout adolescence. Some of our patients have guardianship and conservatorship issues and require more time to discuss options with the family and put in place the appropriate support and assistance prior to the transfer of care. Further, patients that have low compliance to clinic appointments are not receiving all elements of the transition program curriculum and in turn have fewer opportunities to prepare for transition. To address some of our current limitations, we plan to incorporate a patient and parent readiness assessment and examine the associations between the provider assessment and patient information such as medical literacy quizzes, clinic compliance, and fulfillment of the first adult provider visit. Assessment from all 3 perspectives (patient, parent, and provider) will offer a 360-degree view of transition readiness perception which should improve our ability to identify at-risk families and tailor transition planning to address barriers to care. In addition, our future plans include development of a mechanism to inform patients and families about the domain scores and action plans following the transition readiness meetings and include scores into the electronic medical records. Finally, the readiness assessment tool has revealed some gaps in our transition educational curriculum. Most of our transition learning involves providing and evaluating information provided, but we are not systematically assessing actual acquired transition skills. We are in the process of developing and implementing skill-based learning for activities such as calling to make or reschedule an appointment with an adult provider, arranging transportation, etc.
Conclusion
In conclusion, the provider transition readiness assessment has been a helpful tool to monitor progress of adolescents with SCD towards readiness for transition. The QI methodology and PDSA cycle approach has not only allowed for testing, development, and implementation of the tool, but is also allowing ongoing systematic refinement of our instrument. This approach highlighted the psychosocial challenges of our families as they move toward the transfer of care, in addition to the need for more individualized planning. The next important step is to evaluate the validity and reliability of the measure so we can better evaluate the impact of transition programming on the transfer from pediatric to adult care. We found the PDSA cycle approach to be a framework that can efficiently and systematically improve the quality of care of transitioning patients with SCD and their families.
Corresponding author: Jerlym Porter, PhD, MPH, St. Jude Children’s Research Hosp., 262 Danny Thomas Pl., Mail stop 740, Memphis, TN 38105, [email protected].
Funding/support: This work was supported in part by HRSA grant 6 U1EMC19331-03-02.
Financial disclosures: None.
From the St. Jude Children’s Research Hospital, Memphis, TN.
This article is the fourth in our Hemoglobinopathy Learning Collaborative series. See the related editorial by Oyeku et al in the February 2014 issue of JCOM. (—Ed.)
Abstract
- Objective: To describe the use of quality improvement (QI) methodology to implement an assessment tool to evaluate transition readiness in youth with sickle cell disease (SCD).
- Methods: Plan-Do-Study-Act (PDSA) cycles were run to evaluate the feasibility and effectiveness of a provider-based transition readiness assessment.
- Results: Seventy-two adolescents aged 17 years (53% male) were assessed for transition readiness from August 2011 to June 2013. Results indicated that it is feasible for a provider transition readiness assessment (PTRA) tool to be integrated into a transition program. The newly created PTRA tool can inform the level of preparedness of adolescents with SCD during planning for adult transition.
- Conclusion: The PTRA tool may be helpful for planning and preparation of youth with SCD to successfully transition to adult care.
Sickle cell disease (SCD) is one of the most common genetic disorders in the world and is caused by a mutation producing the abnormal sickle hemoglobin. Patients with SCD are living longer and transitioning from pediatric to adult providers. However, the transition years are associated with high mortality [1–4], risk for increased utilization of emergency care, and underutilization of care maintenance visits [5,6]. Successful transition from pediatric care to adult care is critical in ensuring care continuity and optimal health [7]. Barriers to successful transition include lack of preparation for transition [8,9]. To address this limitation, transition programs have been created to help foster transition preparation and readiness.
Often, chronological age determines when SCD programs transfer patients to adult care; however, age is an inadequate measure of readiness. To determine the appropriate time for transition and to individualize the subsequent preparation and planning prior to transfer, an assessment of transition readiness is needed. A number of checklists exist in the unpublished literature (eg, on institution and program websites), and a few empirically tested transition readiness measures have been developed through literature review, semi-structured interviews, and pilot testing in patient samples [10–13]. The Transition Readiness Assessment Questionnaire (TRAQ) and TRxANSITION scale are non-disease-specific measures that assess self-management and advocacy skills of youth with special health care needs; the TRAQ is self-report whereas the TRxANSITION scale is provider-administered [10,11]. Disease-specific measures have been developed for pediatric kidney transplant recipients [12] and adolescents with cystic fibrosis [13]. Studies using these measures suggest that transition readiness is associated with age, gender, disease type, increased adolescent responsibility/decreased parental involvement, and adherence [10–12].
For patients with SCD, there is no well-validated measure available to assess transition readiness [14]. Telfair and colleagues developed a sickle cell transfer questionnaire that focused on transition concerns and feelings and suggestions for transition intervention programming from the perspective of adolescents, their primary caregivers, and adults with SCD [15]. In addition, McPherson and colleagues examined SCD transition readiness in 4 areas: prior thought about transition, knowledge about steps to transition, interest in learning more about the transition process, and perceived importance of continuing care with a hematologist as an adult provider [8]. They found that adolescents in general were not prepared for transition but that readiness improved with age [8]. Overall, most readiness measures have involved patient self-report or parent proxy report. No current readiness assessment scales incorporate the provider’s assessment, which could help better define the most appropriate next steps in education and preparation for the upcoming transfer to adult care.
The St. Jude Children’s Research Hospital SCD Transition to Adult Care program was started in 2007 and is a companion program to the SCD teen clinic, serving 250 adolescents aged 12 to 18 years. The transition program curriculum addresses all aspects of the transition process. Based on the curriculum components, St. Jude developed and implemented a transition readiness assessment tool to be completed by providers in the SCD transition program. In this article, we describe our use of quality improvement (QI) methodology to evaluate the utility and impact of the newly created SCD transition readiness assessment tool.
Methods
Transition Program
The transition program is directed by a multidisciplinary team; disciplines represented on the team are medical (hematologist, genetic educator, physician assistant, and nurse coordinators), psychosocial (social workers), emotional/cognitive (psychologists), and academic (academic coordinator). In the program, adolescents with SCD and their families are introduced to the concept of transition to adult care at the age of 12. Every 6 months from 12 to 18 years of age, members of the team address relevant topics with patients to increase patients’ disease knowledge and improve their disease self-management skills. Some of the program components include training in completing a personal health record (PHR), genetic education, academic planning, and independent living skills.
Needs Assessment
Prior to initiation of the project, members of the transition program met monthly to informally discuss the progress of patients who were approaching the age of transition to adult care. We found that adolescents did not appear to be ready or well prepared for transition, including not being aware of the various familial and psychosocial issues that needed to be addressed prior to the transfer to adult care. We realized that these discussions needed to occur earlier to allow more time for preparation and transition planning by the patient, family, and medical team. In addition, members of the team each had differing perspectives and did not all have the same information with regard to existing familial and psychosocial issues. The discussions were necessary to ensure all team members had pertinent information to make informed decisions about the patient’s level of transition readiness. Finally, our criteria for readiness were not standardized or quantifiable. As a result, each patient discussion was lengthy, unstructured, and not very informative. In 2011, a core group from the transition team attended a Health Resources and Services Administration–sponsored Hemoglobinopathies Quality Improvement Workshop to receive training in QI processes. We decided to create a formal, quantitative, individualized assessment of patients’ progress toward transition at age 17.
Readiness Assessment Tool
The emotional/cognitive domain checklist was developed by the pediatric psychologist and pediatric neuropsychologist. Because the psychology service is set up to see patients referred by the medical team and is unable to see all patients coming to hematology clinic, the emotional/cognitive checklist is based on identifying previous utilization of psychological services, including psychotherapy and cognitive testing, and determining whether initiation of services is warranted. The academic domain checklist was developed by the academic coordinator, who serves as a liaison between the medical team and the school system. This checklist assesses whether the adolescent is meeting high school graduation requirements, able to verbalize an educational/job training plan, on track with future planning (eg, completed required testing), knowledgeable about community educational services, and able to self-advocate (eg, apply for SSI benefits).
Items within each domain have equal value (ie, each question on the checklist is worth 1 point) and the sum of points yields the quantifiable assessment of how well patients are performing in each area of their health. Assessment meetings occur monthly when eligible patients are discussed. Domains are evaluated by the health care provider responsible for his/her own domain (eg, social worker completes the psychosocial domain, the academic coordinator completes the academic domain, etc.).
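For readers who want the scoring rule spelled out, the sketch below renders it in Python. The domain names, item counts, and responses are hypothetical stand-ins; the published article does not include the full checklists.

```python
# Minimal sketch of the equal-weight scoring described above.
# Domain names and item responses are hypothetical examples,
# not the program's actual checklists.

checklist_responses = {
    "psychosocial": [True, False, True, True],       # social worker's items
    "academic": [True, True, False, True, False],    # academic coordinator's items
    "emotional_cognitive": [True, True],             # psychology service items
}

def score_domain(responses):
    """Each item is worth 1 point; the domain score is the sum of items met."""
    return sum(1 for met in responses if met)

domain_scores = {domain: score_domain(items)
                 for domain, items in checklist_responses.items()}

print(domain_scores)
# {'psychosocial': 3, 'academic': 3, 'emotional_cognitive': 2}
print("Total readiness score:", sum(domain_scores.values()))  # 8
```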
PDSA Methodology
Cycle 1
The objective of the first cycle was to assess feasibility and acceptability of the assessment tool. Patients were assessed during the month of their 17th birthday. Fourteen of 16 eligible patients (87.5%) were assessed: 1 patient was lost to follow-up, and 1 patient was inadvertently excluded due to an administrative error. Feedback from the first cycle revealed that some items on the emotional/cognitive domain checklist were not clearly defined, and there was some overlap with the psychosocial domain checklist. Additionally, some items were not readily assessed by psychology given the structure of psychology services at the institution: not all patients are seen by psychology; patients are referred by the team, and appointments occur in the psychology clinic rather than being integrated within the hematology clinic visit.
Cycle 2
The second cycle addressed some of the problems identified during Cycle 1. The emotional/cognitive domain checklist was revised to reflect psychology clinic utilization (psychotherapy and testing) and a section was added where team members could indicate individualized action plans. Seventeen patients out of 18 eligible patients were assessed (94.4%): 1 patient was lost to follow-up. At the conclusion of this cycle, we found that several patients had not completed certain transition program components, such as genetic education or their PHR. Therefore, we decided that we needed to indicate this and create a Plan of Action (POA) to ensure completion of program components. The POA indicated which components were outstanding, when these components would be completed, and when the team would discuss the patient again to track their progress with program components (eg, 6 months later).
Cycle 3
After a few months of using the assessment process, each member of the team provided feedback on their observations from the second cycle. The third cycle of the PDSA addressed some of the barriers identified in Cycle 2 by adding the POA and a timeline for reassessment. With this information, the nurse case manager was able to identify and contact families who had significant gaps in the learning curriculum. Additionally, services such as psychological testing were scheduled in a timely manner to address academic problems and to provide rationale for accommodations and academic/vocational services before patients transferred care to the adult provider. As the number of assessed patients increased, it became clear that a reliable tracking system to monitor progress was essential. Thus, a transition database was created to document the domain scores, individualized plans of action, and other components of the transition program, such as medical literacy quiz scores, completion of pre-transfer visits to adult providers, and completion of the PHR. During this cycle, 20 of 22 eligible patients (90.9%) were assessed; 2 patients were lost to follow-up.
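As a rough illustration of what a record in such a database might contain, the Python sketch below uses the components listed above as fields. All field names and types are our assumptions, not the program's actual schema.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical record structure for the transition database described above.
# Field names are illustrative assumptions, not the program's actual schema.

@dataclass
class TransitionRecord:
    patient_id: str
    assessment_date: str                                 # month of 17th birthday
    domain_scores: dict = field(default_factory=dict)    # e.g., {"academic": 4}
    plan_of_action: list = field(default_factory=list)   # outstanding components
    reassessment_due: Optional[str] = None               # e.g., 6 months later
    medical_literacy_quiz: Optional[int] = None
    pre_transfer_visit_done: bool = False
    phr_completed: bool = False

record = TransitionRecord(
    patient_id="SCD-0042",
    assessment_date="2013-03",
    domain_scores={"psychosocial": 3, "academic": 4},
    plan_of_action=["genetic education", "complete PHR"],
    reassessment_due="2013-09",
)
print(record.plan_of_action)  # ['genetic education', 'complete PHR']
```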
Cycle 4
This cycle is currently underway and comprises monthly assessments of eligible 17-year-old patients with SCD. From January 2013 to May 2013, we assessed 100% of eligible patients (21/21). All information obtained through the assessment tool is added to the transition database. Future adjustments and modifications are planned as we continue to evaluate the tool’s impact and value.
Discussion
The transition readiness assessment tool was developed to evaluate adolescent patients with SCD aged 17 years regarding their progress in the transition program and level of transition readiness. Most transition readiness measures available in the literature consider the patient and parent perspective but do not include the health care provider perspective or determine if the patient received the information necessary for successful transition. Our readiness assessment tool has been helpful in providing a structured and quantifiable means to identify at-risk patients and families prior to the transfer of care and revealing important gaps in transition planning. It also provides information in a timely manner about points of intervention to ensure patients receive adequate preparation and services (eg, psychological/neuropsychological testing). Additionally, monthly meetings are held during which the tool is scored and discussed, providing an opportunity for members of the transition team to examine patients’ progress toward transition readiness. Finally, completing an individualized tool in a multidisciplinary setting has the added benefit of encouraging increased staff collaboration and creating a venue for ongoing re-evaluation of the QI process.
We achieved our objective of completing the assessment tool for 80% of eligible patients throughout the cycles; across the 4 cycles, 72 of 77 eligible patients (93.5%) were assessed. The majority of our nonassessed patients were lost to follow-up and had not had a clinic visit in 2 to 3 years. Implementing the tool has provided us with an additional mechanism to verify transition eligibility and has afforded the transition program a systematic way to screen and track patients who are approaching the age of transition and who may not have been seen for an extended period of time. As with any large program following children with special health care and complex needs, the volume of patients and their complexity may pose a challenge to the program; an additional tracking system may therefore help mitigate losses to follow-up. In fact, since the implementation of the tool, our team has been able to contact families and in some cases reinstate services. As a by-product of tool implementation, we have implemented new policies to prevent extended losses to follow-up and patient attrition.
Limitations
A limitation of the assessment tool is that it does not incorporate the perspectives of the other stakeholders (adolescents, parents, adult providers). Further, some of the items in our tool are measuring utilization of services and not specifically transition readiness. As with most transition readiness measures, our provider tool does not have established reliability and validity [14]. We plan to test for reliability and validity once enough data and patient outcomes have been collected. Additionally, because of the small number of patients who have transferred to adult care since implementation of the tool, we did not examine the association between readiness scores and clinical outcomes, such as fulfillment of first adult provider visit and hospital utilization following transition to adult care. As we continue to assess adolescent patients and track their progress following transition, we will be able to examine these associations with a larger group.
Future Plans
Since the implementation of the tool in our program, we have realized that we may need to start assessing patients at an earlier age, and perhaps multiple times throughout adolescence. Some of our patients have guardianship and conservatorship issues and require more time to discuss options with the family and to put in place the appropriate support and assistance prior to the transfer of care. Further, patients who have low adherence to clinic appointments do not receive all elements of the transition program curriculum and in turn have fewer opportunities to prepare for transition. To address some of our current limitations, we plan to incorporate a patient and parent readiness assessment and examine the associations between the provider assessment and patient information such as medical literacy quizzes, clinic attendance, and fulfillment of the first adult provider visit. Assessment from all 3 perspectives (patient, parent, and provider) will offer a 360-degree view of perceived transition readiness, which should improve our ability to identify at-risk families and tailor transition planning to address barriers to care. In addition, our future plans include developing a mechanism to inform patients and families about the domain scores and action plans following the transition readiness meetings and incorporating scores into the electronic medical record. Finally, the readiness assessment tool has revealed some gaps in our transition educational curriculum. Most of our transition teaching involves providing information and evaluating whether it was retained, but we are not systematically assessing actual acquired transition skills. We are in the process of developing and implementing skill-based learning for activities such as calling to make or reschedule an appointment with an adult provider, arranging transportation, and similar tasks.
Conclusion
In conclusion, the provider transition readiness assessment has been a helpful tool to monitor progress of adolescents with SCD towards readiness for transition. The QI methodology and PDSA cycle approach has not only allowed for testing, development, and implementation of the tool, but is also allowing ongoing systematic refinement of our instrument. This approach highlighted the psychosocial challenges of our families as they move toward the transfer of care, in addition to the need for more individualized planning. The next important step is to evaluate the validity and reliability of the measure so we can better evaluate the impact of transition programming on the transfer from pediatric to adult care. We found the PDSA cycle approach to be a framework that can efficiently and systematically improve the quality of care of transitioning patients with SCD and their families.
Corresponding author: Jerlym Porter, PhD, MPH, St. Jude Children’s Research Hosp., 262 Danny Thomas Pl., Mail stop 740, Memphis, TN 38105, [email protected].
Funding/support: This work was supported in part by HRSA grant 6 U1EMC19331-03-02.
Financial disclosures: None.
1. Quinn CT, Rogers ZR, McCavit TL, Buchanan GR. Improved survival of children and adolescents with sickle cell disease. Blood 2010;115:3447–52.
2. Hassell KL. Population estimates of sickle cell disease in the U.S. Am J Prev Med 2010;38(4 Suppl):S512–S521.
3. Hamideh D, Alvarez O. Sickle cell disease related mortality in the United States (1999-2009). Pediatr Blood Cancer 2013;60:1482–6.
4. Lanzkron S, Carroll CP, Haywood C, Jr. Mortality rates and age at death from sickle cell disease: U.S., 1979-2005. Public Health Rep 2013;128:110–6.
5. Brousseau DC, Owens PL, Mosso AL, et al. Acute care utilization and rehospitalizations for sickle cell disease. JAMA 2010;303:1288–94.
6. Hemker BG, Brousseau DC, Yan K, et al. When children with sickle-cell disease become adults: lack of outpatient care leads to increased use of the emergency department. Am J Hematol 2011;86:863–5.
7. Jordan L, Swerdlow P, Coates TD. Systematic review of transition from adolescent to adult care in patients with sickle cell disease. J Pediatr Hematol Oncol 2013;35:165–9.
8. McPherson M, Thaniel L, Minniti CP. Transition of patients with sickle cell disease from pediatric to adult care: assessing patient readiness. Pediatr Blood Cancer 2009;52:838–41.
9. Lebensburger JD, Bemrich-Stolz CJ, Howard TH. Barriers in transition from pediatrics to adult medicine in sickle cell anemia. J Blood Med 2012;3:105–12.
10. Sawicki GS, Lukens-Bull K, Yin X, et al. Measuring the transition readiness of youth with special healthcare needs: validation of the TRAQ--Transition Readiness Assessment Questionnaire. J Pediatr Psychol 2011;36:160–71.
11. Ferris ME, Harward DH, Bickford K, et al. A clinical tool to measure the components of health-care transition from pediatric care to adult care: the UNC TR(x)ANSITION scale. Ren Fail 2012;34:744–53.
12. Gilleland J, Amaral S, Mee L, Blount R. Getting ready to leave: transition readiness in adolescent kidney transplant recipients. J Pediatr Psychol 2012;37:85–96.
13. Cappelli M, MacDonald NE, McGrath PJ. Assessment of readiness to transfer to adult care for adolescents with cystic fibrosis. Child Health Care 1989;18:218–24.
14. Stinson J, Kohut SA, Spiegel L, et al. A systematic review of transition readiness and transfer satisfaction measures for adolescents with chronic illness. Int J Adolesc Med Health 2013:1–16.
15. Telfair J, Myers J, Drezner S. Transfer as a component of the transition of adolescents with sickle cell disease to adult care: adolescent, adult, and parent perspectives. J Adolesc Health 1994;15:558–65.
16. Walley P, Gowland B. Completing the circle: from PD to PDSA. Int J Health Care Qual Assur Inc Leadersh Health Serv 2004;17:349–58.
Long-Term Outcomes of Bariatric Surgery in Obese Adults
Study Overview
Objective. To identify the long-term outcomes of bariatric surgery in adults with severe obesity.
Design. Prospective longitudinal observational cohort study (the Longitudinal Assessment of Bariatric Surgery Consortium [LABS]). LABS was established to collect long-term data on safety and efficacy of bariatric surgeries.
Participants and setting. 2458 patients who underwent Roux-en-Y gastric bypass (RYGB) or laparoscopic adjustable gastric banding (LAGB) at 10 hospitals in 6 clinical centers in the United States. Participants were included if they had a body mass index (BMI) greater than 35 kg/m2, were over the age of 18 years, and had not undergone prior bariatric surgeries. Participants were recruited between 2006 and 2009, and follow-up continued until September 2012. Data collection occurred at baseline prior to surgery and then at 6 months, 12 months, and annually until 3 years following surgery.
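The stated inclusion criteria amount to a simple screen; the following Python sketch is an illustrative rendering, with parameter names chosen for the example rather than taken from LABS documentation.

```python
# Illustrative rendering of the LABS inclusion criteria stated above;
# parameter names are chosen for this example.

def labs_eligible(bmi: float, age: int, prior_bariatric_surgery: bool) -> bool:
    """BMI > 35 kg/m2, age over 18 years, and no prior bariatric surgery."""
    return bmi > 35 and age > 18 and not prior_bariatric_surgery

print(labs_eligible(bmi=45.9, age=52, prior_bariatric_surgery=False))  # True
print(labs_eligible(bmi=33.0, age=52, prior_bariatric_surgery=False))  # False
```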
Main outcome measures. 3-year change in weight and resolution of diabetes, hypertension, and dyslipidemia.
Main results. Participants were between the ages of 18 and 78 years. The majority of participants were female (79%) and white (86%). Median BMI was 45.9 (interquartile range [IQR], 41.7–51.5). At baseline, 774 (33%) had diabetes, 1252 (63%) had dyslipidemia, and 1601 (68%) had hypertension. Three years after surgery, the RYGB group exhibited greater weight loss than the LAGB group (median 41 kg vs. 20 kg). Participants experienced most of their total weight loss during the first year following surgery. As for the health parameters assessed, at 3 years 67.5% of RYGB patients and 28.6% of LAGB patients had at least partial diabetes remission, 61.9% of RYGB patients and 27.1% of LAGB patients had dyslipidemia remission, and 38.2% of RYGB patients and 17.4% of LAGB patients had hypertension remission.
Conclusion. Three years following bariatric surgery, participants with severe obesity exhibited significant weight loss. There was variability in the amount of weight loss and in resolution of diabetes, hypertension and dyslipidemia observed.
Commentary
Obesity in the United States increased threefold between 1950 and 2000 [1]. Currently, more than one-third of adult Americans are obese [2]. The relationship between obesity and risk for morbidity from type 2 diabetes, hypertension, stroke, sleep apnea, osteoarthritis, and several cancers is well documented [3]. Finkelstein et al [4] estimated that health care costs related to obesity and consequent morbidity were approximately $148 billion in 2008. The use of bariatric surgery to address obesity has grown in recent years. However, there is a dearth of knowledge regarding the long-term outcomes of these procedures.
In this study of RYGB and LAGB patients, 5 weight change patterns were identified in each group for a total of 10 trajectories. Although most weight loss was observed during the first year following surgery, 76% of RYGB patients had continued weight loss for 2 years with a small weight increase the subsequent year. Only 4% of LAGB patients experienced consistent weight loss after 3 years. Overall, participants who underwent LAGB had greater variability in outcomes than RYGB patients. RYGB patients experienced greater remission of all chronic conditions examined and fewer new diagnoses of hypertension and dyslipidemia. The RYGB group experienced 3 deaths occurring within 30 days post-surgery while the LAGB group had none.
This study has several strengths, including its longitudinal design and the generalizability of study findings. Several factors contribute to the generalizability, including the large sample size (n = 2458), which included participants from 10 hospitals in 6 clinical centers and was more diverse than prior longitudinal studies of patients following bariatric surgery. In addition, the study had clear inclusion criteria, and attrition rates were low; data were collected for 79% and 85% of the RYGB and LAGB patients, respectively. Additionally, study personnel were trained on data collection, which occurred at several time-points.
There are also a few limitations, including that researchers used several methods for collecting data on associated physical and physiologic indicators. Most weights were collected using a standardized scale; however, weights recorded on other scales and self-reported weights were used if an in-person weight was not obtained. Similarly, different measures were used to identify chronic conditions. Diabetes was identified by any of 3 measures: taking a diabetes medication, glycated hemoglobin of 6.5% or greater, or fasting plasma glucose of 126 mg/dL or greater. Hypertension was defined as taking an antihypertensive medication, elevated systolic blood pressure (≥ 140 mm Hg), or elevated diastolic blood pressure (≥ 90 mm Hg). Likewise, high low-density lipoprotein (≥ 160 mg/dL) and taking a lipid-lowering medication were used as indicators of hyperlipidemia. Therefore, chronic conditions were not identified or measured in a uniform manner. Accordingly, the authors observed high variability in remission rates among participants in the LAGB group, which may be directly attributable to the inconsistencies in identification of disease status. Although the sample is identified as diverse compared with similar studies, it primarily consisted of white females.
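Because these condition definitions are explicit thresholds, they can be written down directly. The Python sketch below encodes them as stated in the paper; the input parameter names are chosen for the example.

```python
# The study's stated condition definitions, encoded directly.
# Parameter names are chosen for this example.

def has_diabetes(on_diabetes_med: bool, hba1c_pct: float,
                 fasting_glucose_mgdl: float) -> bool:
    return on_diabetes_med or hba1c_pct >= 6.5 or fasting_glucose_mgdl >= 126

def has_hypertension(on_antihypertensive: bool, sbp_mmhg: float,
                     dbp_mmhg: float) -> bool:
    return on_antihypertensive or sbp_mmhg >= 140 or dbp_mmhg >= 90

def has_dyslipidemia(on_lipid_med: bool, ldl_mgdl: float) -> bool:
    return on_lipid_med or ldl_mgdl >= 160

# A participant meeting any one criterion counts as having the condition,
# which is why ascertainment varied with which measure was available.
print(has_diabetes(False, 6.7, 110))     # True (HbA1c criterion alone)
print(has_hypertension(False, 132, 84))  # False
```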
A significant finding was that non-white and younger participants had more missing data, as they were less likely to return for follow-up visits. Additionally, large discrepancies in weight loss were noted. The authors assert that both findings suggest more education and support are needed for lasting adherence in some subgroups of patients undergoing bariatric surgery. Further evaluation of which factors contribute to these differences in weight loss is also needed.
Applications for Clinical Practice
This study is relevant to practitioners caring for patients with multiple chronic conditions related to severe obesity. The results indicate that bariatric surgery is associated with significant improvements in weight and remission of several chronic conditions. Practitioners can inform patients about the safety and efficacy of bariatric surgery procedures and discuss the evidence supporting its long-term efficacy as an intervention. As obesity rates continue to increase, it is important to understand the long-term benefits and risks of bariatric surgery.
—Billy A. Caceres, MSN, RN, and Allison Squires, PhD, RN
1. Picot J, Jones J, Colquitt JL, et al. The clinical effectiveness and cost-effectiveness of bariatric (weight loss) surgery for obesity: A systematic review and economic evaluation, Health Tech Assess 2009;13: 1–190, 215–357.
2. Ogden CL, Carroll MD, Kit BK, et al. Prevalence of childhood and adult obesity in the United States, 2011-2012. JAMA 2014;311:806–14.
3. National Institutes of Health. Clinical guidelines on the identification, evaluation, and treatment of overweight and obesity in adults 1998. Available at www.nhlbi.nih.gov/guidelines/obesity/ob_gdlns.pdf.
4. Finkelstein EA, Trogdon JG, Cohen JW, et al. Annual medical spending attributable to obesity: Payer-and service-specific estimates. Health Aff 2009;28:822–31.
Light Intensity Physical Activity May Reduce Risk of Disability Among Adults with or At Risk For Knee Osteoarthritis
Study Overview
Objective. To determine if time spent in light intensity physical activity is related to incident disability and disability progression.
Design. Prospective cohort study.
Setting and participants. This study uses a subcohort from the Osteoarthritis Initiative, a longitudinal study that enrolled 4796 men and women aged 45 to 79 years with or at high risk of developing knee osteoarthritis. Inclusion criteria for the main cohort study were: (1) presence of osteoarthritis with symptoms in at least 1 knee (with a definite tibiofemoral osteophyte) and pain, aching, or stiffness on most days for at least 1 month during the previous 12 months; or (2) presence of at least 1 of a set of established risk factors for knee osteoarthritis: knee symptoms in the previous 12 months; overweight; knee injury causing difficulty walking for at least a week; history of knee surgery; family history of a total knee replacement for osteoarthritis; Heberden’s nodes; repetitive knee bending at work or outside work; and age 70–79 years. The subcohort for the current study draws from the 2127 participants who enrolled in the substudy with accelerometer monitoring and were without disability at study onset; exclusion criteria included insufficient baseline accelerometer monitoring, incomplete outcome or covariate data, death, and loss to follow-up. A total of 1680 participants were included in the main analysis, and an additional 134 participants with baseline mild or moderate disability (for a total of 1814) were included in a secondary analysis. Data were collected between September 2008 and December 2012 at 4 sites (Baltimore; Pittsburgh; Columbus, Ohio; and Pawtucket, Rhode Island).
Main outcome measure. Disability at the 2-year follow-up visit among those without disability at baseline. Disability was ascertained by using a set of questions asking if participants have any difficulty performing each basic or instrumental activity of daily living because of a health or memory problem. Basic activities include walking across a room, dressing, bathing, eating, using the toilet and bed transfer. Instrumental activities of daily living include preparing hot meals, grocery shopping, making telephone calls, taking drugs, and managing money. Disability levels were defined as none, mild (only instrumental activities limitations), moderate (1–2 basic activities limitations), and severe (more than 2 basic activities limitations).
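The four-level classification is a deterministic mapping from the counts of basic and instrumental limitations; the Python sketch below writes out that mapping. Function and parameter names are ours, not the study’s.

```python
# The study's four-level disability classification, as described above.
# Function and parameter names are illustrative.

def disability_level(basic_limitations: int, instrumental_limitations: int) -> str:
    """Map counts of basic (ADL) and instrumental (IADL) activity
    limitations to the study's categories."""
    if basic_limitations > 2:
        return "severe"
    if 1 <= basic_limitations <= 2:
        return "moderate"
    if instrumental_limitations > 0:
        return "mild"  # only instrumental activities limitations
    return "none"

print(disability_level(0, 0))  # none
print(disability_level(0, 2))  # mild
print(disability_level(2, 1))  # moderate
print(disability_level(3, 0))  # severe
```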
Statistical analysis. The main predictor variable was physical activity, monitored using accelerometers at baseline. Participants wore the accelerometer on a belt for 7 consecutive days, from arising in the morning until retiring, except during water activities. Participants also recorded in a daily log the time spent in water activities and cycling. Intensity thresholds were applied on a minute-by-minute basis to identify non-sedentary activity of light intensity and of moderate to vigorous intensity. The primary variable was the accelerometer assessment of physical activity measured as daily minutes spent in light or moderate-vigorous activity. Time spent was divided into quartiles; the quartile cut-points for light activity were 229, 277, and 331 minutes, and the cut-points for moderate-vigorous activity were 4.3, 12.2, and 28.2 average minutes per day. Other covariates were socioeconomic factors, including race and ethnicity, age, sex, education, and income; health factors, including chronic conditions by self-report, body mass index, knee-specific health factors and symptoms, and smoking; and gait speed. The main analysis of the relationship between baseline physical activity and the development of disability was done using survival analysis techniques and hazard ratios. A secondary analysis using the larger cohort evaluated hazard ratios for disability progression, defined as progression to a more severe level, among the 1814 participants.
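To illustrate how the exposure quartiles were constructed, the Python sketch below applies the published cut-points; the survival model itself is only summarized in a comment, since the study’s full covariate set and software are not reproduced here.

```python
# Published cut-points for daily activity minutes (light intensity:
# 229/277/331; moderate-vigorous: 4.3/12.2/28.2). Quartile 1 (lowest
# activity) served as the reference group.

LIGHT_CUTPOINTS = (229.0, 277.0, 331.0)
MV_CUTPOINTS = (4.3, 12.2, 28.2)

def quartile(minutes_per_day: float, cutpoints) -> int:
    """Assign quartile 1-4 by counting how many cut-points are exceeded."""
    return 1 + sum(minutes_per_day > c for c in cutpoints)

print(quartile(284, LIGHT_CUTPOINTS))  # 3 -- the cohort's average light-activity time
print(quartile(28.2, MV_CUTPOINTS))    # 3 -- at the upper cut-point, not above it

# The study entered these quartile indicators into a survival (Cox-type)
# model with covariate adjustment. A reported hazard ratio of 0.64 for
# quartile 2 vs. quartile 1 corresponds to a 36% lower instantaneous
# risk of incident disability.
```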
Main results. In the main analysis of 1680 participants without disability at baseline, 149 participants developed new disability over the 2 years of follow-up. Average age of the cohort was 65 years; the majority (85%) were white, and approximately 54% were female. The cohort averaged 302 minutes a day of non-sedentary activity, the majority of which was light-intensity activity (284 minutes). Older age was associated with lower physical activity (P < 0.001), as were male sex (P < 0.001), higher body mass index, a number of chronic medical conditions (cancer, cerebrovascular disease, congestive heart failure), lower extremity pain, and higher grade of knee osteoarthritis severity. Onset of disability was associated with daily light-intensity activity time, even after adjusting for covariates. Using the group in the lowest quartile of light intensity activity time as the reference, groups in higher quartiles had lower hazard ratios for onset of disability: hazard ratios were 0.64, 0.51, and 0.67 for the second, third, and highest quartiles, respectively. When quartiles were defined by daily moderate to vigorous activity time, longer duration of moderate-vigorous activity was likewise associated with delayed onset of disability. In the secondary analysis using the cohort with and without disability at baseline (n = 1814), similar results were found; more time spent in light intensity activity was associated with lower disability risk.
Conclusion. Greater daily time spent in light intensity physical activity was associated with lower risk of onset and progression of disability among adults with knee osteoarthritis and those with risk factors for knee osteoarthritis.
Commentary
Disability, such as the inability to dress, bathe, or manage one’s medications, is prevalent among older adults in the United States [1,2]. The development of such disability among older adults is often complex and multifactorial. One significant contributor is osteoarthritis of the knee [3]. Although prior observational and randomized controlled trials have established that moderate to vigorous physical activity reduces disability incidence and progression [4,5], less is known about light intensity physical activity—activities that may be more realistically introduced for adults with symptomatic knee arthritis.
The current prospective cohort study included adults with and at risk for knee osteoarthritis; the authors found that physical activity, even of light intensity, is associated with lower risk of disability onset and progression. A major strength of the study is the objective measurement of physical activity using an accelerometer rather than reliance on recall or diaries, which are more subject to bias. Another strength is the long follow-up period, which allowed for the examination of incident disability and disability progression over 2 years. The results suggest that even light intensity activity is associated with reduced risk of incident disability.
It is important to note that causation cannot be inferred in this study. As the authors stated, those who can do longer periods of physical activity may be at lower risk of developing incident disability because of factors other than the physical activity itself. A different study design, such as a randomized trial, is needed to demonstrate that light intensity physical activity, when introduced to adults with or at risk for knee arthritis, may lead to reduced risk of disability.
Applications for Clinical Practice
Prior studies suggest that introducing regular exercise has significant health benefits. The recommendation of exercise for adults with knee arthritis remains the same. Whether introducing light intensity activity, particularly for those who are unable to perform more vigorous exercise, yields similar benefits will require further studies designed to determine therapeutic effect.
—William Hung, MD, MPH
1. Manton KG, Gu XL, Lamb VL. Change in chronic disability from 1982 to 2004/2005 as measured by long-term changes in function and health in the U.S. elderly population. PNAS 2006;103:18374–9.
2. Hung WW, Ross JS, Boockvar KS, Siu AL. Recent trends in chronic disease, impairment and disability among older adults in the United States. BMC Geriatrics 2011;11:47.
3. Ettinger WH, Davis MA, Neuhaus JM, Mallon KP. Long-term physical functioning in persons with knee osteoarthritis from NHANES I: Effects of comorbid medical conditions. J Clin Epidemiol 1994;47:809–15.
4. Penninx BW, Messier SP, Rejesko WJ, et al. Physical exercise and the prevention of disability in activities of daily living in older persons with osteoarthritis. Arch Intern Med 2001;161:2309–16.
5. Ettinger WH, Burns R, Messier SP, et al. A randomized trial comparing aerobic exercise and resistance exercise with a health education program in older adults with knee osteoarthritis. The Fitness Arthritis and Seniors Trial (FAST). JAMA 1997;277:25–31.
Study Overview
Objective. To determine if time spent in light intensity physical activity is related to incident disability and disability progression.
Design. Prospective cohort study.
Setting and participants. This study uses a subcohort from the Osteoarthritis Initiative, a longitudinal study that enrolled 4796 men and women aged 45 to 79 years with or at high risk of developing knee osteoarthritis. Inclusion criteria for the main cohort study were: (1) presence of osteoarthritis with symptoms in at least 1 knee (with a definite tibiofemoral osteophyte) and pain, aching, or stiffness on most days for at least 1 month during the previous 12 months; or (2) presence of at least 1 from a set of established risk factors for knee osteoarthritis: knee symptoms in the previous 12 months; overweight; knee injury causing difficulty walking for at least a week; history of knee surgery; family history of a total knee replacement for osteoarthritis; Heberden’s nodes; repetitive knee bending at work or outside work; and age 70–79 years. The subcohort of the current study draws from the 2127 participants that enrolled in the substudy with accelerometer monitoring, included those without disability at study onset; exclusion criteria include insufficient baseline accelerometer monitoring, incomplete outcome or covariate data, decedents and those lost to follow up. A total of 1680 were included in the main analysis, and an additional 134 participants (for a total of 1814) with baseline mild or moderate disability were included in a secondary analysis. between September 2008 to December 2012 at 4 sites (Baltimore, Pittsburgh, Columbus, Ohio, and Pawtucket, Rhode Island)
Main outcome measure. Disability at the 2-year follow-up visit among those without disability at baseline. Disability was ascertained by using a set of questions asking if participants have any difficulty performing each basic or instrumental activity of daily living because of a health or memory problem. Basic activities include walking across a room, dressing, bathing, eating, using the toilet and bed transfer. Instrumental activities of daily living include preparing hot meals, grocery shopping, making telephone calls, taking drugs, and managing money. Disability levels were defined as none, mild (only instrumental activities limitations), moderate (1–2 basic activities limitations), and severe (more than 2 basic activities limitations).
Statistical analysis. Main predictor variable was physical activity monitored using accelerometers measured at baseline. Participants wear the accelerometer for 7 consecutive days on a belt from arising in the morning until retiring, except during water activities. Participants also recorded on a daily log the time spent in water and cycling. Intensity thresholds were applied on a minute by minute basis to identify non-sedentary activity of light intensity and moderate to vigorous intensity. The primary variable was the accelerometer assessment of physical activity measured as daily minutes spent in light or moderate-vigorous activity. The time spent was divided in quartiles; the quartile cut-points for light activity were 229, 277, and 331 minutes, and the cut-points for moderate-vigorous activity were 4.3, 12.2, and 28.2 average minutes per day. Other covariates were socioeconomic factors including race and ethnicity, age, sex education and income, health factors including chronic conditions by self report, body mass index, knee-specific health factors and symptoms, smoking, and gait speed. The main analysis of the relationship between baseline physical activity and the development of disability was done using survival analysis techniques and hazard ratios. Secondary analysis using the larger cohort evaluated hazard ratios for disability progression as defined by progression to a more severe level among the 1814 participants.
Main results. In the main analysis, with 1680 participants without disability at baseline, 149 participants had new disability over the 2 years of follow-up. Average age of the cohort was 65 years, the majority (85%) were white, and approximately 54% were female. The cohort averaged 302 minutes a day of non-sedentary activity, the majority of which was light-intensity activities (284 minutes). Older age was associated with lower physical activity (P < 0.001), as was male sex (P < 0.001), higher body mass index, a number of chronic medical conditions (cancer, cerebrovascular disease, congestive heart failure), lower extremity pain, and higher grade of knee osteoarthritis severity. Onset of disability was associated with daily light-intensity activity times, even after adjusting for covariates. Using the group with the lowest quartile of light intensity activity time as reference, groups with higher quartiles of activity level had lower hazard ratios for onset of disability—hazard ratios were 0.64, 0.51, and 0.67 for the second, third, and highest quartile, respectively. Using daily moderate to vigorous activity time–defined quartile, longer duration of moderate-vigorous activity time was associated with delayed onset of disability. In the secondary analysis using the cohort with and without disability at baseline (n = 1814), similar results were found. Participants who spent more time in light intensity activity were associated with less incident disability.
Conclusion. Greater daily time spent in light intensity physical activity was associated with lower risk of onset and progression of disability among adults with knee osteoarthritis and those with risk factors for knee osteoarthritis.
Commentary
Disability, such as the inability to dress, bathe, or manage one’s medications, is prevalent among older adults in the United States [1,2]. The development of such disability among older adults is often complex and multifactorial. One significant contributor is osteoarthritis of the knee [3]. Although prior observational and randomized controlled trials have established that moderate to vigorous physical activity reduces disability incidence and progression [4,5], less is known about light intensity physical activity—activities that may be more realistically introduced for adults with symptomatic knee arthritis.
The current prospective cohort study included adults with, or at risk for, knee osteoarthritis; the authors found that physical activity, even of light intensity, was associated with lower risk of disability onset and progression. A major strength of the study is the objective measurement of physical activity with accelerometers rather than reliance on recall or diaries, which are more subject to bias. Another strength is the follow-up period, which allowed examination of incident disability and disability progression over 2 years. The results suggest that even light-intensity activity is associated with reduced risk of incident disability.
It is important to note that causation cannot be inferred from this study. As the authors state, those who can sustain longer periods of physical activity may be at lower risk of incident disability because of factors other than the physical activity itself. A different study design, such as a randomized trial, is needed to determine whether introducing light-intensity physical activity to adults with or at risk for knee arthritis actually reduces the risk of disability.
Applications for Clinical Practice
Prior studies suggest that introducing regular exercise has significant health benefits, and the recommendation of exercise for adults with knee arthritis remains unchanged. Whether introducing light-intensity activity, particularly for those who are unable to perform more vigorous exercise, yields similar benefits will require further studies designed to determine therapeutic effect.
—William Hung, MD, MPH
1. Manton KG, Gu XL, Lamb VL. Change in chronic disability from 1982 to 2004/2005 as measured by long-term changes in function and health in the U.S. elderly population. PNAS 2006;103:18374–9.
2. Hung WW, Ross JS, Boockvar KS, Siu AL. Recent trends in chronic disease, impairment and disability among older adults in the United States. BMC Geriatrics 2011;11:47.
3. Ettinger WH, Davis MA, Neuhaus JM, Mallon KP. Long-term physical functioning in persons with knee osteoarthritis from NHANES I: Effects of comorbid medical conditions. J Clin Epidemiol 1994;47:809–15.
4. Penninx BW, Messier SP, Rejeski WJ, et al. Physical exercise and the prevention of disability in activities of daily living in older persons with osteoarthritis. Arch Intern Med 2001;161:2309–16.
5. Ettinger WH, Burns R, Messier SP, et al. A randomized trial comparing aerobic exercise and resistance exercise with a health education program in older adults with knee osteoarthritis. The Fitness Arthritis and Seniors Trial (FAST). JAMA 1997;277:25–31.
Capturing the Impact of Language Barriers on Asthma Management During an Emergency Department Visit
Study Overview
Objective. To compare rates of asthma action plan use in limited English proficiency (LEP) caregivers compared with English proficient (EP) caregivers.
Design. Cross-sectional survey.
Participants and setting. A convenience sample of 107 Latino caregivers of children with asthma at an urban academic emergency department (ED). Surveys in the preferred language of the patient (English or Spanish, with the translated version previously validated) were distributed at the time of the ED visit. Interpreters were utilized when requested.
Main outcome measure. Caregiver use of an asthma action plan.
Main results. 51 LEP caregivers and 56 EP caregivers completed the survey. Mothers completed the surveys 87% of the time, and the average patient age was 4 years. Among the EP caregivers, 64% reported using an asthma action plan, while only 39% of the LEP caregivers reported using one. The difference was statistically significant (P = 0.01). In both correlation and regression analyses, English proficiency was the only variable (others included health insurance status and level of caregiver education) with a significant effect on asthma action plan use.
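The reported proportions and group sizes are enough to recover a P value near the published one. The sketch below is an assumption about the analysis (the paper's exact test is not described here); it applies a standard chi-square test of independence to the reconstructed 2×2 table.

```python
from scipy.stats import chi2_contingency

# Reported figures: 64% of 56 EP caregivers vs 39% of 51 LEP caregivers
ep_users, ep_total = round(0.64 * 56), 56
lep_users, lep_total = round(0.39 * 51), 51

table = [
    [ep_users, ep_total - ep_users],     # EP: plan users vs non-users
    [lep_users, lep_total - lep_users],  # LEP: plan users vs non-users
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, P = {p:.3f}")  # lands near the reported P = 0.01
```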
Conclusions. Children whose caregiver had LEP were significantly less likely to have and use an asthma action plan. Asthma education in the language of choice of the patient may help improve asthma care.
Commentary
With 20% of US households now speaking a language other than English at home [1], language barriers between providers and patients present multiple challenges to health services delivery and can significantly contribute to immigrant health disparities. Despite US laws and multiple federal agency policies requiring the use of interpreters during health care encounters, organizations continue to fall short of providing interpreter services and often lack adequate or equivalent materials for patient education. Too often, providers overestimate their language skills [2,3], use colleagues as ad hoc interpreters out of convenience [4], or rely on family members for interpretation [4]—a practice that is universally discouraged.
Recent research does suggest that the timing of interpreter use is critical. In planned encounters such as primary care visits, interpreters can and should be scheduled for visits when a language-concordant provider is not available. During hospitalizations, including ED visits, interpreters are most effective when used on admission, during patient teaching, and upon discharge, and the timing of these visits has been shown to affect length of stay and readmission rates [5,6].
This study underscores the consequences of failing to provide language-concordant services to patients and their caregivers. It also helps to identify one of the sources of pediatric asthma health disparities in Latino populations. The emphasis on the role of the caregiver in action plan utilization is a unique aspect of this study, and it is one of the first to examine the issue in this way. It highlights the importance of caregivers in health system transitions and illustrates how a language barrier can affect those transitions.
The authors’ explicit use of a power analysis to calculate their sample size is a strength of the study. Furthermore, the authors differentiated their respondents by country of origin, something that rarely occurs in studies of Latinos [7], and allows the reader to differentiate the impact of the intervention at a micro level within this population. The presentation of Spanish language quotes with their translations within the manuscript provides transparency for bilingual readers to verify the accuracy of the authors’ translation.
There are, however, a number of methodological issues that should be noted. The authors acknowledge that they did not account for asthma severity in the survey or control for it in the analysis, did not assess health literacy, and did not differentiate their results by country of origin. The latter point is important because the immigration experience and demographic profiles of Latinos differ significantly by country of origin and could factor into action plan use. The description of the survey translation process also did not explain how it accounted for the well-established linguistic variation that occurs in the Spanish language. Additionally, US census data show that the main countries of origin of Latinos in the study's service area are Puerto Rico, Ecuador, and Mexico [1], yet the survey had Ecuador as a write-in and Dominican as a response option; the combination presented in the survey reflects the Latino demographic composition of the nearest large urban area. When collecting country-of-origin data on immigrant patients, country choices should reflect local demographics rather than national trends for maximum precision.
Another concern is that Spanish language literacy was not assessed. Many Latino immigrants may have limited reading ability in Spanish. For Mexican immigrants in particular, Spanish may be a second language after their indigenous language. This is also true for some South American Latino immigrants from the Andean region. Many Latino immigrants come to the United States with less than an 8th grade education and likely come from educational systems of poor quality, which subsequently affects their Spanish language reading and writing skills [8]. Assessing education level based on US equivalents is not an accurate way to gauge literacy. Thus, assessing reading literacy in Spanish before surveying patients would have been a useful step that could have further refined the results. These factors will have implications for action plan utilization and implementation for any chronic disease.
Providers often think that language barriers are an obvious factor in health disparities and service delivery, but few studies have actually captured or quantified the effects of language barriers on health outcomes. Most studies only identify language barriers as an access issue. This study provides a good illustration of the impact of a language barrier on a known and effective intervention for pediatric asthma management. Practitioners can take the consequences illustrated in this study and easily extrapolate the contribution to health disparities on a broader scale.
Applications for Clinical Practice
Practitioners caring for ED patients who face a language barrier, or whose caregivers do, should make every effort to use appropriate interpreter services when patient teaching occurs. Assessing not only health literacy but also reading ability in the LEP patient or caregiver is important, since both affect the dyad's ability to implement the self-care measures recommended in patient teaching sessions and action plans. Asking patients their country of origin, regardless of legal status, will help practitioners refine patient teaching and the language they (and the interpreter, when appropriate) use to explain what needs to be done to manage the condition.
—Allison Squires, PhD, RN
1. Ryan C. Language use in the United States: 2011. Migration Policy Institute: Washington, DC; 2013.
2. Diamond LC, Luft HS, Chung S, Jacobs EA. “Does this doctor speak my language?” Improving the characterization of physician non-English language skills. Health Serv Res 2012;47(1 Pt 2):556–69.
3. Jacobs EA. Patient centeredness in medical encounters requiring an interpreter. Am J Med 2000;109:515.
4. Hsieh E. Understanding medical interpreters: reconceptualizing bilingual health communication. Health Commun 2006;20:177–86.
5. Karliner LS, Kim SE, Meltzer DO, Auerbach AD. Influence of language barriers on outcomes of hospital care for general medicine inpatients. J Hosp Med 2010;5:276–82.
6. Lindholm M, Hargraves JL, Ferguson WJ, Reed G. Professional language interpretation and inpatient length of stay and readmission rates. J Gen Intern Med 2012;27:1294–9.
7. Gerchow L, Tagliaferro B, Squires A, et al. Latina food patterns in the United States: a qualitative metasynthesis. Nurs Res 2014;63:182–93.
8. Sudore RL, Landefeld CS, Pérez-Stable EJ, et al. Unraveling the relationship between literacy, language proficiency, and patient-physician communication. Patient Educ Couns 2009;75:398–402.
Blood sterilization processes harmful to platelets

Some processes used to sterilize blood for transfusion are harmful to platelet function and could cause serious health issues in transfusion recipients, researchers say.
They found that some pathogen-reduction treatments affect platelets to such an extent that the treatments may be the cause of hemorrhages in recipients.
The pathogen reduction treatments “were developed more than 20 years ago, before we understood the importance of the genetic material contained in platelets,” explained study author Patrick Provost, PhD, of Université Laval and the CHU de Québec Research Center in Canada.
Platelets contain up to a third of the human genome in the form of ribonucleic acid (RNA), which enables them to synthesize over 1,000 proteins essential to the normal functioning of the human body.
The researchers studied the effects of 3 pathogen-reduction strategies—irradiation, riboflavin plus UVB light (Mirasol), and amotosalen plus UVA light (Intercept)—on platelet microRNAs, messenger RNAs (mRNAs), activation, and function.
They reported their findings in the journal Platelets.
The investigators collected 50 single-donor (apheresis) platelet concentrates (PCs) and assigned them to 5 treatment conditions.
The control platelets were stored in donor plasma; additive solution platelets were stored in 65% storage solution and 35% donor plasma; irradiated platelets were treated with 30 Gy gamma irradiation and stored in donor plasma; Mirasol-treated platelets were stored in donor plasma; and Intercept-treated platelets were stored in the same solution as the additive solution group.
All treatments followed standard procedures or the manufacturer’s instructions.
After platelet isolation and RNA extraction, the investigators analyzed the platelets' microRNA and mRNA levels and assessed platelet activation and function.
MicroRNA profiles
They learned that platelets stored with additive solution or irradiation had significantly (P<0.05) reduced levels of one microRNA each, and only on day 7 of storage. Additive solution reduced the level of miR-223 and irradiation reduced the level of let-73.
Mirasol did not significantly reduce the level of any of the 11 tested microRNAs.
And Intercept significantly reduced the level of 6 microRNAs on day 1, 1 microRNA on day 4, and 2 microRNAs on day 7. By day 7, let-7e was reduced by up to 70%.
The microRNA levels remained stable in the control sample for the entire 7-day storage period.
Platelet activation and function
Platelet counts in the Mirasol- and Intercept-treated platelets were significantly lower (P<0.001) on storage days 1, 4, and 7 compared with control platelets.
Pathogen-reduction treatments did not affect platelet microRNA synthesis or microRNA function, nor did they induce the formation of cross-linked RNA adducts.
However, pathogen reduction caused platelet activation, which correlates with the observed reduction in platelet microRNAs.
The investigators measured CD62P expression, a marker of platelet activation, on the platelet surface. The additive solution platelets and Intercept-treated platelets, and to a lesser degree the irradiation group, had greater CD62P surface expression than the control group (P<0.05) on day 1.
The Mirasol group had similar activation to that of the control group.
On day 4, all treatment groups showed more activation than the control group (P<0.05). And on day 7, all groups had about the same activation level as the control group.
Pathogen reduction also impacted the aggregation response of platelets. Mirasol-treated platelets, which had the same aggregation response as that of controls on day 1, had no response on days 4 and 7.
And the aggregation response for Intercept-treated and additive solution platelets was already absent on day 1 and remained so on days 4 and 7.
Additive solution and Intercept also reduced platelet volume on day 1, which the investigators say could be explained by the platelet activation and release of microparticles induced by the treatments.
MicroRNA release
The investigators hypothesized that activated stored platelets could release microRNAs through microparticles in the supernatant. So they collected supernatant from each of the 5 groups and analyzed their total content of miR-223, which is one of the most abundant platelet microRNAs.
They discovered that the total amount of miR-223 was increased 30% to 86% in the microparticles released from additive solution and Intercept-treated platelets. They did not observe this increase in irradiation- or Mirasol-treated platelets compared to controls.
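The study's quantification method is not described here, but relative microRNA abundance in such supernatant experiments is commonly derived from qPCR cycle-threshold (Ct) values via the 2^-ΔΔCt calculation. The sketch below uses invented Ct values and assumes a cel-miR-39 spike-in normalizer, a frequent choice for cell-free miRNA qPCR.

```python
# Hypothetical Ct values (lower Ct = more abundant target).
ct_mir223_treated, ct_spike_treated = 24.1, 20.0
ct_mir223_control, ct_spike_control = 25.0, 20.0

delta_treated = ct_mir223_treated - ct_spike_treated   # 4.1
delta_control = ct_mir223_control - ct_spike_control   # 5.0
delta_delta_ct = delta_treated - delta_control         # -0.9

fold_change = 2 ** (-delta_delta_ct)  # ~1.87, i.e., roughly an 87% increase
print(f"miR-223 fold change vs control: {fold_change:.2f}")
```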
"The platelets end up depleted of RNA so, once transfused, they're unable to do what they normally would," Dr Provost said. Nevertheless, the clinical implications of the reduction in platelet activation and impaired platelet aggregation after Intercept treatment remain to be established.
The pathogen-reduction treatments are already marketed in some European countries, notably Switzerland, France, and Germany, and are under consideration in other countries, including Canada and the United States.
"In light of what we have demonstrated, the potentially harmful effects of these treatments should be carefully evaluated in the countries where they are not yet approved. It should also be re-evaluated in those countries where they are," Dr Provost said. ![]()

VIDEO: ACC/AHA lipid guidelines and diabetes
SAN FRANCISCO – Those looking for guidance from the American Diabetes Association on the American College of Cardiology/American Heart Association guidelines released last fall, which dropped cholesterol treatment goals, will have to wait until next year.
That’s when the ADA’s Clinical Practice Recommendations, released each January, will incorporate the Professional Practice Committee’s review of the ACC/AHA guidelines and the evidence behind them. The new recommendations caused some controversy and raised questions about the treatment of certain patient groups, most notably those with diabetes.
The ADA hasn’t recommended any changes to its current guidelines, which still incorporate treatment to target. But it has been reviewing the guidelines to see if it would recommend any changes for its 2015 guidelines.
Dr. Robert E. Ratner, chief scientific and medical officer for the American Diabetes Association, further explained the organization’s position on treatment of lipids in patients with diabetes in a video interview at the annual scientific sessions of the ADA.
The association is also holding a debate at this year’s meeting to discuss the pros and cons of the new lipid guidelines for patients with diabetes.
In a press conference, Dr. Robert Eckel, professor of medicine and Charles A. Boettcher chair in atherosclerosis at University of Colorado, Anschutz Medical Campus, Aurora, said he was in support of the ACC/AHA guidelines, having served on the Task Force on Practice Guidelines, and that he believed that almost all patients with diabetes should be on a statin. He stressed that the new guidelines are evidence based.
But Dr. Henry Ginsberg, Irving Professor of Medicine and Director of the Irving Institute for Clinical and Translational Research at Columbia University, New York, argued that the guidelines’ evidence-based construct was too narrow.
In a video interview, Dr. Ginsberg further discussed his position and his practice tips for physicians.
Both physicians agreed that patients should be treated on an individual basis. For instance, patients who are statin intolerant won’t meet the guidelines’ criteria and "we’ll have to go beyond the guidelines," said Dr. Eckel.
On Twitter @naseemmiller
SAN FRANCISCO – Those looking for guidance from the American Diabetes Association regarding the guidelines released last fall from the American College of Cardiology and the American Heart Association dropping cholesterol treatment goals will have to wait until next year.
That’s when the ADA’s Clinical Practice Recommendations, released each year in January, will incorporate the Professional Practice Committee’s review of the ACC/AHA guidelines and the evidence behind it. The new recommendations caused some controversy and raised some questions about treatment of certain patient groups, most notably those with diabetes.
The ADA hasn’t recommended any changes to its current guidelines, which still incorporate treatment to target. But it has been reviewing the guidelines to see if it would recommend any changes for its 2015 guidelines.
Dr. Robert E. Ratner, chief scientific and medical officer for the American Diabetes Association, further explained the organization’s position on treatment of lipids in patients with diabetes in a video interview at the annual scientific sessions of the ADA.
The video associated with this article is no longer available on this site. Please view all of our videos on the MDedge YouTube channel
The association is also holding a debate at this year’s meeting to discuss the pros and cons of the new lipid guidelines for patients with diabetes.
In a press conference, Dr. Robert Eckel, professor of medicine and Charles A. Boettcher chair in atherosclerosis at University of Colorado, Anschutz Medical Campus, Aurora, said he was in support of the ACC/AHA guidelines, having served on the Task Force on Practice Guidelines, and that he believed that almost all patients with diabetes should be on a statin. He stressed that the new guidelines are evidence based.
But Dr. Henry Ginsberg, Irving Professor of Medicine and Director of the Irving Institute for Clinical and Translational Research at Columbia University, New York, argued that the guidelines’ evidence-based construct was too narrow.
In a video interview, Dr. Ginsberg further discussed his position and his practice tips for physicians.
Both physicians agreed that patients should be treated on an individual basis. For instance, patients who are statin intolerant won’t meet the guidelines’ criteria and "we’ll have to go beyond the guidelines," said Dr. Eckel.
On Twitter @naseemmiller
AT THE ADA ANNUAL SCIENTIFIC SESSIONS
One in four children with ALL misses maintenance doses

Forgetting to take medication is the number one reason for non-adherence to maintenance therapy in children with acute lymphoblastic leukemia (ALL), according to a new study by the Children’s Oncology Group.
And missing maintenance medication doses 10% of the time triples a patient’s risk of relapse.
In a study of 298 children receiving 6-mercaptopurine (6MP) as part of maintenance therapy, African American and Asian patients had significantly lower adherence than non-Hispanic whites: 46%, 28%, and 14%, respectively, were classified as nonadherent.
Researchers discovered a number of other race-specific characteristics to explain the disparity, including low maternal education, households with a single parent and multiple children, low-income households, and households in which mothers were not the full-time caregivers.
Hispanic children were not included because the investigators had studied adherence in that group in an earlier study.
“While we don’t yet know why children of different races have significantly different survival rates for ALL,” said senior study author Smita Bhatia, MD, MPH, of City of Hope in Duarte, California, “we know that their adherence to their maintenance medication is a critical factor in their survival.”
And so the researchers explored potential sociodemographic differences that affect adherence to 6MP, reporting their findings in Blood.
They enrolled 298 children, with a median age of 6 years at study entry (range, 2-20 years). All were in first continuous remission and receiving maintenance therapy that included 6MP.
One hundred fifty-nine patients were non-Hispanic white (the referent group), 71 were Asian, and 68 were African American.
The researchers recorded adherence for up to 5 months per patient using an electronic monitoring device (MEMS TrackCap) that recorded the date and time the pill bottle was opened. These data were downloaded at the end of the adherence-monitoring period.
They also measured patients’ erythrocyte thioguanine nucleotide (TGN) levels monthly to confirm that bottle openings corresponded to actually taking the 6MP, as erythrocyte TGN levels reflect 6MP exposure.
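As a rough illustration of how cap-opening timestamps become an adherence rate, the sketch below computes the percentage of prescribed days with at least one bottle opening. It is a simplified, assumption-laden example (one prescribed dose per day; an opening counts as a dose taken), not the study’s actual analysis code.

```python
from datetime import datetime, date

# Illustrative sketch only -- not the Children's Oncology Group's analysis
# code. Assumes one prescribed 6MP dose per day and that a MEMS cap event
# on a given day counts as that day's dose taken.

def adherence_rate(openings, start, end):
    """Percent of prescribed days (start..end, inclusive) with >=1 bottle opening."""
    prescribed_days = (end - start).days + 1
    days_opened = {ts.date() for ts in openings if start <= ts.date() <= end}
    return 100.0 * len(days_opened) / prescribed_days

# Toy example: doses logged on 27 of 30 days -> 90.0% adherence
start, end = date(2014, 6, 1), date(2014, 6, 30)
openings = [datetime(2014, 6, d, 20, 0) for d in range(1, 31) if d not in (5, 12, 21)]
print(adherence_rate(openings, start, end))  # 90.0
```

On this kind of measure, a child whose rate falls below 90% would be the one described above as missing doses more than 10% of the time.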
Demographics
The researchers found that disease characteristics were comparable across the racial groups, but sociodemographic characteristics varied significantly.
Sixty-four percent of African American families reported annual household incomes of less than $50,000, compared with 44% of non-Hispanic white and 33% of Asian families.
African American parents had significantly less formal education than non-Hispanic white and Asian parents. Sixty-six percent of African American fathers and 61% of African American mothers reported having less than a college degree.
This compared with 48% and 31% of non-Hispanic white and Asian fathers, respectively, and 46% and 32% of non-Hispanic white and Asian mothers, respectively.
More African American households (37%) were headed by single parents, compared with non-Hispanic white (9%) and Asian (4%) households.
And only 27% of African American children had their mothers as full-time caregivers, compared with 38% of non-Hispanic white children and 52% of Asian children.
Overall adherence
The investigators found that adherence for the entire population declined over the course of the 5 months, from 94.8% to 91.3% (P<0.0001).
Adherence rates were significantly lower in Asians and African Americans than in non-Hispanic whites, and in patients from low-income households.
Adherence rates were significantly higher in patients from single-parent/single-child households (96.9%) and in households where the mothers were full-time caregivers (94.8%).
Adherence by race
In Asian households, adherence was significantly higher with mothers as full-time caregivers (95.6%) compared with all other configurations of caregivers. And adherence rates in households with income of $50,000 or more were also higher (93.9%) than in households with income under $50,000 (84.2%).
In African American households, low maternal education was associated with significantly lower adherence (74.6%) than a maternal college degree (94.6%). And adherence rates were higher in households with a single parent and a single child (94.2%) than in households with a single parent and multiple children (80.5%) or in nuclear families (85.5%).
In non-Hispanic white households, paternal postgraduate education was associated with adherence of 97.2%, compared with 95.3% in households in which the father did not have a postgraduate degree. Again, adherence rates were higher in households with a single parent and a single child (97.8%) than in households with a single parent and multiple children (94.0%) or in nuclear families (95.6%).
For all racial groups, forgetfulness was the most common reason for missing doses—non-Hispanic whites, 79%; Asians, 90%; and African Americans, 75%.
“Our data demonstrate that one in four children in remission from ALL does not take the medicine needed to remain cancer free,” said Dr. Bhatia, “and in an overwhelming majority, the primary reason is that they forget to take their pills each day.”
“These results are the basis for further studies that will examine how physicians can successfully intervene, using technology, for example, to ensure that children do not experience an increased risk of relapse because they did not take their oral chemotherapy.”

EHR Report: Across the ages
Eighty percent of physicians are now using electronic health records in their offices. We have been impressed that the younger physicians to whom we have spoken often view their experience with EHRs very differently from older physicians. Is such a difference inevitable? Perhaps, not just because change is more difficult for many people as they get older, but also because expectations are influenced by experience. Noticing these different thoughts and feelings, we’ve asked two physicians more than 55 years old and two younger physicians to share some thoughts on their experiences with electronic records.
Mathew Clark (family physician)
I’ve been in practice for 31 years and using an EHR system for just under 5. I’m not thrilled with it, but I accept that it’s an unavoidable part of my practice now, and so I don’t waste energy being upset about it. I’ve learned to function efficiently with an EHR, doing the best I can. I remember physicians, before the days of SOAP notes, who would write pithy, useful notes such as "probable strep, Pen VK 500 bid for 10 days" on 3x5 index cards. Such notes lacked detail, and it’s not hard to imagine the problems this lack of detail might create, but they were readable at a glance, and told you what you needed to know. On the other hand, the massively detailed, bloated notes we see with our EHRs, obscured by "copy-forward" text and fictional (in other words, never really asked or examined) information, present very significant practical and legal issues of their own, and take hours of physician time to complete. Given a choice, I’d probably go for the index cards.
Natalie McGann (family physician)
I have been a family physician in practice for 4 years since graduating from residency. The advent of the EHR hasn’t been an overwhelming transition for those of us in the early stages of our careers. Much of our schooling to date has included laptops and other electronic devices that for many prove an easier means of communication. Despite the fact that EHRs require a host of extraneous clicks and check boxes, they are still less cumbersome than documenting encounters on paper. For the generation of young physicians accustomed to having answers at their fingertips, the idea of flipping through paper charts to collate a patient’s medical record seems far more complicated than clicking a few tabs without ever leaving your chair. I, and most colleagues in my peer group to whom I’ve spoken, agree that we would not be likely to join a practice that doesn’t utilize an EHR or have a current plan to adopt one. Anything less would feel like a step back at this point.
Danielle Carcia (intern, family medicine residency)
Overall, I enjoy using electronic medical records. I feel that they place all pertinent information about the patient in an easy-to-follow and concise format. The ability to read through past providers’ and, at times, specialists’ visit notes can be very helpful when navigating an appointment with a new patient. As a young physician, electronics have been an extension of myself for my entire adult life, so a computer in front of me during an office visit is comforting. I do not feel it distracts from my interaction with patients or takes away from their experience at all; just the opposite: it allows me to more confidently care for them, with up-to-date, organized information at my fingertips.
Dave Depietro (family physician)
I have been a family physician for 25 years and feel that EHRs have affected my office in a number of ways. They have definitely improved the efficiency of office tasks such as prescription refills, interoffice communication, and scheduling. Before EHRs, the turnaround time for a dictated note was about a week; now most notes are completed by the end of the day. This makes it easier when I am taking care of one of my partner’s patients or dealing with a patient I recently saw. Also, in this day of pay for performance, we can gather data much more easily, which would be almost impossible if we still had paper charts.
EHRs unfortunately also have their downsides. The main problem I see is that they add a significant amount of time to providers’ tasks. When I dictated, I could complete a note within 1-2 minutes; now, with the EHR, it can take 3-5 minutes per patient. Approving labs, x-rays, and the like also takes longer. I feel that EHRs have added about 1½ hours to my day, and most of my colleagues have the same complaint. They routinely take work home at night and spend 1-2 hours completing notes. Many of my peers seem stressed and frustrated. Even though EHRs make the office more efficient, the provider pays the price. My other complaint is the cost of IT support to keep the EHRs running smoothly. The promise was that EHRs would save physicians money and reduce staffing; however, I have not seen that happen.
I ask myself, at the end of the day, would I go back to paper charts? The answer is no. Despite their downsides, I feel that the positives of EHRs outweigh the negatives. Older doctors just need to adapt to this new way of practicing medicine.
The Bottom Line
Clearly there is a range of opinion about the effect of electronic health records on our practices and our lives, with those opinions at least partly segregated by age. We are interested in your thoughts and plan to publish some of those thoughts in future columns, so please let us know at [email protected]. Thanks.
Dr. Notte is a family physician and clinical informaticist for Abington Memorial Hospital. He is a partner in EHR Practice Consultants, a firm that aids physicians in adopting electronic health records. Dr. Skolnik is associate director of the family medicine residency program at Abington (Pa.) Memorial Hospital and professor of family and community medicine at Temple University, Philadelphia. He is editor in chief of Redi-Reference Inc., a software company that creates mobile apps.