Resource Utilization and Satisfaction
The patient experience has become increasingly important to healthcare in the United States. It is now a metric used commonly to determine physician compensation and accounts for nearly 30% of the Centers for Medicare and Medicaid Services' (CMS) Value‐Based Purchasing (VBP) reimbursement for fiscal years 2015 and 2016.[1, 2]
In April 2015, CMS added a 5‐star patient experience score to its Hospital Compare website in an attempt to address the Affordable Care Act's call for transparent and easily understandable public reporting.[3] A hospital's principal score is the Summary Star Rating, which is based on responses to the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey. The formulas used to calculate Summary Star Ratings have been reported by CMS.[4]
Studies published over the past decade suggest that gender, age, education level, length of hospital stay, travel distance, and other factors may influence patient satisfaction.[5, 6, 7, 8] One study utilizing a national dataset suggested that higher patient satisfaction was associated with greater inpatient healthcare utilization and higher healthcare expenditures.[9] It is therefore possible that emphasizing patient experience scores could adversely impact healthcare resource utilization. However, positive patient experience may also be an important independent dimension of quality for patients and correlate with improved clinical outcomes.[10]
We know of no literature describing patient factors associated with the Summary Star Rating. Given that this rating is now used as a standard metric by which patient experience can be compared across more than 3,500 hospitals,[11] data describing the association between patient‐level factors and the Summary Star Rating may provide hospitals with an opportunity to target improvement efforts. We aimed to determine the degree to which resource utilization is associated with a satisfaction score based on the Summary Star Rating methodology.
METHODS
The study was conducted at the University of Rochester Medical Center (URMC), an 830‐bed tertiary care center in upstate New York. This was a retrospective review of all HCAHPS surveys returned to URMC over a 27‐month period from January 1, 2012 to April 1, 2014. URMC used the standard CMS process to determine which patients received surveys, as follows. During the study timeframe, HCAHPS surveys were mailed to patients 18 years of age and older who had an inpatient stay spanning at least 1 midnight. Surveys were mailed within 5 days of discharge, and were generally returned within 6 weeks. URMC did not utilize telephone or email surveys during the study period. Surveys were not sent to patients who (1) were transferred to another facility, (2) were discharged to hospice, (3) died during the hospitalization, (4) received psychiatric or rehabilitative services during the hospitalization, (5) had an international address, and/or (6) were prisoners.
The survey vendor (Press Ganey, South Bend, IN) for URMC provided raw data for returned surveys with patient answers to questions. Administrative and billing databases were used to add demographic and clinical data for the corresponding hospitalization to the dataset. These data included age, gender, payer status (public, private, self, charity), length of stay, number of attendings who saw the patient (based on encounters documented in the electronic medical record (EMR)), all discharge International Classification of Diseases, 9th Revision (ICD‐9) diagnoses for the hospitalization, total charges for the hospitalization, and intensive care unit (ICU) utilization as evidenced by a documented encounter with a member of the Division of Critical Care/Pulmonary Medicine.
CMS analyzes surveys within 1 of 3 clinical service categories (medical, surgical, or obstetrics/gynecology) based on the discharging service. To parallel this approach, each returned survey was placed into 1 of these categories based on the clinical service of the discharging physician. Patients in the obstetrics/gynecology category (n = 1,317, 13%) were excluded from the present analysis and will be examined separately, given inherent differences in patient characteristics that require evaluation of other variables.
Approximations of CMS Summary Star Rating
The HCAHPS survey is a multiple‐choice questionnaire that includes several domains of patient satisfaction. Respondents are asked to rate areas of satisfaction with their hospital experience on a Likert scale. CMS uses a weighted average of Likert responses to a subset of HCAHPS questions to calculate a hospital's raw score in 11 domains, as well as an overall raw summary score. CMS then adjusts each raw score for differences between hospitals (eg, clustering, improvement over time, method of survey) to determine a hospital's star rating in each domain and an overall Summary Star Rating (the Summary Star Rating is the primary factor by which consumers can compare hospitals).[4] Because our data were from a single hospital system, the between‐hospital scoring adjustments utilized by CMS were not applicable. Instead, we calculated the raw scores exactly as CMS does prior to the adjustments. Thus, our scores reflect the scores that CMS would have given URMC during the study period prior to standardized adjustments; we refer to this as the raw satisfaction rating (RSR). We calculated an RSR for every eligible survey. The RSR was calculated as a continuous variable from 0 (lowest) to 1 (highest). Detailed explanation of our RSR calculation is available in the Supporting Information in the online version of this article.
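The RSR computation described above can be sketched as a weighted average of rescaled Likert responses. The domain names, weights, and 1‐to‐4 scale below are illustrative assumptions, not the published CMS formulas (the exact RSR calculation is detailed in the Supporting Information):

```python
# Hedged sketch of a raw satisfaction rating (RSR): each Likert response is
# rescaled to 0-1, averaged within its domain, and the domain means are
# combined as a weighted average. Domains, weights, and scale are illustrative.

def rescale(response, low, high):
    """Map a Likert response onto a 0 (lowest) to 1 (highest) scale."""
    return (response - low) / (high - low)

def raw_satisfaction_rating(domain_responses, weights):
    """Weighted average of per-domain mean rescaled responses."""
    domain_means = {
        domain: sum(vals) / len(vals) for domain, vals in domain_responses.items()
    }
    total_weight = sum(weights[d] for d in domain_means)
    return sum(domain_means[d] * weights[d] for d in domain_means) / total_weight

# Illustrative survey: two hypothetical domains scored on a 1-4 Likert scale.
responses = {
    "nurse_communication": [rescale(r, 1, 4) for r in (4, 3, 4)],
    "physician_communication": [rescale(r, 1, 4) for r in (3, 3, 4)],
}
weights = {"nurse_communication": 0.5, "physician_communication": 0.5}
rsr = raw_satisfaction_rating(responses, weights)  # continuous, 0 to 1
```

The key property mirrored here is that the RSR is a continuous score bounded by 0 and 1, computed per survey before any between‐hospital adjustment.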
Statistical Analysis
All analyses were performed in aggregate and by service (medical vs surgical). Categorical variables were summarized using frequencies with percentages. Comparisons across levels of categorical variables were performed with the χ2 test. We report bivariate associations between the independent variables and RSRs in the top decile using unadjusted odds ratios (ORs) with 95% confidence intervals (CIs). Multivariable logistic regression was used for adjusted analyses. For the severity of illness and resource intensity variables, the groups with the lowest illness severity and lowest resource use served as the reference groups. Patients with and without an ICU encounter were modeled separately.
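As a concrete illustration of the unadjusted analysis, an OR and a Wald 95% CI can be computed directly from a 2 × 2 table. The sketch below uses the ICU‐encounter counts reported in Table 1 and reproduces the corresponding estimate in Table 2:

```python
import math

def unadjusted_odds_ratio(a, b, c, d):
    """OR and Wald 95% CI from a 2x2 table:
    a = exposed & top-decile RSR,   b = exposed & <90th percentile,
    c = unexposed & top-decile RSR, d = unexposed & <90th percentile."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Table 1 counts: no ICU encounter (exposed) 681 top decile / 6,441 below;
# ICU encounter (reference) 219 top decile / 1,348 below.
or_, (lo, hi) = unadjusted_odds_ratio(681, 6441, 219, 1348)
# ≈ 0.65 (0.55-0.77), matching the "No ICU encounter" row of Table 2
```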
Charges, number of unique attendings encountered, and length of stay were highly correlated and likely represent measures of the same underlying construct of resource intensity; they therefore could not be entered into our models simultaneously. We combined them into a resource intensity score using factor analysis with a varimax rotation, and extracted factor scores for a single factor (supported by a scree plot). We then placed patients into 4 groups based on the distribution of the factor scores: low (<25th percentile), moderate (25th‐50th percentile), major (50th‐75th percentile), and extreme (>75th percentile).
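A simplified sketch of this grouping is shown below on synthetic data. Because only a single factor was extracted (a varimax rotation of one factor is trivial), the first principal component is used here as a stand‐in for the factor score; the variable names and simulated values are illustrative assumptions, not the study data:

```python
import numpy as np

# Synthetic cohort: three correlated resource measures driven by one latent trait.
rng = np.random.default_rng(0)
n = 1000
latent = rng.normal(size=n)                      # unobserved "resource intensity"
charges = 10_000 + 4_000 * latent + rng.normal(scale=1_500, size=n)
attendings = 4 + 1.5 * latent + rng.normal(scale=0.8, size=n)
length_of_stay = 4 + 2 * latent + rng.normal(scale=1.0, size=n)

X = np.column_stack([charges, attendings, length_of_stay])
Z = (X - X.mean(axis=0)) / X.std(axis=0)         # standardize each measure

# First principal component as a proxy for the single extracted factor score.
_, _, vt = np.linalg.svd(Z, full_matrices=False)
scores = Z @ vt[0]
if np.corrcoef(scores, Z[:, 0])[0, 1] < 0:
    scores = -scores                             # fix arbitrary sign: higher = more use

# Quartile groups: low, moderate, major, extreme.
q25, q50, q75 = np.percentile(scores, [25, 50, 75])
labels = np.select(
    [scores < q25, scores < q50, scores < q75],
    ["low", "moderate", "major"],
    default="extreme",
)
```

The quartile cut points guarantee four groups of roughly equal size regardless of how skewed the underlying charges or lengths of stay are.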
We used the Charlson‐Deyo comorbidity score as our disease severity index.[12] The index uses ICD‐9 diagnoses with points assigned for the impact of each diagnosis on morbidity and the points summed to an overall score. This provides a measure of disease severity for a patient based on the number of diagnoses and relative mortality of the individual diagnoses. Scores were categorized as 0 (representing no major illness burden), 1 to 3, 4 to 6, and >6.
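The scoring and binning step can be illustrated as follows; the condition weights shown are a small illustrative subset of the published index, and the condition names are hypothetical labels rather than ICD‐9 codes:

```python
# Hedged sketch: summing Charlson-Deyo points for a patient's conditions and
# binning the total into the study's four severity groups. Only a few of the
# published weights are reproduced here, for illustration.
CHARLSON_WEIGHTS = {
    "congestive_heart_failure": 1,
    "diabetes_uncomplicated": 1,
    "moderate_severe_renal_disease": 2,
    "metastatic_solid_tumor": 6,
}

def charlson_score(conditions):
    """Sum the point weights for a patient's documented conditions."""
    return sum(CHARLSON_WEIGHTS[c] for c in conditions)

def severity_group(score):
    """Bin the summed score into the study's categories."""
    if score == 0:
        return "0"      # no major illness burden
    if score <= 3:
        return "1-3"
    if score <= 6:
        return "4-6"
    return ">6"

# Example: heart failure (1) + renal disease (2) = 3 -> "1-3"
group = severity_group(
    charlson_score(["congestive_heart_failure", "moderate_severe_renal_disease"])
)
```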
All statistical analyses were performed using SAS version 9.4 (SAS Institute, Cary, NC), and P values <0.05 were considered statistically significant. This study was approved by the institutional review board at the University of Rochester Medical Center.
RESULTS
Our initial search identified 10,007 returned surveys (29% of eligible patients returned surveys during the study period). Of these, 5,059 (51%) were categorized as medical, 3,630 (36%) as surgical, and 1,317 (13%) as obstetrics/gynecology. One survey did not have the service of the discharging physician recorded and was excluded. Cohort demographics and relationship to RSRs in the top decile for the 8,689 medical and surgical patients can be found in Table 1. The most common discharge diagnosis‐related groups (DRGs) for medical patients were 247, percutaneous cardiovascular procedure with drug‐eluting stent without major complications or comorbidities (MCC) (3.8%); 871, septicemia or severe sepsis without mechanical ventilation >96 hours with MCC (2.7%); and 392, esophagitis, gastroenteritis, and miscellaneous digestive disorders without MCC (2.3%). The most common DRGs for surgical patients were 460, spinal fusion except cervical without MCC (3.5%); 328, stomach, esophageal and duodenal procedure without complication or comorbidities or MCC (3.3%); and 491, back and neck procedure excluding spinal fusion without complication or comorbidities or MCC (3.1%).
| | Overall | | | | Medical | | | | Surgical | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | Total | <90th | Top Decile | P | Total | <90th | Top Decile | P | Total | <90th | Top Decile | P |
| Overall | 8,689 | 7,789 (90) | 900 (10) | | 5,059 | 4,646 (92) | 413 (8) | | 3,630 | 3,143 (87) | 487 (13) | |
| Age, y | | | | | | | | | | | | |
| <30 | 419 (5) | 371 (89) | 48 (12) | <0.001 | 218 (4) | 208 (95) | 10 (5) | <0.001 | 201 (6) | 163 (81) | 38 (19) | <0.001 |
| 30‐49 | 1,029 (12) | 902 (88) | 127 (12) | | 533 (11) | 482 (90) | 51 (10) | | 496 (14) | 420 (85) | 76 (15) | |
| 50‐69 | 3,911 (45) | 3,450 (88) | 461 (12) | | 2,136 (42) | 1,930 (90) | 206 (10) | | 1,775 (49) | 1,520 (86) | 255 (14) | |
| >69 | 3,330 (38) | 3,066 (92) | 264 (8) | | 2,172 (43) | 2,026 (93) | 146 (7) | | 1,158 (32) | 1,040 (90) | 118 (10) | |
| Gender | | | | | | | | | | | | |
| Male | 4,640 (53) | 4,142 (89) | 498 (11) | 0.220 | 2,596 (51) | 2,379 (92) | 217 (8) | 0.602 | 2,044 (56) | 1,763 (86) | 281 (14) | 0.506 |
| Female | 4,049 (47) | 3,647 (90) | 402 (10) | | 2,463 (49) | 2,267 (92) | 196 (8) | | 1,586 (44) | 1,380 (87) | 206 (13) | |
| ICU encounter | | | | | | | | | | | | |
| No | 7,122 (82) | 6,441 (90) | 681 (10) | <0.001 | 4,547 (90) | 4,193 (92) | 354 (8) | <0.001 | 2,575 (71) | 2,248 (87) | 327 (13) | 0.048 |
| Yes | 1,567 (18) | 1,348 (86) | 219 (14) | | 512 (10) | 453 (89) | 59 (12) | | 1,055 (29) | 895 (85) | 160 (15) | |
| Payer | | | | | | | | | | | | |
| Public | 5,564 (64) | 5,036 (91) | 528 (10) | <0.001 | 3,424 (68) | 3,161 (92) | 263 (8) | 0.163 | 2,140 (59) | 1,875 (88) | 265 (12) | 0.148 |
| Private | 3,064 (35) | 2,702 (88) | 362 (12) | | 1,603 (32) | 1,458 (91) | 145 (9) | | 1,461 (40) | 1,244 (85) | 217 (15) | |
| Charity | 45 (1) | 37 (82) | 8 (18) | | 25 (1) | 21 (84) | 4 (16) | | 20 (1) | 16 (80) | 4 (20) | |
| Self | 16 (0) | 14 (88) | 2 (13) | | 7 (0) | 6 (86) | 1 (14) | | 9 (0) | 8 (89) | 1 (11) | |
| Length of stay, d | | | | | | | | | | | | |
| <3 | 3,156 (36) | 2,930 (93) | 226 (7) | <0.001 | 1,961 (39) | 1,865 (95) | 96 (5) | <0.001 | 1,195 (33) | 1,065 (89) | 130 (11) | <0.001 |
| 3‐6 | 3,330 (38) | 2,959 (89) | 371 (11) | | 1,867 (37) | 1,702 (91) | 165 (9) | | 1,463 (40) | 1,257 (86) | 206 (14) | |
| >6 | 2,203 (25) | 1,900 (86) | 303 (14) | | 1,231 (24) | 1,079 (88) | 152 (12) | | 972 (27) | 821 (85) | 151 (16) | |
| No. of attendings | | | | | | | | | | | | |
| <4 | 3,959 (46) | 3,615 (91) | 344 (9) | <0.001 | 2,307 (46) | 2,160 (94) | 147 (6) | <0.001 | 1,652 (46) | 1,455 (88) | 197 (12) | 0.052 |
| 4‐6 | 3,067 (35) | 2,711 (88) | 356 (12) | | 1,836 (36) | 1,663 (91) | 173 (9) | | 1,231 (34) | 1,048 (85) | 183 (15) | |
| >6 | 1,663 (19) | 1,463 (88) | 200 (12) | | 916 (18) | 823 (90) | 93 (10) | | 747 (21) | 640 (86) | 107 (14) | |
| Severity index* | | | | | | | | | | | | |
| 0 (lowest) | 2,812 (32) | 2,505 (89) | 307 (11) | 0.272 | 1,273 (25) | 1,185 (93) | 88 (7) | 0.045 | 1,539 (42) | 1,320 (86) | 219 (14) | 0.261 |
| 1‐3 | 4,253 (49) | 3,827 (90) | 426 (10) | | 2,604 (52) | 2,395 (92) | 209 (8) | | 1,649 (45) | 1,432 (87) | 217 (13) | |
| 4‐6 | 1,163 (13) | 1,052 (91) | 111 (10) | | 849 (17) | 770 (91) | 79 (9) | | 314 (9) | 282 (90) | 32 (10) | |
| >6 (highest) | 461 (5) | 405 (88) | 56 (12) | | 333 (7) | 296 (89) | 37 (11) | | 128 (4) | 109 (85) | 19 (15) | |
| Charges | | | | | | | | | | | | |
| Low | 1,820 (21) | 1,707 (94) | 113 (6) | <0.001 | 1,426 (28) | 1,357 (95) | 69 (5) | <0.001 | 394 (11) | 350 (89) | 44 (11) | 0.007 |
| Medium | 5,094 (59) | 4,581 (90) | 513 (10) | | 2,807 (56) | 2,582 (92) | 225 (8) | | 2,287 (63) | 1,999 (87) | 288 (13) | |
| High | 1,775 (20) | 1,501 (85) | 274 (15) | | 826 (16) | 707 (86) | 119 (14) | | 949 (26) | 794 (84) | 155 (16) | |
Unadjusted analysis of medical and surgical patients identified significant associations of several variables with a top decile RSR (Table 2). Patients with longer lengths of stay (OR: 2.07, 95% CI: 1.72‐2.48), more attendings (OR: 1.44, 95% CI: 1.19‐1.73), and higher hospital charges (OR: 2.76, 95% CI: 2.19‐3.47) were more likely to report an RSR in the top decile. Patients without an ICU encounter (OR: 0.65, 95% CI: 0.55‐0.77) and on a medical service (OR: 0.57, 95% CI: 0.5‐0.66) were less likely to report an RSR in the top decile. Several associations were identified in only the medical or surgical cohorts. In the medical cohort, patients with the highest illness severity index (OR: 1.68, 95% CI: 1.12‐2.52) and with more than 6 attending physicians (OR: 1.66, 95% CI: 1.27‐2.18) were more likely to report RSRs in the top decile. In the surgical cohort, patients <30 years of age (OR: 2.05, 95% CI: 1.38‐3.07) were more likely to report an RSR in the top decile than patients >69 years of age. Insurance payer category and gender were not significantly associated with top decile RSRs.
| | Overall | | Medical | | Surgical | |
|---|---|---|---|---|---|---|
| | Odds Ratio (95% CI) | P | Odds Ratio (95% CI) | P | Odds Ratio (95% CI) | P |
| Age, y | | | | | | |
| <30 | 1.5 (1.08‐2.08) | 0.014 | 0.67 (0.35‐1.29) | 0.227 | 2.05 (1.38‐3.07) | <0.001 |
| 30‐49 | 1.64 (1.31‐2.05) | <0.001 | 1.47 (1.05‐2.05) | 0.024 | 1.59 (1.17‐2.17) | 0.003 |
| 50‐69 | 1.55 (1.32‐1.82) | <0.001 | 1.48 (1.19‐1.85) | 0.001 | 1.48 (1.17‐1.86) | 0.001 |
| >69 | Ref | | Ref | | Ref | |
| Gender | | | | | | |
| Male | 1.09 (0.95‐1.25) | 0.220 | 1.06 (0.86‐1.29) | 0.602 | 1.07 (0.88‐1.3) | 0.506 |
| Female | Ref | | Ref | | Ref | |
| ICU encounter | | | | | | |
| No | 0.65 (0.55‐0.77) | <0.001 | 0.65 (0.48‐0.87) | 0.004 | 0.81 (0.66‐1) | 0.048 |
| Yes | Ref | | Ref | | Ref | |
| Payer | | | | | | |
| Public | 0.73 (0.17‐3.24) | 0.683 | 0.5 (0.06‐4.16) | 0.521 | 1.13 (0.14‐9.08) | 0.908 |
| Private | 0.94 (0.21‐4.14) | 0.933 | 0.6 (0.07‐4.99) | 0.634 | 1.4 (0.17‐11.21) | 0.754 |
| Charity | 1.51 (0.29‐8.02) | 0.626 | 1.14 (0.11‐12.25) | 0.912 | 2 (0.19‐20.97) | 0.563 |
| Self | Ref | | Ref | | Ref | |
| Length of stay, d | | | | | | |
| <3 | Ref | | Ref | | Ref | |
| 3‐6 | 1.63 (1.37‐1.93) | <0.001 | 1.88 (1.45‐2.44) | <0.001 | 1.34 (1.06‐1.7) | 0.014 |
| >6 | 2.07 (1.72‐2.48) | <0.001 | 2.74 (2.1‐3.57) | <0.001 | 1.51 (1.17‐1.94) | 0.001 |
| No. of attendings | | | | | | |
| <4 | Ref | | Ref | | Ref | |
| 4‐6 | 1.38 (1.18‐1.61) | <0.001 | 1.53 (1.22‐1.92) | <0.001 | 1.29 (1.04‐1.6) | 0.021 |
| >6 | 1.44 (1.19‐1.73) | <0.001 | 1.66 (1.27‐2.18) | <0.001 | 1.23 (0.96‐1.59) | 0.102 |
| Severity index* | | | | | | |
| 0 (lowest) | Ref | | Ref | | Ref | |
| 1‐3 | 0.91 (0.78‐1.06) | 0.224 | 1.18 (0.91‐1.52) | 0.221 | 0.91 (0.75‐1.12) | 0.380 |
| 4‐6 | 0.86 (0.68‐1.08) | 0.200 | 1.38 (1.01‐1.9) | 0.046 | 0.68 (0.46‐1.01) | 0.058 |
| >6 (highest) | 1.13 (0.83‐1.53) | 0.436 | 1.68 (1.12‐2.52) | 0.012 | 1.05 (0.63‐1.75) | 0.849 |
| Charges | | | | | | |
| Low | Ref | | Ref | | Ref | |
| Medium | 1.69 (1.37‐2.09) | <0.001 | 1.71 (1.3‐2.26) | <0.001 | 1.15 (0.82‐1.61) | 0.428 |
| High | 2.76 (2.19‐3.47) | <0.001 | 3.31 (2.43‐4.51) | <0.001 | 1.55 (1.09‐2.22) | 0.016 |
| Service | | | | | | |
| Medical | 0.57 (0.5‐0.66) | <0.001 | | | | |
| Surgical | Ref | | | | | |
Multivariable modeling (Table 3) for all patients without an ICU encounter suggested that (1) patients aged <30 years, 30 to 49 years, and 50 to 69 years were more likely to report top decile RSRs when compared to patients 70 years and older (OR: 1.61, 95% CI: 1.09‐2.36; OR: 1.44, 95% CI: 1.08‐1.93; and OR: 1.39, 95% CI: 1.13‐1.71, respectively) and (2) when compared to patients with low resource intensity scores, patients with higher resource intensity scores were more likely to report top decile RSRs (moderate [OR: 1.42, 95% CI: 1.11‐1.83], major [OR: 1.56, 95% CI: 1.22‐2.01], and extreme [OR: 2.29, 95% CI: 1.8‐2.92]). These results were relatively consistent within medical and surgical subgroups (Table 3).
| | Overall | | Medical | | Surgical | |
|---|---|---|---|---|---|---|
| | Odds Ratio (95% CI) | P | Odds Ratio (95% CI) | P | Odds Ratio (95% CI) | P |
| Age, y | | | | | | |
| <30 | 1.61 (1.09‐2.36) | 0.016 | 0.82 (0.4‐1.7) | 0.596 | 2.31 (1.39‐3.82) | 0.001 |
| 30‐49 | 1.44 (1.08‐1.93) | 0.014 | 1.55 (1.03‐2.32) | 0.034 | 1.41 (0.91‐2.17) | 0.120 |
| 50‐69 | 1.39 (1.13‐1.71) | 0.002 | 1.44 (1.1‐1.88) | 0.008 | 1.39 (1‐1.93) | 0.049 |
| >69 | Ref | | Ref | | Ref | |
| Sex | | | | | | |
| Male | 1 (0.85‐1.17) | 0.964 | 1 (0.8‐1.25) | 0.975 | 0.99 (0.79‐1.26) | 0.965 |
| Female | Ref | | Ref | | Ref | |
| Payer | | | | | | |
| Public | 0.62 (0.14‐2.8) | 0.531 | 0.42 (0.05‐3.67) | 0.432 | 1.03 (0.12‐8.59) | 0.978 |
| Private | 0.67 (0.15‐3.02) | 0.599 | 0.42 (0.05‐3.67) | 0.434 | 1.17 (0.14‐9.69) | 0.884 |
| Charity | 1.54 (0.28‐8.41) | 0.620 | 1 (0.09‐11.13) | 0.999 | 2.56 (0.23‐28.25) | 0.444 |
| Self | Ref | | Ref | | Ref | |
| Severity index | | | | | | |
| 0 (lowest) | Ref | | Ref | | Ref | |
| 1‐3 | 1.07 (0.89‐1.29) | 0.485 | 1.18 (0.88‐1.58) | 0.267 | 1 (0.78‐1.29) | 0.986 |
| 4‐6 | 1.14 (0.86‐1.51) | 0.377 | 1.42 (0.99‐2.04) | 0.056 | 0.6 (0.33‐1.1) | 0.100 |
| >6 (highest) | 1.31 (0.91‐1.9) | 0.150 | 1.47 (0.93‐2.33) | 0.097 | 1.1 (0.54‐2.21) | 0.795 |
| Resource intensity score | | | | | | |
| Low | Ref | | Ref | | Ref | |
| Moderate | 1.42 (1.11‐1.83) | 0.006 | 1.6 (1.11‐2.3) | 0.011 | 0.94 (0.66‐1.34) | 0.722 |
| Major | 1.56 (1.22‐2.01) | 0.001 | 1.69 (1.18‐2.43) | 0.004 | 1.28 (0.91‐1.8) | 0.151 |
| Extreme | 2.29 (1.8‐2.92) | <0.001 | 2.72 (1.94‐3.82) | <0.001 | 1.63 (1.17‐2.26) | 0.004 |
| Service | | | | | | |
| Medical | 0.59 (0.5‐0.69) | <0.001 | | | | |
| Surgical | Ref | | | | | |
In those with at least 1 ICU attending encounter (see Supporting Table 1 in the online version of this article), no variables demonstrated significant association with top decile RSRs in the overall group or in the medical subgroup. For surgical patients with at least 1 ICU attending encounter (see Supporting Table 1 in the online version of this article), patients aged 30 to 49 and 50 to 69 years were more likely to provide top decile RSRs (OR: 1.93, 95% CI: 1.08‐3.46 and OR: 1.65, 95% CI 1.07‐2.53, respectively). Resource intensity was not significantly associated with top decile RSRs.
DISCUSSION
Our analysis suggests that, for patients on the general care floors, resource utilization is associated with the RSR and, therefore, potentially the CMS Summary Star Rating. Adjusting for severity of illness, patients with higher resource utilization were more likely to report top decile RSRs.
Prior data regarding utilization and satisfaction are mixed. In a 2‐year, prospective, national examination, patients in the highest quartile of patient satisfaction had increased healthcare and prescription drug expenditures as well as increased rates of hospitalization when compared with patients in the lowest quartile of patient satisfaction.[9] However, a recent national study of surgical administrative databases suggested hospitals with high patient satisfaction provided more efficient care.[13]
One reason for the conflicting data may be that large, national evaluations are unable to control for between‐hospital confounders (ie, hospital quality of care). By capturing all eligible returned surveys at 1 institution, our design allowed us to collect granular data. We found that, within a single hospital (ie, with the same setting, patient population, facilities, and food services), patients receiving more clinical resources generally assigned higher ratings than patients receiving fewer.
It is possible that utilization is a proxy for serious illness, and that patients with serious illness receive more attention during hospitalization and are more satisfied when discharged in a good state of health. However, we did adjust for severity of illness in our model using the Charlson‐Deyo index and we suggest that, other factors being equal, hospitals with higher per‐patient expenditures may be assigned higher Summary Star Ratings.
CMS has recently implemented a number of metrics designed to decrease healthcare costs by improving quality, safety, and efficiency. Concurrently, CMS has also prioritized patient experience. The Summary Star Rating was created to provide healthcare consumers with an easy way to compare the patient experience between hospitals[4]; however, our data suggest that this metric may be at odds with inpatient cost savings and efficiency metrics.
Per‐patient spending becomes particularly salient when considering that in fiscal year 2016, CMS' hospital VBP reimbursement will include 2 metrics: an efficiency outcome measure labeled Medicare spending per beneficiary, and a patient experience outcome measure based on HCAHPS survey dimensions.[2] Together, these 2 metrics will comprise nearly half of the total VBP performance score used to determine reimbursement. Although our data suggest that these 2 VBP metrics may be correlated, it should be noted that we measured inpatient hospital charges, whereas the CMS efficiency outcome measure includes costs for an episode of care spanning 3 days prior to hospitalization to 30 days after hospitalization.
Patient expectations likely play a role in satisfaction.[14, 15, 16] In an outpatient setting, physician fulfillment of patient requests has been associated with positive patient evaluations of care.[17] However, patients appear to value education, shared decision making, and provider empathy more than testing and intervention.[14, 18, 19, 20, 21, 22, 23] Perhaps, in the absence of the former attributes, patients use additional resource expenditure as a proxy.
It is not clear that higher resource expenditure improves outcomes. A landmark study of nearly 1 million Medicare enrollees by Fisher et al. suggested that, although Medicare patients in higher‐spending regions receive more care than those in lower‐spending regions, this does not result in better health outcomes, specifically with regard to mortality.[24, 25] Patients who live in areas of high hospital capacity use the hospital more frequently than do patients in areas of low hospital capacity, but this does not appear to result in improved mortality rates.[26] In fact, physicians in areas of high healthcare capacity report more difficulty maintaining high‐quality patient relationships and feel less able to provide high‐quality care than physicians in lower‐capacity areas.[27]
We hypothesize the cause of the association between resource utilization and patient satisfaction could be that patients (1) perceive that a doctor who allows them to stay longer in the hospital or who performs additional testing cares more about their well‐being and (2) that these patients feel more strongly that their concerns are being heard and addressed by their physicians. A systematic review of primary care patients identified many studies that found a positive association between meeting patient expectations and satisfaction with care, but also suggested that although patients frequently expect information, physicians misperceive this as an expectation of specific action.[28] A separate systematic review found that patient education in the form of decision aides can help patients develop more reasonable expectations and reduce utilization of certain discretionary procedures such as elective surgeries and prostate‐specific antigen testing.[29]
We did not specifically address clinical outcomes in our analysis because the clinical outcomes on which CMS currently adjusts VBP reimbursement focus on 30‐day mortality for specific diagnoses, nosocomial infections, and iatrogenic events.[30] Our data include only returned surveys from living patients, and it is likely that 30‐day mortality was similar throughout all subsets of patients. Additionally, the nosocomial and iatrogenic outcome measures used by CMS are sufficiently rare on the general floors that they are unlikely to have significantly influenced our results.[31]
Our study has several strengths. Nearly all medical and surgical patient surveys returned during the study period were included, and therefore our calculations are likely to accurately reflect the Summary Star Rating that would have been assigned for the period. Second, the large sample size helps attenuate potential differences in commonly used outcome metrics. Third, by adjusting for a variety of demographic and clinical variables, we were able to decrease the likelihood of unidentified confounders.
Notably, we identified 38 (0.4%) surveys returned for patients under 18 years of age at admission. These surveys were included in our analysis because, to the best of our knowledge, they would have existed in the pool of surveys CMS could have used to assign a Summary Star Rating.
Our study also has limitations. First, geographically diverse data are needed to ensure generalizability. Second, we used the Charlson‐Deyo Comorbidity Index to describe the degree of illness for each patient. This index represents a patient's total illness burden but may not describe the relative severity of the patient's current illness relative to another patient. Third, we selected variables we felt were most likely to be associated with patient experience, but unidentified confounding remains possible. Fourth, attendings caring for ICU patients fall within the Division of Critical Care/Pulmonary Medicine. Therefore, we may have inadvertently placed patients into the ICU cohort who received a pulmonary/critical care consult on the general floors. Fifth, our data describe associations only for patients who returned surveys. Although there may be inherent biases in patients who return surveys, HCAHPS survey responses are used by CMS to determine a hospital's overall satisfaction score.
CONCLUSION
For patients who return HCAHPS surveys, resource utilization may be positively associated with a hospital's Summary Star Rating. These data suggest that hospitals with higher per‐patient expenditures may receive higher Summary Star Ratings, which could result in hospitals with higher per‐patient resource utilization appearing more attractive to healthcare consumers. Future studies should attempt to confirm our findings at other institutions and to determine causative factors.
Acknowledgements
The authors thank Jason Machan, PhD (Department of Orthopedics and Surgery, Warren Alpert Medical School, Brown University, Providence, Rhode Island) for his help with study design, and Ms. Brenda Foster (data analyst, University of Rochester Medical Center, Rochester, NY) for her help with data collection.
Disclosures: Nothing to report.
- Redesigning physician compensation and improving ED performance. Healthc Financ Manage. 2011;65(6):114–117.
- QualityNet. Available at: https://www.qualitynet.org/dcs/ContentServer?c=Page.
- Factors determining inpatient satisfaction with care. Soc Sci Med. 2002;54(4):493–504.
- Patient satisfaction revisited: a multilevel approach. Soc Sci Med. 2009;69(1):68–75.
- Predictors of patient satisfaction with hospital health care. BMC Health Serv Res. 2006;6:102.
- The cost of satisfaction: a national study of patient satisfaction, health care utilization, expenditures, and mortality. Arch Intern Med. 2012;172(5):405–411.
- Relationship between patient satisfaction with inpatient care and hospital readmission within 30 days. Am J Manag Care. 2011;17(1):41–48.
- Becker's Infection Control and Clinical Quality. Star Ratings go live on Hospital Compare: how many hospitals got 5 stars? Available at: http://www.beckershospitalreview.com/quality/star‐ratings‐go‐live‐on‐hospital‐compare‐how‐many‐hospitals‐got‐5‐stars.html. Published April 16, 2015. Accessed October 5, 2015.
- Adapting a clinical comorbidity index for use with ICD‐9‐CM administrative databases. J Clin Epidemiol. 1992;45(6):613–619.
- Patient satisfaction and quality of surgical care in US hospitals. Ann Surg. 2015;261(1):2–8.
- Should health care providers be accountable for patients' care experiences? J Gen Intern Med. 2015;30(2):253–256.
- Unmet expectations for care and the patient‐physician relationship. J Gen Intern Med. 2002;17(11):817–824.
- Do unmet expectations for specific tests, referrals, and new medications reduce patients' satisfaction? J Gen Intern Med. 2004;19(11):1080–1087.
- Request fulfillment in office practice: antecedents and relationship to outcomes. Med Care. 2002;40(1):38–51.
- Factors associated with patient satisfaction with care among dermatological outpatients. Br J Dermatol. 2001;145(4):617–623.
- Patient expectations of emergency department care: phase II—a cross‐sectional survey. CJEM. 2006;8(3):148–157.
- Patients' perspectives on ideal physician behaviors. Mayo Clin Proc. 2006;81(3):338–344.
- What do people want from their health care? A qualitative study. J Participat Med. 2015;18:e10.
- Evaluations of care by adults following a denial of an advertisement‐related prescription drug request: the role of expectations, symptom severity, and physician communication style. Soc Sci Med. 2006;62(4):888–899.
- Getting to "no": strategies primary care physicians use to deny patient requests. Arch Intern Med. 2010;170(4):381–388.
- The implications of regional variations in Medicare spending. Part 1: the content, quality, and accessibility of care. Ann Intern Med. 2003;138(4):273–287.
- The implications of regional variations in Medicare spending. Part 2: health outcomes and satisfaction with care. Ann Intern Med. 2003;138(4):288–298.
- Associations among hospital capacity, utilization, and mortality of US Medicare beneficiaries, controlling for sociodemographic factors. Health Serv Res. 2000;34(6):1351–1362.
- Regional variations in health care intensity and physician perceptions of quality of care. Ann Intern Med. 2006;144(9):641–649.
- Visit‐specific expectations and patient‐centered outcomes: a literature review. Arch Fam Med. 2000;9(10):1148–1155.
- Decision aids for people facing health treatment or screening decisions. Cochrane Database Syst Rev. 2014;1:CD001431.
- Centers for Medicare and Medicaid Services. Hospital Compare. Outcome domain. Available at: https://www.medicare.gov/hospitalcompare/data/outcome‐domain.html. Accessed October 5, 2015.
- Centers for Disease Control and Prevention. 2013 national and state healthcare‐associated infections progress report. Available at: www.cdc.gov/hai/progress‐report/index.html. Accessed October 5, 2015.
The patient experience has become increasingly important to healthcare in the United States. It is now a metric used commonly to determine physician compensation and accounts for nearly 30% of the Centers for Medicare and Medicaid Services' (CMS) Value‐Based Purchasing (VBP) reimbursement for fiscal years 2015 and 2016.[1, 2]
In April 2015, CMS added a 5‐star patient experience score to its Hospital Compare website in an attempt to address the Affordable Care Act's call for transparent and easily understandable public reporting.[3] A hospital's principal score is the Summary Star Rating, which is based on responses to the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey. The formulas used to calculate Summary Star Ratings have been reported by CMS.[4]
Studies published over the past decade suggest that gender, age, education level, length of hospital stay, travel distance, and other factors may influence patient satisfaction.[5, 6, 7, 8] One study utilizing a national dataset suggested that higher patient satisfaction was associated with greater inpatient healthcare utilization and higher healthcare expenditures.[9] It is therefore possible that emphasizing patient experience scores could adversely impact healthcare resource utilization. However, positive patient experience may also be an important independent dimension of quality for patients and correlate with improved clinical outcomes.[10]
We know of no literature describing patient factors associated with the Summary Star Rating. Given that this rating is now used as a standard metric by which patient experience can be compared across more than 3,500 hospitals,[11] data describing the association between patient‐level factors and the Summary Star Rating may provide hospitals with an opportunity to target improvement efforts. We aimed to determine the degree to which resource utilization is associated with a satisfaction score based on the Summary Star Rating methodology.
METHODS
The study was conducted at the University of Rochester Medical Center (URMC), an 830‐bed tertiary care center in upstate New York. This was a retrospective review of all HCAHPS surveys returned to URMC over a 27‐month period from January 1, 2012 to April 1, 2014. URMC follows the standard CMS process for determining which patients receive surveys. During the study timeframe, HCAHPS surveys were mailed to patients 18 years of age and older who had an inpatient stay spanning at least 1 midnight. Surveys were mailed within 5 days of discharge and were generally returned within 6 weeks. URMC did not utilize telephone or email surveys during the study period. Surveys were not sent to patients who (1) were transferred to another facility, (2) were discharged to hospice, (3) died during the hospitalization, (4) received psychiatric or rehabilitative services during the hospitalization, (5) had an international address, and/or (6) were prisoners.
The survey vendor (Press Ganey, South Bend, IN) for URMC provided raw data for returned surveys with patient answers to questions. Administrative and billing databases were used to add demographic and clinical data for the corresponding hospitalization to the dataset. These data included age, gender, payer status (public, private, self, charity), length of stay, number of attendings who saw the patient (based on encounters documented in the electronic medical record (EMR)), all discharge International Classification of Diseases, 9th Revision (ICD‐9) diagnoses for the hospitalization, total charges for the hospitalization, and intensive care unit (ICU) utilization as evidenced by a documented encounter with a member of the Division of Critical Care/Pulmonary Medicine.
CMS analyzes surveys within 1 of 3 clinical service categories (medical, surgical, or obstetrics/gynecology) based on the discharging service. To parallel this approach, each returned survey was placed into 1 of these categories based on the clinical service of the discharging physician. Patients in the obstetrics/gynecology category (n = 1317, 13%) will be examined in a future analysis, given inherent differences in patient characteristics that require evaluation of other variables.
Approximations of CMS Summary Star Rating
The HCAHPS survey is a multiple‐choice questionnaire that includes several domains of patient satisfaction. Respondents are asked to rate areas of satisfaction with their hospital experience on a Likert scale. CMS uses a weighted average of Likert responses to a subset of HCAHPS questions to calculate a hospital's raw score in 11 domains, as well as an overall raw summary score. CMS then adjusts each raw score for differences between hospitals (eg, clustering, improvement over time, method of survey) to determine a hospital's star rating in each domain and an overall Summary Star Rating (the Summary Star Rating is the primary factor by which consumers can compare hospitals).[4] Because our data were from a single hospital system, the between‐hospital scoring adjustments utilized by CMS were not applicable. Instead, we calculated the raw scores exactly as CMS does prior to the adjustments. Thus, our scores reflect the scores that CMS would have given URMC during the study period prior to standardized adjustments; we refer to this as the raw satisfaction rating (RSR). We calculated an RSR for every eligible survey. The RSR was calculated as a continuous variable from 0 (lowest) to 1 (highest). Detailed explanation of our RSR calculation is available in the Supporting Information in the online version of this article.
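The per‐survey score construction can be sketched as follows. This is a minimal illustration only: the actual question subset, response scales, and weights are defined by CMS and detailed in the Supporting Information, so the question names and weights below are hypothetical placeholders.

```python
# Minimal sketch of a per-survey raw satisfaction rating (RSR): a weighted
# average of Likert responses rescaled to [0, 1]. The question names and
# weights below are hypothetical placeholders, not the CMS definitions.

def rsr(responses, weights):
    """responses: question -> Likert answer (here 1 = worst .. 4 = best).
    weights: question -> relative weight. Returns a score from 0 to 1."""
    total_weight = sum(weights[q] for q in responses)
    # Rescale each 1..4 answer to the 0..1 interval, then weight and average.
    weighted_sum = sum(weights[q] * (responses[q] - 1) / 3 for q in responses)
    return weighted_sum / total_weight

answers = {"nurse_communication": 4, "doctor_communication": 3, "quietness": 2}
wts = {"nurse_communication": 1.0, "doctor_communication": 1.0, "quietness": 0.5}
score = rsr(answers, wts)  # a value strictly between 0 and 1 here
```

A survey answering every question at the top of the scale scores 1, and one answering every question at the bottom scores 0, matching the 0-to-1 range described above.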
Statistical Analysis
All analyses were performed in aggregate and by service (medical vs surgical). Categorical variables were summarized using frequencies with percentages. Comparisons across levels of categorical variables were performed with the χ2 test. We report bivariate associations between the independent variables and RSRs in the top decile using unadjusted odds ratios (ORs) with 95% confidence intervals (CIs). Multivariable logistic regression was used for adjusted analyses. For the variables of severity of illness and resource intensity, the groups with the lowest illness severity and lowest resource use served as the reference groups. We modeled patients without an ICU encounter and with an ICU encounter separately.
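For a single 2×2 comparison (eg, top‐decile RSR vs below, across two patient groups), the Pearson χ2 statistic can be computed by hand. The sketch below is a plain‐Python illustration of that statistic, not the SAS procedure actually used in the study.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    observed = [[a, b], [c, d]]
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    stat = 0.0
    for i in range(2):
        for j in range(2):
            # Expected cell count under independence: row total * col total / n
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed[i][j] - expected) ** 2 / expected
    return stat
```

For a 2×2 table this reduces to the familiar shortcut n(ad − bc)² / [(a+b)(c+d)(a+c)(b+d)], which is a convenient way to verify the implementation.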
Charges, number of unique attendings encountered, and length of stay were highly correlated, likely representing measures of the same underlying construct of resource intensity, and therefore could not be entered into our models simultaneously. We combined these into a resource intensity score using factor analysis with a varimax rotation and extracted factor scores for a single factor (supported by a scree plot). We then placed patients into 4 groups based on the distribution of the factor scores: low (<25th percentile), moderate (25th‐50th percentile), major (50th‐75th percentile), and extreme (>75th percentile).
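This construction can be illustrated on simulated data. The sketch below approximates the one‐factor solution with the first principal component of the standardized measures (with a single factor, a varimax rotation changes nothing); the simulated variables are hypothetical stand‐ins, not the study data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate three correlated utilization measures per patient: hypothetical
# stand-ins for charges, number of attendings, and length of stay, all
# driven by a shared latent "resource intensity".
n = 1000
latent = rng.normal(size=n)
charges = 3 * latent + rng.normal(size=n)
attendings = 2 * latent + rng.normal(size=n)
length_of_stay = latent + rng.normal(size=n)
X = np.column_stack([charges, attendings, length_of_stay])

# Standardize, then take the first principal component as a one-factor score.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
eigenvalues, eigenvectors = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
scores = Z @ eigenvectors[:, -1]  # eigh sorts ascending; last = largest

# Bin patients into the article's four quartile-based groups.
cuts = np.percentile(scores, [25, 50, 75])
group = np.digitize(scores, cuts)  # 0 = low, 1 = moderate, 2 = major, 3 = extreme
```

Because the cut points are the quartiles of the score distribution, the four groups each contain roughly a quarter of the patients, mirroring the grouping described above.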
We used the Charlson‐Deyo comorbidity score as our disease severity index.[12] The index uses ICD‐9 diagnoses with points assigned for the impact of each diagnosis on morbidity and the points summed to an overall score. This provides a measure of disease severity for a patient based on the number of diagnoses and relative mortality of the individual diagnoses. Scores were categorized as 0 (representing no major illness burden), 1 to 3, 4 to 6, and >6.
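The scoring and binning can be sketched as follows, using a small illustrative subset of diagnosis weights; the full Charlson‐Deyo index maps ICD‐9 codes to weighted comorbidity categories, and the entries below are examples rather than the complete mapping.

```python
# Illustrative subset of Charlson-Deyo comorbidity weights; the real index
# maps ICD-9 diagnosis codes to a fuller set of weighted comorbidity
# categories. These few entries are for demonstration only.
WEIGHTS = {
    "myocardial_infarction": 1,
    "congestive_heart_failure": 1,
    "diabetes": 1,
    "moderate_severe_renal_disease": 2,
    "metastatic_solid_tumor": 6,
}

def charlson_category(comorbidities):
    """Sum the weights of a patient's comorbidities, then bin the total
    into the article's categories: 0, 1-3, 4-6, >6."""
    score = sum(WEIGHTS.get(c, 0) for c in comorbidities)
    if score == 0:
        return "0"
    if score <= 3:
        return "1-3"
    if score <= 6:
        return "4-6"
    return ">6"
```

For example, a patient with renal disease, a prior myocardial infarction, and heart failure sums to 4 points and falls in the 4 to 6 category.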
All statistical analyses were performed using SAS version 9.4 (SAS Institute, Cary, NC), and P values <0.05 were considered statistically significant. This study was approved by the institutional review board at the University of Rochester Medical Center.
RESULTS
Our initial search identified 10,007 returned surveys (29% of eligible patients returned surveys during the study period). Of these, 5059 (51%) were categorized as medical, 3630 (36%) as surgical, and 1317 (13%) as obstetrics/gynecology. One survey did not have the service of the discharging physician recorded and was excluded. Cohort demographics and relationship to RSRs in the top decile for the 8689 medical and surgical patients can be found in Table 1. The most common discharge diagnosis‐related groups (DRGs) for medical patients were 247, percutaneous cardiovascular procedure with drug‐eluting stent without major complications or comorbidities (MCC) (3.8%); 871, septicemia or severe sepsis without mechanical ventilation >96 hours with MCC (2.7%); and 392, esophagitis, gastroenteritis, and miscellaneous digestive disorders with MCC (2.3%). The most common DRGs for surgical patients were 460, spinal fusion except cervical without MCC (3.5%); 328, stomach, esophageal and duodenal procedure without complication or comorbidities or MCC (3.3%); and 491, back and neck procedure excluding spinal fusion without complication or comorbidities or MCC (3.1%).
Overall | Medical | Surgical | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|
Total | <90th | Top Decile | P | Total | <90th | Top Decile | P | Total | <90th | Top Decile | P | |
| ||||||||||||
Overall | 8,689 | 7,789 (90) | 900 (10) | 5,059 | 4,646 (92) | 413 (8) | 3,630 | 3,143 (87) | 487 (13) | |||
Age, y | ||||||||||||
<30 | 419 (5) | 371 (89) | 48 (12) | <0.001 | 218 (4) | 208 (95) | 10 (5) | <0.001 | 201 (6) | 163 (81) | 38 (19) | <0.001 |
30‐49 | 1,029 (12) | 902 (88) | 127 (12) | 533 (11) | 482 (90) | 51 (10) | 496 (14) | 420 (85) | 76 (15) | | |
50‐69 | 3,911 (45) | 3,450 (88) | 461 (12) | 2,136 (42) | 1,930 (90) | 206 (10) | 1,775 (49) | 1,520 (86) | 255 (14) | | |
>69 | 3,330 (38) | 3,066 (92) | 264 (8) | 2,172 (43) | 2,026 (93) | 146 (7) | 1,158 (32) | 1,040 (90) | 118 (10) | |||
Gender | ||||||||||||
Male | 4,640 (53) | 4,142 (89) | 498 (11) | 0.220 | 2,596 (51) | 2,379 (92) | 217 (8) | 0.602 | 2,044 (56) | 1,763 (86) | 281 (14) | 0.506 |
Female | 4,049 (47) | 3,647 (90) | 402 (10) | 2,463 (49) | 2,267 (92) | 196 (8) | 1,586 (44) | 1,380 (87) | 206 (13) | |||
ICU encounter | ||||||||||||
No | 7,122 (82) | 6,441 (90) | 681 (10) | <0.001 | 4,547 (90) | 4,193 (92) | 354 (8) | <0.001 | 2,575 (71) | 2,248 (87) | 327 (13) | 0.048 |
Yes | 1,567 (18) | 1,348 (86) | 219 (14) | 512 (10) | 453 (89) | 59 (12) | 1,055 (29) | 895 (85) | 160 (15) | |||
Payer | ||||||||||||
Public | 5,564 (64) | 5,036 (91) | 528 (10) | <0.001 | 3,424 (68) | 3,161 (92) | 263 (8) | 0.163 | 2,140 (59) | 1,875 (88) | 265 (12) | 0.148 |
Private | 3,064 (35) | 2,702 (88) | 362 (12) | 1,603 (32) | 1,458 (91) | 145 (9) | 1,461 (40) | 1,244 (85) | 217 (15) | |||
Charity | 45 (1) | 37 (82) | 8 (18) | 25 (1) | 21 (84) | 4 (16) | 20 (1) | 16 (80) | 4 (20) | |||
Self | 16 (0) | 14 (88) | 2 (13) | 7 (0) | 6 (86) | 1 (14) | 9 (0) | 8 (89) | 1 (11) | |||
Length of stay, d | ||||||||||||
<3 | 3,156 (36) | 2,930 (93) | 226 (7) | <0.001 | 1,961 (39) | 1,865 (95) | 96 (5) | <0.001 | 1,195 (33) | 1,065 (89) | 130 (11) | <0.001 |
3‐6 | 3,330 (38) | 2,959 (89) | 371 (11) | 1,867 (37) | 1,702 (91) | 165 (9) | 1,463 (40) | 1,257 (86) | 206 (14) | | |
>6 | 2,203 (25) | 1,900 (86) | 303 (14) | 1,231 (24) | 1,079 (88) | 152 (12) | 972 (27) | 821 (85) | 151 (16) | |||
No. of attendings | ||||||||||||
<4 | 3,959 (46) | 3,615 (91) | 344 (9) | <0.001 | 2,307 (46) | 2,160 (94) | 147 (6) | <0.001 | 1,652 (46) | 1,455 (88) | 197 (12) | 0.052 |
4‐6 | 3,067 (35) | 2,711 (88) | 356 (12) | 1,836 (36) | 1,663 (91) | 173 (9) | 1,231 (34) | 1,048 (85) | 183 (15) | | |
>6 | 1,663 (19) | 1,463 (88) | 200 (12) | 916 (18) | 823 (90) | 93 (10) | 747 (21) | 640 (86) | 107 (14) | |||
Severity index* | ||||||||||||
0 (lowest) | 2,812 (32) | 2,505 (89) | 307 (11) | 0.272 | 1,273 (25) | 1,185 (93) | 88 (7) | 0.045 | 1,539 (42) | 1,320 (86) | 219 (14) | 0.261 |
1‐3 | 4,253 (49) | 3,827 (90) | 426 (10) | 2,604 (52) | 2,395 (92) | 209 (8) | 1,649 (45) | 1,432 (87) | 217 (13) | | |
4‐6 | 1,163 (13) | 1,052 (91) | 111 (10) | 849 (17) | 770 (91) | 79 (9) | 314 (9) | 282 (90) | 32 (10) | | |
>6 (highest) | 461 (5) | 405 (88) | 56 (12) | 333 (7) | 296 (89) | 37 (11) | 128 (4) | 109 (85) | 19 (15) | |||
Charges | | | | | | | | | | | |
Low | 1,820 (21) | 1,707 (94) | 113 (6) | <0.001 | 1,426 (28) | 1,357 (95) | 69 (5) | <0.001 | 394 (11) | 350 (89) | 44 (11) | 0.007 |
Medium | 5,094 (59) | 4,581 (90) | 513 (10) | 2,807 (56) | 2,582 (92) | 225 (8) | 2,287 (63) | 1,999 (87) | 288 (13) | |||
High | 1,775 (20) | 1,501 (85) | 274 (15) | 826 (16) | 707 (86) | 119 (14) | 949 (26) | 794 (84) | 155 (16) |
Unadjusted analysis of medical and surgical patients identified significant associations of several variables with a top decile RSR (Table 2). Patients with longer lengths of stay (OR: 2.07, 95% CI: 1.72‐2.48), more attendings (OR: 1.44, 95% CI: 1.19‐1.73), and higher hospital charges (OR: 2.76, 95% CI: 2.19‐3.47) were more likely to report an RSR in the top decile. Patients without an ICU encounter (OR: 0.65, 95% CI: 0.55‐0.77) and on a medical service (OR: 0.57, 95% CI: 0.5‐0.66) were less likely to report an RSR in the top decile. Several associations were identified in only the medical or surgical cohorts. In the medical cohort, patients with the highest illness severity index (OR: 1.68, 95% CI: 1.12‐2.52) and with >6 different attending physicians (OR: 1.66, 95% CI: 1.27‐2.18) were more likely to report RSRs in the top decile. In the surgical cohort, patients <30 years of age (OR: 2.05, 95% CI: 1.38‐3.07) were more likely to report an RSR in the top decile than patients >69 years of age. Insurance payer category and gender were not significantly associated with top decile RSRs.
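As a consistency check, the unadjusted odds ratios can be recovered from the Table 1 counts with the standard 2×2 log‐odds formula. The sketch below (plain Python, not the SAS code used in the study) reproduces the length‐of‐stay estimate.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a 95% CI for a 2x2 table, where a/b are
    top-decile / below-90th counts in the exposed group and c/d are the
    same counts in the reference group."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    low = math.exp(math.log(or_) - z * se_log_or)
    high = math.exp(math.log(or_) + z * se_log_or)
    return or_, low, high

# Length of stay >6 days (303 top decile, 1,900 below) vs <3 days
# (226 top decile, 2,930 below), counts taken from Table 1:
or_, low, high = odds_ratio_ci(303, 1900, 226, 2930)
# Rounds to OR 2.07 (95% CI: 1.72-2.48), matching the reported estimate.
```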
Overall | Medical | Surgical | ||||
---|---|---|---|---|---|---|
Odds Ratio (95% CI) | P | Odds Ratio (95% CI) | P | Odds Ratio (95% CI) | P | |
| ||||||
Age, y | ||||||
<30 | 1.5 (1.08‐2.08) | 0.014 | 0.67 (0.35‐1.29) | 0.227 | 2.05 (1.38‐3.07) | <0.001 |
30‐49 | 1.64 (1.31‐2.05) | <0.001 | 1.47 (1.05‐2.05) | 0.024 | 1.59 (1.17‐2.17) | 0.003 |
50‐69 | 1.55 (1.32‐1.82) | <0.001 | 1.48 (1.19‐1.85) | 0.001 | 1.48 (1.17‐1.86) | 0.001 |
>69 | Ref | Ref | Ref | |||
Gender | ||||||
Male | 1.09 (0.95‐1.25) | 0.220 | 1.06 (0.86‐1.29) | 0.602 | 1.07 (0.88‐1.3) | 0.506 |
Female | Ref | Ref | Ref | |||
ICU encounter | ||||||
No | 0.65 (0.55‐0.77) | <0.001 | 0.65 (0.48‐0.87) | 0.004 | 0.81 (0.66‐1) | 0.048 |
Yes | Ref | Ref | Ref | |||
Payer | ||||||
Public | 0.73 (0.17‐3.24) | 0.683 | 0.5 (0.06‐4.16) | 0.521 | 1.13 (0.14‐9.08) | 0.908 |
Private | 0.94 (0.21‐4.14) | 0.933 | 0.6 (0.07‐4.99) | 0.634 | 1.4 (0.17‐11.21) | 0.754 |
Charity | 1.51 (0.29‐8.02) | 0.626 | 1.14 (0.11‐12.25) | 0.912 | 2 (0.19‐20.97) | 0.563 |
Self | Ref | Ref | Ref | |||
Length of stay, d | ||||||
<3 | Ref | Ref | Ref | |||
3‐6 | 1.63 (1.37‐1.93) | <0.001 | 1.88 (1.45‐2.44) | <0.001 | 1.34 (1.06‐1.7) | 0.014 |
>6 | 2.07 (1.72‐2.48) | <0.001 | 2.74 (2.1‐3.57) | <0.001 | 1.51 (1.17‐1.94) | 0.001 |
No. of attendings | ||||||
<4 | Ref | Ref | Ref | |||
4‐6 | 1.38 (1.18‐1.61) | <0.001 | 1.53 (1.22‐1.92) | <0.001 | 1.29 (1.04‐1.6) | 0.021 |
>6 | 1.44 (1.19‐1.73) | <0.001 | 1.66 (1.27‐2.18) | <0.001 | 1.23 (0.96‐1.59) | 0.102 |
Severity index* | ||||||
0 (lowest) | Ref | Ref | Ref | |||
1‐3 | 0.91 (0.78‐1.06) | 0.224 | 1.18 (0.91‐1.52) | 0.221 | 0.91 (0.75‐1.12) | 0.380 |
4‐6 | 0.86 (0.68‐1.08) | 0.200 | 1.38 (1.01‐1.9) | 0.046 | 0.68 (0.46‐1.01) | 0.058 |
>6 (highest) | 1.13 (0.83‐1.53) | 0.436 | 1.68 (1.12‐2.52) | 0.012 | 1.05 (0.63‐1.75) | 0.849 |
Charges | ||||||
Low | Ref | Ref | Ref | |||
Medium | 1.69 (1.37‐2.09) | <0.001 | 1.71 (1.3‐2.26) | <0.001 | 1.15 (0.82‐1.61) | 0.428 |
High | 2.76 (2.19‐3.47) | <0.001 | 3.31 (2.43‐4.51) | <0.001 | 1.55 (1.09‐2.22) | 0.016 |
Service | ||||||
Medical | 0.57 (0.5‐0.66) | <0.001 | | | |
Surgical | Ref |
Multivariable modeling (Table 3) for all patients without an ICU encounter suggested that (1) patients aged <30 years, 30 to 49 years, and 50 to 69 years were more likely to report top decile RSRs when compared to patients 70 years and older (OR: 1.61, 95% CI: 1.09‐2.36; OR: 1.44, 95% CI: 1.08‐1.93; and OR: 1.39, 95% CI: 1.13‐1.71, respectively) and (2) when compared to patients with low resource intensity scores, patients with higher resource intensity scores were more likely to report top decile RSRs (moderate [OR: 1.42, 95% CI: 1.11‐1.83], major [OR: 1.56, 95% CI: 1.22‐2.01], and extreme [OR: 2.29, 95% CI: 1.8‐2.92]). These results were relatively consistent within medical and surgical subgroups (Table 3).
Overall | Medical | Surgical | ||||
---|---|---|---|---|---|---|
Odds Ratio (95% CI) | P | Odds Ratio (95% CI) | P | Odds Ratio (95% CI) | P | |
| ||||||
Age, y | ||||||
<30 | 1.61 (1.09‐2.36) | 0.016 | 0.82 (0.4‐1.7) | 0.596 | 2.31 (1.39‐3.82) | 0.001 |
30‐49 | 1.44 (1.08‐1.93) | 0.014 | 1.55 (1.03‐2.32) | 0.034 | 1.41 (0.91‐2.17) | 0.120 |
50‐69 | 1.39 (1.13‐1.71) | 0.002 | 1.44 (1.1‐1.88) | 0.008 | 1.39 (1‐1.93) | 0.049 |
>69 | Ref | Ref | Ref | |||
Sex | ||||||
Male | 1 (0.85‐1.17) | 0.964 | 1 (0.8‐1.25) | 0.975 | 0.99 (0.79‐1.26) | 0.965 |
Female | Ref | Ref | Ref | |||
Payer | ||||||
Public | 0.62 (0.14‐2.8) | 0.531 | 0.42 (0.05‐3.67) | 0.432 | 1.03 (0.12‐8.59) | 0.978 |
Private | 0.67 (0.15‐3.02) | 0.599 | 0.42 (0.05‐3.67) | 0.434 | 1.17 (0.14‐9.69) | 0.884 |
Charity | 1.54 (0.28‐8.41) | 0.620 | 1 (0.09‐11.13) | 0.999 | 2.56 (0.23‐28.25) | 0.444 |
Self | Ref | Ref | Ref | |||
Severity index | ||||||
0 (lowest) | Ref | Ref | Ref | |||
1‐3 | 1.07 (0.89‐1.29) | 0.485 | 1.18 (0.88‐1.58) | 0.267 | 1 (0.78‐1.29) | 0.986 |
4‐6 | 1.14 (0.86‐1.51) | 0.377 | 1.42 (0.99‐2.04) | 0.056 | 0.6 (0.33‐1.1) | 0.100 |
>6 (highest) | 1.31 (0.91‐1.9) | 0.150 | 1.47 (0.93‐2.33) | 0.097 | 1.1 (0.54‐2.21) | 0.795 |
Resource intensity score | ||||||
Low | Ref | Ref | Ref | |||
Moderate | 1.42 (1.11‐1.83) | 0.006 | 1.6 (1.11‐2.3) | 0.011 | 0.94 (0.66‐1.34) | 0.722 |
Major | 1.56 (1.22‐2.01) | 0.001 | 1.69 (1.18‐2.43) | 0.004 | 1.28 (0.91‐1.8) | 0.151 |
Extreme | 2.29 (1.8‐2.92) | <0.001 | 2.72 (1.94‐3.82) | <0.001 | 1.63 (1.17‐2.26) | 0.004 |
Service | ||||||
Medical | 0.59 (0.5‐0.69) | <0.001 | | | |
Surgical | Ref |
In those with at least 1 ICU attending encounter (see Supporting Table 1 in the online version of this article), no variables demonstrated significant association with top decile RSRs in the overall group or in the medical subgroup. Among surgical patients with at least 1 ICU attending encounter, patients aged 30 to 49 and 50 to 69 years were more likely to provide top decile RSRs (OR: 1.93, 95% CI: 1.08‐3.46 and OR: 1.65, 95% CI: 1.07‐2.53, respectively). Resource intensity was not significantly associated with top decile RSRs.
DISCUSSION
Our analysis suggests that, for patients on the general care floors, resource utilization is associated with the RSR and, therefore, potentially the CMS Summary Star Rating. Adjusting for severity of illness, patients with higher resource utilization were more likely to report top decile RSRs.
Prior data regarding utilization and satisfaction are mixed. In a 2‐year, prospective, national examination, patients in the highest quartile of patient satisfaction had increased healthcare and prescription drug expenditures as well as increased rates of hospitalization when compared with patients in the lowest quartile of patient satisfaction.[9] However, a recent national study of surgical administrative databases suggested hospitals with high patient satisfaction provided more efficient care.[13]
One reason for the conflicting data may be that large, national evaluations are unable to control for between‐hospital confounders (ie, hospital quality of care). By capturing all eligible returned surveys at 1 institution, our design allowed us to collect granular data. We found that, within a single hospital (ie, holding the setting, patient population, facilities, and food services constant), patients receiving more clinical resources generally assigned higher ratings than patients receiving fewer.
It is possible that utilization is a proxy for serious illness, and that patients with serious illness receive more attention during hospitalization and are more satisfied when discharged in a good state of health. However, we did adjust for severity of illness in our model using the Charlson‐Deyo index and we suggest that, other factors being equal, hospitals with higher per‐patient expenditures may be assigned higher Summary Star Ratings.
CMS has recently implemented a number of metrics designed to decrease healthcare costs by improving quality, safety, and efficiency. Concurrently, CMS has also prioritized patient experience. The Summary Star Rating was created to provide healthcare consumers with an easy way to compare the patient experience between hospitals[4]; however, our data suggest that this metric may be at odds with inpatient cost savings and efficiency metrics.
Per‐patient spending becomes particularly salient when considering that in fiscal year 2016, CMS' hospital VBP reimbursement will include 2 metrics: an efficiency outcome measure labeled Medicare spending per beneficiary, and a patient experience outcome measure based on HCAHPS survey dimensions.[2] Together, these 2 metrics will comprise nearly half of the total VBP performance score used to determine reimbursement. Although our data suggest that these 2 VBP metrics may be correlated, it should be noted that we measured inpatient hospital charges, whereas the CMS efficiency outcome measure includes costs for an episode of care spanning from 3 days prior to hospitalization to 30 days after hospitalization.
Patient expectations likely play a role in satisfaction.[14, 15, 16] In an outpatient setting, physician fulfillment of patient requests has been associated with positive patient evaluations of care.[17] However, patients appear to value education, shared decision making, and provider empathy more than testing and intervention.[14, 18, 19, 20, 21, 22, 23] Perhaps, in the absence of the former attributes, patients use additional resource expenditure as a proxy.
It is not clear that higher resource expenditure improves outcomes. A landmark study of nearly 1 million Medicare enrollees by Fisher et al. suggests that, although Medicare patients in higher‐spending regions receive more care than those in lower‐spending regions, this does not result in better health outcomes, specifically with regard to mortality.[24, 25] Patients who live in areas of high hospital capacity use the hospital more frequently than do patients in areas of low hospital capacity, but this does not appear to result in improved mortality rates.[26] In fact, physicians in areas of high healthcare capacity report more difficulty maintaining high‐quality patient relationships and feel less able to provide high‐quality care than physicians in lower‐capacity areas.[27]
We hypothesize that the association between resource utilization and patient satisfaction could arise because patients (1) perceive that a doctor who allows them to stay longer in the hospital or who performs additional testing cares more about their well‐being and (2) feel more strongly that their concerns are being heard and addressed by their physicians. A systematic review of primary care patients identified many studies that found a positive association between meeting patient expectations and satisfaction with care, but also suggested that although patients frequently expect information, physicians misperceive this as an expectation of specific action.[28] A separate systematic review found that patient education in the form of decision aids can help patients develop more reasonable expectations and reduce utilization of certain discretionary procedures such as elective surgeries and prostate‐specific antigen testing.[29]
We did not specifically address clinical outcomes in our analysis because the clinical outcomes on which CMS currently adjusts VBP reimbursement focus on 30‐day mortality for specific diagnoses, nosocomial infections, and iatrogenic events.[30] Our data include only returned surveys from living patients, and it is likely that 30‐day mortality was similar throughout all subsets of patients. Additionally, the nosocomial and iatrogenic outcome measures used by CMS are sufficiently rare on the general floors that they are unlikely to have significantly influenced our results.[31]
Our study has several strengths. Nearly all medical and surgical patient surveys returned during the study period were included, and therefore our calculations are likely to accurately reflect the Summary Star Rating that would have been assigned for the period. Second, the large sample size helps attenuate potential differences in commonly used outcome metrics. Third, by adjusting for a variety of demographic and clinical variables, we were able to decrease the likelihood of unidentified confounders.
Notably, we identified 38 (0.4%) surveys returned for patients under 18 years of age at admission. These surveys were included in our analysis because, to the best of our knowledge, they would have existed in the pool of surveys CMS could have used to assign a Summary Star Rating.
Our study also has limitations. First, geographically diverse data are needed to ensure generalizability. Second, we used the Charlson‐Deyo Comorbidity Index to describe the degree of illness for each patient. This index represents a patient's total illness burden but may not capture the severity of the patient's current illness relative to that of other patients. Third, we selected variables we felt were most likely to be associated with patient experience, but unidentified confounding remains possible. Fourth, attendings caring for ICU patients fall within the Division of Critical Care/Pulmonary Medicine; therefore, we may have inadvertently placed patients into the ICU cohort who received a pulmonary/critical care consult on the general floors. Fifth, our data describe associations only for patients who returned surveys. Although there may be inherent biases in patients who return surveys, HCAHPS survey responses are used by CMS to determine a hospital's overall satisfaction score.
CONCLUSION
For patients who return HCAHPS surveys, resource utilization may be positively associated with a hospital's Summary Star Rating. These data suggest that hospitals with higher per‐patient expenditures may receive higher Summary Star Ratings, which could result in hospitals with higher per‐patient resource utilization appearing more attractive to healthcare consumers. Future studies should attempt to confirm our findings at other institutions and to determine causative factors.
Acknowledgements
The authors thank Jason Machan, PhD (Department of Orthopedics and Surgery, Warren Alpert Medical School, Brown University, Providence, Rhode Island) for his help with study design, and Ms. Brenda Foster (data analyst, University of Rochester Medical Center, Rochester, NY) for her help with data collection.
Disclosures: Nothing to report.
The patient experience has become increasingly important to healthcare in the United States. It is now a metric used commonly to determine physician compensation and accounts for nearly 30% of the Centers for Medicare and Medicaid Services' (CMS) Value‐Based Purchasing (VBP) reimbursement for fiscal years 2015 and 2016.[1, 2]
In April 2015, CMS added a 5‐star patient experience score to its Hospital Compare website in an attempt to address the Affordable Care Act's call for transparent and easily understandable public reporting.[3] A hospital's principal score is the Summary Star Rating, which is based on responses to the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey. The formulas used to calculate Summary Star Ratings have been reported by CMS.[4]
Studies published over the past decade suggest that gender, age, education level, length of hospital stay, travel distance, and other factors may influence patient satisfaction.[5, 6, 7, 8] One study utilizing a national dataset suggested that higher patient satisfaction was associated with greater inpatient healthcare utilization and higher healthcare expenditures.[9] It is therefore possible that emphasizing patient experience scores could adversely impact healthcare resource utilization. However, positive patient experience may also be an important independent dimension of quality for patients and correlate with improved clinical outcomes.[10]
We know of no literature describing patient factors associated with the Summary Star Rating. Given that this rating is now used as a standard metric by which patient experience can be compared across more than 3,500 hospitals,[11] data describing the association between patient‐level factors and the Summary Star Rating may provide hospitals with an opportunity to target improvement efforts. We aimed to determine the degree to which resource utilization is associated with a satisfaction score based on the Summary Star Rating methodology.
METHODS
The study was conducted at the University of Rochester Medical Center (URMC), an 830‐bed tertiary care center in upstate New York. This was a retrospective review of all HCAHPS surveys returned to URMC over a 27‐month period from January 1, 2012 to April 1, 2014. URMC follows the standard CMS process for determining which patients receive surveys as follows. During the study timeframe, HCAHPS surveys were mailed to patients 18 years of age and older who had an inpatient stay spanning at least 1 midnight. Surveys were mailed within 5 days of discharge, and were generally returned within 6 weeks. URMC did not utilize telephone or email surveys during the study period. Surveys were not sent to patients who (1) were transferred to another facility, (2) were discharged to hospice, (3) died during the hospitalization, (4) received psychiatric or rehabilitative services during the hospitalization, (5) had an international address, and/or (6) were prisoners.
The survey vendor (Press Ganey, South Bend, IN) for URMC provided raw data for returned surveys with patient answers to questions. Administrative and billing databases were used to add demographic and clinical data for the corresponding hospitalization to the dataset. These data included age, gender, payer status (public, private, self, charity), length of stay, number of attendings who saw the patient (based on encounters documented in the electronic medical record (EMR)), all discharge International Classification of Diseases, 9th Revision (ICD‐9) diagnoses for the hospitalization, total charges for the hospitalization, and intensive care unit (ICU) utilization as evidenced by a documented encounter with a member of the Division of Critical Care/Pulmonary Medicine.
CMS analyzes surveys within 1 of 3 clinical service categories (medical, surgical, or obstetrics/gynecology) based on the discharging service. To parallel this approach, each returned survey was placed into 1 of these categories based on the clinical service of the discharging physician. Patients placed in the obstetrics/gynecology category (n = 1317, 13%) will be analyzed in a future analysis given inherent differences in patient characteristics that require evaluation of other variables.
Approximations of CMS Summary Star Rating
The HCAHPS survey is a multiple‐choice questionnaire that includes several domains of patient satisfaction. Respondents are asked to rate areas of satisfaction with their hospital experience on a Likert scale. CMS uses a weighted average of Likert responses to a subset of HCAHPS questions to calculate a hospital's raw score in 11 domains, as well as an overall raw summary score. CMS then adjusts each raw score for differences between hospitals (eg, clustering, improvement over time, method of survey) to determine a hospital's star rating in each domain and an overall Summary Star Rating (the Summary Star Rating is the primary factor by which consumers can compare hospitals).[4] Because our data were from a single hospital system, the between‐hospital scoring adjustments utilized by CMS were not applicable. Instead, we calculated the raw scores exactly as CMS does prior to the adjustments. Thus, our scores reflect the scores that CMS would have given URMC during the study period prior to standardized adjustments; we refer to this as the raw satisfaction rating (RSR). We calculated an RSR for every eligible survey. The RSR was calculated as a continuous variable from 0 (lowest) to 1 (highest). Detailed explanation of our RSR calculation is available in the Supporting Information in the online version of this article.
Statistical Analysis
All analyses were performed in aggregate and by service (medical vs surgical). Categorical variables were summarized using frequencies with percentages. Comparisons across levels of categorical variables were performed with the 2 test. We report bivariate associations between the independent variables and RSRs in the top decile using unadjusted odds ratios (ORs) with 95% confidence intervals (CIs). Similarly, multivariable logistic regression was used for adjusted analyses. For the variables of severity of illness and resource intensity, the group with the lowest illness severity and lowest resource use served as the reference groups. We modeled patients without an ICU encounter and with an ICU encounter separately.
Charges, number of unique attendings encountered, and lengths of stay were highly correlated, and likely various measures of the same underlying construct of resource intensity, and therefore could not be entered into our models simultaneously. We combined these into a resource intensity score using factor analysis with a varimax rotation, and extracted factor scores for a single factor (supported by a scree plot). We then placed patients into 4 groups based on the distribution of the factor scores: low (<25th percentile), moderate (25th50th percentile), major (50th75th percentile), and extreme (>75th percentile).
We used the Charlson‐Deyo comorbidity score as our disease severity index.[12] The index assigns points to each qualifying ICD‐9 diagnosis according to its impact on morbidity and sums the points into an overall score, providing a measure of disease severity based on the number of diagnoses and the relative mortality of each. Scores were categorized as 0 (representing no major illness burden), 1 to 3, 4 to 6, and >6.
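The scoring and binning logic can be illustrated with a small sketch. The weights shown are a subset of the published Charlson weights (the full Deyo adaptation maps ICD‐9‐CM codes into 17 weighted comorbidity categories), and the bins match those used in our analysis.

```python
# Illustrative subset of Charlson comorbidity weights; the full Deyo
# adaptation maps ICD-9-CM codes into 17 weighted categories.
CHARLSON_WEIGHTS = {
    "myocardial_infarction": 1,
    "congestive_heart_failure": 1,
    "diabetes_with_complications": 2,
    "moderate_severe_liver_disease": 3,
    "metastatic_solid_tumor": 6,
}

def charlson_category(comorbidities: list[str]) -> str:
    """Sum the weights of a patient's comorbidities and bin as in the paper."""
    score = sum(CHARLSON_WEIGHTS.get(c, 0) for c in comorbidities)
    if score == 0:
        return "0"
    if score <= 3:
        return "1-3"
    if score <= 6:
        return "4-6"
    return ">6"
```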
All statistical analyses were performed using SAS version 9.4 (SAS Institute, Cary, NC), and P values <0.05 were considered statistically significant. This study was approved by the institutional review board at the University of Rochester Medical Center.
RESULTS
Our initial search identified 10,007 returned surveys (29% of eligible patients returned surveys during the study period). Of these, 5059 (51%) were categorized as medical, 3630 (36%) as surgical, and 1317 (13%) as obstetrics/gynecology. One survey did not have the service of the discharging physician recorded and was excluded. Cohort demographics and relationship to RSRs in the top decile for the 8689 medical and surgical patients can be found in Table 1. The most common discharge diagnosis‐related groups (DRGs) for medical patients were 247, percutaneous cardiovascular procedure with drug‐eluting stent without major complications or comorbidities (MCC) (3.8%); 871, septicemia or severe sepsis without mechanical ventilation >96 hours with MCC (2.7%); and 392, esophagitis, gastroenteritis, and miscellaneous digestive disorders with MCC (2.3%). The most common DRGs for surgical patients were 460, spinal fusion except cervical without MCC (3.5%); 328, stomach, esophageal, and duodenal procedure without complication or comorbidities or MCC (3.3%); and 491, back and neck procedure excluding spinal fusion without complication or comorbidities or MCC (3.1%).
| | Overall: Total | <90th | Top Decile | P | Medical: Total | <90th | Top Decile | P | Surgical: Total | <90th | Top Decile | P |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Overall | 8,689 | 7,789 (90) | 900 (10) | | 5,059 | 4,646 (92) | 413 (8) | | 3,630 | 3,143 (87) | 487 (13) | |
| Age, y | | | | | | | | | | | | |
| <30 | 419 (5) | 371 (89) | 48 (12) | <0.001 | 218 (4) | 208 (95) | 10 (5) | <0.001 | 201 (6) | 163 (81) | 38 (19) | <0.001 |
| 30–49 | 1,029 (12) | 902 (88) | 127 (12) | | 533 (11) | 482 (90) | 51 (10) | | 496 (14) | 420 (85) | 76 (15) | |
| 50–69 | 3,911 (45) | 3,450 (88) | 461 (12) | | 2,136 (42) | 1,930 (90) | 206 (10) | | 1,775 (49) | 1,520 (86) | 255 (14) | |
| >69 | 3,330 (38) | 3,066 (92) | 264 (8) | | 2,172 (43) | 2,026 (93) | 146 (7) | | 1,158 (32) | 1,040 (90) | 118 (10) | |
| Gender | | | | | | | | | | | | |
| Male | 4,640 (53) | 4,142 (89) | 498 (11) | 0.220 | 2,596 (51) | 2,379 (92) | 217 (8) | 0.602 | 2,044 (56) | 1,763 (86) | 281 (14) | 0.506 |
| Female | 4,049 (47) | 3,647 (90) | 402 (10) | | 2,463 (49) | 2,267 (92) | 196 (8) | | 1,586 (44) | 1,380 (87) | 206 (13) | |
| ICU encounter | | | | | | | | | | | | |
| No | 7,122 (82) | 6,441 (90) | 681 (10) | <0.001 | 4,547 (90) | 4,193 (92) | 354 (8) | <0.001 | 2,575 (71) | 2,248 (87) | 327 (13) | 0.048 |
| Yes | 1,567 (18) | 1,348 (86) | 219 (14) | | 512 (10) | 453 (89) | 59 (12) | | 1,055 (29) | 895 (85) | 160 (15) | |
| Payer | | | | | | | | | | | | |
| Public | 5,564 (64) | 5,036 (91) | 528 (10) | <0.001 | 3,424 (68) | 3,161 (92) | 263 (8) | 0.163 | 2,140 (59) | 1,875 (88) | 265 (12) | 0.148 |
| Private | 3,064 (35) | 2,702 (88) | 362 (12) | | 1,603 (32) | 1,458 (91) | 145 (9) | | 1,461 (40) | 1,244 (85) | 217 (15) | |
| Charity | 45 (1) | 37 (82) | 8 (18) | | 25 (1) | 21 (84) | 4 (16) | | 20 (1) | 16 (80) | 4 (20) | |
| Self | 16 (0) | 14 (88) | 2 (13) | | 7 (0) | 6 (86) | 1 (14) | | 9 (0) | 8 (89) | 1 (11) | |
| Length of stay, d | | | | | | | | | | | | |
| <3 | 3,156 (36) | 2,930 (93) | 226 (7) | <0.001 | 1,961 (39) | 1,865 (95) | 96 (5) | <0.001 | 1,195 (33) | 1,065 (89) | 130 (11) | <0.001 |
| 3–6 | 3,330 (38) | 2,959 (89) | 371 (11) | | 1,867 (37) | 1,702 (91) | 165 (9) | | 1,463 (40) | 1,257 (86) | 206 (14) | |
| >6 | 2,203 (25) | 1,900 (86) | 303 (14) | | 1,231 (24) | 1,079 (88) | 152 (12) | | 972 (27) | 821 (85) | 151 (16) | |
| No. of attendings | | | | | | | | | | | | |
| <4 | 3,959 (46) | 3,615 (91) | 344 (9) | <0.001 | 2,307 (46) | 2,160 (94) | 147 (6) | <0.001 | 1,652 (46) | 1,455 (88) | 197 (12) | 0.052 |
| 4–6 | 3,067 (35) | 2,711 (88) | 356 (12) | | 1,836 (36) | 1,663 (91) | 173 (9) | | 1,231 (34) | 1,048 (85) | 183 (15) | |
| >6 | 1,663 (19) | 1,463 (88) | 200 (12) | | 916 (18) | 823 (90) | 93 (10) | | 747 (21) | 640 (86) | 107 (14) | |
| Severity index* | | | | | | | | | | | | |
| 0 (lowest) | 2,812 (32) | 2,505 (89) | 307 (11) | 0.272 | 1,273 (25) | 1,185 (93) | 88 (7) | 0.045 | 1,539 (42) | 1,320 (86) | 219 (14) | 0.261 |
| 1–3 | 4,253 (49) | 3,827 (90) | 426 (10) | | 2,604 (52) | 2,395 (92) | 209 (8) | | 1,649 (45) | 1,432 (87) | 217 (13) | |
| 4–6 | 1,163 (13) | 1,052 (91) | 111 (10) | | 849 (17) | 770 (91) | 79 (9) | | 314 (9) | 282 (90) | 32 (10) | |
| >6 (highest) | 461 (5) | 405 (88) | 56 (12) | | 333 (7) | 296 (89) | 37 (11) | | 128 (4) | 109 (85) | 19 (15) | |
| Charges | | | | | | | | | | | | |
| Low | 1,820 (21) | 1,707 (94) | 113 (6) | <0.001 | 1,426 (28) | 1,357 (95) | 69 (5) | <0.001 | 394 (11) | 350 (89) | 44 (11) | 0.007 |
| Medium | 5,094 (59) | 4,581 (90) | 513 (10) | | 2,807 (56) | 2,582 (92) | 225 (8) | | 2,287 (63) | 1,999 (87) | 288 (13) | |
| High | 1,775 (20) | 1,501 (85) | 274 (15) | | 826 (16) | 707 (86) | 119 (14) | | 949 (26) | 794 (84) | 155 (16) | |
Unadjusted analysis of medical and surgical patients identified significant associations of several variables with a top decile RSR (Table 2). Patients with longer lengths of stay (OR: 2.07, 95% CI: 1.72‐2.48), more attendings (OR: 1.44, 95% CI: 1.19‐1.73), and higher hospital charges (OR: 2.76, 95% CI: 2.19‐3.47) were more likely to report an RSR in the top decile. Patients without an ICU encounter (OR: 0.65, 95% CI: 0.55‐0.77) and on a medical service (OR: 0.57, 95% CI: 0.5‐0.66) were less likely to report an RSR in the top decile. Several associations were identified in only the medical or surgical cohorts. In the medical cohort, patients with the highest illness severity index (OR: 1.68, 95% CI: 1.12‐2.52) and with more than 6 different attending physicians (OR: 1.66, 95% CI: 1.27‐2.18) were more likely to report RSRs in the top decile. In the surgical cohort, patients <30 years of age (OR: 2.05, 95% CI: 1.38‐3.07) were more likely to report an RSR in the top decile than patients >69 years of age. Insurance payer category and gender were not significantly associated with top decile RSRs.
| | Overall OR (95% CI) | P | Medical OR (95% CI) | P | Surgical OR (95% CI) | P |
|---|---|---|---|---|---|---|
| Age, y | | | | | | |
| <30 | 1.5 (1.08–2.08) | 0.014 | 0.67 (0.35–1.29) | 0.227 | 2.05 (1.38–3.07) | <0.001 |
| 30–49 | 1.64 (1.31–2.05) | <0.001 | 1.47 (1.05–2.05) | 0.024 | 1.59 (1.17–2.17) | 0.003 |
| 50–69 | 1.55 (1.32–1.82) | <0.001 | 1.48 (1.19–1.85) | 0.001 | 1.48 (1.17–1.86) | 0.001 |
| >69 | Ref | | Ref | | Ref | |
| Gender | | | | | | |
| Male | 1.09 (0.95–1.25) | 0.220 | 1.06 (0.86–1.29) | 0.602 | 1.07 (0.88–1.3) | 0.506 |
| Female | Ref | | Ref | | Ref | |
| ICU encounter | | | | | | |
| No | 0.65 (0.55–0.77) | <0.001 | 0.65 (0.48–0.87) | 0.004 | 0.81 (0.66–1) | 0.048 |
| Yes | Ref | | Ref | | Ref | |
| Payer | | | | | | |
| Public | 0.73 (0.17–3.24) | 0.683 | 0.5 (0.06–4.16) | 0.521 | 1.13 (0.14–9.08) | 0.908 |
| Private | 0.94 (0.21–4.14) | 0.933 | 0.6 (0.07–4.99) | 0.634 | 1.4 (0.17–11.21) | 0.754 |
| Charity | 1.51 (0.29–8.02) | 0.626 | 1.14 (0.11–12.25) | 0.912 | 2 (0.19–20.97) | 0.563 |
| Self | Ref | | Ref | | Ref | |
| Length of stay, d | | | | | | |
| <3 | Ref | | Ref | | Ref | |
| 3–6 | 1.63 (1.37–1.93) | <0.001 | 1.88 (1.45–2.44) | <0.001 | 1.34 (1.06–1.7) | 0.014 |
| >6 | 2.07 (1.72–2.48) | <0.001 | 2.74 (2.1–3.57) | <0.001 | 1.51 (1.17–1.94) | 0.001 |
| No. of attendings | | | | | | |
| <4 | Ref | | Ref | | Ref | |
| 4–6 | 1.38 (1.18–1.61) | <0.001 | 1.53 (1.22–1.92) | <0.001 | 1.29 (1.04–1.6) | 0.021 |
| >6 | 1.44 (1.19–1.73) | <0.001 | 1.66 (1.27–2.18) | <0.001 | 1.23 (0.96–1.59) | 0.102 |
| Severity index* | | | | | | |
| 0 (lowest) | Ref | | Ref | | Ref | |
| 1–3 | 0.91 (0.78–1.06) | 0.224 | 1.18 (0.91–1.52) | 0.221 | 0.91 (0.75–1.12) | 0.380 |
| 4–6 | 0.86 (0.68–1.08) | 0.200 | 1.38 (1.01–1.9) | 0.046 | 0.68 (0.46–1.01) | 0.058 |
| >6 (highest) | 1.13 (0.83–1.53) | 0.436 | 1.68 (1.12–2.52) | 0.012 | 1.05 (0.63–1.75) | 0.849 |
| Charges | | | | | | |
| Low | Ref | | Ref | | Ref | |
| Medium | 1.69 (1.37–2.09) | <0.001 | 1.71 (1.3–2.26) | <0.001 | 1.15 (0.82–1.61) | 0.428 |
| High | 2.76 (2.19–3.47) | <0.001 | 3.31 (2.43–4.51) | <0.001 | 1.55 (1.09–2.22) | 0.016 |
| Service | | | | | | |
| Medical | 0.57 (0.5–0.66) | <0.001 | | | | |
| Surgical | Ref | | | | | |
Multivariable modeling (Table 3) for all patients without an ICU encounter suggested that (1) patients aged <30 years, 30 to 49 years, and 50 to 69 years were more likely to report top decile RSRs when compared to patients 70 years and older (OR: 1.61, 95% CI: 1.09‐2.36; OR: 1.44, 95% CI: 1.08‐1.93; and OR: 1.39, 95% CI: 1.13‐1.71, respectively) and (2) when compared to patients with low resource intensity scores, patients with higher resource intensity scores were more likely to report top decile RSRs (moderate [OR: 1.42, 95% CI: 1.11‐1.83], major [OR: 1.56, 95% CI: 1.22‐2.01], and extreme [OR: 2.29, 95% CI: 1.8‐2.92]). These results were relatively consistent within medical and surgical subgroups (Table 3).
| | Overall OR (95% CI) | P | Medical OR (95% CI) | P | Surgical OR (95% CI) | P |
|---|---|---|---|---|---|---|
| Age, y | | | | | | |
| <30 | 1.61 (1.09–2.36) | 0.016 | 0.82 (0.4–1.7) | 0.596 | 2.31 (1.39–3.82) | 0.001 |
| 30–49 | 1.44 (1.08–1.93) | 0.014 | 1.55 (1.03–2.32) | 0.034 | 1.41 (0.91–2.17) | 0.120 |
| 50–69 | 1.39 (1.13–1.71) | 0.002 | 1.44 (1.1–1.88) | 0.008 | 1.39 (1–1.93) | 0.049 |
| >69 | Ref | | Ref | | Ref | |
| Sex | | | | | | |
| Male | 1 (0.85–1.17) | 0.964 | 1 (0.8–1.25) | 0.975 | 0.99 (0.79–1.26) | 0.965 |
| Female | Ref | | Ref | | Ref | |
| Payer | | | | | | |
| Public | 0.62 (0.14–2.8) | 0.531 | 0.42 (0.05–3.67) | 0.432 | 1.03 (0.12–8.59) | 0.978 |
| Private | 0.67 (0.15–3.02) | 0.599 | 0.42 (0.05–3.67) | 0.434 | 1.17 (0.14–9.69) | 0.884 |
| Charity | 1.54 (0.28–8.41) | 0.620 | 1 (0.09–11.13) | 0.999 | 2.56 (0.23–28.25) | 0.444 |
| Self | Ref | | Ref | | Ref | |
| Severity index | | | | | | |
| 0 (lowest) | Ref | | Ref | | Ref | |
| 1–3 | 1.07 (0.89–1.29) | 0.485 | 1.18 (0.88–1.58) | 0.267 | 1 (0.78–1.29) | 0.986 |
| 4–6 | 1.14 (0.86–1.51) | 0.377 | 1.42 (0.99–2.04) | 0.056 | 0.6 (0.33–1.1) | 0.100 |
| >6 (highest) | 1.31 (0.91–1.9) | 0.150 | 1.47 (0.93–2.33) | 0.097 | 1.1 (0.54–2.21) | 0.795 |
| Resource intensity score | | | | | | |
| Low | Ref | | Ref | | Ref | |
| Moderate | 1.42 (1.11–1.83) | 0.006 | 1.6 (1.11–2.3) | 0.011 | 0.94 (0.66–1.34) | 0.722 |
| Major | 1.56 (1.22–2.01) | 0.001 | 1.69 (1.18–2.43) | 0.004 | 1.28 (0.91–1.8) | 0.151 |
| Extreme | 2.29 (1.8–2.92) | <0.001 | 2.72 (1.94–3.82) | <0.001 | 1.63 (1.17–2.26) | 0.004 |
| Service | | | | | | |
| Medical | 0.59 (0.5–0.69) | <0.001 | | | | |
| Surgical | Ref | | | | | |
In those with at least 1 ICU attending encounter (see Supporting Table 1 in the online version of this article), no variables demonstrated significant association with top decile RSRs in the overall group or in the medical subgroup. Among surgical patients with at least 1 ICU attending encounter, patients aged 30 to 49 and 50 to 69 years were more likely to provide top decile RSRs (OR: 1.93, 95% CI: 1.08‐3.46 and OR: 1.65, 95% CI: 1.07‐2.53, respectively). Resource intensity was not significantly associated with top decile RSRs.
DISCUSSION
Our analysis suggests that, for patients on the general care floors, resource utilization is associated with the RSR and, therefore, potentially the CMS Summary Star Rating. Adjusting for severity of illness, patients with higher resource utilization were more likely to report top decile RSRs.
Prior data regarding utilization and satisfaction are mixed. In a 2‐year, prospective, national examination, patients in the highest quartile of patient satisfaction had increased healthcare and prescription drug expenditures as well as increased rates of hospitalization when compared with patients in the lowest quartile of patient satisfaction.[9] However, a recent national study of surgical administrative databases suggested hospitals with high patient satisfaction provided more efficient care.[13]
One reason for the conflicting data may be that large, national evaluations are unable to control for between‐hospital confounders (ie, hospital quality of care). By capturing all eligible returned surveys at 1 institution, our design allowed us to collect granular data while holding the hospital setting, patient population, facilities, and food services constant. Within this single setting, patients receiving more clinical resources generally assigned higher ratings than patients receiving fewer.
It is possible that utilization is a proxy for serious illness, and that patients with serious illness receive more attention during hospitalization and are more satisfied when discharged in a good state of health. However, we did adjust for severity of illness in our model using the Charlson‐Deyo index and we suggest that, other factors being equal, hospitals with higher per‐patient expenditures may be assigned higher Summary Star Ratings.
CMS has recently implemented a number of metrics designed to decrease healthcare costs by improving quality, safety, and efficiency. Concurrently, CMS has also prioritized patient experience. The Summary Star Rating was created to provide healthcare consumers with an easy way to compare the patient experience between hospitals[4]; however, our data suggest that this metric may be at odds with inpatient cost savings and efficiency metrics.
Per‐patient spending becomes particularly salient when considering that in fiscal year 2016, CMS' hospital VBP reimbursement will include 2 metrics: an efficiency outcome measure labeled Medicare spending per beneficiary, and a patient experience outcome measure based on HCAHPS survey dimensions.[2] Together, these 2 metrics will comprise nearly half of the total VBP performance score used to determine reimbursement. Although our data suggest that these 2 VBP metrics may be correlated, it should be noted that we measured inpatient hospital charges, whereas the CMS efficiency outcome measure includes costs for an episode of care spanning from 3 days before hospitalization to 30 days after hospitalization.
Patient expectations likely play a role in satisfaction.[14, 15, 16] In an outpatient setting, physician fulfillment of patient requests has been associated with positive patient evaluations of care.[17] However, patients appear to value education, shared decision making, and provider empathy more than testing and intervention.[14, 18, 19, 20, 21, 22, 23] Perhaps, in the absence of the former attributes, patients use additional resource expenditure as a proxy.
It is not clear that higher resource expenditure improves outcomes. A landmark study of nearly 1 million Medicare enrollees by Fisher et al. suggests that, although Medicare patients in higher‐spending regions receive more care than those in lower‐spending regions, this does not result in better health outcomes, specifically with regard to mortality.[24, 25] Patients who live in areas of high hospital capacity use the hospital more frequently than do patients in areas of low hospital capacity, but this does not appear to result in improved mortality rates.[26] In fact, physicians in areas of high healthcare capacity report more difficulty maintaining high‐quality patient relationships and feel less able to provide high‐quality care than physicians in lower‐capacity areas.[27]
We hypothesize the cause of the association between resource utilization and patient satisfaction could be that patients (1) perceive that a doctor who allows them to stay longer in the hospital or who performs additional testing cares more about their well‐being and (2) that these patients feel more strongly that their concerns are being heard and addressed by their physicians. A systematic review of primary care patients identified many studies that found a positive association between meeting patient expectations and satisfaction with care, but also suggested that although patients frequently expect information, physicians misperceive this as an expectation of specific action.[28] A separate systematic review found that patient education in the form of decision aides can help patients develop more reasonable expectations and reduce utilization of certain discretionary procedures such as elective surgeries and prostate‐specific antigen testing.[29]
We did not specifically address clinical outcomes in our analysis because the clinical outcomes on which CMS currently adjusts VBP reimbursement focus on 30‐day mortality for specific diagnoses, nosocomial infections, and iatrogenic events.[30] Our data include only returned surveys from living patients, and it is likely that 30‐day mortality was similar across all subsets of patients. Additionally, the nosocomial and iatrogenic outcome measures used by CMS are sufficiently rare on the general floors that they are unlikely to have significantly influenced our results.[31]
Our study has several strengths. Nearly all medical and surgical patient surveys returned during the study period were included, and therefore our calculations are likely to accurately reflect the Summary Star Rating that would have been assigned for the period. Second, the large sample size helps attenuate potential differences in commonly used outcome metrics. Third, by adjusting for a variety of demographic and clinical variables, we were able to decrease the likelihood of unidentified confounders.
Notably, we identified 38 (0.4%) surveys returned for patients under 18 years of age at admission. These surveys were included in our analysis because, to the best of our knowledge, they would have existed in the pool of surveys CMS could have used to assign a Summary Star Rating.
Our study also has limitations. First, geographically diverse data are needed to ensure generalizability. Second, we used the Charlson‐Deyo Comorbidity Index to describe the degree of illness for each patient. This index represents a patient's total illness burden but may not describe the relative severity of the patient's current illness relative to another patient. Third, we selected variables we felt were most likely to be associated with patient experience, but unidentified confounding remains possible. Fourth, attendings caring for ICU patients fall within the Division of Critical Care/Pulmonary Medicine. Therefore, we may have inadvertently placed patients into the ICU cohort who received a pulmonary/critical care consult on the general floors. Fifth, our data describe associations only for patients who returned surveys. Although there may be inherent biases in patients who return surveys, HCAHPS survey responses are used by CMS to determine a hospital's overall satisfaction score.
CONCLUSION
For patients who return HCAHPS surveys, resource utilization may be positively associated with a hospital's Summary Star Rating. These data suggest that hospitals with higher per‐patient expenditures may receive higher Summary Star Ratings, which could result in hospitals with higher per‐patient resource utilization appearing more attractive to healthcare consumers. Future studies should attempt to confirm our findings at other institutions and to determine causative factors.
Acknowledgements
The authors thank Jason Machan, PhD (Department of Orthopedics and Surgery, Warren Alpert Medical School, Brown University, Providence, Rhode Island) for his help with study design, and Ms. Brenda Foster (data analyst, University of Rochester Medical Center, Rochester, NY) for her help with data collection.
Disclosures: Nothing to report.
- Redesigning physician compensation and improving ED performance. Healthc Financ Manage. 2011;65(6):114–117.
- QualityNet. Available at: https://www.qualitynet.org/dcs/ContentServer?c=Page97(13):1041–1048.
- Factors determining inpatient satisfaction with care. Soc Sci Med. 2002;54(4):493–504.
- Patient satisfaction revisited: a multilevel approach. Soc Sci Med. 2009;69(1):68–75.
- Predictors of patient satisfaction with hospital health care. BMC Health Serv Res. 2006;6:102.
- The cost of satisfaction: a national study of patient satisfaction, health care utilization, expenditures, and mortality. Arch Intern Med. 2012;172(5):405–411.
- Relationship between patient satisfaction with inpatient care and hospital readmission within 30 days. Am J Manag Care. 2011;17(1):41–48.
- Becker's Infection Control and Clinical Quality. Star Ratings go live on Hospital Compare: how many hospitals got 5 stars? Available at: http://www.beckershospitalreview.com/quality/star‐ratings‐go‐live‐on‐hospital‐compare‐how‐many‐hospitals‐got‐5‐stars.html. Published April 16, 2015. Accessed October 5, 2015.
- Adapting a clinical comorbidity index for use with ICD‐9‐CM administrative databases. J Clin Epidemiol. 1992;45(6):613–619.
- Patient satisfaction and quality of surgical care in US hospitals. Ann Surg. 2015;261(1):2–8.
- Should health care providers be accountable for patients' care experiences? J Gen Intern Med. 2015;30(2):253–256.
- Unmet expectations for care and the patient‐physician relationship. J Gen Intern Med. 2002;17(11):817–824.
- Do unmet expectations for specific tests, referrals, and new medications reduce patients' satisfaction? J Gen Intern Med. 2004;19(11):1080–1087.
- Request fulfillment in office practice: antecedents and relationship to outcomes. Med Care. 2002;40(1):38–51.
- Factors associated with patient satisfaction with care among dermatological outpatients. Br J Dermatol. 2001;145(4):617–623.
- Patient expectations of emergency department care: phase II—a cross‐sectional survey. CJEM. 2006;8(3):148–157.
- Patients' perspectives on ideal physician behaviors. Mayo Clin Proc. 2006;81(3):338–344.
- What do people want from their health care? A qualitative study. J Participat Med. 2015;18:e10.
- Evaluations of care by adults following a denial of an advertisement‐related prescription drug request: the role of expectations, symptom severity, and physician communication style. Soc Sci Med. 2006;62(4):888–899.
- Getting to "no": strategies primary care physicians use to deny patient requests. Arch Intern Med. 2010;170(4):381–388.
- The implications of regional variations in Medicare spending. Part 1: the content, quality, and accessibility of care. Ann Intern Med. 2003;138(4):273–287.
- The implications of regional variations in Medicare spending. Part 2: health outcomes and satisfaction with care. Ann Intern Med. 2003;138(4):288–298.
- Associations among hospital capacity, utilization, and mortality of US Medicare beneficiaries, controlling for sociodemographic factors. Health Serv Res. 2000;34(6):1351–1362.
- Regional variations in health care intensity and physician perceptions of quality of care. Ann Intern Med. 2006;144(9):641–649.
- Visit‐specific expectations and patient‐centered outcomes: a literature review. Arch Fam Med. 2000;9(10):1148–1155.
- Decision aids for people facing health treatment or screening decisions. Cochrane Database Syst Rev. 2014;1:CD001431.
- Centers for Medicare and Medicaid Services. Hospital Compare. Outcome domain. Available at: https://www.medicare.gov/hospitalcompare/data/outcome‐domain.html. Accessed October 5, 2015.
- Centers for Disease Control and Prevention. 2013 national and state healthcare‐associated infections progress report. Available at: www.cdc.gov/hai/progress‐report/index.html. Accessed October 5, 2015.
Methotrexate in RA: Too low, too short, too few subcutaneous doses?
CHARLOTTE, N.C. – Methotrexate’s role as the mainstay drug of choice to treat rheumatoid arthritis has never been challenged, but questions persist as to why many rheumatologists don’t seem to titrate the dose of the drug up for a long enough period to see significant improvement in disease activity, or don’t instead start with or switch to subcutaneous administration before adding a biologic.
The issue is likely to come to greater prominence soon because of the rise of value-based care, according to Dr. James O’Dell, chief of the division of rheumatology at the University of Nebraska Medical Center, Omaha.
“Rheumatologists will ultimately be held accountable for providing value-based care. We’ll be measured on how well patients are doing, hopefully, and how expensive it is for you to take care of them. When rheumatologists are scored in that way, they will look for ways to provide quality care less expensively. And when they do that, they’ll use more methotrexate, they’ll use higher doses of methotrexate, they’ll use subcutaneous methotrexate, they’ll use more conventional therapy,” he said in an interview.
Dr. O’Dell called for rheumatologists to give methotrexate a longer time to work and to give subcutaneous methotrexate a shot before moving on to biologics, based on analyses of the TEAR (Treatment of Early Aggressive Rheumatoid Arthritis) trial and a U.S. pharmaceutical claims database study of methotrexate-prescribing habits during 2009-2014 that he presented at the annual meeting of the North Carolina Rheumatology Association (NCRA).
Overall, 28% of the early, poor-prognosis RA patients in the TEAR trial who were randomized to receive only oral methotrexate achieved a 28-joint Disease Activity Score (DAS28) of less than 3.2 at 24 weeks. The weekly dose of methotrexate was escalated if there were any tender or swollen joints, rising from 10 mg to 15 mg at 6 weeks and from 15 mg to 20 mg at 24 weeks. Those patients who did well on methotrexate alone showed no clinically meaningful or statistically significant clinical or radiographic differences from week 48 through the end of the study at week 102, compared with patients who initially took methotrexate alone but were randomized to either triple therapy with methotrexate, sulfasalazine, and hydroxychloroquine or combination disease-modifying antirheumatic drug (DMARD) therapy with etanercept and methotrexate after having a DAS28 of 3.2 or higher at 24 weeks, as well as patients who were initially randomized to triple therapy or combination DMARD therapy at the start of the trial (Arthritis Rheum. 2013;65:1985-94).
“The TEAR trial clearly showed that there are many individual patients who do not require anything more than methotrexate alone,” Dr. O’Dell said in an interview. “It also showed that combinations of conventional therapy – methotrexate, sulfasalazine, hydroxychloroquine – are equally efficacious both clinically and radiographically to combinations of methotrexate and etanercept. One of the messages that the TEAR trial clearly showed us is that if you have a patient that needs to step up to a biologic, you can titrate methotrexate up and wait until the 6-month time point to make that decision. And if you wait until 6 months, the patient is not going to be harmed in terms of their ultimate clinical response, and they’re not going to be harmed in terms of radiographic progression.”
However, patients using methotrexate in the TEAR trial only went up to a maximum of 20 mg/week orally, and this itself may be improved on because of evidence for greater bioavailability of oral methotrexate when it is titrated up to even higher doses by 6 months. There also is evidence, although limited, for the superior efficacy and bioavailability of subcutaneous methotrexate, Dr. O’Dell said.
Some data are beginning to indicate the usefulness of going straight to subcutaneous rather than oral methotrexate, he said. Data from the prospective, observational Canadian Early Arthritis Cohort (CATCH) have shown that 55% of patients with early RA who used subcutaneous methotrexate alone as their initial treatment needed it only during the first year of treatment, compared with 23% who were treated initially with oral methotrexate alone. Lack of efficacy was the only statistically significant difference between the two groups that was cited as a reason for failure of the initial treatment, which was the case for 72% of oral users vs. 40% of subcutaneous users.
Another analysis from the CATCH study that was reported at the 2016 Congress of the European League Against Rheumatism (EULAR) suggested that use of subcutaneous methotrexate as an initial treatment for RA could reduce the use of biologics by 53% after adjustment for confounding variables, compared with use of oral methotrexate.
When asked to comment on this, Dr. Daniel E. Furst noted that “there could be an awfully large selection bias in this study. If you ask yourself, ‘Why do people go to subcutaneous in an observational trial?’ it is possible that they didn’t have enough response to the oral. They didn’t go to a higher dose, but, rather stopped methotrexate altogether. The result is that injection looks better. I think this study has a significant flaw, as is often unavoidable in observational studies. This study, per se, does not prove that s.c. methotrexate has an advantage over oral methotrexate in terms of delay of starting a biologic.”
The increased bioavailability of subcutaneous administration versus oral – which is about 17% – does not make it necessarily better for all patients, Dr. Furst contended, although it can certainly be a good option for those affected by gastrointestinal side effects with oral administration. “Given that a 17% increase in bioavailability with subcutaneous administration, this equals about 3-4 mg extra methotrexate. 20 mg/week s.c. methotrexate equals about 23 or 24 mg/week of oral methotrexate. My point is that if a patients tolerates it, giving a little higher dose of oral methotrexate may be the same as giving a slightly higher dose subcutaneously. It is probably true, incidentally, that subcutaneous methotrexate, when given as a generic is less expensive than the oral. A 25-mg/mL vial costs somewhere in the range of one-half to one-third of oral.”
Waiting for 6 months of treatment with oral methotrexate is ideal when a patient is having some response but may not be appropriate for all, according to Dr. Furst, the Carl Pearson Professor of Medicine in the division of rheumatology at the University of California, Los Angeles. He cited research showing 35% of patients who did not yet have an ACR 20–level response at 12 weeks of treatment with oral methotrexate alone at 7.5-20 mg/week can reach that level of response by 26 weeks (Rheumatology. 2010;49[6]:1201-3). “What I took away from that is, on a practical basis, if patients have absolutely no response, however you define it, by 12 weeks, then stop. If they’re having some response, then wait and see if they get more response before stopping,” he said.
The claims analysis that Dr. O’Dell discussed at the NCRA annual meeting gets straight to that point of whether rheumatologists are giving methotrexate a long enough time to work. What compounds the problem, according to Dr. O’Dell, is that too few people are giving oral methotrexate a long enough chance to work or trying subcutaneous dosing. The analysis of claims data from 35,640 RA patients during 2009-2014 that he originally presented at the 2015 American College of Rheumatology annual meeting showed that 44% of patients who started on oral methotrexate alone stayed on it throughout the study period and didn’t need anything else.
Prescribers, however, stopped oral methotrexate at a mean dose of 15.3 mg and moved 87% straight to a biologic without giving subcutaneous methotrexate a shot. They did that after a median of less than 6 months, and within 3 months in more than 40% of patients. Of the patients who were given subcutaneous methotrexate when their oral formulation wasn’t enough that’s all most of them needed, as 72% remained on subcutaneous methotrexate alone for the remainder of the study period. The rest moved on to a biologic, but after a median of almost a year, not a few months. When their time on oral and subcutaneous methotrexate was included, their median time to a biologic was more than 2 years. The same results applied to patients who started treatment in 2009 or 2012.
What the evidence from the claims data study does not say is why physicians are prescribing this way. Some might suggest that the influence of pharmaceutical companies might have something to do with it, although the notion is controversial. A recent systematic review and meta-analysis of randomized, controlled trials that involved a direct comparison of methotrexate monotherapy against methotrexate plus a biologic or biologic monotherapy to treat RA concluded that there is a dosing bias in the trials where methotrexate monotherapy is underdosed because none of the 13 trials that met criteria to be in the review used a methotrexate dose of 25 mg/week (Ann Rheum Dis. 2016 Apr 18. doi: 10.1136/annrheumdis-2016-209383). The investigators, led by Dr. Josefina Durán of Pontificia Universidad Católica de Chile, Santiago, said that they used 25 mg/week as the maximum recommended dose because it is recommended by EULAR and expert opinion.
The contention of bias in the trial due to underdosing was met with pointed questioning from Dr. Joel Kremer of the Center for Rheumatology and Albany (N.Y.) Medical College, and founder and chief executive officer of the Consortium of Rheumatology Researchers of North America (CORRONA). The fact that the authors relied on an expert consensus statement from 2009 to use as their basis for a maximal dose recommendation of 25 mg/week is not “clinical science,” he said in a commentary accompanying the report by Dr. Durán and colleagues (Ann Rheum Dis. 2016 Apr 20. doi: 10.1136/annrheumdis-2016-209505). No trials have shown that pushing the dose from 20 to 25 mg/week gives “significant further clinical improvement without experiencing some possible clinical or laboratory toxicity,” he said, noting that it doesn’t make sense to use a one-size-fits all approach to methotrexate dosing because “the balance between maximal efficacy and toxicity is highly variable and likely to be quite different in diverse genetic populations.” Also, because 7 of the 13 studies cited by the meta-analysis were from 2010 or before, they didn’t have the opportunity to incorporate the higher doses recommended by the expert panel into their designs. Rather than having any preplanned bias, he said, “the dosages and route of administration employed by these trials reflect the common empirical practice in vogue at the time those trials were conducted.”
“To me, the Durán study is significantly overstating the data,” Dr. Furst said. He pointed out that in some of the trials the patients “were actually allowed to use higher doses, but [the meta-analysis authors] said they didn’t. The data showed that they were using 20 mg/week as a mean, but some of them went higher because the standard deviation was 4.5 mg. So to say that it wasn’t the full dosing isn’t quite legit. They actually did allow the full dosing.”
“Dr. Kremer’s point is appropriate,” Dr. O’Dell said, “but it’s not the reality. His point is that there are not a lot of trials that say methotrexate gets better when you use 25 mg/week vs. 20 mg/week, or when you use it subcutaneously vs. orally, even though we know the bioavailability is substantially greater. He’s right in a strict sense, but the whole concept, everything we know, the common sense, tells us that pushing methotrexate to higher tolerable doses and using it subcutaneously is better for our patients. All the data that have looked at that supports that.”
Dr. O’Dell serves on advisory boards for AbbVie, Bristol-Myers Squibb, GlaxoSmithKline, Lilly, Coherus, Antares, and Medac. Dr. Furst has received research support and has received honoraria or fees for serving as a consultant or on the speakers’ bureau for many pharmaceutical companies that manufacturer biologics.
CHARLOTTE, N.C. – Methotrexate’s role as the mainstay of rheumatoid arthritis treatment has never been challenged, but questions persist as to why many rheumatologists don’t titrate the drug’s dose up for long enough to see significant improvement in disease activity, or don’t instead start with or switch to subcutaneous administration before adding a biologic.
The issue is likely to come to greater prominence soon because of the rise of value-based care, according to Dr. James O’Dell, chief of the division of rheumatology at the University of Nebraska Medical Center, Omaha.
“Rheumatologists will ultimately be held accountable for providing value-based care. We’ll be measured on how well patients are doing, hopefully, and how expensive it is for you to take care of them. When rheumatologists are scored in that way, they will look for ways to provide quality care less expensively. And when they do that, they’ll use more methotrexate, they’ll use higher doses of methotrexate, they’ll use subcutaneous methotrexate, they’ll use more conventional therapy,” he said in an interview.
Dr. O’Dell called for rheumatologists to give methotrexate a longer time to work and to give subcutaneous methotrexate a shot before moving on to biologics, based on analyses of the TEAR (Treatment of Early Aggressive Rheumatoid Arthritis) trial and a U.S. pharmaceutical claims database study of methotrexate-prescribing habits during 2009-2014 that he presented at the annual meeting of the North Carolina Rheumatology Association (NCRA).
Overall, 28% of the early, poor-prognosis RA patients in the TEAR trial who were randomized to receive only oral methotrexate achieved a 28-joint Disease Activity Score (DAS28) of less than 3.2 at 24 weeks. The weekly dose of methotrexate was escalated if there were any tender or swollen joints, rising from 10 mg to 15 mg at 6 weeks and from 15 mg to 20 mg at 24 weeks. From week 48 through the end of the study at week 102, the patients who did well on methotrexate alone showed no clinically meaningful or statistically significant clinical or radiographic differences from two other groups: patients who started on methotrexate alone but, after having a DAS28 of 3.2 or higher at 24 weeks, were stepped up to either triple therapy with methotrexate, sulfasalazine, and hydroxychloroquine or combination disease-modifying antirheumatic drug (DMARD) therapy with etanercept and methotrexate; and patients randomized to triple therapy or combination DMARD therapy at the start of the trial (Arthritis Rheum. 2013;65:1985-94).
“The TEAR trial clearly showed that there are many individual patients who do not require anything more than methotrexate alone,” Dr. O’Dell said in an interview. “It also showed that combinations of conventional therapy – methotrexate, sulfasalazine, hydroxychloroquine – are equally efficacious both clinically and radiographically to combinations of methotrexate and etanercept. One of the messages that the TEAR trial clearly showed us is that if you have a patient that needs to step up to a biologic, you can titrate methotrexate up and wait until the 6-month time point to make that decision. And if you wait until 6 months, the patient is not going to be harmed in terms of their ultimate clinical response, and they’re not going to be harmed in terms of radiographic progression.”
However, methotrexate in the TEAR trial was titrated only to a maximum of 20 mg/week orally, and this itself may be improved on, given evidence for greater bioavailability of oral methotrexate when it is titrated up to even higher doses by 6 months. There also is evidence, although limited, for the superior efficacy and bioavailability of subcutaneous methotrexate, Dr. O’Dell said.
Some data are beginning to indicate the usefulness of going straight to subcutaneous rather than oral methotrexate, he said. Data from the prospective, observational Canadian Early Arthritis Cohort (CATCH) have shown that 55% of patients with early RA who started on subcutaneous methotrexate alone needed only that treatment during the first year, compared with 23% of those who started on oral methotrexate alone. Lack of efficacy was the only reason for failure of the initial treatment that differed significantly between the two groups, cited for 72% of oral users vs. 40% of subcutaneous users.
Another analysis from the CATCH study that was reported at the 2016 Congress of the European League Against Rheumatism (EULAR) suggested that use of subcutaneous methotrexate as an initial treatment for RA could reduce the use of biologics by 53% after adjustment for confounding variables, compared with use of oral methotrexate.
When asked to comment on this, Dr. Daniel E. Furst noted that “there could be an awfully large selection bias in this study. If you ask yourself, ‘Why do people go to subcutaneous in an observational trial?’ it is possible that they didn’t have enough response to the oral. They didn’t go to a higher dose, but, rather, stopped methotrexate altogether. The result is that injection looks better. I think this study has a significant flaw, as is often unavoidable in observational studies. This study, per se, does not prove that s.c. methotrexate has an advantage over oral methotrexate in terms of delay of starting a biologic.”
The increased bioavailability of subcutaneous administration versus oral – which is about 17% – does not necessarily make it better for all patients, Dr. Furst contended, although it can certainly be a good option for those affected by gastrointestinal side effects with oral administration. “Given a 17% increase in bioavailability with subcutaneous administration, this equals about 3-4 mg extra methotrexate. 20 mg/week s.c. methotrexate equals about 23 or 24 mg/week of oral methotrexate. My point is that if a patient tolerates it, giving a little higher dose of oral methotrexate may be the same as giving a slightly higher dose subcutaneously. It is probably true, incidentally, that subcutaneous methotrexate, when given as a generic, is less expensive than the oral. A 25-mg/mL vial costs somewhere in the range of one-half to one-third of oral.”
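Dr. Furst’s equivalence figures are simple proportional arithmetic. As a rough back-of-envelope sketch – assuming a flat 17% bioavailability advantage for subcutaneous dosing, which ignores the dose-dependence of oral absorption and between-patient variability, and is not dosing guidance – the conversion works out like this:

```python
# Illustrative arithmetic for the oral-equivalent of a subcutaneous dose.
# The 17% figure is the bioavailability advantage quoted in the article;
# treating it as a flat multiplier is a simplification.

SC_ADVANTAGE = 0.17  # ~17% greater bioavailability for s.c. methotrexate

def oral_equivalent_mg(sc_dose_mg: float) -> float:
    """Approximate oral dose giving similar exposure to a subcutaneous dose."""
    return sc_dose_mg * (1 + SC_ADVANTAGE)

# 20 mg/week s.c. works out to roughly 23-24 mg/week oral, as quoted.
print(round(oral_equivalent_mg(20), 1))  # → 23.4
```

The 23.4 mg result matches the “about 23 or 24 mg/week” figure in the quote, and the 3.4 mg difference matches the “about 3-4 mg extra.”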
Waiting for 6 months of treatment with oral methotrexate is ideal when a patient is having some response but may not be appropriate for all, according to Dr. Furst, the Carl Pearson Professor of Medicine in the division of rheumatology at the University of California, Los Angeles. He cited research showing that 35% of patients who did not yet have an ACR 20–level response at 12 weeks of treatment with oral methotrexate alone at 7.5-20 mg/week reached that level of response by 26 weeks (Rheumatology. 2010;49[6]:1201-3). “What I took away from that is, on a practical basis, if patients have absolutely no response, however you define it, by 12 weeks, then stop. If they’re having some response, then wait and see if they get more response before stopping,” he said.
The claims analysis that Dr. O’Dell discussed at the NCRA annual meeting gets straight to that point of whether rheumatologists are giving methotrexate a long enough time to work. In his view, too few prescribers give oral methotrexate an adequate trial or try subcutaneous dosing before moving on. The analysis of claims data from 35,640 RA patients during 2009-2014, which he originally presented at the 2015 American College of Rheumatology annual meeting, showed that 44% of patients who started on oral methotrexate alone stayed on it throughout the study period and didn’t need anything else.
Prescribers, however, stopped oral methotrexate at a mean dose of 15.3 mg and moved 87% of these patients straight to a biologic without giving subcutaneous methotrexate a shot. They did that after a median of less than 6 months, and within 3 months in more than 40% of patients. Of the patients who were switched to subcutaneous methotrexate when the oral formulation wasn’t enough, that’s all most of them needed: 72% remained on subcutaneous methotrexate alone for the remainder of the study period. The rest moved on to a biologic, but after a median of almost a year, not a few months. When their time on oral and subcutaneous methotrexate was combined, their median time to a biologic was more than 2 years. The results were the same whether patients started treatment in 2009 or 2012.
What the claims data study does not reveal is why physicians prescribe this way. Some might suggest that the influence of pharmaceutical companies has something to do with it, although the notion is controversial. A recent systematic review and meta-analysis of randomized, controlled trials directly comparing methotrexate monotherapy against methotrexate plus a biologic or biologic monotherapy for RA concluded that methotrexate monotherapy was underdosed in these trials, because none of the 13 trials that met review criteria used a methotrexate dose of 25 mg/week (Ann Rheum Dis. 2016 Apr 18. doi: 10.1136/annrheumdis-2016-209383). The investigators, led by Dr. Josefina Durán of Pontificia Universidad Católica de Chile, Santiago, said they used 25 mg/week as the maximum recommended dose because it is recommended by EULAR and expert opinion.
The contention of bias due to underdosing was met with pointed questioning from Dr. Joel Kremer of the Center for Rheumatology and Albany (N.Y.) Medical College, founder and chief executive officer of the Consortium of Rheumatology Researchers of North America (CORRONA). The authors’ reliance on a 2009 expert consensus statement as the basis for a maximal dose recommendation of 25 mg/week is not “clinical science,” he said in a commentary accompanying the report by Dr. Durán and colleagues (Ann Rheum Dis. 2016 Apr 20. doi: 10.1136/annrheumdis-2016-209505). No trials have shown that pushing the dose from 20 to 25 mg/week gives “significant further clinical improvement without experiencing some possible clinical or laboratory toxicity,” he said, noting that it doesn’t make sense to use a one-size-fits-all approach to methotrexate dosing because “the balance between maximal efficacy and toxicity is highly variable and likely to be quite different in diverse genetic populations.” Also, because 7 of the 13 studies cited by the meta-analysis were from 2010 or earlier, they had no opportunity to incorporate the higher doses recommended by the expert panel into their designs. Rather than reflecting any preplanned bias, he said, “the dosages and route of administration employed by these trials reflect the common empirical practice in vogue at the time those trials were conducted.”
“To me, the Durán study is significantly overstating the data,” Dr. Furst said. He pointed out that in some of the trials the patients “were actually allowed to use higher doses, but [the meta-analysis authors] said they didn’t. The data showed that they were using 20 mg/week as a mean, but some of them went higher because the standard deviation was 4.5 mg. So to say that it wasn’t the full dosing isn’t quite legit. They actually did allow the full dosing.”
“Dr. Kremer’s point is appropriate,” Dr. O’Dell said, “but it’s not the reality. His point is that there are not a lot of trials that say methotrexate gets better when you use 25 mg/week vs. 20 mg/week, or when you use it subcutaneously vs. orally, even though we know the bioavailability is substantially greater. He’s right in a strict sense, but the whole concept, everything we know, the common sense, tells us that pushing methotrexate to higher tolerable doses and using it subcutaneously is better for our patients. All the data that have looked at that supports that.”
Dr. O’Dell serves on advisory boards for AbbVie, Bristol-Myers Squibb, GlaxoSmithKline, Lilly, Coherus, Antares, and Medac. Dr. Furst has received research support and honoraria or fees for serving as a consultant or on the speakers’ bureau for many pharmaceutical companies that manufacture biologics.
VIDEO: Endoscopic pyloromyotomy works for gastroparesis when meds don’t
SAN DIEGO – Gastric peroral endoscopic myotomy, a novel procedure for gastroparesis, restored gastric emptying in 30 refractory patients at Johns Hopkins University, Baltimore, and elsewhere in the largest series to date for the technique.
Drug therapy had failed, and Botox injections and transpyloric stenting weren’t helping much. On gastric emptying scans (GES), patients had around 40% of solid meals in their stomachs at 4 hours. Their gastroparesis was related mostly to diabetes and postoperative complications, but about a quarter of the cases were idiopathic.
Twenty-six patients (87%) responded to gastric peroral endoscopic myotomy (G-POEM) during a median follow-up of 5.5 months. Nausea, vomiting, and abdominal pain resolved or improved in most. On repeat GES in 17 patients, emptying time normalized in about half and improved in a third. Overall, patients had 17% of solid meals in their stomachs at 4 hours. G-POEM took an average of 72 minutes, and patients were in the hospital for about 3 days. One patient in the series developed pneumoperitoneum, and another had a prepyloric ulcer.
“The problem with transpyloric stents is that they migrate,” said investigator Dr. Mouen A. Khashab, director of therapeutic endoscopy at Johns Hopkins University. “G-POEM offers a permanent solution with few side effects. You have to be good at doing POEM in the esophagus first, as a prerequisite.”
In an interview at the annual Digestive Disease Week, Dr. Khashab explained the procedure in detail, as well as how he incorporates it into his practice and the patient population most likely to benefit.
AT DDW® 2016
Building owners, managers must do more to prevent Legionnaires’ disease
Building owners, managers, and administrators of hospitals and other health care facilities around the country are being urged to shore up their water management systems to prevent further outbreaks of Legionnaires’ disease, which is the focus of the Centers for Disease Control and Prevention’s latest Vital Signs report.
“Almost all Legionnaires’ disease outbreaks are preventable with improvements in water system management,” explained CDC Director Dr. Tom Frieden, adding that “At the end of the day, building owners and managers need to take steps to reduce the risk of Legionnaires’ disease [and] work together to reduce this risk and limit the number of people exposed, infected, and hospitalized or, potentially, fatally infected.”
For the report, the CDC investigated 27 outbreaks of Legionnaires’ disease in the United States from 2000 through 2014, which involved a total of 415 cases and 65 fatalities. In each outbreak analysis, the location, source of exposure, and problems with environmental controls of Legionella – the bacterium that causes the disease – were evaluated.
Hotels and resorts accounted for 44% of all outbreaks over the 15-year period, followed by long-term care facilities (19%) and hospitals (15%). However, outbreaks at the latter two location types accounted for 85% of all deaths, while outbreaks at hotels and resorts accounted for only 6%. Potable water was the most common direct cause of Legionella infections, followed by water from cooling towers, hot tubs, industrial equipment, and decorative fountains.
Additionally, 23 of the investigations yielded enough information to determine the exact cause of the outbreak; in every one, the outbreak was attributable to at least one of four issues. The first was process failures, such as not having a proper water system management program in place to handle Legionella; this was found in two-thirds of the outbreaks. The second major cause was human error, such as not replacing filters or tubing as recommended by manufacturers, which was a cause in half of the outbreaks. The third was equipment breakdown, which was found in one-third of the outbreaks. Finally, reasons external to the buildings themselves – such as water main breaks or disruptions caused by nearby construction – factored into one-third of the outbreaks.
“Large, recent outbreaks of Legionnaires’ disease in New York City and Flint, Michigan, have brought attention to the disease and highlight the need for us to understand why these outbreaks happen and how best to prevent them, [which is] why this Vital Signs is targeted to a specific audience that we in public health don’t talk [to] often enough: building owners and managers,” Dr. Frieden said. “It’s not a traditional public health audience, [but] they are the key to environmental controls in buildings that we live in, get our health care in, and work in everyday.”
To that end, Dr. Frieden announced the release of a new CDC toolkit entitled “Developing a Water Management Program to Reduce Legionella Growth & Spread in Buildings: A Practical Guide to Implementing Industry Standards,” which building owners, managers, and administrators can turn to for guidance on how to implement effective water system management protocols in their buildings.
Legionnaires’ disease is a serious lung infection caused by inhalation of the bacterium Legionella, which can be found in water and inhaled as airborne mist. Elderly individuals, as well as those with immune systems suppressed by underlying illness, are at heightened risk for Legionnaires’ disease, which would explain the higher death rates observed at hospitals and long-term care facilities. Dr. Frieden stated that outbreaks and cases of Legionnaires’ disease are on the rise nationally, with about 5,000 infections and 20 outbreaks occurring annually; roughly 10% of infections result in death.
The uptick in recent cases is likely because of “the aging of the population, the increase in chronic illness, [an] increase in immunosuppression through use of medication to treat a variety of conditions [and] an aging plumbing infrastructure and that makes maintenance all the more challenging,” according to Dr. Frieden. “It is also possible that increased use of diagnostic tests and more reliable reporting are contributing to some of the rising rates.”
FROM CDC VITAL SIGNS
Most people who undergo gender reassignment surgery appreciate the results
ORLANDO – Gender reassignment surgery is the most extreme step for those transgender individuals who wish to complete the transformation to the opposite sex. While many transgender people do not opt to take this step, it may be an option for people who still have gender dysphoria after a thorough diagnostic work-up by a mental health professional, hormonal treatment, and having lived in the desired gender role as a “real-life test.”
Dr. Stan Monstrey, of Ghent University Hospital, Belgium, is an experienced gender reassignment surgeon and reported at the annual meeting of the American Association of Clinical Endocrinologists that between 1995 and 2005, he saw about 20-30 new patients a year. But now, he said, “We operate on a weekly basis between a minimum of three and sometimes six or seven transsexuals, so ... in our practice, probably between 90% and 95% are still going the whole way, still want what was called initially binary surgery.”
Transwomen: Male to female
The transformation procedure for male to female begins with feminizing aesthetic procedures, such as reducing the Adam’s apple (laryngeal prominence of the thyroid cartilage) and chin, frontal boss of the forehead, and possibly other facial work such as rhinoplasty. “Sometimes minor changes can have a huge effect on the face of the patient,” Dr. Monstrey said. “This is becoming, in our opinion, increasingly important for transwomen.”
Then, in about 75% of cases, Dr. Monstrey performs at least two surgeries under the same anesthesia – breast augmentation and perineal transformation. He said even after years of hormone therapy, most such patients have only a limited amount of breast tissue but want more prominent breasts. Implants can be placed behind or in front of the pectoralis muscle via inframammary, transaxillary, or occasionally periareolar approaches. Results are immediate, and complications are rare.
Another technique, which has become very popular over the past 5-10 years, is lipofilling to fill defects and depressions in the breasts. Stem cells contained in the fat may help soften scars. But when faced with a patient who had a BRCA1 mutation, the surgeons would not use lipofilling, fearing the potential for breast cancer, and would use prostheses instead (J Sex Med. 2014 Oct;11:2496-9). Still, questions remain about even using hormone treatments in such a patient.
Dr. Monstrey mentioned that in Belgium, breast augmentation for transwomen is considered reconstructive surgery and is always reimbursed whereas it is considered aesthetic surgery and never reimbursed for non-transwomen who want larger breasts. (For transmen, breast amputation is similarly reimbursed.)
The second operation is genital transformation. Basically, the interior of the penis is removed and the skin is invaginated to form a vagina of 8-18.5 cm and a scrotal flap, along with castration and removal of the penile bulb erectile tissue (corpus spongiosum) posteriorly. It is important to protect the rectal wall, which is not very strong. The foreskin becomes the new clitoral hood and inner side of the labia minora, and the clitoris is formed by reducing and transposing the penile glans. If the patient had a small penis and not enough tissue for the reconstruction, skin flaps from various other sites can be used.
Among more than 1,200 patients, 92% could achieve orgasm. Rectovaginal fistulas occurred in 4 patients, 19 needed repositioning of the urethra, 21 needed an operation to lengthen the vagina, and 95 needed aesthetic correction of the vulva. Dr. Monstrey said many patients have asked him when they should tell their new boyfriends about their transformation, meaning that the surgery was quite convincing even with penetrative sex.
If the first operation does not work, another technique is to use an isolated piece of colon or sigmoid bowel, which has been performed completely laparoscopically by a very skilled gastroenterologic surgeon at the hospital in Ghent.
Speaking to a roomful of endocrinologists, Dr. Monstrey told them, “I’ll be the first one to agree with you that indeed puberty blockers are a very good thing. However, we as surgeons are not so enthusiastic about them because … it is impossible to create a normal vagina” because of a lack of available tissue from the underdeveloped penis.
Transmen: Female to male
“Transmen react much better to hormonal therapy than do transwomen,” he said. “If they hide their breasts they really look like men. The disadvantage is that the surgical treatment is much more complex.” The most important operation for them is subcutaneous mastectomy and male contouring. A small, semiareolar incision leaves almost no scar. Most patients still require excision of redundant skin of the breasts.
Phalloplasty is a complex operation aimed at giving the patient an aesthetic phallus, a normal scrotum, the ability to void while standing, and to perform sexual intercourse, all while protecting erogenous sensation, with minimal morbidity and mortality. Dr. Monstrey reported that he has performed 600-700 phalloplasties.
The most common technique has been to use a free vascularized flap from another bodily site with the artery, vein, and nerves to reconnect at the phalloplasty site. Because the skin is very thin on the inner forearm, it is often used and allows forming an inner tube for the urethra and an outer tube for the penis. The surgery may have to be done in three or four stages for the best results. From pictures that Dr. Monstrey showed, it was obvious that the constructed penises were not absolutely natural in appearance, but he said most patients were “rather happy” with them, despite many of these patients being quite demanding. A scrotum is constructed from transposition of the labia minora.
Unfortunately, voiding while standing is often a problem, with 197 of 562 patients (35%) having a fistula and urine leakage, but this issue frequently corrects itself. “More difficult to treat are the strictures with stenosis, which can be a problem voiding,” he said (occurring in 78 of 562 patients). Other complications were 5 complete and 43 partial flap failures, 4 cases of compression syndrome, 58 cases of delayed wound healing, and 15 cases of transient ischemia. Flap failures occurred mainly in smokers, “so we don’t operate on smokers anymore,” he said.
One year after the constructive surgery, a penile prosthesis is implanted for those who want it, allowing sexual intercourse. Most individuals had orgasmic function, not because of reconnected nerves in the flap, but, Dr. Monstrey said he believes, because the clitoris, placed beneath the phallus, is denuded and stimulated during sexual activity. He said the problem is that the prostheses are usually intended for elderly men “who have sex a couple of times a month and who have a normal anatomy.” Young transmen may engage in more sexual activity, “so we have a lot of problems with exposure [of the prosthesis], infection, technical defects, and so on,” he said.
A technique gaining popularity is to use a skin flap from the groin area to make a urethra and one from the thigh to construct a penis. Although a penile transplant has recently been performed for a patient who had lost his penis to cancer, transplants are not being considered at this point, both for surgical technical reasons and because of a need for lifelong immunosuppressive drugs.
Proper referrals and counseling
The World Professional Association for Transgender Health, in version 7 of its Standards of Care guidelines, recommends one mental health professional referral for breast surgery and two such referrals for genital surgery. The issue of possible parenthood should be discussed with patients, along with early counseling about fertility options. The age of majority and consent in different countries is important. Dr. Monstrey said genital surgery may be possible before the age of 18 years if all members of a multidisciplinary team of health professionals agree on a case-by-case basis that the adolescent can understand the risks, benefits, and alternatives to the surgery with the same degree of competence as someone 18 years of age or older.
Dr. Monstrey reported having no financial disclosures.
Dr. Monstrey reported having no financial disclosures.
ORLANDO – Gender reassignment surgery is the most extreme step for those transgender individuals who wish to complete the transformation to the opposite sex. While many transgender people do not opt to take this step, it may be an option for those who still have gender dysphoria after a thorough diagnostic work-up by a mental health professional, hormonal treatment, and a “real-life test” of living in the desired gender role.
Dr. Stan Monstrey of Ghent University Hospital, Belgium, an experienced gender reassignment surgeon, reported at the annual meeting of the American Association of Clinical Endocrinologists that between 1995 and 2005, he saw about 20-30 new patients a year. But now, he said, “We operate on a weekly basis between a minimum of three and sometimes six or seven transsexuals, so ... in our practice, probably between 90% and 95% are still going the whole way, still want what was called initially binary surgery.”
Transwomen: Male to female
The transformation procedure for male to female begins with feminizing aesthetic procedures, such as reducing the Adam’s apple (laryngeal prominence of the thyroid cartilage) and chin, frontal boss of the forehead, and possibly other facial work such as rhinoplasty. “Sometimes minor changes can have a huge effect on the face of the patient,” Dr. Monstrey said. “This is becoming, in our opinion, increasingly important for transwomen.”
Then, in about 75% of cases, Dr. Monstrey performs at least two surgeries under the same anesthesia – breast augmentation and perineal transformation. He said even after years of hormone therapy, most such patients have only a limited amount of breast tissue but want more prominent breasts. Implants can be placed behind or in front of the pectoralis muscle via inframammary, transaxillary, or occasionally periareolar approaches. Results are immediate, and complications are rare.
Another technique, which has become very popular over the past 5-10 years, is lipofilling to fill defects and depressions in the breasts. Stem cells contained in the fat may help soften scars. But when faced with a patient who had a BRCA1 mutation, the surgeons would not use lipofilling, fearing the potential for breast cancer, and would use prostheses instead (J Sex Med. 2014 Oct;11:2496-9). Still, questions remain about even using hormone treatments in such a patient.
Dr. Monstrey mentioned that in Belgium, breast augmentation for transwomen is considered reconstructive surgery and is always reimbursed, whereas it is considered aesthetic surgery and never reimbursed for non-transwomen who want larger breasts. (For transmen, breast amputation is similarly reimbursed.)
The second operation is genital transformation. Basically, the interior of the penis is removed and the skin is invaginated to form a vagina of 8-18.5 cm and a scrotal flap, along with castration and removal of the penile bulb erectile tissue (corpus spongiosum) posteriorly. It is important to protect the rectal wall, which is not very strong. The foreskin becomes the new clitoral hood and inner side of the labia minora, and the clitoris is formed by reducing and transposing the penile glans. If the patient had a small penis and not enough tissue for the reconstruction, skin flaps from various other sites can be used.
Among more than 1,200 patients, 92% could achieve orgasm. Rectovaginal fistulas occurred in 4 patients, 19 needed repositioning of the urethra, 21 needed an operation to lengthen the vagina, and 95 needed aesthetic correction of the vulva. Dr. Monstrey said many patients have asked him when they should tell their new boyfriends about their transformation, meaning that the surgery was quite convincing even with penetrative sex.
If the first operation does not work, another technique is to use an isolated piece of colon or sigmoid bowel, which has been performed completely laparoscopically by a very skilled gastroenterologic surgeon at the hospital in Ghent.
Speaking to a roomful of endocrinologists, Dr. Monstrey told them, “I’ll be the first one to agree with you that indeed puberty blockers are a very good thing. However, we as surgeons are not so enthusiastic about them because … it is impossible to create a normal vagina” because of a lack of available tissue from the underdeveloped penis.
Transmen: Female to male
“Transmen react much better to hormonal therapy than do transwomen,” he said. “If they hide their breasts they really look like men. The disadvantage is that the surgical treatment is much more complex.” The most important operation for them is subcutaneous mastectomy and male contouring. A small, semiareolar incision leaves almost no scar. Most patients still require excision of redundant skin of the breasts.
Phalloplasty is a complex operation aimed at giving the patient an aesthetic phallus, a normal scrotum, the ability to void while standing, and the ability to have sexual intercourse, all while preserving erogenous sensation, with minimal morbidity and mortality. Dr. Monstrey reported that he has performed 600-700 phalloplasties.
The most common technique has been to use a free vascularized flap from another bodily site with the artery, vein, and nerves to reconnect at the phalloplasty site. Because the skin is very thin on the inner forearm, it is often used and allows forming an inner tube for the urethra and an outer tube for the penis. The surgery may have to be done in three or four stages for the best results. From pictures that Dr. Monstrey showed, it was obvious that the constructed penises were not absolutely natural in appearance, but he said most patients were “rather happy” with them, despite many of these patients being quite demanding. A scrotum is constructed from transposition of the labia minora.
Unfortunately, voiding while standing is often a problem, with 197 out of 562 patients (35%) having a fistula and urine leakage, but this issue frequently corrects itself. “More difficult to treat are the strictures with stenosis, which can be a problem voiding,” he said (occurring in 78 of 562 patients, or 14%). Other complications were 5 complete and 43 partial flap failures, 4 cases of compression syndrome, 58 cases of delayed wound healing, and 15 cases of transient ischemia. Flap failures occurred mainly in smokers, “so we don’t operate on smokers anymore,” he said.
One year after the constructive surgery, a penile prosthesis is implanted for those who want it, allowing sexual intercourse. Most individuals had orgasmic function, not because of reconnected nerves in the flap, but, Dr. Monstrey said he believes, because the clitoris, placed beneath the phallus, is denuded and stimulated during sexual activity. He said the problem is that the prostheses are usually intended for elderly men “who have sex a couple of times a month and who have a normal anatomy.” Young transmen may engage in more sexual activity, “so we have a lot of problems with exposure [of the prosthesis], infection, technical defects, and so on,” he said.
A technique gaining popularity is to use a skin flap from the groin area to make a urethra and one from the thigh to construct a penis. Although a penile transplant has recently been performed for a patient who had lost his penis to cancer, transplants are not being considered at this point, both for surgical technical reasons and because of a need for lifelong immunosuppressive drugs.
Proper referrals and counseling
The World Professional Association for Transgender Health, in version 7 of its Standards of Care, recommends one mental health professional referral for the breast surgery and two such referrals for genital surgery. The issue of possible parenthood should be discussed with patients, along with early counseling about fertility options. The age of majority and consent in different countries is important. Dr. Monstrey said genital surgery may be possible before the age of 18 years if all members of a multidisciplinary team of health professionals agree on a case-by-case basis that the adolescent can understand the risks, benefits, and alternatives to the surgery with the same degree of competence as someone 18 years of age or older.
Dr. Monstrey reported having no financial disclosures.
AACE 2016
Expert simplifies diagnosis of endocrine hypertension
ORLANDO – The diagnosis of hypertension with its origin in the endocrine system may appear complex, but it does not have to be. Primary aldosteronism may be underappreciated and underdiagnosed. On the other hand, catecholamine-secreting tumors are rare, but they often come to mind in making a diagnosis of endocrine hypertension. Dr. William Young Jr., professor of medicine at the Mayo Clinic, Rochester, Minn., presented cases in a lively session of audience participation at the annual meeting of the American Association of Clinical Endocrinologists. Later, Dr. Young summarized some of the key points in an interview, which has been edited for brevity.
Frontline Medical News: What is the endocrinologist’s role in working up the patient who has hypertension of suspected endocrine origin?
Dr. William Young Jr.: The first is knowing when to suspect endocrine hypertension. The most common form of endocrine hypertension is primary aldosteronism. So this is the adrenal-dependent autonomous production of aldosterone, which leads to high blood pressure, volume expansion, and sometimes hypokalemia. One of the concepts that many clinicians forget is that only about 30% of patients with primary aldosteronism present with hypokalemia. So 70% of patients with this disorder don’t have hypokalemia. They look like any other person with high blood pressure.
So when should we look for primary aldosteronism? Onset of high blood pressure at a young age, for example, less than age 30, drug resistant hypertension – so three drugs [with] poor control. Twenty percent of those patients will prove to have primary aldosteronism. Simply poorly controlled hypertension is another group; [or] family history of primary aldosteronism, so all first degree relatives should be tested. Or a patient who has hypertension and has had an incidental discovery of an adrenal mass should also be tested for primary aldosteronism.
Unfortunately, most primary care providers ... think that this is a complicated and dense endocrine disorder, and they frequently will not look for it, but it’s actually very simple. Some of the complexities are historical in nature in that when this disorder was first described, several rules were made for what medications a patient could be on, for example. And it’s difficult to comply with those rules. For example, if you have a patient who’s on five drugs and has poor control, you’re not going to switch him to the two drugs that are recommended because they are weak antihypertensives. It wouldn’t be ethical to do so. [The two drug classes are the calcium channel blocker verapamil and the alpha-1 antagonists doxazosin (Cardura) and terazosin (Hytrin).]
So the best thing to do regardless of what drugs the patient is on – it doesn’t matter if they’re on ACE inhibitors or angiotensin-receptor blockers or diuretics – is just to get a morning blood sample for aldosterone and plasma renin activity. If the aldosterone is high or generous, greater than 15 ng/dL, and the plasma renin activity is less than 1 ng/mL per hour, that’s a positive case detection test.
That doesn’t prove the patient has primary aldosteronism. The sensitivity/specificity of aldosterone and renin case detection testing is about 75%. So most patients need confirmatory testing, which would either be the saline infusion test or the 24-hour urine for aldosterone on a high-sodium diet. And once primary aldosteronism is confirmed, then we would do an adrenal-directed CT scan.
The problem with the findings in the adrenal glands on CT is that the prevalence of adrenal nodularity increases with age. So people in their 60s and 70s can have adrenal nodules that have nothing to do with aldosterone production. If, however, the patient is less than age 35, CT shows a unilateral macroadenoma with a perfectly normal-appearing contralateral adrenal, and the patient has marked primary aldosteronism – spontaneous hypokalemia, plasma aldosterone over 30 ng/dL – that subset of patients could go straight to surgery and skip adrenal vein sampling. For everyone else over age 35 who wants to pursue the surgical option, adrenal vein sampling is a key test.
FMN: Is there anything that rules out primary aldosteronism?
Dr. Young: If the plasma aldosterone level is less than 10 ng/dL it makes primary aldosteronism very unlikely, and if the renin level is higher than 1 ng/mL per hour, that makes primary aldosteronism very unlikely.
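Taken together, the rule-in thresholds from the earlier answer and the rule-out thresholds above form a simple decision rule. The sketch below is only an illustration of that logic as quoted in the interview; the function name and return labels are invented, and this is of course not a clinical tool:

```python
def aldosteronism_screen(aldosterone_ng_dl: float, renin_ng_ml_hr: float) -> str:
    """Case-detection logic for primary aldosteronism (PA), per the
    cutoffs quoted in the interview. Illustrative only."""
    # Rule-out: low aldosterone or unsuppressed renin makes PA very unlikely.
    if aldosterone_ng_dl < 10 or renin_ng_ml_hr > 1:
        return "primary aldosteronism very unlikely"
    # Rule-in: generous aldosterone with suppressed renin is a positive screen,
    # which still requires confirmatory testing (saline infusion or 24-hour urine).
    if aldosterone_ng_dl > 15 and renin_ng_ml_hr < 1:
        return "positive screen - confirmatory testing needed"
    return "indeterminate - clinical judgment"

print(aldosteronism_screen(22.0, 0.4))  # positive screen
print(aldosteronism_screen(8.0, 0.4))   # very unlikely
```

Note that a positive screen is only case detection: as the next answer explains, most screen-positive patients still need a confirmatory test before a CT is ordered.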
FMN: What about working up pheochromocytoma?
Dr. Young: Unlike primary aldosteronism, which clinicians don’t look for enough, pheochromocytoma is something they look for a lot, and it’s really rare. Between 0.1% and 0.01% of the hypertensive population will prove to have pheochromocytoma.
The false positive rate with our case detection testing of plasma metanephrines is about 15%. So based on how rare pheochromocytoma is and a 15% false positive rate with plasma metanephrines, 97% of patients with elevated plasma normetanephrines do not have pheochromocytoma.
So we have a real problem with case detection testing. The 24-hour urine metanephrines and catecholamines using appropriate reference ranges are probably a better way to do case detection testing for pheochromocytoma, but there’s still a false positive rate with urinary normetanephrine.
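The base-rate problem Dr. Young describes can be sketched with Bayes' rule. The calculation below assumes near-perfect test sensitivity (an assumption for illustration, not a figure from the interview) and the quoted ~15% false positive rate; at the low prevalences he cites, almost all positive plasma metanephrine results are false positives, which is the point he is making:

```python
def ppv(prevalence: float, sensitivity: float, false_positive_rate: float) -> float:
    """Positive predictive value via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = false_positive_rate * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Sensitivity of 0.99 is an illustrative assumption; the ~15% false positive
# rate and the 0.01%-0.1% prevalence range are from the interview.
for prev in (0.0001, 0.001):
    p = ppv(prev, 0.99, 0.15)
    print(f"prevalence {prev:.2%}: PPV {p:.2%}, "
          f"so {1 - p:.1%} of positive results are false positives")
```

However the sensitivity is set, the conclusion is robust: with a disease this rare, a 15% false positive rate swamps the true positives.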
Never mistake a benign adrenal adenoma for a pheo. In terms of the imaging phenotype, pheos are dense and vascular. As they enlarge, they get cystic hemorrhagic areas within them.
FMN: What goes on with other paragangliomas?
Dr. Young: Pheochromocytoma is the term we use when you have a catecholamine-secreting tumor in the adrenal gland itself. It develops in the adrenal medulla. Paraganglioma is an identical tumor, but it’s outside of the adrenal gland. It’s somewhere in the pelvis, could be in the chest, could be in the skull base, or neck. Most commonly it’s in the abdomen. So the case detection testing is the same.
But patients we should consider testing for pheochromocytoma and paraganglioma are those with paroxysmal symptoms like episodes of pounding heartbeat, sweating, headache, tremor, and pallor. Young people with new onset hypertension, hypertension that’s poorly controlled, and vascular adrenal masses should also be tested for pheochromocytoma.
FMN: Are there things that can confound any of these tests we discussed or any drugs that should be noted that could get in the way?
Dr. Young: For pheochromocytoma, the good news is now that most reference labs use tandem mass spectrometry technology, the hypertension drugs that potentially interfered in the past like labetalol and sotalol no longer interfere. So these days the clinician doesn’t need to stop any blood pressure–related medications.
The medications that can cause false positive testing are primarily tricyclic antidepressants. Flexeril, which is cyclobenzaprine, is commonly used to treat fibromyalgia, and that is a tricyclic antidepressant, and that will cause false positive testing ... with norepinephrine and normetanephrine. Tricyclic antidepressants can increase those levels three, four, or fivefold. Levodopa, which is in Sinemet, can cause false positive testing. Antipsychotics can cause false positive testing, and MAO inhibitors ... So the clinician shouldn’t worry about blood pressure medications but should worry about the other medications the patient is taking.
FMN: When someone looks at laboratory values, should you be comparing these values to people with hypertension who do not have these conditions, and do labs have adjusted values?
Dr. Young: That’s a good question, and in the Mayo medical lab, our reference range that we use is based on patients who were tested for pheochromocytoma [and] proved not to have it. So our cutoffs are 50% to 100% higher than some other reference labs.
These other reference labs use normal laboratory volunteers who have normal blood pressure and who are taking no medications, and I’ve never tested such a patient for pheochromocytoma, so why would we use that group of people to determine our reference range? So we should use reference ranges based on patients tested for pheo but who prove to not have pheo. And that leads to higher accuracy of our case detection tests.
FMN: What are the treatments for these conditions and follow-up? I take it if there’s an adrenal mass, you get a surgeon, and I think you also noted that you need an experienced endocrine surgeon.
Dr. Young: For primary aldosteronism, if the patient has a unilateral aldosterone-producing adenoma, the outstanding treatment is laparoscopic adrenalectomy. Patients are in the hospital one night, [and] they’re back at work in 7-10 days, but that does require an expert laparoscopic adrenal surgeon. And in the United States we have a 1-year endocrine surgery program. It’s optimal that patients are referred to surgeons who have done that unique training.
For pheochromocytoma less than 8-9 cm, laparoscopic adrenalectomy with an experienced endocrine surgeon is an excellent treatment option. When the adrenal pheochromocytoma is larger than 8 or 9 cm, especially if it’s cystic, the surgeon may want to do it as open [surgery] because it’s critical that the capsule of the pheochromocytoma is not ruptured intraoperatively. If it is ruptured, a benign pheochromocytoma has just been transformed to malignant, incurable disease.
If it’s a paraganglioma, typically that requires an open operation whether it’s in the neck or the chest or the pelvis or lower abdomen.
FMN: What is the follow-up to any of these conditions?
Dr. Young: The follow-up once you’ve resected an adrenal pheochromocytoma depends on whether there is a germline mutation. If there is a germline mutation, for example, succinate dehydrogenase mutation [SDH], these patients are at higher risk for developing recurrent pheochromocytoma or paraganglioma, and they’re at risk for developing malignant pheochromocytoma or paraganglioma.
One of our challenges is when we resect a pheochromocytoma or paraganglioma, the pathologist doesn’t have the tools to tell us if it’s benign or malignant ... So all patients need lifelong biochemical follow-up, basically a 24-hour urine for metanephrines and catecholamines annually or plasma metanephrines for life.
If the patients have an underlying mutation like succinate dehydrogenase, they’re at risk for developing nonfunctioning paragangliomas. So these patients need periodic imaging in addition to the annual biochemical testing. For example, if a patient had an abdominal paraganglioma with an SDHB [succinate dehydrogenase complex iron sulfur subunit B], we would do abdominal MRI scans every 1-2 years. That would include the pelvis. We would screen for paragangliomas elsewhere with MRI of the skull base and neck and the chest every 3-5 years, and a total body scan every 5 years or so, either FDG-PET [18F-fluorodeoxyglucose positron emission tomography] scan or 123I-MIBG [metaiodobenzyl-guanidine] scan.
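The lifelong biochemical follow-up and the imaging schedule for an SDHB carrier described above amount to a small lookup table. The intervals below are those stated in the interview; the data structure and names are just an illustrative summary, not a protocol:

```python
# Surveillance after resection of an abdominal paraganglioma in an SDHB
# carrier, per the intervals quoted in the interview (years between studies).
sdhb_surveillance = {
    "24-hour urine or plasma metanephrines": (1, 1),  # annually, lifelong
    "MRI abdomen and pelvis": (1, 2),                 # every 1-2 years
    "MRI skull base, neck, and chest": (3, 5),        # every 3-5 years
    "Total-body FDG-PET or 123I-MIBG scan": (5, 5),   # about every 5 years
}

for study, (lo, hi) in sdhb_surveillance.items():
    interval = f"every {lo} year(s)" if lo == hi else f"every {lo}-{hi} years"
    print(f"{study}: {interval}")
```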
FMN: Is there anything that is particularly new in the past couple of years?
Dr. Young: Some of the innovations lately have been in the area of metastatic pheochromocytoma and paraganglioma. These are in patients who have limited metastatic disease that’s localized to bone or to liver, and we’ve been using ablative therapies. This includes cryoablation ... and radiofrequency ablation, which kills the tumor with heat, and that’s very effective for patients who have limited metastatic lesions in the bone or liver.
For patients with complex tumors in difficult areas of the body, for example, in the mediastinum or surrounding the heart, we’ve been using 3D printer technology to print [a replica of the structures and] the tumor preoperatively, and this assists in surgical planning.
FMN: And what do you see coming?
Dr. Young: I think we’re getting close to something near curative for patients with malignant pheochromocytoma and paraganglioma. We’re understanding the basic biology better [and] pathophysiology, and I think that’s going to lead to some novel treatments.
Also, what I see coming is that we’ll be able to use germline mutation information and somatic tumor mutation information to guide us on specific imaging modalities, to guide us on forms of preventative therapy so that we prevent the paraganglioma from ever developing and also provide us with additional treatment options.
For patients with complex tumors in difficult areas of the body, for example, in the mediastinum or surrounding the heart, we’ve been using 3D printer technology to print [a replica of the structures and] the tumor preoperatively, and this assists in surgical planning.
FMN: And what do you see coming?
Dr. Young: I think we’re getting close to something near curative for patients with malignant pheochromocytoma and paraganglioma. We’re understanding the basic biology better [and] pathophysiology, and I think that’s going to lead to some novel treatments.
Also, what I see coming is that we’ll be able to use germline mutation information and somatic tumor mutation information to guide us on specific imaging modalities, to guide us on forms of preventative therapy so that we prevent the paraganglioma from ever developing and also provide us with additional treatment options.
ORLANDO – The diagnosis of hypertension with its origin in the endocrine system may appear complex, but it does not have to be. Primary aldosteronism may be underappreciated and underdiagnosed. On the other hand, catecholamine-secreting tumors are rare, but they often come to mind in making a diagnosis of endocrine hypertension. Dr. William Young Jr., professor of medicine at the Mayo Clinic, Rochester, Minn., presented cases in a lively session with audience participation at the annual meeting of the American Association of Clinical Endocrinologists. Later, Dr. Young summarized some of the key points in an interview, which has been edited for brevity.
Frontline Medical News: What is the endocrinologist’s role in working up the patient who has hypertension of suspected endocrine origin?
Dr. William Young Jr.: The first is knowing when to suspect endocrine hypertension. The most common form of endocrine hypertension is primary aldosteronism. So this is the adrenal-dependent autonomous production of aldosterone, which leads to high blood pressure, volume expansion, and sometimes hypokalemia. One of the concepts that many clinicians forget is that only about 30% of patients with primary aldosteronism present with hypokalemia. So 70% of patients with this disorder don’t have hypokalemia. They look like any other person with high blood pressure.
So when should we look for primary aldosteronism? Onset of high blood pressure at a young age, for example, less than age 30, drug resistant hypertension – so three drugs [with] poor control. Twenty percent of those patients will prove to have primary aldosteronism. Simply poorly controlled hypertension is another group; [or] family history of primary aldosteronism, so all first degree relatives should be tested. Or a patient who has hypertension and has had an incidental discovery of an adrenal mass should also be tested for primary aldosteronism.
Unfortunately, most primary care providers ... think that this is a complicated and dense endocrine disorder, and they frequently will not look for it, but it’s actually very simple. Some of the complexities are historical in nature in that when this disorder was first described, several rules were made for what medications a patient could be on, for example. And it’s difficult to comply with those rules. For example, if you have a patient who’s on five drugs and has poor control, you’re not going to switch him to the two drugs that are recommended because they are weak antihypertensives. It wouldn’t be ethical to do so. [The two drug classes are the calcium channel blocker verapamil and the alpha-1 antagonists doxazosin (Cardura) and terazosin (Hytrin).]
So the best thing to do regardless of what drugs the patient is on – it doesn’t matter if they’re on ACE inhibitors or angiotensin-receptor blockers or diuretics – is just to get a morning blood sample for aldosterone and plasma renin activity. If aldosterone is high or generous, greater than 15 ng/dL, and the plasma renin activity is less than 1 ng/mL per hour, that’s a positive case detection test.
That doesn’t prove the patient has primary aldosteronism. The sensitivity/specificity of aldosterone and renin case detection testing is about 75%. So most patients need confirmatory testing, which would either be the saline infusion test or the 24-hour urine for aldosterone on a high-sodium diet. And once primary aldosteronism is confirmed, then we would do an adrenal-directed CT scan.
The problem with the findings in the adrenal glands on CT is that the prevalence of adrenal nodularity increases with age. So people in their 60s and 70s can have adrenal nodules that have nothing to do with aldosterone production. However, if the patient is less than age 35, CT shows a unilateral macroadenoma, the contralateral adrenal is perfectly normal appearing, and the patient has marked primary aldosteronism – spontaneous hypokalemia, plasma aldosterone over 30 ng/dL – that subset of patients could go straight to surgery and skip adrenal vein sampling. For everyone else over age 35 who wants to pursue the surgical option, adrenal vein sampling is a key test.
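The case-detection rule described above reduces to two thresholds. A minimal sketch (the function name and structure are illustrative, not part of any published algorithm):

```python
def aldo_screen_positive(aldosterone_ng_dl: float, pra_ng_ml_hr: float) -> bool:
    """Positive case-detection test for primary aldosteronism per the
    thresholds Dr. Young describes: aldosterone greater than 15 ng/dL
    AND plasma renin activity less than 1 ng/mL per hour.

    A positive screen is not diagnostic; confirmatory testing (saline
    infusion, or 24-hour urine aldosterone on a high-sodium diet) is
    still needed."""
    return aldosterone_ng_dl > 15 and pra_ng_ml_hr < 1
```

As the interview notes, the sensitivity/specificity of this screen is only about 75%, which is why a positive result leads to confirmatory testing rather than straight to imaging or surgery.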
FMN: Is there anything that rules out primary aldosteronism?
Dr. Young: If the plasma aldosterone level is less than 10 ng/dL it makes primary aldosteronism very unlikely, and if the renin level is higher than 1 ng/mL per hour, that makes primary aldosteronism very unlikely.
FMN: What about working up pheochromocytoma?
Dr. Young: Unlike primary aldosteronism, which clinicians don’t look for enough, pheochromocytoma is something they look for a lot, and it’s really rare. Between 0.1% and 0.01% of the hypertensive population will prove to have pheochromocytoma.
The false positive rate with our case detection testing of plasma metanephrines is about 15%. So based on how rare pheochromocytoma is and a 15% false positive rate with plasma metanephrines, 97% of patients with elevated plasma normetanephrines do not have pheochromocytoma.
So we have a real problem with case detection testing. The 24-hour urine metanephrines and catecholamines using appropriate reference ranges are probably a better way to do case detection testing for pheochromocytoma, but there’s still a false positive rate with urinary normetanephrine.
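The 97% figure is a consequence of Bayes’ rule when screening for a rare disease with an imperfect test. A minimal sketch, assuming for illustration a test sensitivity of about 97% and a pre-test prevalence of roughly 0.5% among patients actually referred for testing (both numbers are assumptions; the interview gives only the ~15% false positive rate and the 0.01%-0.1% population prevalence):

```python
def share_of_positives_without_disease(prevalence, sensitivity, false_positive_rate):
    """1 - positive predictive value: the fraction of positive screens
    that are false positives, computed with Bayes' rule."""
    true_positives = sensitivity * prevalence
    false_positives = false_positive_rate * (1 - prevalence)
    return false_positives / (true_positives + false_positives)

# Assumed illustrative inputs: 0.5% pre-test prevalence, 97% sensitivity,
# 15% false positive rate.
print(round(share_of_positives_without_disease(0.005, 0.97, 0.15), 2))  # prints 0.97
```

At the even lower 0.01%-0.1% population prevalence quoted above, the same arithmetic makes the predictive value worse still, which is the point of the passage: an isolated mild elevation usually is not pheochromocytoma.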
Never mistake a benign adrenal adenoma for a pheo. In terms of the imaging phenotype, pheos are dense and vascular. As they enlarge, they get cystic hemorrhagic areas within them.
FMN: What goes on with other paragangliomas?
Dr. Young: Pheochromocytoma is the term we use when you have a catecholamine-secreting tumor in the adrenal gland itself. It develops in the adrenal medulla. Paraganglioma is an identical tumor, but it’s outside of the adrenal gland. It’s somewhere in the pelvis, could be in the chest, could be in the skull base, or neck. Most commonly it’s in the abdomen. So the case detection testing is the same.
But patients we should consider testing for pheochromocytoma and paraganglioma are those with paroxysmal symptoms like episodes of pounding heartbeat, sweating, headache, tremor, and pallor. Young people with new onset hypertension, hypertension that’s poorly controlled, and vascular adrenal masses should also be tested for pheochromocytoma.
FMN: Are there things that can confound any of these tests we discussed or any drugs that should be noted that could get in the way?
Dr. Young: For pheochromocytoma, the good news is now that most reference labs use tandem mass spectrometry technology, the hypertension drugs that potentially interfered in the past like labetalol and sotalol no longer interfere. So these days the clinician doesn’t need to stop any blood pressure–related medications.
The medications that can cause false positive testing are primarily tricyclic antidepressants. Flexeril, which is cyclobenzaprine, is commonly used to treat fibromyalgia, and that is a tricyclic antidepressant, and that will cause false positive testing ... with norepinephrine and normetanephrine. Tricyclic antidepressants can increase those levels three, four, or fivefold. Levodopa, which is in Sinemet, can cause false positive testing. Antipsychotics can cause false positive testing, and MAO inhibitors ... So the clinician shouldn’t worry about blood pressure medications but should worry about the other medications the patient is taking.
FMN: When someone looks at laboratory values, should you be comparing these values to people with hypertension who do not have these conditions, and do labs have adjusted values?
Dr. Young: That’s a good question, and in the Mayo medical lab, our reference range that we use is based on patients who were tested for pheochromocytoma [and] proved not to have it. So our cutoffs are 50% to 100% higher than some other reference labs.
These other reference labs use normal laboratory volunteers who have normal blood pressure and who are taking no medications, and I’ve never tested such a patient for pheochromocytoma, so why would we use that group of people to determine our reference range? So we should use reference ranges based on patients tested for pheo but who prove to not have pheo. And that leads to higher accuracy of our case detection tests.
FMN: What are the treatments for these conditions and follow-up? I take it if there’s an adrenal mass, you get a surgeon, and I think you also noted that you need an experienced endocrine surgeon.
Dr. Young: For primary aldosteronism, if the patient has a unilateral aldosterone-producing adenoma, the outstanding treatment is laparoscopic adrenalectomy. Patients are in the hospital one night, [and] they’re back at work in 7-10 days, but that does require an expert laparoscopic adrenal surgeon. And in the United States we have a 1-year endocrine surgery program. It’s optimal that patients are referred to surgeons who have done that unique training.
For pheochromocytoma less than 8-9 cm, laparoscopic adrenalectomy with an experienced endocrine surgeon is an excellent treatment option. When the adrenal pheochromocytoma is larger than 8 or 9 cm, especially if it’s cystic, the surgeon may want to do it as open [surgery] because it’s critical that the capsule of the pheochromocytoma is not ruptured intraoperatively. If it is ruptured, a benign pheochromocytoma has just been transformed to malignant, incurable disease.
If it’s a paraganglioma, typically that requires an open operation whether it’s in the neck or the chest or the pelvis or lower abdomen.
FMN: What is the follow-up to any of these conditions?
Dr. Young: The follow-up once you’ve resected an adrenal pheochromocytoma depends on whether there is a germline mutation. If there is a germline mutation, for example, succinate dehydrogenase mutation [SDH], these patients are at higher risk for developing recurrent pheochromocytoma or paraganglioma, and they’re at risk for developing malignant pheochromocytoma or paraganglioma.
One of our challenges is when we resect a pheochromocytoma or paraganglioma, the pathologist doesn’t have the tools to tell us if it’s benign or malignant ... So all patients need lifelong biochemical follow-up, basically a 24-hour urine for metanephrines and catecholamines annually or plasma metanephrines for life.
If the patients have an underlying mutation like succinate dehydrogenase, they’re at risk for developing nonfunctioning paragangliomas. So these patients need periodic imaging in addition to the annual biochemical testing. For example, if a patient had an abdominal paraganglioma with an SDHB [succinate dehydrogenase complex iron sulfur subunit B], we would do abdominal MRI scans every 1-2 years. That would include the pelvis. We would screen for paragangliomas elsewhere with MRI of the skull base and neck and the chest every 3-5 years, and a total body scan every 5 years or so, either FDG-PET [18F-fluorodeoxyglucose positron emission tomography] scan or 123I-MIBG [metaiodobenzyl-guanidine] scan.
FMN: Is there anything that is particularly new in the past couple of years?
Dr. Young: Some of the innovations lately have been in the area of metastatic pheochromocytoma and paraganglioma. These are in patients who have limited metastatic disease that’s localized to bone or to liver, and we’ve been using ablative therapies. This includes cryoablation ... and radiofrequency ablation, which kills the tumor with heat, and that’s very effective for patients who have limited metastatic lesions in the bone or liver.
For patients with complex tumors in difficult areas of the body, for example, in the mediastinum or surrounding the heart, we’ve been using 3D printer technology to print [a replica of the structures and] the tumor preoperatively, and this assists in surgical planning.
FMN: And what do you see coming?
Dr. Young: I think we’re getting close to something near curative for patients with malignant pheochromocytoma and paraganglioma. We’re understanding the basic biology better [and] pathophysiology, and I think that’s going to lead to some novel treatments.
Also, what I see coming is that we’ll be able to use germline mutation information and somatic tumor mutation information to guide us on specific imaging modalities, to guide us on forms of preventative therapy so that we prevent the paraganglioma from ever developing and also provide us with additional treatment options.
EXPERT ANALYSIS AT AACE 2016
TAVR cerebral protection device appears safe, effective
PARIS – The TriGuard neuroprotection device for use during transcatheter aortic valve replacement effectively prevented strokes while raising no safety concerns in a pooled analysis of three controlled trials, according to Dr. Alexandra J. Lansky.
The TriGuard, which is investigational in the United States but approved in Europe, also significantly reduced the risk of central nervous system infarction, as assessed by diffusion-weighted MRI. Moreover, when imaging did show CNS infarcts in patients with the TriGuard in place during their TAVR (transcatheter aortic valve replacement), the total brain lesion volume was about 40% less than in controls without the neuroprotection device, according to Dr. Lansky, professor of medicine and director of the cardiovascular clinical research program at Yale University in New Haven, Conn.
“Essentially what’s happening is that we’re reducing with this device the frequency of CNS infarctions, and also reducing the size of the lesions when they are present,” she said at the annual congress of the European Association of Percutaneous Cardiovascular Interventions.
The TriGuard is designed to fill an unmet need for stroke protection in TAVR patients. The incidence of clinical stroke within 30 days after TAVR in recent randomized controlled trials is 1.5%-6%. But there is clear evidence of underreporting of stroke in these trials. When neurologists examine TAVR patients or the patients are evaluated by serial testing using the NIH Stroke Scale plus brain imaging, the 30-day stroke rates are 15%-28%, according to the cardiologist.
“We know that about 50% of these strokes happen in the periprocedural period, and stroke is one of the strongest predictors of mortality, conferring a three- to ninefold increased risk,” Dr. Lansky emphasized.
She presented a pooled analysis including 59 TriGuard recipients and 83 controls who underwent TAVR in the DEFLECT I and III trials and the NeuroTAVR registry. They were evaluated using the NIH Stroke Scale before TAVR and again at 4 and 30 days post procedure. In addition, they underwent brain imaging via diffusion-weighted MRI 4 days post TAVR.
Stroke as defined by the Valve Academic Research Consortium–2 (VARC2) criteria occurred in none of the TriGuard group but in 6% of controls. And stroke as defined by the American Stroke Association, which requires a worsening score on the serial NIH Stroke Scale measurements plus imaging evidence of CNS infarction, occurred in 0 TriGuard-protected patients and in 19% of controls.
The incidence of CNS infarction on MRI was 92% in controls and 72% in the TriGuard group. Thus, 28% of patients with the TriGuard in place developed no brain infarct lesions at all; that’s a first for any TAVR neuroprotection device, according to Dr. Lansky.
In patients with CNS lesions, the total lesion volume was 101 mm³ in the TriGuard group, compared with 174 mm³ in the controls. The average lesion volume was 25 mm³ in the TriGuard group versus 43 mm³ in the controls.
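The “about 40% less” total lesion volume cited earlier follows directly from these figures; a quick check of the arithmetic:

```python
triguard_total = 101.0   # total lesion volume, TriGuard group (mm^3)
control_total = 174.0    # total lesion volume, controls (mm^3)
relative_reduction = 1 - triguard_total / control_total
print(f"{relative_reduction:.0%}")  # prints 42%
```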
TriGuard is a relatively simple device consisting of a single-wire nitinol frame and mesh filter with a pore size of 130 mcm. It’s designed to deflect emboli during TAVR while allowing maximal cerebral blood flow. After being delivered by a 9 French sheath from the contralateral femoral artery, the device sits at the roof of the aortic arch. Importantly, it covers all three cerebral arteries, Dr. Lansky said. The device is held in position by a stabilizer in the innominate artery.
Although introducing an additional element into TAVR raises the theoretic possibility of safety concerns, no safety signal was seen in the pooled analysis. In-hospital major adverse event rates were similar in the two groups.
Asked why 72% of patients with the TriGuard in place nonetheless developed CNS infarcts, Dr. Lansky said she believes the device has gaps on the sides that allow smaller emboli to pass through. Future iterations of the TriGuard will address this.
The clinical significance of the CNS infarcts seen on MRI in TAVR patients is a controversial issue among interventional cardiologists. Some cardiologists consider these to be silent lesions of dubious clinical relevance. That’s not Dr. Lansky’s view.
“When you track these MRI lesions out to 30 days, many times they disappear. They don’t disappear because there’s no damage; they disappear because the cells die. When you talk to neurologists about the MRI lesions, they will tell you that they actually represent cell death and correlate with brain infarction,” she said.
Dr. Nicolo Piazza commented that he considered the pooled analysis findings hypothesis generating but not definitive because of baseline imbalances between the two study arms. The control group had numerically higher – albeit not statistically significantly so – rates of atrial fibrillation at hospital admission as well as higher Society of Thoracic Surgeons risk scores, both of which increase stroke risk, noted Dr. Piazza of McGill University in Montreal.
Dr. Lansky replied that the much larger ongoing pivotal randomized, phase III REFLECT trial should provide definitive answers.
She reported receiving institutional research grant support from Keystone Heart, which produces the TriGuard device.
AT EUROPCR 2016
Key clinical point: The TriGuard neuroprotection device for use in TAVR effectively prevented strokes.
Major finding: The 30-day incidence of stroke in TAVR patients with the TriGuard embolic protection device in place was 0, compared with 6% or 19% in controls, depending on the stroke definition used.
Data source: A post hoc analysis of pooled data on 59 TriGuard recipients and 83 controls in three trials.
Disclosures: The presenter reported receiving institutional research grant support from Keystone Heart, which produces the TriGuard device.
Skin Lesions in Patients Treated With Imatinib Mesylate: A 5-Year Prospective Study
Imatinib mesylate (IM) represents the first-line treatment of chronic myeloid leukemia (CML) and gastrointestinal stromal tumors (GISTs). Its pharmacological activity is related to a specific action on several tyrosine kinases in different tumors, including Bcr-Abl in CML, c-Kit (CD117) in GIST, and platelet-derived growth factor receptor in dermatofibrosarcoma protuberans.1,2
Imatinib mesylate has been shown to improve progression-free survival and overall survival2; however, it also has several side effects. Among the adverse effects (AEs), less than 10% are nonhematologic, such as nausea, vomiting, diarrhea, muscle cramps, and cutaneous reactions.3,4
For 5 years, we followed patients treated with IM to identify cutaneous AEs of therapy.
Methods
The aim of this prospective study was to identify and collect data regarding IM cutaneous side effects so that clinicians can detect AEs early and differentiate them from AEs caused by other medications. All patients underwent a median of 5 years of follow-up. We included all patients treated with IM and excluded patients who had a history of eczematous dermatitis, psoriasis, renal impairment, or dyshidrosis palmoplantar. Before starting IM, all patients presented for a dermatologic visit. They were subsequently evaluated every 3 months.
The incidence rate was defined as the ratio of patients with cutaneous side effects to the total number of patients treated with IM. Furthermore, we calculated the ratio of patients with each specific cutaneous manifestation to the entire cohort of patients with IM-related cutaneous side effects.
When necessary, microbiological, serological, and histopathological analyses were performed.
Results
Over 60 months, we followed 220 patients treated with IM. Among them, 55 (25%) developed cutaneous side effects (35 males; 20 females), an incidence rate of 1:4. The median age of the entire cohort was 52.5 years. Fifty patients were being treated for CML and 5 for GISTs. All patients received IM at a dosage of 400 mg daily.
The following skin diseases were observed in patients treated with IM (Table): 19 patients with maculopapular rash with pruritus (no maculopapular rash without pruritus was detected), 7 patients with eczematous dermatitis such as stasis dermatitis and seborrheic dermatitis, 6 patients with onychodystrophy melanonychia (Figure 1), 5 patients with psoriasis, 5 patients with skin cancers including basal cell carcinoma (BCC)(Figure 2), 3 patients with periorbital edema (Figure 3), 3 patients with mycosis, 3 patients with dermatofibromas, 2 patients with dyshidrosis palmoplantar, 1 patient with pityriasis rosea–like eruption (Figure 4), and 1 patient with actinic keratoses on the face. No hypopigmentation or hyperpigmentation, excluding the individual case of melanonychia, was observed.
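The incidence figures reported above can be verified with a short calculation following the definitions given in the Methods (the counts are taken from the Results; the variable names are ours):

```python
# Counts of each cutaneous manifestation reported in the Results
counts = {
    "maculopapular rash with pruritus": 19,
    "eczematous dermatitis": 7,
    "onychodystrophy/melanonychia": 6,
    "psoriasis": 5,
    "skin cancers (incl. BCC)": 5,
    "periorbital edema": 3,
    "mycosis": 3,
    "dermatofibromas": 3,
    "dyshidrosis palmoplantar": 2,
    "pityriasis rosea-like eruption": 1,
    "actinic keratoses (face)": 1,
}

total_treated = 220
affected = sum(counts.values())       # 55 patients with cutaneous side effects
incidence = affected / total_treated  # 0.25, i.e. the reported 1:4 incidence rate

# Ratio of each specific manifestation to the cohort of affected patients,
# as defined in the Methods
proportions = {k: v / affected for k, v in counts.items()}
```

The per-manifestation counts sum to the 55 affected patients, and 55/220 reproduces the 25% (1:4) incidence rate.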
All cutaneous diseases reported in this study appeared after IM therapy (median, 3.8 months). The median time to onset for each cutaneous disorder is reported in the Table. During the first dermatologic visit before starting IM therapy, none of the patients showed any of these cutaneous diseases.
The adverse cutaneous reactions were treated with appropriate drugs. Generally, eczematous dermatitis was treated using topical steroids, emollients, and oral antihistamines. In patients with maculopapular rash with pruritus, oral corticosteroids (eg, betamethasone 3 mg daily or prednisolone 1 mg/kg) in association with an antihistamine were necessary. Psoriasis resolved completely with topical betamethasone 0.5 mg and calcipotriol 50 µg. Skin cancers were treated with surgical excision with histologic examination. All treatments are outlined in the Table.
Imatinib mesylate therapy was suspended in 2 patients with maculopapular rash with moderate to severe pruritus; however, despite the temporary suspension of the drug and appropriate therapies (corticosteroids and antihistamines), cutaneous side effects reappeared 7 to 10 days after therapy resumed. Therefore, the treatment was permanently suspended in these 2 cases and IM was replaced with nilotinib, a second-generation Bcr-Abl tyrosine kinase inhibitor.
Comment
The introduction of IM for the treatment of GIST and CML has changed the history of these diseases. The drug typically is well tolerated and few patients have reported severe AEs. Mild skin reactions are relatively frequent, ranging from 7% to 21% of patients treated.3 In our cohort the percentage was somewhat higher (25%), likely because close dermatologic monitoring increased detection of cutaneous side effects.
Imatinib mesylate cutaneous reactions have been reported to be dose dependent.4 In all our cases, however, the cutaneous reactions arose at the standard IM dosage of 400 mg daily, a pattern more compatible with dose-independent cutaneous AEs.
The most common cutaneous AEs reported in the literature were swelling/edema and maculopapular rash. Swelling is the most common AE described during therapy with IM, with an incidence of 63% to 84%.5 Swelling often involves the periorbital area and occurs approximately 6 weeks after starting IM. Although its pathogenesis is uncertain, it has been shown that IM blocks the platelet-derived growth factor receptor expressed on blood vessels, which regulates transcapillary transport. Inhibition of this receptor can lead to increased pore pressure, resulting in edema and erythema. Maculopapular eruptions (50% of cases) often affect the trunk and the limbs and are accompanied by pruritus. Commonly, these rashes arise after 9 weeks of IM therapy. These eruptions are self-limiting, and only topical emollients and steroids are required, without any change in the IM schedule. To treat maculopapular eruptions with pruritus, oral steroids and antihistamines may be helpful, without suspending IM treatment. When grade 2 or 3 pruriginous maculopapular eruptions arise, suspension of IM combined with steroids and antihistamines may be necessary. When readministration of IM is required, it is mandatory to restart IM at a lower dose (50–100 mg/d) while administering prednisolone 0.5 to 1.0 mg/kg daily; the steroid can then be tapered gradually.6 Critical cutaneous AEs that are resistant to supportive measures warrant suspension of IM therapy. However, the incidence of this event is small (<1% of all patients).7
Regarding severe cutaneous AEs from IM therapy, Hsiao et al8 reported a case of Stevens-Johnson syndrome. In this case, IM was immediately stopped and systemic steroids were started. Rarely, erythroderma (grade 4 toxicity) can develop, for which prompt and permanent suspension of IM is necessary and supportive care with oral and topical steroids is recommended.9
Hyperpigmentation induced by IM, seen mostly in patients with Fitzpatrick skin types V to VI and with a general prevalence of 16% to 40% in treated patients, often is related to a mutation of c-Kit or other kinases that are activated rather than inhibited by the drug, resulting in overstimulation of melanogenesis.10 The predominance of Fitzpatrick skin types I to III likely accounts for the absence of pigmentation changes in our cohort, excluding melanonychia. Hyperpigmentation has been observed in the skin as well as the appendages such as the nails, resulting in melanonychia (Figure 1). However, Brazzelli et al11 reported hypopigmentation in 5 white patients treated with IM; furthermore, they found a direct correlation between hypopigmentation and the development of skin cancers in these patients. The susceptibility to develop skin cancers may persist even without a clear manifestation of hypopigmentation, as reported in the current analysis. We documented BCC in 5 patients, 1 patient developed actinic keratoses, and 3 patients developed dermatofibromas; however, these neoplasms probably were not provoked by IM. In contrast, we did not observe squamous cell carcinoma, which was reported by Baskaynak et al12 in 2 CML patients treated with IM.
The administration of IM can be associated with exacerbation of psoriasis. Paradoxically, in genetically predisposed individuals, tumor necrosis factor α (TNF-α) antagonists, such as IM, seem to induce psoriasis by producing IFN-α rather than TNF-α and increasing inflammation.13 In fact, some research shows induction of psoriasis by anti–TNF-α drugs.14-16 Two cases of IM-associated psoriasis have been reported, and both represented an exacerbation of previously diagnosed psoriasis.13,17 In contrast, in our analysis we observed 5 cases of psoriasis vulgaris apparently induced by IM administration. Our patients developed cutaneous psoriatic lesions approximately 1.7 months after the start of IM therapy.
The pityriasis rosea–like eruption (Figure 4) presented as nonpruritic, erythematous, scaly patches on the trunk and extremities, and arose 3.6 months after the start of treatment. This particular cutaneous AE is rare. In 3 case reports, the IM dosage also was 400 mg daily.18-20 The pathophysiology of this rare skin reaction appears to stem from the pharmacological effect of IM rather than a hypersensitivity reaction.18
Deininger et al7 reported that patients with a high basophil count (>20%) may rarely show urticarial eruptions after IM due to histamine release from basophils. Premedication with an antihistamine was helpful, and the urticarial eruption resolved after normalization of the basophil count.7
Given the importance of IM for patients who have limited therapeutic alternatives, and given that cutaneous AEs can be treated safely, as demonstrated in our analysis, suspension of IM for dermatologic complications is necessary only in rare cases, as shown by the low number of patients (n=2) who had to discontinue therapy. Cutaneous AEs should be diagnosed and treated early to minimize the impact on chemotherapy treatment. The administration of IM should involve a coordinated effort among oncologists and dermatologists to prevent important complications.
- Druker BJ, Talpaz M, Resta DJ, et al. Efficacy and safety of a specific inhibitor of the BCR-ABL tyrosine kinase in chronic myeloid leukemia. N Engl J Med. 2001;344:1031-1037.
- Scheinfeld N. Imatinib mesylate and dermatology part 2: a review of the cutaneous side effects of imatinib mesylate. J Drugs Dermatol. 2006;5:228-231.
- Breccia M, Carmosino I, Russo E, et al. Early and tardive skin adverse events in chronic myeloid leukaemia patients treated with imatinib. Eur J Haematol. 2005;74:121-123.
- Ugurel S, Hildebrand R, Dippel E, et al. Dose dependent severe cutaneous reactions to imatinib. Br J Cancer. 2003;88:1157-1159.
- Valeyrie L, Bastuji-Garin S, Revuz J, et al. Adverse cutaneous reactions to imatinib (STI571) in Philadelphia chromosome-positive leukaemias: a prospective study of 54 patients. J Am Acad Dermatol. 2003;48:201-206.
- Scott LC, White JD, Reid R, et al. Management of skin toxicity related to the use of imatinib mesylate (STI571, Glivec) for advanced stage gastrointestinal stromal tumors. Sarcoma. 2005;9:157-160.
- Deininger MW, O’Brien SG, Ford JM, et al. Practical management of patients with chronic myeloid leukemia receiving imatinib. J Clin Oncol. 2003;21:1637-1647.
- Hsiao LT, Chung HM, Lin JT, et al. Stevens-Johnson syndrome after treatment with STI571: a case report. Br J Haematol. 2002;117:620-622.
- Sehgal VN, Srivastava G, Sardana K. Erythroderma/exfoliative dermatitis: a synopsis. Int J Dermatol. 2004;43:39-47.
- Pietras K, Pahler J, Bergers G, et al. Functions of paracrine PDGF signaling in the proangiogenic tumor stroma revealed by pharmacological targeting. PLoS Med. 2008;5:e19.
- Brazzelli V, Prestinari F, Barbagallo T, et al. A long-term time course of colorimetric assessment of the effects of imatinib mesylate on skin pigmentation: a study of five patients. J Eur Acad Dermatol Venereol. 2007;21:384-387.
- Baskaynak G, Kreuzer KA, Schwarz M, et al. Squamous cutaneous epithelial cell carcinoma in two CML patients with progressive disease under imatinib treatment. Eur J Haematol. 2003;70:231-234.
- Cheng H, Geist DE, Piperdi M, et al. Management of imatinib-related exacerbation of psoriasis in a patient with a gastrointestinal stromal tumor. Australas J Dermatol. 2009;50:41-43.
- Faillace C, Duarte GV, Cunha RS, et al. Severe infliximab-induced psoriasis treated with adalimumab switching. Int J Dermatol. 2013;52:234-238.
- Iborra M, Beltrán B, Bastida G, et al. Infliximab and adalimumab-induced psoriasis in Crohn’s disease: a paradoxical side effect. J Crohns Colitis. 2011;5:157-161.
- Fernandes IC, Torres T, Sanches M, et al. Psoriasis induced by infliximab. Acta Med Port. 2011;24:709-712.
- Woo SM, Huh CH, Park KC, et al. Exacerbation of psoriasis in a chronic myelogenous leukemia patient treated with imatinib. J Dermatol. 2007;34:724-726.
- Brazzelli V, Prestinari F, Roveda E, et al. Pityriasis rosea-like eruption during treatment with imatinib mesylate: description of 3 cases. J Am Acad Dermatol. 2005;53:240-243.
- Konstantopoulos K, Papadogianni A, Dimopoulou M, et al. Pityriasis rosea associated with imatinib (STI571, Gleevec). Dermatology. 2002;205:172-173.
- Cho AY, Kim DH, Im M, et al. Pityriasis rosealike drug eruption induced by imatinib mesylate (Gleevec). Ann Dermatol. 2011;23(suppl 3):360-363.
Imatinib mesylate (IM) represents the first-line treatment of chronic myeloid leukemia (CML) and gastrointestinal stromal tumors (GISTs). Its pharmacological activity is related to a specific action on several tyrosine kinases in different tumors, including Bcr-Abl in CML, c-Kit (CD117) in GIST, and platelet-derived growth factor receptor in dermatofibrosarcoma protuberans.1,2
Imatinib mesylate has been shown to improve progression-free survival and overall survival2; however, it also has several side effects. Among the adverse effects (AEs), less than 10% are nonhematologic, such as nausea, vomiting, diarrhea, muscle cramps, and cutaneous reactions.3,4
We followed patients who were treated with IM for 5 years to identify AEs of therapy.
Methods
The aim of this prospective study was to identify and collect data regarding IM cutaneous side effects so that clinicians can detect AEs early and differentiate them from AEs caused by other medications. All patients were subjected to a median of 5 years’ follow-up. We included all the patients treated with IM and excluded patients who had a history of eczematous dermatitis, psoriasis, renal impairment, or dyshidrosis palmoplantar. Before starting IM, all patients presented for a dermatologic visit. They were subsequently evaluated every 3 months.
The incidence rate was defined as the ratio of patients with cutaneous side effects and the total patients treated with IM. Furthermore, we calculated the ratio between each class of patient with a specific cutaneous manifestation and the entire cohort of patients with cutaneous side effects related to IM.
When necessary, microbiological, serological, and histopathological analyses were performed.
Results
In 60 months, we followed 220 patients treated with IM. Among them, 55 (25%) developed cutaneous side effects (35 males; 20 females). The incidence rate of the patients with cutaneous side effects was 1:4. The median age of the entire cohort was 52.5 years. Fifty patients were being treated for CML and 5 for GISTs. All patients received IM at a dosage of 400 mg daily.
The following skin diseases were observed in patients treated with IM (Table): 19 patients with maculopapular rash with pruritus (no maculopapular rash without pruritus was detected), 7 patients with eczematous dermatitis such as stasis dermatitis and seborrheic dermatitis, 6 patients with onychodystrophy melanonychia (Figure 1), 5 patients with psoriasis, 5 patients with skin cancers including basal cell carcinoma (BCC)(Figure 2), 3 patients with periorbital edema (Figure 3), 3 patients with mycosis, 3 patients with dermatofibromas, 2 patients with dyshidrosis palmoplantar, 1 patient with pityriasis rosea–like eruption (Figure 4), and 1 patient with actinic keratoses on the face. No hypopigmentation or hyperpigmentation, excluding the individual case of melanonychia, was observed.
All cutaneous diseases reported in this study appeared after IM therapy (median, 3.8 months). The median time to onset for each cutaneous disorder is reported in the Table. During the first dermatologic visit before starting IM therapy, none of the patients showed any of these cutaneous diseases.
The adverse cutaneous reactions were treated with appropriate drugs. Generally, eczematous dermatitis was treated using topical steroids, emollients, and oral antihistamines. In patients with maculopapular rash with pruritus, oral corticosteroids (eg, betamethasone 3 mg daily or prednisolone 1 mg/kg) in association with antihistamine was necessary. Psoriasis was completely improved with topical betamethasone 0.5 mg and calcipotriol 50 µg. Skin cancers were treated with surgical excision with histologic examination. All treatments are outlined in the Table.
Imatinib mesylate therapy was suspended in 2 patients with maculopapular rash with moderate to severe pruritus; however, despite the temporary suspension of the drug and the appropriate therapies (corticosteroids and antihistamines), cutaneous side effects reappeared 7 to 10 days after therapy resumed. Therefore, the treatment was permanently suspended in these 2 cases and IM was replaced with erlotinib, a second-generation Bcr-Abl tyrosine kinase inhibitor.
Comment
The introduction of IM for the treatment of GIST and CML has changed the history of these diseases. The drug typically is well tolerated and few patients have reported severe AEs. Mild skin reactions are relatively frequent, ranging from 7% to 21% of patients treated.3 In our case, the percentage was relatively higher (25%), likely because of close monitoring of patients, with an increase in the incidence rate.
Imatinib mesylate cutaneous reactions are dose dependent.4 Indeed, in all our cases, the cutaneous reactions arose with an IM dosage of 400 mg daily, which is compatible with the definition of dose-independent cutaneous AEs.
The most common cutaneous AEs reported in the literature were swelling/edema and maculopapular rash. Swelling is the most common AE described during therapy with IM with an incidence of 63% to 84%.5 Swelling often involves the periorbital area and occurs approximately 6 weeks after starting IM. Although its pathogenesis is uncertain, it has been shown that IM blocks the platelet-derived growth factor receptor expressed on blood vessels that regulates the transportation transcapillary. The inhibition of this receptor can lead to increased pore pressure, resulting in edema and erythema. Maculopapular eruptions (50% of cases) often affect the trunk and the limbs and are accompanied by pruritus. Commonly, these rashes arise after 9 weeks of IM therapy. These eruptions are self-limiting and only topical emollients and steroids are required, without any change in IM schedule. To treat maculopapular eruptions with pruritus, oral steroids and antihistamines may be helpful, without suspending IM treatment. When grade 2 or 3 pruriginous maculopapular eruptions arise, the suspension of IM combined with steroids and antihistamines may be necessary. When the readministration of IM is required, it is mandatory to start IM at a lower dose (50–100 mg/d), administering prednisolone 0.5 to 1.0 mg/kg daily. Then, the steroid gradually can be tapered.6 Critical cutaneous AEs that are resistant to supportive measures warrant suspension of IM therapy. However, the incidence of this event is small (<1% of all patients).7
Regarding severe cutaneous AEs from IM therapy, Hsiao et al8 reported the case of Stevens-Johnson syndrome. In this case, IM was immediately stopped and systemic steroids were started. Rarely, erythroderma (grade 4 toxicity) can develop for which a prompt and perpetual suspension of IM is necessary and supportive care therapy with oral and topical steroids is recommended.9
Hyperpigmentation induced by IM, mostly in patients with Fitzpatrick skin types V to VI and with a general prevalence of 16% to 40% in treated patients, often is related to a mutation of c-Kit or other kinases that are activated rather than inhibited by the drug, resulting in overstimulation of melanogenesis.10 The prevalence of Fitzpatrick skin types I to III determined the absence of pigmentation changes in our cohort, excluding melanonychia. Hyperpigmentation was observed in the skin as well as the appendages such as nails, resulting in melanonychia (Figure 1). However, Brazzelli et al11 reported hypopigmentation in 5 white patients treated with IM; furthermore, they found a direct correlation between hypopigmentation and development of skin cancers in these patients. The susceptibility to develop skin cancers may persist, even without a clear manifestation of hypopigmentation, as reported in the current analysis. We documented BCC in 5 patients, 1 patient developed actinic keratoses, and 3 patients developed dermatofibromas. However, these neoplasms probably were not provoked by IM. On the contrary, we did not note squamous cell carcinoma, which was reported by Baskaynak et al12 in 2 CML patients treated with IM.
The administration of IM can be associated with exacerbation of psoriasis. Paradoxically, in genetically predisposed individuals, tumor necrosis factor α (TNF-α) antagonists, such as IM, seem to induce psoriasis, producing IFN-α rather than TNF-α and increasing inflammation.13 In fact, some research shows induction of psoriasis by anti–TNF-α drugs.14-16 Two cases of IM associated with psoriasis have been reported, and both cases represented an exacerbation of previously diagnosed psoriasis.13,17 On the contrary, in our analysis we reported 5 cases of psoriasis vulgaris induced by IM administration. Our patients developed cutaneous psoriatic lesions approximately 1.7 months after the start of IM therapy.
The pityriasis rosea–like eruption (Figure 4) presented as nonpruritic, erythematous, scaly patches on the trunk and extremities, and arose 3.6 months after the start of treatment. This particular cutaneous AE is rare. In 3 case reports, the IM dosage also was 400 mg daily.18-20 The pathophysiology of this rare skin reaction stems from the pharmacological effect of IM rather than a hypersensitivity reaction.18
Deininger et al7 reported that patients with a high basophil count (>20%) rarely show urticarial eruptions after IM due to histamine release from basophils. Premedication with an antihistamine was helpful and the urticarial eruption resolved after normalization in basophil count.7
Given the importance of IM for patients who have limited therapeutic alternatives for their disease and the ability to safely treat the cutaneous AEs, as demonstrated in our analysis, the suspension of IM for dermatological complications is necessary only in rare cases, as shown by the low number of patients (n=2) who had to discontinue therapy. The cutaneous AEs should be diagnosed and treated early with less impact on chemotherapy treatments. The administration of IM should involve a coordinated effort among oncologists and dermatologists to prevent important complications.
Imatinib mesylate (IM) represents the first-line treatment of chronic myeloid leukemia (CML) and gastrointestinal stromal tumors (GISTs). Its pharmacological activity is related to a specific action on several tyrosine kinases in different tumors, including Bcr-Abl in CML, c-Kit (CD117) in GIST, and platelet-derived growth factor receptor in dermatofibrosarcoma protuberans.1,2
Imatinib mesylate has been shown to improve progression-free survival and overall survival2; however, it also has several side effects. Among the adverse effects (AEs), less than 10% are nonhematologic, such as nausea, vomiting, diarrhea, muscle cramps, and cutaneous reactions.3,4
We followed patients who were treated with IM for 5 years to identify AEs of therapy.
Methods
The aim of this prospective study was to identify and collect data regarding IM cutaneous side effects so that clinicians can detect AEs early and differentiate them from AEs caused by other medications. All patients were subjected to a median of 5 years’ follow-up. We included all the patients treated with IM and excluded patients who had a history of eczematous dermatitis, psoriasis, renal impairment, or dyshidrosis palmoplantar. Before starting IM, all patients presented for a dermatologic visit. They were subsequently evaluated every 3 months.
The incidence rate was defined as the ratio of patients with cutaneous side effects and the total patients treated with IM. Furthermore, we calculated the ratio between each class of patient with a specific cutaneous manifestation and the entire cohort of patients with cutaneous side effects related to IM.
When necessary, microbiological, serological, and histopathological analyses were performed.
Results
In 60 months, we followed 220 patients treated with IM. Among them, 55 (25%) developed cutaneous side effects (35 males; 20 females). The incidence rate of the patients with cutaneous side effects was 1:4. The median age of the entire cohort was 52.5 years. Fifty patients were being treated for CML and 5 for GISTs. All patients received IM at a dosage of 400 mg daily.
The following skin diseases were observed in patients treated with IM (Table): 19 patients with maculopapular rash with pruritus (no maculopapular rash without pruritus was detected), 7 patients with eczematous dermatitis such as stasis dermatitis and seborrheic dermatitis, 6 patients with onychodystrophy melanonychia (Figure 1), 5 patients with psoriasis, 5 patients with skin cancers including basal cell carcinoma (BCC)(Figure 2), 3 patients with periorbital edema (Figure 3), 3 patients with mycosis, 3 patients with dermatofibromas, 2 patients with dyshidrosis palmoplantar, 1 patient with pityriasis rosea–like eruption (Figure 4), and 1 patient with actinic keratoses on the face. No hypopigmentation or hyperpigmentation, excluding the individual case of melanonychia, was observed.
All cutaneous diseases reported in this study appeared after IM therapy (median, 3.8 months). The median time to onset for each cutaneous disorder is reported in the Table. During the first dermatologic visit before starting IM therapy, none of the patients showed any of these cutaneous diseases.
The adverse cutaneous reactions were treated with appropriate drugs. Generally, eczematous dermatitis was treated using topical steroids, emollients, and oral antihistamines. In patients with maculopapular rash with pruritus, oral corticosteroids (eg, betamethasone 3 mg daily or prednisolone 1 mg/kg) in association with antihistamine was necessary. Psoriasis was completely improved with topical betamethasone 0.5 mg and calcipotriol 50 µg. Skin cancers were treated with surgical excision with histologic examination. All treatments are outlined in the Table.
Imatinib mesylate therapy was suspended in 2 patients with maculopapular rash with moderate to severe pruritus; however, despite the temporary suspension of the drug and the appropriate therapies (corticosteroids and antihistamines), cutaneous side effects reappeared 7 to 10 days after therapy resumed. Therefore, the treatment was permanently suspended in these 2 cases and IM was replaced with erlotinib, a second-generation Bcr-Abl tyrosine kinase inhibitor.
Comment
The introduction of IM for the treatment of GIST and CML has changed the history of these diseases. The drug typically is well tolerated and few patients have reported severe AEs. Mild skin reactions are relatively frequent, ranging from 7% to 21% of patients treated.3 In our case, the percentage was relatively higher (25%), likely because of close monitoring of patients, with an increase in the incidence rate.
Imatinib mesylate cutaneous reactions are dose dependent.4 Indeed, in all our cases, the cutaneous reactions arose with an IM dosage of 400 mg daily, which is compatible with the definition of dose-independent cutaneous AEs.
The most common cutaneous AEs reported in the literature were swelling/edema and maculopapular rash. Swelling is the most common AE described during therapy with IM with an incidence of 63% to 84%.5 Swelling often involves the periorbital area and occurs approximately 6 weeks after starting IM. Although its pathogenesis is uncertain, it has been shown that IM blocks the platelet-derived growth factor receptor expressed on blood vessels that regulates the transportation transcapillary. The inhibition of this receptor can lead to increased pore pressure, resulting in edema and erythema. Maculopapular eruptions (50% of cases) often affect the trunk and the limbs and are accompanied by pruritus. Commonly, these rashes arise after 9 weeks of IM therapy. These eruptions are self-limiting and only topical emollients and steroids are required, without any change in IM schedule. To treat maculopapular eruptions with pruritus, oral steroids and antihistamines may be helpful, without suspending IM treatment. When grade 2 or 3 pruriginous maculopapular eruptions arise, the suspension of IM combined with steroids and antihistamines may be necessary. When the readministration of IM is required, it is mandatory to start IM at a lower dose (50–100 mg/d), administering prednisolone 0.5 to 1.0 mg/kg daily. Then, the steroid gradually can be tapered.6 Critical cutaneous AEs that are resistant to supportive measures warrant suspension of IM therapy. However, the incidence of this event is small (<1% of all patients).7
Regarding severe cutaneous AEs from IM therapy, Hsiao et al8 reported the case of Stevens-Johnson syndrome. In this case, IM was immediately stopped and systemic steroids were started. Rarely, erythroderma (grade 4 toxicity) can develop for which a prompt and perpetual suspension of IM is necessary and supportive care therapy with oral and topical steroids is recommended.9
Hyperpigmentation induced by IM, mostly in patients with Fitzpatrick skin types V to VI and with a general prevalence of 16% to 40% in treated patients, often is related to a mutation of c-Kit or other kinases that are activated rather than inhibited by the drug, resulting in overstimulation of melanogenesis.10 The prevalence of Fitzpatrick skin types I to III determined the absence of pigmentation changes in our cohort, excluding melanonychia. Hyperpigmentation was observed in the skin as well as the appendages such as nails, resulting in melanonychia (Figure 1). However, Brazzelli et al11 reported hypopigmentation in 5 white patients treated with IM; furthermore, they found a direct correlation between hypopigmentation and development of skin cancers in these patients. The susceptibility to develop skin cancers may persist, even without a clear manifestation of hypopigmentation, as reported in the current analysis. We documented BCC in 5 patients, 1 patient developed actinic keratoses, and 3 patients developed dermatofibromas. However, these neoplasms probably were not provoked by IM. On the contrary, we did not note squamous cell carcinoma, which was reported by Baskaynak et al12 in 2 CML patients treated with IM.
The administration of IM can be associated with exacerbation of psoriasis. Paradoxically, in genetically predisposed individuals, tumor necrosis factor α (TNF-α) antagonists, such as IM, seem to induce psoriasis, producing IFN-α rather than TNF-α and increasing inflammation.13 In fact, some research shows induction of psoriasis by anti–TNF-α drugs.14-16 Two cases of IM associated with psoriasis have been reported, and both cases represented an exacerbation of previously diagnosed psoriasis.13,17 On the contrary, in our analysis we reported 5 cases of psoriasis vulgaris induced by IM administration. Our patients developed cutaneous psoriatic lesions approximately 1.7 months after the start of IM therapy.
The pityriasis rosea–like eruption (Figure 4) presented as nonpruritic, erythematous, scaly patches on the trunk and extremities, and arose 3.6 months after the start of treatment. This particular cutaneous AE is rare. In 3 case reports, the IM dosage also was 400 mg daily.18-20 The pathophysiology of this rare skin reaction stems from the pharmacological effect of IM rather than a hypersensitivity reaction.18
Deininger et al7 reported that patients with a high basophil count (>20%) may show urticarial eruptions after IM due to histamine release from basophils. Premedication with an antihistamine was helpful, and the urticarial eruption resolved after normalization of the basophil count.7
Given the importance of IM for patients with limited therapeutic alternatives, and given that its cutaneous AEs can be treated safely, as demonstrated in our analysis, suspension of IM for dermatologic complications is necessary only in rare cases, as shown by the small number of patients (n=2) who had to discontinue therapy. Cutaneous AEs should be diagnosed and treated early to minimize their impact on chemotherapy. Administration of IM should involve a coordinated effort between oncologists and dermatologists to prevent serious complications.
- Druker BJ, Talpaz M, Resta DJ, et al. Efficacy and safety of a specific inhibitor of the BCR-ABL tyrosine kinase in chronic myeloid leukemia. N Engl J Med. 2001;344:1031-1037.
- Scheinfeld N. Imatinib mesylate and dermatology part 2: a review of the cutaneous side effects of imatinib mesylate. J Drugs Dermatol. 2006;5:228-231.
- Breccia M, Carmosimo I, Russo E, et al. Early and tardive skin adverse events in chronic myeloid leukaemia patients treated with imatinib. Eur J Haematol. 2005;74:121-123.
- Ugurel S, Hildebrand R, Dippel E, et al. Dose dependent severe cutaneous reactions to imatinib. Br J Cancer. 2003;88:1157-1159.
- Valeyrie L, Bastuji-Garin S, Revuz J, et al. Adverse cutaneous reactions to imatinib (STI571) in Philadelphia chromosome-positive leukaemias: a prospective study of 54 patients. J Am Acad Dermatol. 2003;48:201-206.
- Scott LC, White JD, Reid R, et al. Management of skin toxicity related to the use of imatinib mesylate (STI571, Glivec) for advanced stage gastrointestinal stromal tumors. Sarcoma. 2005;9:157-160.
- Deininger MW, O’Brien SG, Ford JM, et al. Practical management of patients with chronic myeloid leukemia receiving imatinib. J Clin Oncol. 2003;21:1637-1647.
- Hsiao LT, Chung HM, Lin JT, et al. Stevens-Johnson syndrome after treatment with STI571: a case report. Br J Haematol. 2002;117:620-622.
- Sehgal VN, Srivastava G, Sardana K. Erythroderma/exfoliative dermatitis: a synopsis. Int J Dermatol. 2004;43:39-47.
- Pietras K, Pahler J, Bergers G, et al. Functions of paracrine PDGF signaling in the proangiogenic tumor stroma revealed by pharmacological targeting. PLoS Med. 2008;5:e19.
- Brazzelli V, Prestinari F, Barbagallo T, et al. A long-term time course of colorimetric assessment of the effects of imatinib mesylate on skin pigmentation: a study of five patients. J Eur Acad Dermatol Venereol. 2007;21:384-387.
- Baskaynak G, Kreuzer KA, Schwarz M, et al. Squamous cutaneous epithelial cell carcinoma in two CML patients with progressive disease under imatinib treatment. Eur J Haematol. 2003;70:231-234.
- Cheng H, Geist DE, Piperdi M, et al. Management of imatinib-related exacerbation of psoriasis in a patient with a gastrointestinal stromal tumor. Australas J Dermatol. 2009;50:41-43.
- Faillace C, Duarte GV, Cunha RS, et al. Severe infliximab-induced psoriasis treated with adalimumab switching. Int J Dermatol. 2013;52:234-238.
- Iborra M, Beltrán B, Bastida G, et al. Infliximab and adalimumab-induced psoriasis in Crohn’s disease: a paradoxical side effect. J Crohns Colitis. 2011;5:157-161.
- Fernandes IC, Torres T, Sanches M, et al. Psoriasis induced by infliximab. Acta Med Port. 2011;24:709-712.
- Woo SM, Huh CH, Park KC, et al. Exacerbation of psoriasis in a chronic myelogenous leukemia patient treated with imatinib. J Dermatol. 2007;34:724-726.
- Brazzelli V, Prestinari F, Roveda E, et al. Pityriasis rosea-like eruption during treatment with imatinib mesylate: description of 3 cases. J Am Acad Dermatol. 2005;53:240-243.
- Konstantopoulos K, Papadogianni A, Dimopoulou M, et al. Pityriasis rosea associated with imatinib (STI571, Gleevec). Dermatology. 2002;205:172-173.
- Cho AY, Kim DH, Im M, et al. Pityriasis rosea-like drug eruption induced by imatinib mesylate (Gleevec). Ann Dermatol. 2011;23(suppl 3):360-363.
Practice Points
- The most common cutaneous adverse reactions from imatinib mesylate (IM) are swelling and edema.
- Maculopapular rash with pruritus is one of the most common side effects from IM and can be effectively treated with oral or systemic antihistamines.
- The onset of periorbital edema requires a complete evaluation of renal function.
Low hematocrit in elderly portends increased bleeding post PCI
PARIS – A low hematocrit in an elderly patient who’s going to undergo percutaneous coronary intervention signals a markedly increased risk of major bleeding within 30 days of the procedure, according to Dr. David Marti.
“Analysis of hematocrit in elderly patients can guide important procedural characteristics, such as access site and antithrombotic regimen,” he said at the annual congress of the European Association of Percutaneous Cardiovascular Interventions.
For example, studies have established that transradial artery access percutaneous coronary intervention (PCI) results in significantly less bleeding than the transfemoral route, said Dr. Marti, an interventional cardiologist at the University of Alcalá in Madrid.
He presented a prospective study of 212 consecutive patients aged 75 or older who underwent PCI at a single university hospital. Their mean age was 81.4 years, and slightly over half of them presented with an acute coronary syndrome.
All patients received dual-antiplatelet therapy in accord with current guidelines. Stent type and procedural anticoagulant regimen were left to the discretion of the cardiologist; 80% of the subjects received bivalirudin-based anticoagulation.
The primary study outcome was the 30-day incidence of major bleeding, as defined by a Bleeding Academic Research Consortium (BARC) type 3-5 event. The overall rate in this elderly PCI population was 5.5%. However, the rate varied markedly by baseline hematocrit tertile, in accord with the investigators’ study hypothesis.
Major bleeding occurred in 2.9% of patients with an Hct greater than 42% and in 3.1% of those with an Hct of 38%-42%, but jumped to 10.6% in the one-third of subjects whose baseline Hct was below 38%, Dr. Marti reported.
Thus, a preprocedural Hct below 38% was associated with a 4.1-fold increased risk of major bleeding within 30 days following PCI. An Hct in this range was a stronger predictor of BARC type 3-5 bleeding risk than were other factors better known as being important, including advanced age, greater body weight, female sex, or an elevated serum creatinine indicative of chronic kidney disease. Indeed, an Hct below 38% was the only statistically significant predictor of major bleeding in this elderly population.
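As a rough check on the reported effect size, the crude risk ratio implied by the tertile rates can be computed directly. This is only a back-of-the-envelope sketch: the 4.1-fold figure quoted above presumably comes from a multivariable model, so the crude ratio need not match it exactly.

```python
# Crude risk ratio of 30-day BARC type 3-5 bleeding, low vs top Hct tertile,
# using the rates reported in the study.
rate_low_hct = 10.6 / 100   # bleeding rate, baseline Hct < 38%
rate_top_hct = 2.9 / 100    # bleeding rate, baseline Hct > 42%

crude_risk_ratio = rate_low_hct / rate_top_hct
print(round(crude_risk_ratio, 1))  # ~3.7, in the neighborhood of the adjusted 4.1-fold
```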
The likely explanation for the observed results is that a low Hct level in elderly patients usually reflects subclinical blood loss that can be worsened by antithrombotic therapies, the cardiologist explained.
The presenter reported having no financial conflicts regarding this study, conducted without commercial support.
AT EUROPCR 2016
Key clinical point: Elderly patients scheduled for PCI have a fourfold greater risk of major bleeding within 30 days if their Hct is less than 38%.
Major finding: The 30-day incidence of BARC types 3-5 major bleeding was 10.6% in elderly patients with a pre-PCI Hct below 38%, compared with 2.9% in those in the top Hct tertile.
Data source: A prospective study of 212 consecutive patients aged 75 or older who underwent PCI at a single university hospital.
Disclosures: The presenter reported having no financial conflicts regarding this study, conducted without commercial support.
Adolescent obesity rose slightly, again
Nearly one in five young people in the United States are obese, and proportionally more adolescents have been obese during every time period measured since 1994, according to an analysis published online June 7 in JAMA.
But the most recent data suggest only a “small” rise in adolescent obesity since 2011, and stable rates among children during this time period, said Cynthia L. Ogden, Ph.D., of the National Center for Health Statistics at the Centers for Disease Control and Prevention.
Few studies of obesity in young people have teased out rates by age, according to Dr. Ogden and her associates. Using National Health and Nutrition Examination Survey data, they calculated rates of obesity and extreme obesity among 40,780 children and adolescents aged 2-19 years for the periods 1988-1994 through 2013-2014. They defined obesity as a body mass index (BMI) at or above the sex-specific 95th percentile on the CDC BMI-for-age growth charts, and they defined extreme obesity as a BMI at least 120% of the sex-specific 95th percentile on the charts (JAMA. 2016 Jun 7. doi: 10.1001/jama.2016.6361).
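The two cutoffs defined above can be sketched as a small classifier. The 95th-percentile BMI value varies by sex and age and is read from the CDC growth charts; the cutoff used in the usage example below is hypothetical, for illustration only.

```python
# Sketch of the CDC-style obesity classification described above:
# obesity = BMI at or above the sex/age-specific 95th percentile;
# extreme obesity = BMI at or above 120% of that 95th percentile.

def classify_bmi(bmi: float, p95_cutoff: float) -> str:
    """Classify a BMI against the sex/age-specific 95th-percentile cutoff."""
    if bmi >= 1.2 * p95_cutoff:   # at or above 120% of the 95th percentile
        return "extreme obesity"
    if bmi >= p95_cutoff:         # at or above the 95th percentile
        return "obesity"
    return "not obese"

# Hypothetical 95th-percentile cutoff of 24 kg/m^2:
print(classify_bmi(30.0, 24.0))  # extreme obesity (30 >= 1.2 * 24 = 28.8)
print(classify_bmi(25.0, 24.0))  # obesity
print(classify_bmi(20.0, 24.0))  # not obese
```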
Based on these definitions, 17% of children and adolescents were obese between 2011 and 2014, while 6% were extremely obese, the investigators reported. Furthermore, 21% of adolescents were obese in 2013-2014, compared with 17% during 2003-2004 and 11% during 1988-1994.
Rates for 6- to 11-year-olds have remained fairly stable in the high teens for more than a decade, while rates among 2- to 5-year-olds peaked in 2003-2004 at nearly 14% before dropping to about 9% during 2013-2014. The prevalence of obesity varied little by sex, but diverged substantially by race and ethnicity. For example, in 2011-2014, 23% of Hispanic and about 23% of black children were obese, and 9% and 12% were extremely obese, respectively, the researchers reported. Rates for the same ages of non-Hispanic Asian children were 9% and 2%, respectively, and those for non-Hispanic whites were 20% and 7%, respectively.
“Body mass index is an imperfect measure of body fat and health risk,” the investigators cautioned. “There are racial and ethnic differences in body fat at the same BMI level. Among children and adolescents, the definition of obesity is statistical. Children and adolescents are compared with a group of U.S. children in the 1960s to early 1990s, so the prevalence of obesity is dependent on the characteristics of the age-specific population during that period. In addition, among young children, small changes in weight can lead to relatively large changes in BMI percentile.”
The researchers reported no funding sources and had no disclosures.
Numerous foundations, industries, professional societies, and governmental agencies have provided hundreds of millions of dollars in funding to support basic science research in obesity, clinical trials, and observational studies, development of new drugs and devices, and hospital and community programs to help stem the tide of the obesity epidemic. In addition, communities, schools, places of worship, and professional societies have become active in attempting to counteract obesity – emphasizing exercise, better dietary choices, and nutritional labeling of foods.
Although it is impossible to know what the extent of the obesity epidemic would have been without these efforts, [these data] certainly do not suggest much success. Perhaps new incentives are needed to encourage the food industry to work with families and the medical community to prevent obesity. The stakes for the health of people in the United States are high, and creative solutions are needed.
Dr. Jody W. Zylke is deputy editor of JAMA. Dr. Howard Bauchner is editor in chief of JAMA. These comments are excerpted from their accompanying editorial (JAMA. 2016 Jun. doi: 10.1001/jama.2016.6190).
FROM JAMA
Key clinical point: Nearly one in five children and adolescents are obese, and rates of adolescent obesity have risen during every time period measured since 1994.
Major finding: About 17% of children and adolescents in the United States were obese between 2011 and 2014 (95% confidence interval, 15.5%-18.6%). Nearly 21% of adolescents were obese in 2013-2014, compared with 17% during 2003-2004 and 10% during 1988-1994.
Data source: An analysis of the body mass indexes of 40,780 individuals aged 2-19 years from the National Health and Nutrition Examination Survey.
Disclosures: The researchers reported no funding sources and had no disclosures.