Stat Laboratory Order Feedback
Overuse of inpatient stat laboratory orders (stat is an abbreviation of the Latin word statim, meaning immediately, without delay; alternatively, some consider it an acronym for short turnaround time) is a major problem in the modern healthcare system.[1, 2, 3, 4, 5] Ordering laboratory tests stat is a common way to expedite processing, with the expectation that results will be reported within 1 hour from the time ordered, according to the College of American Pathologists.[6] However, stat orders are also requested for convenience,[2] to expedite discharge,[7] or to meet turnaround‐time expectations.[8, 9, 10] Overuse of stat orders increases costs and may reduce the effectiveness of the system. Reducing excessive stat order requests helps support safe and efficient patient care[11, 12] and may reduce laboratory costs.[13, 14]
Several studies have examined interventions to optimize stat laboratory utilization.[14, 15] Potentially effective interventions include establishment of stat ordering guidelines, utilization of point‐of‐care testing, and prompt feedback via computerized physician order entry (CPOE) systems.[16, 17, 18] However, limited evidence is available regarding the effectiveness of audit and feedback in reducing stat ordering frequency.
Our institution shared the challenge of a high frequency of stat laboratory test orders. An interdisciplinary working group comprising leadership in the medicine, surgery, informatics, laboratory medicine, and quality and patient safety departments was formed to approach this problem and identify potential interventions. The objectives of this study are to describe the patterns of stat orders at our institution as well as to assess the effectiveness of the targeted individual feedback intervention in reducing utilization of stat laboratory test orders.
METHODS
Design
This study is a retrospective analysis of administrative data for a quality‐improvement project. The study was deemed exempt from review by the Beth Israel Medical Center Institutional Review Board.
Setting
Beth Israel Medical Center is an 856‐bed, urban, tertiary‐care teaching hospital with a capacity of 504 medical and surgical beds. In October 2009, 47.8% of inpatient laboratory tests (excluding the emergency department) were ordered as stat, according to an electronic audit of our institution's CPOE system, GE Centricity Enterprise (GE Medical Systems Information Technologies, Milwaukee, WI). Another audit using the same data query for the period of December 2009 revealed that 50 of 488 providers (attending physicians, nurse practitioners, physician assistants, fellows, and residents) accounted for 51% of total stat laboratory orders, and that Medicine and General Surgery residents accounted for 43 of these 50 providers. These findings prompted us to develop interventions that targeted high utilizers of stat laboratory orders, especially Medicine and General Surgery residents.
Teaching Session
Medicine and General Surgery residents were given a 1‐hour educational session at a teaching conference in January 2010. At this session, residents were instructed that ordering stat laboratory tests was appropriate when the results were needed urgently to make clinical decisions as quickly as possible. This session also explained the potential consequences associated with excessive stat laboratory orders and provided department‐specific data on current stat laboratory utilization.
Individual Feedback
From January to May 2010, a list of stat laboratory orders by provider was generated each month by the laboratory department's database. The top 10 providers who most frequently placed stat orders were identified and given individual feedback by their direct supervisors based on data from the prior month (feedback provided from February to June 2010). Medicine and General Surgery residents were counseled by their residency program directors, and nontrainee providers by their immediate supervising physicians. Feedback and counseling were given via brief individual meetings, phone calls, or e‐mail. Supervisors chose the method that ensured the most timely delivery of feedback. Feedback and counseling consisted of explaining the effort to reduce stat laboratory ordering and the rationale behind this, alerting providers that they were outliers, and encouraging them to change their behavior. No punitive consequences were discussed; the feedback sessions were purely informative in nature. When an individual was ranked again in the top 10 after receiving feedback, he or she received repeated feedback.
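The monthly audit described above amounts to ranking providers by their raw stat order counts and taking the top 10. A minimal sketch in Python (the tuple format and field names are illustrative assumptions; the paper does not describe the laboratory database's actual schema):

```python
from collections import Counter

def top_stat_orderers(monthly_orders, n=10):
    """Rank providers by the number of stat orders placed in one month.

    monthly_orders: iterable of (provider_id, priority) pairs, where
    priority is "stat" or "routine". These names are illustrative,
    not the institution's actual audit query fields.
    """
    stat_counts = Counter(
        provider for provider, priority in monthly_orders if priority == "stat"
    )
    # most_common(n) yields the n providers with the highest stat
    # counts, i.e., the month's candidates for individual feedback.
    return [provider for provider, _ in stat_counts.most_common(n)]
```

Each month's top‐10 list would then be routed to the appropriate supervisor for feedback delivery.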
Data Collection and Measured Outcomes
We retrospectively collected data on monthly laboratory test orders by providers from September 2009 to June 2010. The data were extracted from the electronic medical record (EMR) system and included any inpatient laboratory orders at the institution. Laboratory orders placed in the emergency department were excluded. Providers were divided into nontrainees (attending physicians, nurse practitioners, and physician assistants) and trainee providers (residents and fellows). Trainee providers were further categorized by educational levels (postgraduate year [PGY]‐1 vs PGY‐2 or higher) and specialty (Medicine vs General Surgery vs other). Fellows in medical and surgical subspecialties were categorized as other.
The primary outcome measure was the proportion of stat orders out of total laboratory orders for individuals. The proportion of stat orders out of total orders was selected to assess individuals' tendency to utilize stat laboratory orders.
Statistical Analysis
In the first analysis, stat and total laboratory orders were aggregated for each provider. Providers who ordered <10 laboratory tests during the study period were excluded. We calculated the proportion of stat out of total laboratory orders for each provider, and compared it by specialty, by educational level, and by feedback status. Median and interquartile range (IQR) were reported due to non‐normal distribution, and the Wilcoxon rank‐sum test was used for comparisons.
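As a rough illustration of this first analysis (which the authors performed in JMP), the per‐provider proportions and the group comparison could be computed as follows; the data structure is an assumption, and `scipy.stats.ranksums` implements the Wilcoxon rank‐sum test:

```python
import statistics
from scipy.stats import ranksums  # Wilcoxon rank-sum test

def stat_proportions(provider_orders, min_orders=10):
    """provider_orders: dict mapping provider_id -> (stat_count, total_count);
    this structure is illustrative. Returns each provider's stat proportion,
    excluding low-volume providers (<10 total orders) as described above."""
    return {
        p: stat / total
        for p, (stat, total) in provider_orders.items()
        if total >= min_orders
    }

def compare_groups(props_a, props_b):
    """Compare two groups of provider-level proportions (e.g., feedback
    vs no feedback) with the Wilcoxon rank-sum test; report medians."""
    _, p_value = ranksums(list(props_a), list(props_b))
    return statistics.median(props_a), statistics.median(props_b), p_value
```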
In the second analysis, we determined pre‐feedback and post‐feedback periods for providers who received feedback. The feedback month was defined as the month immediately after a provider was ranked in the top 10 for the first time during the intervention period. For each provider, stat orders and total laboratory orders during months before and after the feedback month, excluding the feedback month, were calculated. The change in the proportion of stat laboratory orders out of all orders from pre‐ to post‐feedback was then calculated for each provider for whom both pre‐ and post‐feedback data were available. Because providers may have utilized an unusually high proportion of stat orders during the months in which they were ranked in the top 10 (for example, due to being on rotations in which many orders are placed stat, such as the intensive care units), we conducted a sensitivity analysis excluding those months. Further, for comparison, we conducted the same analysis for providers who did not receive feedback and were ranked 11 to 30 in any month during the intervention period. In those providers, we considered the month immediately after a provider was ranked in the 11 to 30 range for the first time as the hypothetical feedback month. The proportional change in the stat laboratory ordering was analyzed using the paired Student t test.
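The pre/post comparison in this second analysis reduces to a paired t test on each provider's change in stat proportion. A sketch under assumed inputs (`scipy.stats.ttest_rel` is the paired Student t test):

```python
from scipy.stats import ttest_rel  # paired Student t test

def feedback_effect(pre_post):
    """pre_post: list of (pre_proportion, post_proportion) pairs, one per
    provider with data available in both periods (an assumed structure).
    Returns the mean change (post minus pre) and the paired t-test p-value."""
    pre = [a for a, _ in pre_post]
    post = [b for _, b in pre_post]
    # Average within-provider change in the stat proportion.
    mean_change = sum(b - a for a, b in pre_post) / len(pre_post)
    _, p_value = ttest_rel(post, pre)
    return mean_change, p_value
```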
In the third analysis, we calculated the proportion of stat laboratory orders each month for each provider. Individual provider data were excluded if total laboratory orders for the month were <10. We then calculated the average proportion of stat orders for each specialty and educational level among trainee providers every month, and plotted and compared the trends.
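This third analysis can be sketched as a group‐by‐month aggregation (the record fields are illustrative assumptions):

```python
from collections import defaultdict

def monthly_group_means(records, min_orders=10):
    """records: list of dicts with keys 'month', 'group', 'stat', and
    'total' (one row per provider-month; field names are illustrative).
    Excludes provider-months with <10 total orders, then averages the
    stat proportion per (group, month) for trend plotting."""
    by_group_month = defaultdict(list)
    for r in records:
        if r["total"] >= min_orders:
            by_group_month[(r["group"], r["month"])].append(r["stat"] / r["total"])
    return {key: sum(props) / len(props) for key, props in by_group_month.items()}
```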
All analyses were performed with JMP software version 9.0 (SAS Institute, Inc., Cary, NC). All statistical tests were 2‐sided, and P < 0.05 was considered significant.
RESULTS
We identified 1045 providers who ordered at least 1 laboratory test from September 2009 to June 2010. Of those, 716 were nontrainee providers and 329 were trainee providers. Among the trainee providers, 126 were Medicine residents, 33 were General Surgery residents, and 103 were PGY‐1. A total of 772,734 laboratory tests were ordered during the study period, and 349,658 (45.2%) tests were ordered as stat. Of all stat orders, 179,901 (51.5%) were ordered by Medicine residents and 52,225 (14.9%) were ordered by General Surgery residents.
Thirty‐seven providers received individual feedback during the intervention period. This group consisted of 8 nontrainee providers (nurse practitioners and physician assistants), 21 Medicine residents (5 were PGY‐1), and 8 General Surgery residents (all PGY‐1). This group ordered a total of 84,435 stat laboratory tests from September 2009 to June 2010 and was responsible for 24.2% of all stat laboratory test orders at the institution.
Provider Analysis
After exclusion of providers who ordered <10 laboratory tests from September 2009 to June 2010, a total of 807 providers remained. The median proportion of stat orders out of total orders was 40% among all providers: 41.6% for nontrainee providers (N = 500), 38.7% for Medicine residents (N = 125), 80.2% for General Surgery residents (N = 32), and 24.2% for other trainee providers (N = 150). The proportion of stat orders differed significantly by specialty and educational level, but it also varied widely among providers in the same specialty at the same educational level. Among PGY‐1 residents, the stat‐ordering proportion ranged from 6.9% to 49.1% for Medicine (N = 54) and from 69.0% to 97.1% for General Surgery (N = 16). The proportion of stat orders was significantly higher among providers who received feedback compared with those who did not (median, 72.4% [IQR, 55.0%–89.5%] vs 39.0% [IQR, 14.9%–65.7%]; P < 0.001). When stratified by specialty and educational level, the statistical significance remained in nontrainee providers and trainee providers at higher educational levels, but not in PGY‐1 residents (Table 1).
Table 1. Proportion of Stat Laboratory Orders by Provider Group and Feedback Status (Median % [IQR])

| | All Providers, N | All Providers, Stat % | Feedback Given, N | Feedback Given, Stat % | Feedback Not Given, N | Feedback Not Given, Stat % | P Value |
|---|---|---|---|---|---|---|---|
| Total | 807 | 40.0 (15.8–69.0) | 37 | 72.4 (55.0–89.5) | 770 | 39.0 (14.9–65.7) | <0.001 |
| Nontrainee providers | 500 | 41.6 (13.5–71.5) | 8 | 91.7 (64.0–97.5) | 492 | 40.2 (13.2–70.9) | <0.001 |
| Trainee providers | 307 | 37.8 (19.1–62.7) | 29 | 69.3 (44.3–80.9) | 278 | 35.1 (17.6–55.6) | <0.001 |
| Medicine | 125 | 38.7 (26.8–50.4) | 21 | 58.8 (36.8–72.6) | 104 | 36.1 (25.9–45.6) | <0.001 |
| Medicine, PGY‐1 | 54 | 28.1 (23.9–35.2) | 5 | 32.0 (25.5–36.8) | 49 | 27.9 (23.5–34.6) | 0.52 |
| Medicine, PGY‐2 and higher | 71 | 46.5 (39.1–60.4) | 16 | 63.9 (54.5–75.7) | 55 | 45.1 (36.5–54.9) | <0.001 |
| General Surgery | 32 | 80.2 (69.6–90.1) | 8 | 89.5 (79.3–92.7) | 24 | 78.7 (67.9–87.4) | <0.05 |
| General Surgery, PGY‐1 | 16 | 86.4 (79.1–91.1) | 8 | 89.5 (79.3–92.7) | 8 | 84.0 (73.2–89.1) | 0.25 |
| General Surgery, PGY‐2 and higher | 16 | 74.4 (65.4–85.3) | | | | | |
| Other | 150 | 24.2 (9.0–55.0) | | | | | |
| Other, PGY‐1 | 31 | 28.2 (18.4–78.3) | | | | | |
| Other, PGY‐2 or higher | 119 | 20.9 (5.6–51.3) | | | | | |
Stat Ordering Pattern Change by Individual Feedback
Among 37 providers who received individual feedback, 8 providers were ranked in the top 10 more than once and received repeated feedback. Twenty‐seven of 37 providers had both pre‐feedback and post‐feedback data and were included in the analysis. Of those, 7 were nontrainee providers, 16 were Medicine residents (5 were PGY‐1), and 4 were General Surgery residents (all PGY‐1). The proportion of stat laboratory orders per provider decreased by 15.7% (95% confidence interval [CI]: 5.6% to 25.9%, P = 0.004) after feedback (Table 2). The decrease remained significant after excluding the months in which providers were ranked in the top 10 (11.4%; 95% CI: 0.7% to 22.1%, P = 0.04).
Table 2. Change in the Mean Proportion of Stat Laboratory Orders From Pre‐ to Post‐Feedback

Top 10 providers (received feedback):

| | N | Pre, Mean Stat % | Post, Mean Stat % | Mean Difference (95% CI) | P Value |
|---|---|---|---|---|---|
| Total | 27 | 71.2 | 55.5 | −15.7 (−25.9 to −5.6) | 0.004 |
| Nontrainee providers | 7 | 94.6 | 73.2 | −21.4 (−46.9 to 4.1) | 0.09 |
| Trainee providers | 20 | 63.0 | 49.3 | −13.7 (−25.6 to −1.9) | 0.03 |
| Medicine | 16 | 55.8 | 45.0 | −10.8 (−23.3 to 1.6) | 0.08 |
| General Surgery | 4 | 91.9 | 66.4 | −25.4 (−78.9 to 28.0) | 0.23 |
| PGY‐1 | 9 | 58.9 | 47.7 | −11.2 (−32.0 to 9.5) | 0.25 |
| PGY‐2 or higher | 11 | 66.4 | 50.6 | −15.8 (−32.7 to 1.1) | 0.06 |

Providers ranked 11 to 30 (no feedback):

| | N | Pre, Mean Stat % | Post, Mean Stat % | Mean Difference (95% CI) | P Value |
|---|---|---|---|---|---|
| Total | 39 | 64.6 | 60.2 | −4.5 (−11.0 to 2.1) | 0.18 |
| Nontrainee providers | 12 | 84.4 | 80.6 | −3.8 (−11.9 to 4.3) | 0.32 |
| Trainee providers | 27 | 55.8 | 51.1 | −4.7 (−13.9 to 4.4) | 0.30 |
| Medicine | 21 | 46.2 | 41.3 | −4.8 (−16.3 to 6.7) | 0.39 |
| General Surgery | 6 | 89.6 | 85.2 | −4.4 (−20.5 to 11.6) | 0.51 |
| PGY‐1 | 15 | 55.2 | 49.2 | −6.0 (−18.9 to 6.9) | 0.33 |
| PGY‐2 or higher | 12 | 56.6 | 53.5 | −3.1 (−18.3 to 12.1) | 0.66 |
In comparison, a total of 57 providers who did not receive feedback were in the 11 to 30 range during the intervention period. Three Obstetrics and Gynecology residents and 3 Family Medicine residents were excluded from the analysis to match specialty with providers who received feedback. Thirty‐nine of the remaining 51 providers had adequate data and were included in the analysis, comprising 12 nontrainee providers, 21 Medicine residents (10 were PGY‐1), and 6 General Surgery residents (5 were PGY‐1). Among them, the proportion of stat laboratory orders per provider did not change significantly, with a 4.5% decrease (95% CI: −2.1% to 11.0%, P = 0.18; Table 2).
Stat Ordering Trends Among Trainee Providers
After exclusion of data for the month with <10 total laboratory tests per provider, a total of 303 trainee providers remained, providing 2322 data points for analysis. Of the 303, 125 were Medicine residents (54 were PGY‐1), 32 were General Surgery residents (16 were PGY‐1), and 146 were others (31 were PGY‐1). The monthly trends for the average proportion of stat orders among those providers are shown in Figure 1. The decrease in the proportion of stat orders was observed after January 2010 in Medicine and General Surgery residents both in PGY‐1 and PGY‐2 or higher, but no change was observed in other trainee providers.

DISCUSSION
We describe a series of interventions implemented at our institution to decrease the utilization of stat laboratory orders. Based on an audit of laboratory‐ordering data, we decided to target high utilizers of stat laboratory tests, especially Medicine and General Surgery residents. After presenting an educational session to those residents, we gave individual feedback to the highest utilizers of stat laboratory orders. Providers who received feedback decreased their utilization of stat laboratory orders, but the stat ordering pattern did not change among those who did not receive feedback.
The individual feedback intervention involved key stakeholders for resident and nontrainee provider education (directors of the Medicine and General Surgery residency programs and other direct clinical supervisors). The targeted feedback was delivered via direct supervisors and was provided more than once as needed, both key factors for effective feedback in modifying behavior in professional practice.[19] Allowing the supervisors to choose the most appropriate form of feedback for each individual (meetings, phone calls, or e‐mail) enabled timely and individually tailored feedback and contributed to successful implementation. We feel the intervention had high educational value for residents, as it promoted residents' engagement in proper systems‐based practice, one of the 6 core competencies of the Accreditation Council for Graduate Medical Education (ACGME).
We utilized the EMR to obtain provider‐specific data for feedback and analysis. As previously suggested, the use of the EMR for audit and feedback was effective in providing timely, actionable, and individualized feedback with peer benchmarking.[20, 21] We used the raw number of stat laboratory orders for the audit and the proportion of stat orders out of total orders to assess individual behavioral patterns. Although the proportional use of stat orders is affected by patient acuity and workplace or rotation site, it also appears to be largely driven by providers' preferences or practice patterns, given the variance we observed among providers of the same specialty and educational level. The changes in stat ordering trends seen only among Medicine and General Surgery residents suggest that our interventions successfully decreased the overall utilization of stat laboratory orders among targeted providers; it seems less likely that those decreases were due to changes in patient acuity, changes in rotation sites, or a learning curve among trainee providers. When averaged over the 10‐month study period, as shown in Table 1, the providers who received feedback ordered a higher proportion of stat tests than those who did not receive feedback, except for PGY‐1 residents. This suggests that although auditing based on the number of stat laboratory orders identified providers who tended to order more stat tests than others, it may not be a reliable indicator for PGY‐1 residents, whose number of laboratory orders fluctuates highly by rotation.
There are certain limitations to our study. First, we assumed that the top utilizers were inappropriately ordering stat laboratory tests. Because there is no clear consensus as to what constitutes appropriate stat testing,[7] it was difficult, if not impossible, to determine which specific orders were inappropriate. However, high variability of the stat ordering pattern in the analysis provides some evidence that high stat utilizers customarily order more stat testing as compared with others. A recent study also revealed that the median stat ordering percentage was 35.9% among 52 US institutions.[13] At our institution, 47.8% of laboratory tests were ordered stat prior to the intervention, higher than the benchmark, providing the rationale for our intervention.
Second, the intervention was conducted in a time‐series fashion and no randomization was employed. The comparison of providers who received feedback with those who did not is subject to selection bias, and the difference in the change in stat ordering pattern between these 2 groups may be partially due to variability of work location, rotation type, or acuity of patients. However, we performed a sensitivity analysis excluding the months when the providers were ranked in the top 10, assuming that they may have ordered an unusually high proportion of stat tests due to high acuity of patients (eg, rotation in the intensive care units) during those months. Robust results in this analysis support our contention that individual feedback was effective. In addition, we cannot completely rule out the possibility that the changes in stat ordering practice may be solely due to natural maturation effects within an academic year among trainee providers, especially PGY‐1 residents. However, relatively acute changes in the stat ordering trends only among targeted provider groups around January 2010, corresponding to the timing of interventions, suggest otherwise.
Third, we were not able to test if the intervention or decrease in stat orders adversely affected patient care. For example, if, after receiving feedback, providers did not order some tests stat that should have been ordered that way, this could have negatively affected patient care. Additionally, we did not evaluate whether reduction in stat laboratory orders improved timeliness of the reporting of stat laboratory results.
Lastly, the sustained effect and feasibility of this intervention were not tested. Past studies suggest that educational interventions targeting laboratory ordering behavior most likely need to be continued to maintain their effectiveness.[22, 23] Although we acknowledge that sustaining this type of intervention may be difficult, we feel we have demonstrated that there is still value associated with giving personalized feedback.
This study has implications for future interventions and research. Use of automated, EMR‐based feedback on laboratory ordering performance may be effective in reducing excessive stat ordering and may obviate the need for time‐consuming efforts by supervisors. Development of quality indicators that more accurately assess stat ordering patterns, potentially adjusted for working sites and patient acuity, may be necessary. Studies that measure the impact of decreasing stat laboratory orders on turnaround times and cost may be of value.
CONCLUSION
At our urban, tertiary‐care teaching institution, stat ordering frequency was highly variable among providers. Targeted individual feedback to providers who ordered a large number of stat laboratory tests decreased their stat laboratory order utilization.
REFERENCES

1. Turnaround time, part 2: stats too high, yet labs cope. MLO Med Lab Obs. 1993;25(9):33–38.
2. Laboratory turnaround time. Am J Clin Pathol. 1996;105(6):676–688.
3. No more STAT testing. MLO Med Lab Obs. 2005;37(8):22, 24, 26.
4. Phlebotomy, stat testing and laboratory organization: an intriguing relationship. Clin Chem Lab Med. 2012;50(12):2065–2068.
5. Laboratory request appropriateness in emergency: impact on hospital organization. Clin Chem Lab Med. 2006;44(6):760–764.
6. College of American Pathologists. Definitions used in past Q‐PROBES studies (1991–2011). Available at: http://www.cap.org/apps/docs/q_probes/q‐probes_definitions.pdf. Updated September 29, 2011. Accessed July 31, 2013.
7. Practice parameter. STAT testing? A guideline for meeting clinician turnaround time requirements. Am J Clin Pathol. 1996;105(6):671–675.
8. Intralaboratory performance and laboratorians' expectations for stat turnaround times: a College of American Pathologists Q‐Probes study of four cerebrospinal fluid determinations. Arch Pathol Lab Med. 1991;115(10):977–983.
9. How fast is fast enough for clinical laboratory turnaround time? Measurement of the interval between result entry and inquiries for reports. Am J Clin Pathol. 1997;108(4):400–405.
10. Strategies of organization and service for the critical‐care laboratory. Clin Chem. 1990;36(8):1557–1561.
11. Evaluation of stat and routine turnaround times as a component of laboratory quality. Am J Clin Pathol. 1989;91(3):331–335.
12. Laboratory results: timeliness as a quality attribute and strategy. Am J Clin Pathol. 2001;116(3):311–315.
13. Utilization of stat test priority in the clinical laboratory: a College of American Pathologists Q‐Probes study of 52 institutions. Arch Pathol Lab Med. 2013;137(2):220–227.
14. Controlling the use of stat testing. Pathologist. 1984;38(8):474–477.
15. Optimizing the availability of 'stat' laboratory tests using Shewhart 'C' control charts. Ann Clin Biochem. 2002;39(pt 2):140–144.
16. Evaluating stat testing options in an academic health center: therapeutic turnaround time and staff satisfaction. Clin Chem. 1998;44(8):1597–1603.
17. Impact of a physician's order entry (POE) system on physicians' ordering patterns and patient length of stay. Int J Med Inform. 2002;65(3):213–223.
18. Instrumentation for STAT analyses. Clin Lab Med. 1988;8(4):689–697.
19. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012;6:CD000259.
20. The feasibility of automating audit and feedback for ART guideline adherence in Malawi. J Am Med Inform Assoc. 2011;18(6):868–874.
21. Effect of an outpatient antimicrobial stewardship intervention on broad‐spectrum antibiotic prescribing by primary care pediatricians: a randomized trial. JAMA. 2013;309(22):2345–2352.
22. An educational program to modify laboratory use by house staff. J Med Educ. 1977;52(7):578–581.
23. Ordering of laboratory tests in a teaching hospital: can it be improved? JAMA. 1983;249(22):3076–3080.
All Providers | Feedback Given | Feedback Not Given | |||||
---|---|---|---|---|---|---|---|
N | Stat % | N | Stat % | N | Stat % | P Valuea | |
| |||||||
Total | 807 | 40 (15.869.0) | 37 | 72.4 (55.089.5) | 770 | 39.0 (14.965.7) | <0.001 |
Nontrainee providersb | 500 | 41.6 (13.571.5) | 8 | 91.7 (64.097.5) | 492 | 40.2 (13.270.9) | <0.001 |
Trainee providersc | 307 | 37.8 (19.162.7) | 29 | 69.3 (44.380.9) | 278 | 35.1 (17.655.6) | <0.001 |
Medicine | 125 | 38.7 (26.850.4) | 21 | 58.8 (36.872.6) | 104 | 36.1 (25.945.6) | <0.001 |
PGY‐1 | 54 | 28.1 (23.935.2) | 5 | 32.0 (25.536.8) | 49 | 27.9 (23.534.6) | 0.52 |
PGY‐2 and higher | 71 | 46.5 (39.160.4) | 16 | 63.9 (54.575.7) | 55 | 45.1 (36.554.9) | <0.001 |
General surgery | 32 | 80.2 (69.690.1) | 8 | 89.5 (79.392.7) | 24 | 78.7 (67.987.4) | <0.05 |
PGY‐1 | 16 | 86.4 (79.191.1) | 8 | 89.5 (79.392.7) | 8 | 84.0 (73.289.1) | 0.25 |
PGY‐2 and higher | 16 | 74.4 (65.485.3) | |||||
Other | 150 | 24.2 (9.055.0) | |||||
PGY‐1 | 31 | 28.2 (18.478.3) | |||||
PGY‐2 or higher | 119 | 20.9 (5.651.3) |
Stat Ordering Pattern Change by Individual Feedback
Among 37 providers who received individual feedback, 8 providers were ranked in the top 10 more than once and received repeated feedback. Twenty‐seven of 37 providers had both pre‐feedback and post‐feedback data and were included in the analysis. Of those, 7 were nontrainee providers, 16 were Medicine residents (5 were PGY‐1), and 4 were General Surgery residents (all PGY‐1). The proportion of stat laboratory orders per provider decreased by 15.7% (95% confidence interval [CI]: 5.6% to 25.9%, P = 0.004) after feedback (Table 2). The decrease remained significant after excluding the months in which providers were ranked in the top 10 (11.4%; 95% CI: 0.7% to 22.1%, P = 0.04).
Top 10 Providers (Received Feedback) | Providers Ranked in 1130 (No Feedback) | |||||||||
---|---|---|---|---|---|---|---|---|---|---|
N | Mean Stat % | Mean Difference (95% CI) | P Value | N | Mean Stat % | Mean Difference (95% CI) | P Value | |||
Pre | Post | Pre | Post | |||||||
| ||||||||||
Total | 27 | 71.2 | 55.5 | 15.7 (25.9 to 5.6) | 0.004 | 39 | 64.6 | 60.2 | 4.5 (11.0 to 2.1) | 0.18 |
Nontrainee providers | 7 | 94.6 | 73.2 | 21.4 (46.9 to 4.1) | 0.09 | 12 | 84.4 | 80.6 | 3.8 (11.9 to 4.3) | 0.32 |
Trainee providers | 20 | 63.0 | 49.3 | 13.7 (25.6 to 1.9) | 0.03 | 27 | 55.8 | 51.1 | 4.7 (13.9 to 4.4) | 0.30 |
Medicine | 16 | 55.8 | 45.0 | 10.8 (23.3 to 1.6) | 0.08 | 21 | 46.2 | 41.3 | 4.8 (16.3 to 6.7) | 0.39 |
General Surgery | 4 | 91.9 | 66.4 | 25.4 (78.9 to 28.0) | 0.23 | 6 | 89.6 | 85.2 | 4.4 (20.5 to 11.6) | 0.51 |
PGY‐1 | 9 | 58.9 | 47.7 | 11.2 (32.0 to 9.5) | 0.25 | 15 | 55.2 | 49.2 | 6.0 (18.9 to 6.9) | 0.33 |
PGY‐2 or Higher | 11 | 66.4 | 50.6 | 15.8 (32.7 to 1.1) | 0.06 | 12 | 56.6 | 53.5 | 3.1 (18.3 to 12.1) | 0.66 |
In comparison, a total of 57 providers who did not receive feedback were in the 11 to 30 range during the intervention period. Three Obstetrics and Gynecology residents and 3 Family Medicine residents were excluded from the analysis to match specialty with providers who received feedback. Thirty‐nine of 51 providers had adequate data and were included in the analysis, comprising 12 nontrainee providers, 21 Medicine residents (10 were PGY‐1), and 6 General Surgery residents (5 were PGY‐1). Among them, the proportion of stat laboratory orders per provider did not change significantly, with a 4.5% decrease (95% CI: 2.1% to 11.0%, P = 0.18; Table 2).
Stat Ordering Trends Among Trainee Providers
After exclusion of data for the month with <10 total laboratory tests per provider, a total of 303 trainee providers remained, providing 2322 data points for analysis. Of the 303, 125 were Medicine residents (54 were PGY‐1), 32 were General Surgery residents (16 were PGY‐1), and 146 were others (31 were PGY‐1). The monthly trends for the average proportion of stat orders among those providers are shown in Figure 1. The decrease in the proportion of stat orders was observed after January 2010 in Medicine and General Surgery residents both in PGY‐1 and PGY‐2 or higher, but no change was observed in other trainee providers.

DISCUSSION
We describe a series of interventions implemented at our institution to decrease the utilization of stat laboratory orders. Based on an audit of laboratory‐ordering data, we decided to target high utilizers of stat laboratory tests, especially Medicine and General Surgery residents. After presenting an educational session to those residents, we gave individual feedback to the highest utilizers of stat laboratory orders. Providers who received feedback decreased their utilization of stat laboratory orders, but the stat ordering pattern did not change among those who did not receive feedback.
The individual feedback intervention involved key stakeholders for resident and nontrainee provider education (directors of the Medicine and General Surgery residency programs and other direct clinical supervisors). The targeted feedback was delivered via direct supervisors and was provided more than once as needed, which are key factors for effective feedback in modifying behavior in professional practice.[19] Allowing the supervisors to choose the most appropriate form of feedback for each individual (meetings, phone calls, or e‐mail) enabled timely and individually tailored feedback and contributed to successful implementation. We feel intervention had high educational value for residents, as it promoted residents' engagement in proper systems‐based practice, one of the 6 core competencies of the Accreditation Council for Graduate Medical Education (ACGME).
We utilized the EMR to obtain provider‐specific data for feedback and analysis. As previously suggested, the use of the EMR for audit and feedback was effective in providing timely, actionable, and individualized feedback with peer benchmarking.[20, 21] We used the raw number of stat laboratory orders for audit and the proportion of stat orders out of total orders to assess the individual behavioral patterns. Although the proportional use of stat orders is affected by patient acuity and workplace or rotation site, it also seems largely affected by provider's preference or practice patterns, as we saw the variance among providers of the same specialty and educational level. The changes in the stat ordering trends only seen among Medicine and General Surgery residents suggests that our interventions successfully decreased the overall utilization of stat laboratory orders among targeted providers, and it seems less likely that those decreases are due to changes in patient acuity, changes in rotation sites, or learning curve among trainee providers. When averaged over the 10‐month study period, as shown in Table 1, the providers who received feedback ordered a higher proportion of stat tests than those who did not receive feedback, except for PGY‐1 residents. This suggests that although auditing based on the number of stat laboratory orders identified providers who tended to order more stat tests than others, it may not be a reliable indicator for PGY‐1 residents, whose number of laboratory orders highly fluctuates by rotation.
There are certain limitations to our study. First, we assumed that the top utilizers were inappropriately ordering stat laboratory tests. Because there is no clear consensus as to what constitutes appropriate stat testing,[7] it was difficult, if not impossible, to determine which specific orders were inappropriate. However, high variability of the stat ordering pattern in the analysis provides some evidence that high stat utilizers customarily order more stat testing as compared with others. A recent study also revealed that the median stat ordering percentage was 35.9% among 52 US institutions.[13] At our institution, 47.8% of laboratory tests were ordered stat prior to the intervention, higher than the benchmark, providing the rationale for our intervention.
Second, the intervention was conducted in a time‐series fashion and no randomization was employed. The comparison of providers who received feedback with those who did not is subject to selection bias, and the difference in the change in stat ordering pattern between these 2 groups may be partially due to variability of work location, rotation type, or acuity of patients. However, we performed a sensitivity analysis excluding the months when the providers were ranked in the top 10, assuming that they may have ordered an unusually high proportion of stat tests due to high acuity of patients (eg, rotation in the intensive care units) during those months. Robust results in this analysis support our contention that individual feedback was effective. In addition, we cannot completely rule out the possibility that the changes in stat ordering practice may be solely due to natural maturation effects within an academic year among trainee providers, especially PGY‐1 residents. However, relatively acute changes in the stat ordering trends only among targeted provider groups around January 2010, corresponding to the timing of interventions, suggest otherwise.
Third, we were not able to test if the intervention or decrease in stat orders adversely affected patient care. For example, if, after receiving feedback, providers did not order some tests stat that should have been ordered that way, this could have negatively affected patient care. Additionally, we did not evaluate whether reduction in stat laboratory orders improved timeliness of the reporting of stat laboratory results.
Lastly, the sustained effect and feasibility of this intervention were not tested. Past studies suggest educational interventions in laboratory ordering behavior would most likely need to be continued to maintain its effectiveness.[22, 23] Although we acknowledge that sustainability of this type of intervention may be difficult, we feel we have demonstrated that there is still value associated with giving personalized feedback.
This study has implications for future interventions and research. Use of automated, EMR‐based feedback on laboratory ordering performance may be effective in reducing excessive stat ordering and may obviate the need for time‐consuming efforts by supervisors. Development of quality indicators that more accurately assess stat ordering patterns, potentially adjusted for working sites and patient acuity, may be necessary. Studies that measure the impact of decreasing stat laboratory orders on turnaround times and cost may be of value.
CONCLUSION
At our urban, tertiary‐care teaching institution, stat ordering frequency was highly variable among providers. Targeted individual feedback to providers who ordered a large number of stat laboratory tests decreased their stat laboratory order utilization.
Setting
Beth Israel Medical Center is an 856‐bed, urban, tertiary‐care teaching hospital with a capacity of 504 medical and surgical beds. In October 2009, 47.8% of inpatient laboratory tests (excluding the emergency department) were ordered as stat, according to an electronic audit of our institution's CPOE system, GE Centricity Enterprise (GE Medical Systems Information Technologies, Milwaukee, WI). Another audit using the same data query for the period of December 2009 revealed that 50 of 488 providers (attending physicians, nurse practitioners, physician assistants, fellows, and residents) accounted for 51% of total stat laboratory orders, and that Medicine and General Surgery residents accounted for 43 of these 50 providers. These findings prompted us to develop interventions that targeted high utilizers of stat laboratory orders, especially Medicine and General Surgery residents.
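The concentration found in the December 2009 audit (50 of 488 providers accounting for 51% of stat orders) amounts to computing the share of stat orders attributable to the top k providers. A minimal sketch of that arithmetic follows; the data and provider identifiers are invented for illustration, as the actual audit queried the CPOE database directly.

```python
def top_k_stat_share(stat_counts, k):
    """Fraction of all stat orders placed by the k providers with the most stat orders.

    stat_counts: mapping of provider id -> number of stat orders (hypothetical data).
    """
    total = sum(stat_counts.values())
    top = sorted(stat_counts.values(), reverse=True)[:k]
    return sum(top) / total

# Illustrative only: a few heavy utilizers dominate the stat volume.
counts = {"prov_a": 500, "prov_b": 400, "prov_c": 300,
          "prov_d": 50, "prov_e": 30, "prov_f": 20}
share = top_k_stat_share(counts, 3)  # 1200 of 1300 orders
```

Ranking providers this way each month is also how the top-10 feedback list described below could be generated.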
Teaching Session
Medicine and General Surgery residents were given a 1‐hour educational session at a teaching conference in January 2010. At this session, residents were instructed that ordering stat laboratory tests was appropriate when the results were needed urgently to make clinical decisions as quickly as possible. This session also explained the potential consequences associated with excessive stat laboratory orders and provided department‐specific data on current stat laboratory utilization.
Individual Feedback
From January to May 2010, a list of stat laboratory orders by provider was generated each month from the laboratory department's database. The top 10 providers who most frequently placed stat orders were identified and given individual feedback by their direct supervisors based on data from the prior month (feedback provided from February to June 2010). Medicine and General Surgery residents were counseled by their residency program directors, and nontrainee providers by their immediate supervising physicians. Feedback and counseling were given via brief individual meetings, phone calls, or e‐mail; supervisors chose the method that ensured the most timely delivery of feedback. Feedback and counseling consisted of explaining the effort to reduce stat laboratory ordering and the rationale behind it, alerting providers that they were outliers, and encouraging them to change their behavior. No punitive consequences were discussed; the feedback sessions were purely informative. When an individual was ranked in the top 10 again after receiving feedback, he or she received repeated feedback.
Data Collection and Measured Outcomes
We retrospectively collected data on monthly laboratory test orders by providers from September 2009 to June 2010. The data were extracted from the electronic medical record (EMR) system and included any inpatient laboratory orders at the institution. Laboratory orders placed in the emergency department were excluded. Providers were divided into nontrainees (attending physicians, nurse practitioners, and physician assistants) and trainee providers (residents and fellows). Trainee providers were further categorized by educational levels (postgraduate year [PGY]‐1 vs PGY‐2 or higher) and specialty (Medicine vs General Surgery vs other). Fellows in medical and surgical subspecialties were categorized as other.
The primary outcome measure was the proportion of stat orders out of total laboratory orders for each provider, selected to capture an individual's tendency to utilize stat laboratory orders.
Statistical Analysis
In the first analysis, stat and total laboratory orders were aggregated for each provider. Providers who ordered <10 laboratory tests during the study period were excluded. We calculated the proportion of stat out of total laboratory orders for each provider, and compared it by specialty, by educational level, and by feedback status. Median and interquartile range (IQR) were reported due to non‐normal distribution, and the Wilcoxon rank‐sum test was used for comparisons.
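The first analysis can be sketched as below. The actual analysis was run in JMP; this self-contained version implements the Wilcoxon rank-sum test with the normal approximation (assuming no tied proportions), and all data shown are hypothetical.

```python
import math

def stat_proportions(orders, min_total=10):
    """Per-provider stat proportion, excluding providers with <min_total orders.

    orders: iterable of (provider_id, n_stat, n_total) tuples (hypothetical structure).
    """
    return {pid: n_stat / n_total for pid, n_stat, n_total in orders if n_total >= min_total}

def rank_sum_z(group_a, group_b):
    """Wilcoxon rank-sum test via the normal approximation (assumes no ties).

    Returns the z statistic and two-sided p value comparing two samples of
    per-provider stat proportions.
    """
    pooled = sorted((v, g) for g, vals in (("a", group_a), ("b", group_b)) for v in vals)
    r_a = sum(rank for rank, (v, g) in enumerate(pooled, start=1) if g == "a")
    n_a, n_b = len(group_a), len(group_b)
    n = n_a + n_b
    mean = n_a * (n + 1) / 2                      # expected rank sum under H0
    sd = math.sqrt(n_a * n_b * (n + 1) / 12)      # its standard deviation
    z = (r_a - mean) / sd
    p = math.erfc(abs(z) / math.sqrt(2))          # two-sided normal tail
    return z, p

# Hypothetical stat proportions: feedback group vs no-feedback group.
z, p = rank_sum_z([0.72, 0.81, 0.90], [0.20, 0.35, 0.41])
```

With realistic sample sizes the normal approximation used here agrees closely with the exact rank-sum distribution.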
In the second analysis, we determined pre‐feedback and post‐feedback periods for providers who received feedback. The feedback month was defined as the month immediately after a provider was ranked in the top 10 for the first time during the intervention period. For each provider, stat orders and total laboratory orders during months before and after the feedback month, excluding the feedback month, were calculated. The change in the proportion of stat laboratory orders out of all orders from pre‐ to post‐feedback was then calculated for each provider for whom both pre‐ and post‐feedback data were available. Because providers may have utilized an unusually high proportion of stat orders during the months in which they were ranked in the top 10 (for example, due to being on rotations in which many orders are placed stat, such as the intensive care units), we conducted a sensitivity analysis excluding those months. Further, for comparison, we conducted the same analysis for providers who did not receive feedback and were ranked 11 to 30 in any month during the intervention period. In those providers, we considered the month immediately after a provider was ranked in the 11 to 30 range for the first time as the hypothetical feedback month. The change in the stat ordering proportion from pre‐ to post‐feedback was analyzed using the paired Student t test.
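The pre/post comparison reduces to a paired Student t test on per-provider proportions, sketched below with hypothetical data (the actual analysis was run in JMP).

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired Student t test on per-provider stat proportions before and after feedback.

    Returns the mean change (post minus pre), the t statistic, and degrees of
    freedom. A 95% CI for the mean change would use the t critical value for
    n-1 df (not computed here, to keep the sketch stdlib-only).
    """
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    se = stdev(diffs) / math.sqrt(n)  # standard error of the mean difference
    t = mean(diffs) / se
    return mean(diffs), t, n - 1

# Hypothetical pre- and post-feedback stat proportions for four providers.
d_bar, t, df = paired_t([0.80, 0.70, 0.90, 0.60], [0.60, 0.50, 0.80, 0.50])
```

A negative mean change here corresponds to the decrease in stat ordering reported for the feedback group.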
In the third analysis, we calculated the proportion of stat laboratory orders each month for each provider. Individual provider data were excluded if total laboratory orders for the month were <10. We then calculated the average proportion of stat orders for each specialty and educational level among trainee providers every month, and plotted and compared the trends.
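The monthly trend calculation can be sketched as a group-wise average with the same <10-orders exclusion; the record structure below is an assumption for illustration.

```python
from collections import defaultdict

def monthly_avg_stat_proportion(records, min_orders=10):
    """Average per-provider stat proportion for each (month, group).

    records: iterable of (month, group, n_stat, n_total) per provider-month
    (hypothetical structure). Provider-months with fewer than min_orders total
    orders are excluded, as in the analysis.
    """
    sums = defaultdict(lambda: [0.0, 0])
    for month, group, n_stat, n_total in records:
        if n_total < min_orders:
            continue  # drop sparse provider-months
        cell = sums[(month, group)]
        cell[0] += n_stat / n_total
        cell[1] += 1
    return {key: total / count for key, (total, count) in sums.items()}

# Illustrative provider-months: the 5-order row falls below the threshold.
trend = monthly_avg_stat_proportion([
    ("2010-01", "Medicine PGY-1", 5, 10),
    ("2010-01", "Medicine PGY-1", 8, 10),
    ("2010-01", "Medicine PGY-1", 1, 5),
])
```

Plotting these averages by month, one series per specialty and educational level, yields trend lines like those in Figure 1.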
All analyses were performed with JMP software version 9.0 (SAS Institute, Inc., Cary, NC). All statistical tests were 2‐sided, and P < 0.05 was considered significant.
RESULTS
We identified 1045 providers who ordered at least 1 laboratory test from September 2009 to June 2010. Of those, 716 were nontrainee providers and 329 were trainee providers. Among the trainee providers, 126 were Medicine residents, 33 were General Surgery residents, and 103 were PGY‐1. A total of 772,734 laboratory tests were ordered during the study period, and 349,658 (45.2%) tests were ordered as stat. Of all stat orders, 179,901 (51.5%) were ordered by Medicine residents and 52,225 (14.9%) were ordered by General Surgery residents.
Thirty‐seven providers received individual feedback during the intervention period. This group consisted of 8 nontrainee providers (nurse practitioners and physician assistants), 21 Medicine residents (5 were PGY‐1), and 8 General Surgery residents (all PGY‐1). This group ordered a total of 84,435 stat laboratory tests from September 2009 to June 2010 and was responsible for 24.2% of all stat laboratory test orders at the institution.
Provider Analysis
After exclusion of providers who ordered <10 laboratory tests from September 2009 to June 2010, a total of 807 providers remained. The median proportion of stat orders out of total orders was 40% among all providers: 41.6% for nontrainee providers (N = 500), 38.7% for Medicine residents (N = 125), 80.2% for General Surgery residents (N = 32), and 24.2% for other trainee providers (N = 150). The proportion of stat orders differed significantly not only by specialty and educational level, but also among providers in the same specialty at the same educational level. Among PGY‐1 residents, the stat‐ordering proportion ranged from 6.9% to 49.1% for Medicine (N = 54) and 69.0% to 97.1% for General Surgery (N = 16). The proportion of stat orders was significantly higher among providers who received feedback compared with those who did not (median, 72.4% [IQR, 55.0%–89.5%] vs 39.0% [IQR, 14.9%–65.7%], P < 0.001). When stratified by specialty and educational level, the statistical significance remained in nontrainee providers and trainee providers with higher educational level, but not in PGY‐1 residents (Table 1).
Table 1. Proportion of Stat Laboratory Orders by Provider Group (values are median % [IQR])

| Provider Group | All: N | All: Stat % | Feedback Given: N | Stat % | Feedback Not Given: N | Stat % | P Value^a |
|---|---|---|---|---|---|---|---|
| Total | 807 | 40 (15.8–69.0) | 37 | 72.4 (55.0–89.5) | 770 | 39.0 (14.9–65.7) | <0.001 |
| Nontrainee providers^b | 500 | 41.6 (13.5–71.5) | 8 | 91.7 (64.0–97.5) | 492 | 40.2 (13.2–70.9) | <0.001 |
| Trainee providers^c | 307 | 37.8 (19.1–62.7) | 29 | 69.3 (44.3–80.9) | 278 | 35.1 (17.6–55.6) | <0.001 |
| Medicine | 125 | 38.7 (26.8–50.4) | 21 | 58.8 (36.8–72.6) | 104 | 36.1 (25.9–45.6) | <0.001 |
| PGY‐1 | 54 | 28.1 (23.9–35.2) | 5 | 32.0 (25.5–36.8) | 49 | 27.9 (23.5–34.6) | 0.52 |
| PGY‐2 and higher | 71 | 46.5 (39.1–60.4) | 16 | 63.9 (54.5–75.7) | 55 | 45.1 (36.5–54.9) | <0.001 |
| General Surgery | 32 | 80.2 (69.6–90.1) | 8 | 89.5 (79.3–92.7) | 24 | 78.7 (67.9–87.4) | <0.05 |
| PGY‐1 | 16 | 86.4 (79.1–91.1) | 8 | 89.5 (79.3–92.7) | 8 | 84.0 (73.2–89.1) | 0.25 |
| PGY‐2 and higher | 16 | 74.4 (65.4–85.3) | | | | | |
| Other | 150 | 24.2 (9.0–55.0) | | | | | |
| PGY‐1 | 31 | 28.2 (18.4–78.3) | | | | | |
| PGY‐2 or higher | 119 | 20.9 (5.6–51.3) | | | | | |
Stat Ordering Pattern Change by Individual Feedback
Among 37 providers who received individual feedback, 8 providers were ranked in the top 10 more than once and received repeated feedback. Twenty‐seven of 37 providers had both pre‐feedback and post‐feedback data and were included in the analysis. Of those, 7 were nontrainee providers, 16 were Medicine residents (5 were PGY‐1), and 4 were General Surgery residents (all PGY‐1). The proportion of stat laboratory orders per provider decreased by 15.7% (95% confidence interval [CI]: 5.6% to 25.9%, P = 0.004) after feedback (Table 2). The decrease remained significant after excluding the months in which providers were ranked in the top 10 (11.4%; 95% CI: 0.7% to 22.1%, P = 0.04).
Table 2. Change in the Proportion of Stat Laboratory Orders From Pre‐ to Post‐Feedback (Feedback: top 10 providers, received feedback; No Feedback: providers ranked 11–30)

| Provider Group | Feedback: N | Pre Stat % | Post Stat % | Mean Difference (95% CI) | P Value | No Feedback: N | Pre Stat % | Post Stat % | Mean Difference (95% CI) | P Value |
|---|---|---|---|---|---|---|---|---|---|---|
| Total | 27 | 71.2 | 55.5 | −15.7 (−25.9 to −5.6) | 0.004 | 39 | 64.6 | 60.2 | −4.5 (−11.0 to 2.1) | 0.18 |
| Nontrainee providers | 7 | 94.6 | 73.2 | −21.4 (−46.9 to 4.1) | 0.09 | 12 | 84.4 | 80.6 | −3.8 (−11.9 to 4.3) | 0.32 |
| Trainee providers | 20 | 63.0 | 49.3 | −13.7 (−25.6 to −1.9) | 0.03 | 27 | 55.8 | 51.1 | −4.7 (−13.9 to 4.4) | 0.30 |
| Medicine | 16 | 55.8 | 45.0 | −10.8 (−23.3 to 1.6) | 0.08 | 21 | 46.2 | 41.3 | −4.8 (−16.3 to 6.7) | 0.39 |
| General Surgery | 4 | 91.9 | 66.4 | −25.4 (−78.9 to 28.0) | 0.23 | 6 | 89.6 | 85.2 | −4.4 (−20.5 to 11.6) | 0.51 |
| PGY‐1 | 9 | 58.9 | 47.7 | −11.2 (−32.0 to 9.5) | 0.25 | 15 | 55.2 | 49.2 | −6.0 (−18.9 to 6.9) | 0.33 |
| PGY‐2 or higher | 11 | 66.4 | 50.6 | −15.8 (−32.7 to 1.1) | 0.06 | 12 | 56.6 | 53.5 | −3.1 (−18.3 to 12.1) | 0.66 |
In comparison, a total of 57 providers who did not receive feedback were in the 11 to 30 range during the intervention period. Three Obstetrics and Gynecology residents and 3 Family Medicine residents were excluded from the analysis to match specialty with providers who received feedback. Thirty‐nine of 51 providers had adequate data and were included in the analysis, comprising 12 nontrainee providers, 21 Medicine residents (10 were PGY‐1), and 6 General Surgery residents (5 were PGY‐1). Among them, the proportion of stat laboratory orders per provider did not change significantly, with a 4.5% decrease (95% CI: −2.1% to 11.0%, P = 0.18; Table 2).
Stat Ordering Trends Among Trainee Providers
After exclusion of provider‐months with <10 total laboratory tests, a total of 303 trainee providers remained, providing 2322 data points for analysis. Of the 303, 125 were Medicine residents (54 were PGY‐1), 32 were General Surgery residents (16 were PGY‐1), and 146 were others (31 were PGY‐1). The monthly trends for the average proportion of stat orders among those providers are shown in Figure 1. A decrease in the proportion of stat orders was observed after January 2010 in Medicine and General Surgery residents, both PGY‐1 and PGY‐2 or higher, but no change was observed in other trainee providers.

DISCUSSION
We describe a series of interventions implemented at our institution to decrease the utilization of stat laboratory orders. Based on an audit of laboratory‐ordering data, we decided to target high utilizers of stat laboratory tests, especially Medicine and General Surgery residents. After presenting an educational session to those residents, we gave individual feedback to the highest utilizers of stat laboratory orders. Providers who received feedback decreased their utilization of stat laboratory orders, but the stat ordering pattern did not change among those who did not receive feedback.
The individual feedback intervention involved key stakeholders for resident and nontrainee provider education (directors of the Medicine and General Surgery residency programs and other direct clinical supervisors). The targeted feedback was delivered via direct supervisors and was provided more than once as needed, which are key factors for effective feedback in modifying behavior in professional practice.[19] Allowing the supervisors to choose the most appropriate form of feedback for each individual (meetings, phone calls, or e‐mail) enabled timely and individually tailored feedback and contributed to successful implementation. We feel the intervention had high educational value for residents, as it promoted residents' engagement in proper systems‐based practice, one of the 6 core competencies of the Accreditation Council for Graduate Medical Education (ACGME).
We utilized the EMR to obtain provider‐specific data for feedback and analysis. As previously suggested, use of the EMR for audit and feedback was effective in providing timely, actionable, and individualized feedback with peer benchmarking.[20, 21] We used the raw number of stat laboratory orders for the audit and the proportion of stat orders out of total orders to assess individual behavioral patterns. Although the proportional use of stat orders is affected by patient acuity and workplace or rotation site, it also appears largely driven by providers' preferences or practice patterns, given the variance we observed among providers of the same specialty and educational level. That the changes in stat ordering trends were seen only among Medicine and General Surgery residents suggests that our interventions decreased the overall utilization of stat laboratory orders among targeted providers, and it seems less likely that those decreases were due to changes in patient acuity, changes in rotation sites, or learning curves among trainee providers. When averaged over the 10‐month study period, as shown in Table 1, providers who received feedback ordered a higher proportion of stat tests than those who did not, except for PGY‐1 residents. This suggests that although auditing based on the number of stat laboratory orders identified providers who tended to order more stat tests than others, it may not be a reliable indicator for PGY‐1 residents, whose number of laboratory orders fluctuates highly by rotation.
There are certain limitations to our study. First, we assumed that the top utilizers were inappropriately ordering stat laboratory tests. Because there is no clear consensus as to what constitutes appropriate stat testing,[7] it was difficult, if not impossible, to determine which specific orders were inappropriate. However, high variability of the stat ordering pattern in the analysis provides some evidence that high stat utilizers customarily order more stat testing as compared with others. A recent study also revealed that the median stat ordering percentage was 35.9% among 52 US institutions.[13] At our institution, 47.8% of laboratory tests were ordered stat prior to the intervention, higher than the benchmark, providing the rationale for our intervention.
Second, the intervention was conducted in a time‐series fashion and no randomization was employed. The comparison of providers who received feedback with those who did not is subject to selection bias, and the difference in the change in stat ordering pattern between these 2 groups may be partially due to variability of work location, rotation type, or acuity of patients. However, we performed a sensitivity analysis excluding the months when the providers were ranked in the top 10, assuming that they may have ordered an unusually high proportion of stat tests due to high acuity of patients (eg, rotation in the intensive care units) during those months. Robust results in this analysis support our contention that individual feedback was effective. In addition, we cannot completely rule out the possibility that the changes in stat ordering practice may be solely due to natural maturation effects within an academic year among trainee providers, especially PGY‐1 residents. However, relatively acute changes in the stat ordering trends only among targeted provider groups around January 2010, corresponding to the timing of interventions, suggest otherwise.
Third, we were not able to test if the intervention or decrease in stat orders adversely affected patient care. For example, if, after receiving feedback, providers did not order some tests stat that should have been ordered that way, this could have negatively affected patient care. Additionally, we did not evaluate whether reduction in stat laboratory orders improved timeliness of the reporting of stat laboratory results.
Lastly, the sustained effect and feasibility of this intervention were not tested. Past studies suggest that educational interventions targeting laboratory ordering behavior most likely need to be continued to maintain their effectiveness.[22, 23] Although we acknowledge that sustaining this type of intervention may be difficult, we feel we have demonstrated that there is still value in giving personalized feedback.
This study has implications for future interventions and research. Use of automated, EMR‐based feedback on laboratory ordering performance may be effective in reducing excessive stat ordering and may obviate the need for time‐consuming efforts by supervisors. Development of quality indicators that more accurately assess stat ordering patterns, potentially adjusted for working sites and patient acuity, may be necessary. Studies that measure the impact of decreasing stat laboratory orders on turnaround times and cost may be of value.
CONCLUSION
At our urban, tertiary‐care teaching institution, stat ordering frequency was highly variable among providers. Targeted individual feedback to providers who ordered a large number of stat laboratory tests decreased their stat laboratory order utilization.
- Turnaround time, part 2: stats too high, yet labs cope. MLO Med Lab Obs. 1993;25(9):33–38.
- Laboratory turnaround time. Am J Clin Pathol. 1996;105(6):676–688.
- No more STAT testing. MLO Med Lab Obs. 2005;37(8):22, 24, 26.
- Phlebotomy, stat testing and laboratory organization: an intriguing relationship. Clin Chem Lab Med. 2012;50(12):2065–2068.
- Laboratory request appropriateness in emergency: impact on hospital organization. Clin Chem Lab Med. 2006;44(6):760–764.
- College of American Pathologists. Definitions used in past Q‐PROBES studies (1991–2011). Available at: http://www.cap.org/apps/docs/q_probes/q‐probes_definitions.pdf. Updated September 29, 2011. Accessed July 31, 2013.
- Practice Parameter. STAT testing? A guideline for meeting clinician turnaround time requirements. Am J Clin Pathol. 1996;105(6):671–675.
- Intralaboratory performance and laboratorians' expectations for stat turnaround times: a College of American Pathologists Q‐Probes study of four cerebrospinal fluid determinations. Arch Pathol Lab Med. 1991;115(10):977–983.
- How fast is fast enough for clinical laboratory turnaround time? Measurement of the interval between result entry and inquiries for reports. Am J Clin Pathol. 1997;108(4):400–405.
- Strategies of organization and service for the critical‐care laboratory. Clin Chem. 1990;36(8):1557–1561.
- Evaluation of stat and routine turnaround times as a component of laboratory quality. Am J Clin Pathol. 1989;91(3):331–335.
- Laboratory results: timeliness as a quality attribute and strategy. Am J Clin Pathol. 2001;116(3):311–315.
- Utilization of stat test priority in the clinical laboratory: a College of American Pathologists Q‐Probes study of 52 institutions. Arch Pathol Lab Med. 2013;137(2):220–227.
- Controlling the use of stat testing. Pathologist. 1984;38(8):474–477.
- Optimizing the availability of ‘stat' laboratory tests using Shewhart ‘C' control charts. Ann Clin Biochem. 2002;39(part 2):140–144.
- Evaluating stat testing options in an academic health center: therapeutic turnaround time and staff satisfaction. Clin Chem. 1998;44(8):1597–1603.
- Impact of a physician's order entry (POE) system on physicians' ordering patterns and patient length of stay. Int J Med Inform. 2002;65(3):213–223.
- Instrumentation for STAT analyses. Clin Lab Med. 1988;8(4):689–697.
- Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012;6:CD000259.
- The feasibility of automating audit and feedback for ART guideline adherence in Malawi. J Am Med Inform Assoc. 2011;18(6):868–874.
- Effect of an outpatient antimicrobial stewardship intervention on broad‐spectrum antibiotic prescribing by primary care pediatricians: a randomized trial. JAMA. 2013;309(22):2345–2352.
- An educational program to modify laboratory use by house staff. J Med Educ. 1977;52(7):578–581.
- Ordering of laboratory tests in a teaching hospital: can it be improved? JAMA. 1983;249(22):3076–3080.
© 2013 Society of Hospital Medicine
Comment on “The impact of penicillin skin testing on clinical practice and antimicrobial stewardship”
We read with interest the report by Rimawi et al.[1] They showed convincing evidence that, with a negative penicillin skin test, a course of β‐lactam therapy is safe 2 hours after a negative challenge. However, we advise caution in generalizing these data to the outpatient setting, where resensitization is a possibility. One study showed that 4.9% of patients who had negative skin tests and drug challenges reacted on rechallenge 3 weeks later.[2]
In our center, β‐lactam allergy assessment is carried out according to European Academy of Allergy and Clinical Immunology guidelines.[3] We encountered a patient who had life‐threatening anaphylaxis with co‐amoxiclav 1 month after negative allergy investigations.
A 43‐year‐old woman was referred with a history of non‐drug related urticarial episodes and urticaria and angioedema of face, neck, and arms 30 minutes after a first dose of oral co‐amoxiclav 2 years previously. Specific immunoglobulin E tests to penicillin and amoxicillin, skin tests, and oral co‐amoxiclav challenge were negative. A month later, she developed anaphylaxis (intraoral angioedema, wheeze, hypotension [70/30 mm Hg], oxygen desaturation to 60% on room air, becoming unresponsive) within minutes of an intravenous dose of co‐amoxiclav for acute cholecystitis.
Our case illustrates that despite a detailed negative allergy assessment, severe anaphylaxis can occur requiring prompt identification and appropriate treatment.
- The impact of penicillin skin testing on clinical practice and antimicrobial stewardship. J Hosp Med. 2013;8(6):342–345.
- J Investig Allergol Clin Immunol. 2012;22(1):41–47.
- Diagnosis of immediate allergic reactions to beta‐lactam antibiotics. Allergy. 2003;58:961–972.
JC Compliance in NJ Stroke Centers
Stroke is the fourth leading cause of death in the United States.[1] Although stroke‐related death has declined nationally by 19.4%, stroke‐related morbidity remains a significant burden.[2, 3] Hospital certification programs have been developed to improve the quality of stroke care at state and national levels. The Brain Attack Coalition (BAC) proposed 2 levels of stroke hospitals: primary stroke centers (PSCs) and comprehensive stroke centers (CSCs).[3, 4] Although most stroke patients can be cared for at PSCs, CSCs are able to care for more complex stroke patients.[4, 5] Using BAC recommendations, the Joint Commission (JC) and the American Heart Association/American Stroke Association created a certification program for PSCs. Eight evidence‐based performance measures are currently required for JC PSC certification.[6]
At the state level, New Jersey and Florida began designating PSCs and CSCs.[7, 8] New Jersey's PSC and CSC designation criteria incorporate the elements of JC PSC certification, despite predating it by several years.[6, 9] New Jersey CSC certification consists of more comprehensive requirements (Table 1). All New Jersey‐designated stroke centers submit data to the New Jersey Acute Stroke Registry (NJASR),[7] which closely matches the Centers for Disease Control and Prevention's Paul Coverdell National Acute Stroke Registry and includes the JC‐required core measures.
Primary Stroke Center (N=53) | Comprehensive Stroke Center (N=12)
---|---
Must have acute stroke teams in place at all times that can respond to the bedside within 15 minutes of patient arrival or identification | Must meet all criteria for primary stroke centers
Must maintain neurology and emergency department personnel trained in the treatment of acute stroke | Must maintain a neurosurgical team capable of assessing and treating complex stroke
Must maintain telemetry or critical care beds staffed by physicians and nurses trained in caring for acute stroke patients | Must maintain on staff a neuroradiologist (boarded) and a neurointerventionalist
Must provide for neurosurgical services within 2 hours either at the hospital or under agreement with a comprehensive stroke center | Must provide comprehensive rehabilitation services either on site or by transfer agreement
Must provide acute care rehabilitation services | Must provide MRI, CTA, and digital subtraction angiography
Must enter into a written transfer agreement with a comprehensive stroke center | Must develop and maintain sophisticated outcomes assessment and performance improvement capability
 | Must provide graduate medical education in stroke and carry out research in stroke
There is a paucity of data comparing state‐designated CSCs and PSCs, largely because few states have designation programs. Although a recent observational study from Finland showed better outcomes in patients treated at CSCs, measures in that study were limited to mortality and institutionalization at 1 year.[10] In this study, we examined adherence of all New Jersey state‐designated stroke centers to the JC PSC measures and compared CSCs with PSCs in this regard. We posited that better compliance with these evidence‐based measures might translate into better quality of stroke care in the state and could lend support to future, larger studies prompted by the recent certification of CSCs by the JC.
METHODS
Components of the NJASR, key components of PSCs and CSCs in the BAC report, and the JC's 8 core stroke measures for PSC certification were assessed.[3, 4, 6]
First responders in New Jersey are required to bring suspected stroke patients to the nearest stroke‐designated hospital, regardless of whether it is a PSC or CSC, unless the patient is too medically unstable and needs to be taken to the nearest hospital. From there, decisions can be made to transfer a patient to a higher level hospital (CSC) depending on the complexity of the patient's condition.
New Jersey state‐designated PSCs and CSCs are required to abstract patient‐level data, evaluate outcomes, and initiate quality improvement activities on all patients evaluated for ischemic stroke, hemorrhagic stroke, transient ischemic attack (TIA), and those who undergo acute interventional therapy. Data are submitted quarterly to the New Jersey Department of Health and Senior Services (NJDHSS).[7] Hospital data are imported into the Acute Stroke Registry Database. The NJASR statewide dataset used for this analysis included all stroke admissions for the calendar years 2010 and 2011 and contains patient demographic information, health history, clinical investigations performed, treatments provided, and outcome measures that allow for risk‐adjusted assessment of outcomes.
The JC core stroke measures (STK‐1 through STK‐10, except STK‐7 [dysphagia screening] and STK‐9 [smoking cessation/advice counseling]: venous thromboembolism [VTE] prophylaxis, discharged on antithrombotic therapy, anticoagulation therapy for atrial fibrillation/flutter, thrombolytic therapy, antithrombotic therapy by the end of hospital day 2, discharged on statin medication, stroke education, and assessment for rehabilitation) apply only to acute ischemic and/or hemorrhagic stroke patients.
In our analysis, transferred patients and patients with a diagnosis of TIA, stroke not otherwise specified, or a nonstroke‐related diagnosis were excluded. Hospital identity was kept anonymous through assignment of random numeric codes by the NJDHSS. Hospitals were categorized as CSC or PSC based on NJDHSS designation. Stroke severity on admission was assessed by categorizing National Institutes of Health Stroke Scale (NIHSS) scores as follows: no stroke when NIHSS=0, mild stroke when NIHSS=1–4, and moderate to severe stroke when NIHSS≥5. Median door‐to‐thrombolytic drug times were assessed both for patients who arrived at the hospital within 2 hours of stroke symptom onset and received thrombolytic therapy within 3 hours, and for patients who arrived within 3.5 hours (210 minutes) and received treatment within 4.5 hours (270 minutes).
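The severity categorization above can be sketched as a small helper. This is only an illustrative reconstruction: the range boundaries (1–4 for mild, ≥5 for moderate to severe) are assumed from the flattened text, and the function name is ours, not from the registry.

```python
def categorize_nihss(score: int) -> str:
    """Categorize an admission NIHSS score as in the study's analysis.

    Assumed ranges (range separators were lost in extraction):
    0 = no stroke, 1-4 = mild, >=5 = moderate to severe.
    """
    if score == 0:
        return "no stroke"
    if score <= 4:
        return "mild"
    return "moderate to severe"
```

A boundary score of 4 falls in the mild group and 5 in the moderate‐to‐severe group under this assumed cutoff.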
Inclusion and Exclusion Criteria for JC Performance Measures
Excluded from all measures were patients who were discharged to hospice, left against medical advice (AMA), expired, were transferred to another short‐term care facility, had an unknown discharge location, were comfort measures only (CMO), or were enrolled in clinical trials. Other exclusions are listed below each measure. VTE prophylaxis included nonambulatory ischemic and hemorrhagic stroke patients who received VTE prophylaxis by the end of hospital day 2 and excluded patients discharged prior to hospital day 2 and those with a length of stay >120 days. Antithrombotics at discharge included ischemic stroke patients discharged on antithrombotics and excluded those with a documented reason for not receiving antithrombotics. Anticoagulation for atrial fibrillation included ischemic stroke patients with documented atrial fibrillation/flutter who received anticoagulation therapy and excluded those with a documented reason for not receiving anticoagulation. Thrombolytic therapy included acute ischemic stroke patients who arrived at the hospital within 2 hours from time last known well and for whom intravenous (IV) tissue plasminogen activator (tPA) was initiated at that hospital within 1 hour of hospital arrival. Excluded were patients with a valid reason for not receiving tPA, a length of stay >120 days, or a time last known well >2 hours. We also looked at thrombolytic therapy for patients who arrived by 3.5 hours from time last known well and received IV tPA within 1 hour.
Antithrombotics by the end of hospital day 2 included ischemic stroke patients who received antithrombotic medication by the end of hospital day 2 and excluded patients who were discharged before hospital day 2, had a documented reason for not receiving antithrombotic medication, had a length of stay >120 days, were CMO by day 2, or received IV or intra‐arterial tPA. Statin therapy included ischemic stroke patients with low‐density lipoprotein exceeding 100 mg/dL or not measured, or on cholesterol‐reducing medication prior to admission. Excluded were those with a length of stay >120 days or a documented reason for not receiving the medication. Stroke education on discharge included all stroke patients discharged home who received education during the hospitalization addressing the following: the patient's stroke‐specific risk factors, warning signs and symptoms, emergency medical services activation, follow‐up, and discharge medications. Those with a length of stay >120 days were excluded. Assessment for rehabilitation included ischemic and hemorrhagic stroke patients assessed for rehabilitation services.
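The global exclusions shared by all measures can be expressed as a simple filter. This is a sketch under stated assumptions: the field names and disposition strings are hypothetical, illustrative stand‐ins, not the NJASR's actual schema.

```python
# Discharge dispositions excluded from all measures (per the criteria above).
# String values here are illustrative, not the registry's actual codes.
EXCLUDED_DISPOSITIONS = {
    "hospice", "left AMA", "expired",
    "transferred to short-term care", "unknown",
}

def eligible_for_measures(patient: dict) -> bool:
    """Return True if a patient record passes the global exclusions:
    not an excluded disposition, not comfort-measures-only, and not
    enrolled in a clinical trial. Field names are hypothetical."""
    if patient.get("discharge_disposition") in EXCLUDED_DISPOSITIONS:
        return False
    if patient.get("comfort_measures_only") or patient.get("clinical_trial"):
        return False
    return True
```

Measure‐specific exclusions (e.g., length of stay >120 days, documented contraindications) would then be layered on top of this global filter per measure.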
Statistical Analysis
Patient characteristics were summarized using frequencies and percentages for categorical variables and median and interquartile range for continuous variables. χ2 tests and median two‐sample tests were used to compare patient characteristics between the 2 hospital levels. The likelihood that a patient received a particular JC core measure service in relation to hospital level (PSC vs CSC) was estimated using multiple logistic regression; both crude/unadjusted and adjusted odds ratios and their 95% confidence intervals were estimated. Gender, age, race, stroke type, medical history (hypertension, atrial fibrillation, diabetes mellitus, and history of smoking), and severity of stroke as measured by NIHSS were included in the model. Institutional review board approval to evaluate the data for this analysis was obtained from John F. Kennedy Medical Center in Edison, New Jersey. All analyses were performed using the SAS software package version 9.3 (SAS Institute, Cary, NC).
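For the unadjusted odds ratios, a minimal sketch of the standard 2×2 calculation with a Woolf (log‐scale) 95% confidence interval is shown below; the adjusted estimates in the paper come from the SAS multiple logistic regression and are not reproduced here. The cell counts in the usage example are back‐calculated from the reported VTE‐prophylaxis percentages, so they are approximate assumptions rather than registry values.

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio for a 2x2 table (a/b = events/non-events in group 1,
    c/d = events/non-events in group 2) with a Woolf log-scale 95% CI."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# VTE prophylaxis, PSCs vs CSCs: non-event counts back-calculated from the
# reported 92.1% and 94.2%, so these cells are approximate.
or_, lo, hi = odds_ratio_ci(4745, 407, 5455, 336)
```

With these approximate counts, the result lands close to the unadjusted 0.72 (0.61–0.83) reported for VTE prophylaxis, which is a useful sanity check on the table.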
RESULTS
There were 36,892 acute stroke cases treated at the 53 New Jersey PSCs and 12 CSCs in the calendar years 2010 and 2011 (Table 2). Sixty percent were treated at PSCs and 40% at CSCs. There were significant differences in the distribution of patient characteristics (race, age, and gender) between the 2 hospital levels. At both PSCs and CSCs, the majority of patients were white, distantly followed by blacks. Patients at PSCs were statistically significantly older than those at CSCs. The most prevalent comorbid conditions at both PSCs and CSCs were hypertension, diabetes mellitus, and dyslipidemia. Based on our categorization, we found that 45% of patients admitted to CSCs had moderate‐to‐severe stroke (NIHSS≥5). The median door‐to‐thrombolytic drug times were significantly shorter at CSCs than PSCs for both the 3‐hour (65 vs 74 minutes, P<0.0001) and 4.5‐hour (65 vs 76 minutes, P<0.0001) IV tPA time windows.
Variables | PSCs, N=22,305 | CSCs, N=14,587 | P Valuea
---|---|---|---
Race, n (%) | | | <0.0001
White | 16,586 (74.4) | 10,419 (71.4) |
Black | 3,930 (17.6) | 2,875 (19.7) |
Asian | 511 (2.3) | 519 (3.6) |
All othersb | 1,278 (5.7) | 774 (5.3) |
Age, y, median (IQR) | 75.0 (22.0) | 73.0 (23.0) | <0.0001c
Gender, female, n (%) | 12,552 (56.3) | 7,757 (53.2) | <0.0001
Comorbidities | | |
Hypertension, n (%) | 17,405 (78.1) | 10,535 (72.2) | <0.0001
Atrial fibrillation/flutter, n (%) | 3,762 (16.9) | 2,237 (15.3) | 0.0001
Diabetes mellitus, n (%) | 7,219 (32.4) | 4,220 (28.9) | <0.0001
History of smoking, n (%) | 2,924 (13.1) | 1,706 (11.7) | <0.0001
Heart failure, n (%) | 1,733 (7.8) | 749 (5.1) | <0.0001
Myocardial infarction, n (%) | 6,138 (27.5) | 2,945 (20.3) | <0.0001
Dyslipidemia, n (%) | 10,106 (45.6) | 5,161 (35.4) | <0.0001
Prior stroke/TIA/VBI, n (%) | 7,085 (31.8) | 3,874 (26.6) | <0.0001
NIHSS on admission, n (%) | | | <0.0001
No stroke (NIHSS=0) | 2,747 (27.4) | 913 (18.3) |
Mild stroke (NIHSS=1–4) | 4,010 (40.0) | 1,811 (33.3) |
Moderate–severe (NIHSS≥5) | 3,272 (32.6) | 2,271 (45.4) |
Door‐to‐tPA time, min, median (IQR) | | |
Arrived within 120 minutes | 74.0 (35.0) | 65.0 (33.0) | <0.0001c
Arrived within 210 minutes | 76.0 (37.0) | 65.0 (34.0) | <0.0001c
Stroke diagnosis, distribution | | | <0.0001
Ischemic | 11,145 (50.0) | 8,235 (56.5) |
Hemorrhagic | 1,587 (7.1) | 3,270 (13.3) |
Subarachnoid | 219 (13.8) | 397 (20.4) |
Intracerebral | 1,368 (86.2) | 1,545 (79.6) |
Transient ischemic attack | 8,116 (36.4) | 4,162 (28.5) |
Stroke not otherwise specified | 1,145 (5.1) | 130 (0.9) |
No stroke‐related diagnosis | 293 (1.3) | 118 (0.8) |
The incidences of stroke diagnosis types are also detailed in Table 2. Seventy percent of patients at CSCs had either an ischemic or hemorrhagic stroke diagnosis versus 57.1% of patients admitted at PSCs. Hemorrhagic stroke patients were twice as likely to be admitted at CSCs compared to PSCs.
After excluding 13,964 patients with a diagnosis of TIA, stroke not otherwise specified, and those with nonstroke‐related diagnosis, the likelihood of stroke patients' receiving the JC's performance measure services at either of these hospital levels was assessed (Table 3). In general, the adjusted odds ratio estimates of patients receiving a JC core performance measure at PSCs were lower than CSCs, indicating better compliance with the measures at CSCs. For example, 19.5% of eligible patients received thrombolytic therapy at CSCs compared to 9.6% at PSCs. CSCs also were more likely to provide VTE prophylaxis, anticoagulation for atrial fibrillation, and assessment for rehabilitation. Stroke education and antithrombotic therapy by the end of hospital day 2 were more likely to be provided at PSCs, but the results were not statistically significant.
Variables | PSCs, N (%)a | CSCs, N (%)a | Unadjusted OR (95% CI) | Adjusted OR (95% CI)b
---|---|---|---|---
VTE prophylaxis | 4,745 (92.1) | 5,455 (94.2) | 0.72 (0.61–0.83) | 0.47 (0.33–0.67)
Discharged on antithrombotic therapy | 8,835 (98.1) | 6,873 (99.2) | 0.42 (0.31–0.56) | 0.46 (0.27–0.78)
Anticoagulation therapy for atrial fibrillation/flutter | 1,464 (95.1) | 1,144 (97.6) | 0.48 (0.31–0.74) | 0.38 (0.17–0.86)
Thrombolytic therapy | | | |
Time window=3.0 hours | 484 (9.6) | 666 (19.5) | 0.44 (0.39–0.50) | 0.28 (0.24–0.34)
Time window=4.5 hours | 564 (11.0) | 792 (22.4) | 0.43 (0.38–0.48) | 0.28 (0.23–0.33)
Antithrombotic therapy by end of hospital day 2 | 7,575 (97.4) | 5,396 (98.2) | 0.69 (0.54–0.88) | 1.01 (0.60–1.68)
Discharged on statin medication | 6,035 (97.9) | 4,261 (98.7) | 0.59 (0.43–0.80) | 0.69 (0.42–1.13)
Stroke education, for home discharge (overall) | 3,823 (97.7) | 3,072 (95.7) | 1.93 (1.47–2.53) | 1.78 (0.92–3.45)
Risk factors for stroke | 3,480 (88.9) | 3,026 (94.4) | 0.49 (0.41–0.59) | 0.43 (0.28–0.66)
Warning signs and symptoms | 3,514 (89.8) | 3,019 (94.1) | 0.56 (0.46–0.67) | 0.52 (0.34–0.79)
Activation of EMS | 3,539 (90.5) | 3,023 (94.2) | 0.59 (0.49–0.70) | 0.44 (0.28–0.69)
Follow‐up after discharge | 3,807 (97.3) | 3,064 (95.5) | 1.73 (1.34–2.23) | 1.18 (0.65–2.20)
Medications prescribed at discharge | 3,788 (96.8) | 3,067 (95.5) | 1.42 (1.11–1.82) | 0.44 (0.28–0.70)
Assessed for rehabilitation | 9,725 (95.2) | 8,199 (97.5) | 0.51 (0.43–0.61) | 0.37 (0.26–0.53)
DISCUSSION
In New Jersey, CSCs were more likely to adhere better to JC core performance measures than PSCs. Median door‐to‐thrombolytic drug times were also significantly lower at CSCs. Such differences may be due to several factors including the fact that CSCs have generally been state designated for a longer period of time than PSCs. CSCs are likely to have higher volumes of stroke admissions, are more likely to be JC certified, provide more staff education, and have more staff and resources. The New Jersey stroke designation program began in 2006, and 11 of the 12 CSCs were designated by the end of 2007. However, the PSC designation process has been more gradual, with several of them designated in 2010 and 2011 as the data for this study were being collected.
The New York State Stroke Center Designation Project prospectively showed that stroke center designation improved the quality of acute stroke patient care and administration of thrombolytic therapy; however, differing levels of hospital designation were not present in New York at that time.[11] Participation in a data measurement program such as Get With The Guidelines has also been examined. It is evident that the amount of time in a program is predictive of process measure compliance.[12] JC certification as a PSC is also associated with increased thrombolytic rates for acute stroke over time.[13] New Jersey does not require that stroke‐designated hospitals have JC stroke certification. Although 11 New Jersey CSCs have been certified as JC PSCs since 2009, only 21 of the 53 state‐designated PSCs are JC certified. It may be that the highest performing sites pursue state CSC designation and JC PSC certification/recertification repeatedly. CSCs in New Jersey may also have a greater focus on quality measures by virtue of having been in quality programs such as Get With The Guidelines or by having been state designated and JC certified for a longer period of time.
The New Jersey requirements for CSCs, like those of the JC, include a large number of highly trained stroke experts, which ensures more continuous coverage. Although a disparity in mortality on weekends versus weekdays has been reported,[14] such a difference in mortality has not been seen at CSCs in New Jersey.[15] This lack of a weekend effect is felt to be related to the 24/7 availability of stroke specialists, advanced neuroimaging, ongoing training, and surveillance of specialized nursing care available at CSCs.[4, 16]
In our study, New Jersey CSCs overall had significantly higher rates of thrombolysis than PSCs (19.5% vs 9.6%) when looking at the 3‐hour window. This is higher overall than the national rate of 3.4% to 5.2%.[17] The number of patients treated in the expanded thrombolytic window was also significantly higher at CSCs, increasing thrombolysis rates to 22.4% at CSCs versus 11% at PSCs. Door‐to‐drug times were also shorter at CSCs than PSCs in the 3‐ and 4.5‐hour windows (65 vs 74 minutes and 65 vs 76 minutes, respectively). After we excluded transferred patients and those with a diagnosis of TIA, stroke not otherwise specified, or a nonstroke‐related diagnosis, the total number of ischemic and hemorrhagic stroke patients seen at each of the 12 CSCs (n=11,505) was on average 4 times higher than the number seen at each of the 53 PSCs (n=12,732). High annual hospital stroke volume has been shown to be associated with higher rates of thrombolysis and lower stroke mortality.[14] A study of US academic centers found that although the same percentage of patients presented within 2 hours of stroke symptom onset in 2001 and 2004, the use of IV tPA more than doubled over this period.[18] Improved system organization at the prehospital and hospital levels, as well as greater comfort and experience with thrombolytic therapy, likely contribute to all of these findings.[11]
CSCs did not outperform PSCs with regard to stroke education and antithrombotics by end of hospital day 2, but these results were not statistically significant. The former measure includes only stroke patients who are discharged home and is considered complete when all 5 of the following are addressed: risk factors for stroke, warning signs and symptoms for stroke, activation of emergency medical systems, follow‐up after discharge, and medications prescribed at discharge. CSCs were more likely to provide education for the first 3 and last component but less likely for the fourth element. These findings should be considered in the context of CSCs having higher volumes of more ill and complex patients who are more likely to be discharged to a rehabilitation hospital, nursing home, or other facility than to home. In our registry, CSCs discharged 46% of patients' to home versus 54% at PSCs. We speculate that CSCs may be less likely to habitually address follow‐up care and discharge medications as compared to PSCs. As far as provision of antithrombotics by hospital day 2, it is possible again that because CSCs have a higher number of complicated stroke patients, many may have had contraindications to use of antithrombotics in that time period.
Limitations of this study include the fact that this was a retrospective analysis of a database. Although the 2010 and 2011 NJASR dataset was sizeable, it was not possible to capture all potentially confounding variables that may have affected our point estimates. We were not able to perform a hierarchical analysis to account for clustering at the hospital level because of limited data available in the registry. Errors in recording data, coding, and documentation cannot be excluded. The fact that not all PSCs were necessarily JC certified may have contributed to the observed differences. Also, because pursuing PSC or CSC status is voluntary, it is not clear if the hospitals that chose CSC status were different in other unmeasured factors than those that chose PSC status, and the difference may have existed even in the absence of the designation program. Over the years, there have been changes in the criteria required by the state and the JC for PSC designation, although the larger differences between hospital levels remained intact. This may have limited our findings as well. The goal for hospitals is to continue strict adherence to policies and measures and thus improve quality of care for stroke patients. Future prospective studies should be conducted to ascertain validity and generalizability of our findings. Association of stroke measure adherence and functional outcomes would also be of interest. We were not able to measure this consistently in our study because not all patients at PSCs had admission and/or discharge NIHSS or modified Rankin Score. Although some studies have not shown an association between improved outcomes and higher performance on quality measures, we would like to look at this more closely in the stroke population.[19] As our database gets larger, we would like to reexamine our findings after correcting for more specific characteristics of each hospital. 
In the future, if additional states designate centers by level of stroke care, it will be important to compare how such designations compare to nonprofit organization certifications in terms of impacting performance on a larger scale.
CONCLUSION
This study shows better compliance of New Jersey state‐designated CSCs with the JC PSC core stroke measures and better mean door‐to‐thrombolytic drug times. Because these measures are evidence based, these results may translate into better stroke care and outcomes for patients treated at state‐designated CSCs.
ACKNOWLEDGEMENTS
Disclosures: Jawad Kirmani, MD: Consultant to Joint Commission on Performance Measure Development (modest). Martin Gizzi, MD, PhD: Consultant to Joint Commission on Performance Measure Development (modest), New Jersey Department of Health and Senior Services as chair of the Stroke Advisory Panel (significant). No other potential conflicts to report.
1. Centers for Disease Control and Prevention. Interactive atlas of heart disease and stroke. Available at: http://apps.Nccd.Cdc.Gov/dhdspatlas/reports.Aspx. Accessed May 5, 2012.
2. American Heart Association Statistics Committee and Stroke Statistics Subcommittee. Heart disease and stroke statistics—2012 update: a report from the American Heart Association. Circulation. 2012;125:e2–e220.
3. Recommendations for the establishment of primary stroke centers. JAMA. 2000;283:3102–3109.
4. Brain Attack Coalition. Recommendations for comprehensive stroke centers: a consensus statement from the Brain Attack Coalition. Stroke. 2005;36:1597–1616.
5. Recommendations for the establishment of stroke systems of care. Stroke. 2005;36:690–703.
6. The Joint Commission. Advanced Certification Comprehensive Stroke Centers. Available at: http://www.jointcommission.org/certification/advanced_certification_comprehensive_stroke_centers.aspx. Accessed July 25, 2012.
7. New Jersey Department of Health and Senior Services. The New Jersey Acute Stroke Registry. Available at: http://www.state.nj.us/health/healthcarequality/stroke/documents/njacute_stroke_data_dictionary.pdf. Accessed July 8, 2012.
8. Florida Agency for Health Care Administration. Primary stroke center and comprehensive stroke center designation. Available at: http://ahca.myflorida.com/mchq/Health_Facility_Regulation/Hospital_Outpatient/forms/59A3_2085_FAC_Rule_text.pdf. Accessed July 25, 2012.
9. New Jersey Department of Health and Senior Services. Stroke Center Act (2004). Available at: http://www.njleg.state.nj.us/2004/bills/pl04/136_.pdf. Accessed September 25, 2013.
10. Effectiveness of primary and comprehensive stroke centers. Stroke. 2010;41:1102–1107.
11. NYSDOH Stroke Center Designation Project Workgroup. Quality improvement in acute stroke: the New York State Stroke Center Designation Project. Neurology. 2006;67(1):88–93.
12. Characteristics, performance measures, and in-hospital outcomes of the first one million stroke and transient ischemic attack admissions in Get With The Guidelines–Stroke. Circ Cardiovasc Qual Outcomes. 2010;3(3):291–302.
13. Intravenous thrombolysis for stroke increases over time at primary stroke centers. Stroke. 2012;43:875–877.
14. Weekends: a dangerous time for having a stroke? Stroke. 2007;38:1211–1215.
15. Myocardial Infarction Data Acquisition System (MIDAS 15) Study Group. Comprehensive stroke centers overcome the weekend versus weekday gap in stroke treatment and mortality. Stroke. 2011;42:2403–2409.
16. Can comprehensive stroke centers erase the "weekend effect"? Cerebrovasc Dis. 2009;27:107–113.
17. Recombinant tissue-type plasminogen activator use for ischemic stroke in the United States. Stroke. 2011;42:1952–1955.
18. Hospital arrival time and intravenous t-PA use in US academic medical centers, 2001–2004. Stroke. 2009;40:3845–3850.
19. Association of surgical care improvement project infection-related process measure compliance with risk-adjusted outcomes: implications for quality measurement. J Am Coll Surg. 2010;211(6):705–714.
Stroke is the fourth leading cause of death in the United States.[1] Although stroke-related mortality has declined nationally by 19.4%, stroke-related morbidity remains a significant burden.[2, 3] Hospital certification programs have been developed to improve the quality of stroke care at the state and national levels. The Brain Attack Coalition (BAC) proposed 2 levels of stroke hospitals: primary stroke centers (PSCs) and comprehensive stroke centers (CSCs).[3, 4] Although most stroke patients can be cared for at PSCs, CSCs are able to care for more complex stroke patients.[4, 5] Using BAC recommendations, the Joint Commission (JC) and the American Heart Association/American Stroke Association created a certification program for PSCs. Eight evidence-based performance measures are currently required for JC PSC certification.[6]
At the state level, New Jersey and Florida began designating PSCs and CSCs.[7, 8] The New Jersey PSC and CSC designation criteria incorporate the elements of JC PSC certification, despite preceding it by several years.[6, 9] New Jersey CSC designation carries more comprehensive requirements (Table 1). All New Jersey-designated stroke centers submit data to the New Jersey Acute Stroke Registry (NJASR),[7] which closely matches the Centers for Disease Control and Prevention's Paul Coverdell National Acute Stroke Registry and includes the JC-required core measures.
Primary Stroke Center (N=53) | Comprehensive Stroke Center (N=12) |
---|---|
| |
Must have acute stroke teams in place at all times that can respond to the bedside within 15 minutes of patient arrival or identification | Must meet all criteria for primary stroke centers |
Must maintain neurology and emergency department personnel trained in the treatment of acute stroke | Must maintain a neurosurgical team capable of assessing and treating complex stroke |
Must maintain telemetry or critical care beds staffed by physicians and nurses trained in caring for acute stroke patients | Must maintain on staff a neuroradiologist (boarded) and a neurointerventionalist |
Must provide for neurosurgical services within 2 hours either at the hospital or under agreement with a comprehensive stroke center | Must provide comprehensive rehabilitation services either on site or by transfer agreement |
Must provide acute care rehabilitation services | Must provide MRI, CTA, and digital subtraction angiography |
Must enter into a written transfer agreement with a comprehensive stroke center | Must develop and maintain sophisticated outcomes assessment and performance improvement capability |
 | Must provide graduate medical education in stroke and carry out research in stroke |
There is a paucity of data comparing state-designated CSCs and PSCs, largely because few states have designation programs. Although a recent observational study from Finland showed better outcomes in patients treated at CSCs, the measures in that study were limited to mortality and institutionalization at 1 year.[10] In this study, we examined the adherence of all New Jersey state-designated stroke centers to the JC PSC measures and compared CSCs to PSCs in this regard. We posited that better compliance with these evidence-based measures might translate into better quality of stroke care in the state and could inform future, larger studies prompted by the JC's recent certification of CSCs.
METHODS
Components of the NJASR, key components of PSCs and CSCs in the BAC report, and the JC's 8 core stroke measures for PSC certification were assessed.[3, 4, 6]
First responders in New Jersey are required to bring suspected stroke patients to the nearest stroke‐designated hospital, regardless of whether it is a PSC or CSC, unless the patient is too medically unstable and needs to be taken to the nearest hospital. From there, decisions can be made to transfer a patient to a higher level hospital (CSC) depending on the complexity of the patient's condition.
New Jersey state‐designated PSCs and CSCs are required to abstract patient‐level data, evaluate outcomes, and initiate quality improvement activities on all patients evaluated for ischemic stroke, hemorrhagic stroke, transient ischemic attack (TIA), and those who undergo acute interventional therapy. Data are submitted quarterly to the New Jersey Department of Health and Senior Services (NJDHSS).[7] Hospital data are imported into the Acute Stroke Registry Database. The NJASR statewide dataset used for this analysis included all stroke admissions for the calendar years 2010 and 2011 and contains patient demographic information, health history, clinical investigations performed, treatments provided, and outcome measures that allow for risk‐adjusted assessment of outcomes.
The JC core stroke measures apply only to acute ischemic and/or hemorrhagic stroke patients and comprise STK-1 through STK-10, excluding STK-7 (dysphagia screening) and STK-9 (smoking cessation/advice counseling): venous thromboembolism (VTE) prophylaxis, discharged on antithrombotic therapy, anticoagulation therapy for atrial fibrillation/flutter, thrombolytic therapy, antithrombotic therapy by the end of hospital day 2, discharged on statin medication, stroke education, and assessment for rehabilitation.
In our analysis, transferred patients and patients with a diagnosis of TIA, stroke not otherwise specified, or a nonstroke-related diagnosis were excluded. Hospital identity was kept anonymous through assignment of random numeric codes by the NJDHSS. Hospitals were categorized as CSC or PSC based on NJDHSS designation. Stroke severity on admission was assessed by categorizing National Institutes of Health Stroke Scale (NIHSS) scores as follows: no stroke when NIHSS=0, mild stroke when NIHSS=1–5, and moderate to severe stroke when NIHSS>5. Median door-to-thrombolytic drug times were assessed both for patients who arrived at the hospital within 2 hours of stroke symptom onset and received thrombolytic therapy within 3 hours, and for patients who arrived within 3.5 hours (210 minutes) and received treatment within 4.5 hours (270 minutes).
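The severity categorization above amounts to a simple binning rule. The sketch below expresses it as a helper function; the function name is ours, and the mild-stroke range (1–5) is our reading of the category boundaries reported here, since the source table prints the range ambiguously.

```python
def categorize_nihss(score: int) -> str:
    """Bin an admission NIHSS score into the study's severity categories.

    Boundaries follow the text: 0 = no stroke, 1-5 = mild,
    >5 = moderate to severe. (The mild upper bound is reconstructed
    so the three categories are exhaustive.)
    """
    if score < 0 or score > 42:  # the NIHSS is defined on 0-42
        raise ValueError("NIHSS must be between 0 and 42")
    if score == 0:
        return "no stroke"
    if score <= 5:
        return "mild"
    return "moderate to severe"
```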
Inclusion and Exclusion Criteria for JC Performance Measures
Excluded from all measures were patients who were discharged to hospice, left against medical advice (AMA), expired, were transferred to another short-term care facility, had an unknown discharge location, were comfort measures only (CMO), or were enrolled in clinical trials. Other exclusions are listed below each measure. VTE prophylaxis included nonambulatory ischemic and hemorrhagic stroke patients who received VTE prophylaxis by the end of hospital day 2 and excluded patients discharged prior to hospital day 2 and those with a length of stay >120 days. Antithrombotics at discharge included ischemic stroke patients discharged on antithrombotics and excluded those with a documented reason for not receiving antithrombotics. Anticoagulation for atrial fibrillation included ischemic stroke patients with documented atrial fibrillation/flutter who received anticoagulation therapy and excluded those with a documented reason for not receiving anticoagulation. Thrombolytic therapy included acute ischemic stroke patients who arrived at the hospital within 2 hours of the time last known well and for whom intravenous (IV) tissue plasminogen activator (tPA) was initiated at that hospital within 1 hour of hospital arrival. Excluded were patients with a valid reason for not receiving tPA, a length of stay >120 days, or a time last known well >2 hours before arrival. We also examined thrombolytic therapy for patients who arrived within 3.5 hours of the time last known well and received IV tPA within 1 hour.
Antithrombotics by the end of hospital day 2 included ischemic stroke patients who received antithrombotic medication by the end of hospital day 2 and excluded patients who were discharged before hospital day 2, had a documented reason for not receiving antithrombotic medication, had a length of stay >120 days, were CMO by day 2, or received IV or intra-arterial tPA. Statin therapy included ischemic stroke patients with low-density lipoprotein exceeding 100 mg/dL (or not measured), or on cholesterol-reducing medication prior to admission, who were discharged on statin medication. Excluded were those with a length of stay >120 days or a documented reason for not receiving the medication. Stroke education on discharge included all stroke patients discharged home who received education during the hospitalization addressing the following: the patient's stroke-specific risk factors, warning signs and symptoms, emergency medical services activation, follow-up, and discharge medications. Those with a length of stay >120 days were excluded. Assessment for rehabilitation included ischemic and hemorrhagic stroke patients assessed for rehabilitation services.
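These eligibility rules are, in effect, a cascade of filters applied to each registry record. As a minimal sketch of one measure (antithrombotics by the end of hospital day 2), the field names below are hypothetical illustrations, not the NJASR's actual schema:

```python
# Hypothetical field names for illustration only; the NJASR data
# dictionary defines the real schema.
GLOBAL_EXCLUSIONS = {"hospice", "AMA", "expired", "transfer", "unknown"}

def eligible_antithrombotic_day2(rec: dict) -> bool:
    """Apply the global and measure-specific exclusions described above."""
    # Global exclusions shared by all measures
    if rec["discharge_disposition"] in GLOBAL_EXCLUSIONS:
        return False
    if rec.get("comfort_measures_only_by_day2") or rec.get("clinical_trial"):
        return False
    # Measure-specific exclusions
    if rec["length_of_stay_days"] < 2 or rec["length_of_stay_days"] > 120:
        return False
    if rec.get("documented_reason_no_antithrombotic"):
        return False
    if rec.get("received_iv_or_ia_tpa"):
        return False
    # Denominator: ischemic stroke patients only
    return rec["diagnosis"] == "ischemic"
```

The same pattern (global exclusions first, then measure-specific ones, then the denominator test) would repeat for each of the other core measures.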
Statistical Analysis
Patient characteristics were summarized using frequencies and percentages for categorical variables and medians with interquartile ranges for continuous variables. Chi-square tests and median 2-sample tests were used to compare patient characteristics between the 2 hospital levels. The likelihood that a patient received a particular JC core measure service in relation to hospital level (PSC vs CSC) was estimated using multiple logistic regression; both crude/unadjusted and adjusted odds ratios and their 95% confidence intervals were computed. Gender, age, race, stroke type, medical history (hypertension, atrial fibrillation, diabetes mellitus, and history of smoking), and severity of stroke as measured by the NIHSS were included in the model. Institutional review board approval to evaluate the data for this analysis was obtained from John F. Kennedy Medical Center in Edison, New Jersey. All analyses were performed using the SAS software package version 9.3 (SAS Institute, Cary, NC).
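For the unadjusted comparisons, an odds ratio and its Woolf (log-based) 95% confidence interval can be computed directly from a 2x2 table. The sketch below uses the thrombolytic-therapy counts reported in Table 3 (484 treated, 9.6%, at PSCs; 666 treated, 19.5%, at CSCs); the denominators (about 5,042 and 3,415) are back-calculated from the published percentages, so they are approximations rather than registry figures, and the adjusted odds ratios in the paper come from the logistic regression model, not from this formula.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio of group 1 vs group 2 with a Woolf 95% CI.

    a, b: events / non-events in group 1 (here, PSC)
    c, d: events / non-events in group 2 (here, CSC)
    """
    or_ = (a / b) / (c / d)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Thrombolysis within the 3-hour window; denominators estimated
# from the reported percentages, so treat them as approximate.
psc_treated, psc_not = 484, 5042 - 484
csc_treated, csc_not = 666, 3415 - 666
or_, lo, hi = odds_ratio_ci(psc_treated, psc_not, csc_treated, csc_not)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

With these inputs the result reproduces the unadjusted 0.44 (0.39–0.50) shown for this measure in Table 3, which supports the back-calculated denominators.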
RESULTS
There were 36,892 acute stroke cases treated at the 53 New Jersey PSCs and 12 CSCs in calendar years 2010 and 2011 (Table 2). Sixty percent were treated at PSCs and 40% at CSCs. There were significant differences in the distribution of patient characteristics (race, age, and gender) between the 2 hospital levels. At both PSCs and CSCs, the majority of patients were white, distantly followed by blacks. Patients at PSCs were significantly older than those at CSCs. The most prevalent comorbid conditions at both PSCs and CSCs were hypertension, diabetes mellitus, and dyslipidemia. Based on our categorization, we found that 45% of patients admitted to CSCs had moderate-to-severe stroke (NIHSS>5). The median door-to-thrombolytic drug times were significantly shorter at CSCs than at PSCs for both the 3-hour (65 vs 74 minutes, P<0.0001) and 4.5-hour (65 vs 76 minutes, P<0.0001) IV tPA time windows.
Variables | PSCs, N=22,305 | CSCs, N=14,587 | P Valuea |
---|---|---|---|
| |||
Race, n (%) | <0.0001 | ||
White | 16,586 (74.4) | 10,419 (71.4) | |
Black | 3,930 (17.6) | 2,875 (19.7) | |
Asian | 511 (2.3) | 519 (3.6) | |
All othersb | 1,278 (5.7) | 774 (5.3) | |
Age, y, median (IQR) | 75.0 (22.0) | 73.0 (23.0) | <0.0001c |
Gender, female, n (%) | 12,552 (56.3) | 7,757 (53.2) | <0.0001 |
Comorbidities | |||
Hypertension, n (%) | 17,405 (78.1) | 10,535 (72.2) | <0.0001 |
Atrial fibrillation/flutter, n (%) | 3,762 (16.9) | 2,237 (15.3) | 0.0001 |
Diabetes mellitus, n (%) | 7,219 (32.4) | 4,220 (28.9) | <0.0001 |
History of smoking, n (%) | 2,924 (13.1) | 1,706 (11.7) | <0.0001 |
Heart failure, n (%) | 1,733 (7.8) | 749 (5.1) | <0.0001 |
Myocardial infarction, n (%) | 6,138 (27.5) | 2,945 (20.3) | <0.0001 |
Dyslipidemia, n (%) | 10,106 (45.6) | 5,161 (35.4) | <0.0001 |
Prior stroke/TIA/VBI, n (%) | 7,085 (31.8) | 3,874 (26.6) | <0.0001 |
NIHSS on admission, n (%) | <0.0001 | ||
No stroke (NIHSS=0) | 2,747 (27.4) | 913 (18.3) | |
Mild stroke (NIHSS=1–5) | 4,010 (40.0) | 1,811 (33.3) | 
Moderate–severe (NIHSS>5) | 3,272 (32.6) | 2,271 (45.4) | 
Door‐to‐tPA time, min, median (IQR) | |||
Arrived within 120 minutes | 74.0 (35.0) | 65.0 (33.0) | <0.0001c |
Arrived within 210 minutes | 76.0 (37.0) | 65.0 (34.0) | <0.0001c |
Stroke diagnosis, distribution | <0.0001 | ||
Ischemic | 11,145 (50.0) | 8,235 (56.5) | |
Hemorrhagic | 1,587 (7.1) | 3,270 (13.3) | |
Subarachnoid | 219 (13.8) | 397 (20.4) | |
Intracerebral | 1,368 (86.2) | 1,545 (79.6) | |
Transient ischemic attack | 8,116 (36.4) | 4,162 (28.5) | |
Stroke not otherwise specified | 1,145 (5.1) | 130 (0.9) | |
No stroke‐related diagnosis | 293 (1.3) | 118 (0.8) |
The incidence of each stroke diagnosis type is also detailed in Table 2. Seventy percent of patients at CSCs had either an ischemic or hemorrhagic stroke diagnosis versus 57.1% of patients admitted to PSCs. Hemorrhagic stroke patients were twice as likely to be admitted to CSCs as to PSCs.
After excluding 13,964 patients with a diagnosis of TIA, stroke not otherwise specified, or a nonstroke-related diagnosis, the likelihood of stroke patients receiving the JC's performance measure services at either hospital level was assessed (Table 3). In general, the adjusted odds ratio estimates of patients receiving a JC core performance measure at PSCs were lower than at CSCs, indicating better compliance with the measures at CSCs. For example, 19.5% of eligible patients received thrombolytic therapy at CSCs compared to 9.6% at PSCs. CSCs also were more likely to provide VTE prophylaxis, anticoagulation for atrial fibrillation, and assessment for rehabilitation. Stroke education and antithrombotic therapy by the end of hospital day 2 were more likely to be provided at PSCs, but the results were not statistically significant.
Variables | PSCs, N (%)a | CSCs, N (%)a | Unadjusted OR (95% CI) | Adjusted OR (95% CI)b |
---|---|---|---|---|
VTE prophylaxis | 4,745 (92.1) | 5,455 (94.2) | 0.72 (0.61–0.83) | 0.47 (0.33–0.67) |
Discharged on antithrombotic therapy | 8,835 (98.1) | 6,873 (99.2) | 0.42 (0.31–0.56) | 0.46 (0.27–0.78) |
Anticoagulation therapy for atrial fibrillation/flutter | 1,464 (95.1) | 1,144 (97.6) | 0.48 (0.31–0.74) | 0.38 (0.17–0.86) |
Thrombolytic therapy | | | | |
Time window=3.0 hours | 484 (9.6) | 666 (19.5) | 0.44 (0.39–0.50) | 0.28 (0.24–0.34) |
Time window=4.5 hours | 564 (11.0) | 792 (22.4) | 0.43 (0.38–0.48) | 0.28 (0.23–0.33) |
Antithrombotic therapy by end of hospital day 2 | 7,575 (97.4) | 5,396 (98.2) | 0.69 (0.54–0.88) | 1.01 (0.60–1.68) |
Discharged on statin medication | 6,035 (97.9) | 4,261 (98.7) | 0.59 (0.43–0.80) | 0.69 (0.42–1.13) |
Stroke education, for home discharge (overall) | 3,823 (97.7) | 3,072 (95.7) | 1.93 (1.47–2.53) | 1.78 (0.92–3.45) |
Risk factors for stroke | 3,480 (88.9) | 3,026 (94.4) | 0.49 (0.41–0.59) | 0.43 (0.28–0.66) |
Warning signs and symptoms | 3,514 (89.8) | 3,019 (94.1) | 0.56 (0.46–0.67) | 0.52 (0.34–0.79) |
Activation of EMS | 3,539 (90.5) | 3,023 (94.2) | 0.59 (0.49–0.70) | 0.44 (0.28–0.69) |
Follow-up after discharge | 3,807 (97.3) | 3,064 (95.5) | 1.73 (1.34–2.23) | 1.18 (0.65–2.20) |
Medications prescribed at discharge | 3,788 (96.8) | 3,067 (95.5) | 1.42 (1.11–1.82) | 0.44 (0.28–0.70) |
Assessed for rehabilitation | 9,725 (95.2) | 8,199 (97.5) | 0.51 (0.43–0.61) | 0.37 (0.26–0.53) |
DISCUSSION
In New Jersey, CSCs adhered better to the JC core performance measures than PSCs. Median door-to-thrombolytic drug times were also significantly shorter at CSCs. Such differences may be due to several factors, including that CSCs have generally been state designated longer than PSCs. CSCs are likely to have higher volumes of stroke admissions, are more likely to be JC certified, provide more staff education, and have more staff and resources. The New Jersey stroke designation program began in 2006, and 11 of the 12 CSCs were designated by the end of 2007. However, the PSC designation process has been more gradual, with several PSCs designated in 2010 and 2011 as the data for this study were being collected.
The New York State Stroke Center Designation Project prospectively showed that stroke center designation improved the quality of acute stroke patient care and administration of thrombolytic therapy; however, differing levels of hospital designation were not present in New York at that time.[11] Participation in a data measurement program such as Get With The Guidelines has also been examined; length of time in such a program is predictive of process measure compliance.[12] JC certification as a PSC is also associated with increased thrombolytic rates for acute stroke over time.[13] New Jersey does not require that stroke-designated hospitals have JC stroke certification. Although 11 New Jersey CSCs have been certified as JC PSCs since 2009, only 21 of the 53 state-designated PSCs are JC certified. It may be that the highest-performing sites repeatedly pursue state CSC designation and JC PSC certification/recertification. CSCs in New Jersey may also have a greater focus on quality measures by virtue of having been in quality programs such as Get With The Guidelines or by having been state designated and JC certified for a longer period of time.
The New Jersey requirements for CSCs, like those of the JC, include a large number of highly trained stroke experts, which ensures more continuous coverage. Although a disparity in mortality on weekends versus weekdays has been reported,[14] such a difference in mortality has not been seen at CSCs in New Jersey.[15] This lack of a weekend effect is felt to be related to the 24/7 availability of stroke specialists, advanced neuroimaging, ongoing training, and surveillance of specialized nursing care available at CSCs.[4, 16]
In our study, New Jersey CSCs had significantly higher rates of thrombolysis than PSCs (19.5% vs 9.6%) in the 3-hour window; both rates exceed the national rate of 3.4% to 5.2%.[17] The number of patients treated in the expanded thrombolytic window was also significantly higher at CSCs, increasing thrombolysis rates to 22.4% at CSCs versus 11% at PSCs. Door-to-drug times were also shorter at CSCs than at PSCs in the 3- and 4.5-hour windows (65 vs 74 minutes and 65 vs 76 minutes, respectively). After we excluded transferred patients and those with a diagnosis of TIA, stroke not otherwise specified, or a nonstroke-related diagnosis, the total number of ischemic and hemorrhagic stroke patients seen at each of the 12 CSCs (n=11,505) was on average 4 times higher than the number seen at each of the 53 PSCs (n=12,732). High annual hospital stroke volume has been shown to be associated with higher rates of thrombolysis and lower stroke mortality.[14] A study of US academic centers found that although the same percentage of patients presented within 2 hours of stroke symptom onset in 2001 and 2004, the use of IV tPA more than doubled over this period.[18] Improved system organization at the prehospital and hospital levels, as well as greater comfort and experience with thrombolytic therapy, likely contribute to all of these findings.[11]
CSCs did not outperform PSCs with regard to stroke education and antithrombotics by the end of hospital day 2, although these differences were not statistically significant. The former measure includes only stroke patients who are discharged home and is considered complete when all 5 of the following are addressed: risk factors for stroke, warning signs and symptoms of stroke, activation of emergency medical services, follow-up after discharge, and medications prescribed at discharge. CSCs were more likely to provide education for the first 3 and the last components but less likely for the fourth element. These findings should be considered in the context of CSCs having higher volumes of more ill and complex patients, who are more likely to be discharged to a rehabilitation hospital, nursing home, or other facility than to home. In our registry, CSCs discharged 46% of patients to home versus 54% at PSCs. We speculate that CSCs may be less likely than PSCs to habitually address follow-up care and discharge medications. Regarding provision of antithrombotics by hospital day 2, it is again possible that because CSCs care for a higher number of complicated stroke patients, many may have had contraindications to antithrombotics in that time period.
Limitations of this study include its design as a retrospective analysis of a database. Although the 2010 and 2011 NJASR dataset was sizeable, it was not possible to capture all potentially confounding variables that may have affected our point estimates. We were not able to perform a hierarchical analysis to account for clustering at the hospital level because of the limited data available in the registry. Errors in data recording, coding, and documentation cannot be excluded. The fact that not all PSCs were JC certified may have contributed to the observed differences. Also, because pursuing PSC or CSC status is voluntary, it is not clear whether the hospitals that chose CSC status differed in other, unmeasured factors from those that chose PSC status; the difference may have existed even in the absence of the designation program. Over the years, there have been changes in the criteria required by the state and the JC for PSC designation, although the larger differences between hospital levels remained intact; this may have limited our findings as well. The goal for hospitals is to continue strict adherence to policies and measures and thus improve the quality of care for stroke patients. Future prospective studies should be conducted to ascertain the validity and generalizability of our findings. The association of stroke measure adherence with functional outcomes would also be of interest; we were not able to measure this consistently because not all patients at PSCs had an admission and/or discharge NIHSS or modified Rankin Scale score. Although some studies have not shown an association between higher performance on quality measures and improved outcomes, we would like to examine this more closely in the stroke population.[19] As our database grows, we would like to reexamine our findings after correcting for more specific characteristics of each hospital.
In the future, if additional states designate centers by level of stroke care, it will be important to examine how such designations compare with nonprofit organization certifications in their impact on performance at a larger scale.
CONCLUSION
This study shows better compliance of New Jersey state-designated CSCs with the JC PSC core stroke measures and shorter median door-to-thrombolytic drug times. Because these measures are evidence based, these results may translate into better stroke care and outcomes for patients treated at state-designated CSCs.
ACKNOWLEDGEMENTS
Disclosures: Jawad Kirmani, MD: Consultant to Joint Commission on Performance Measure Development (modest). Martin Gizzi, MD, PhD: Consultant to Joint Commission on Performance Measure Development (modest), New Jersey Department of Health and Senior Services as chair of the Stroke Advisory Panel (significant). No other potential conflicts to report.
Stroke is the fourth leading cause of death in the United States.[1] Though actual stroke‐related death has declined nationally by 19.4%, stroke‐related morbidity is still a significant burden.[2, 3] Hospital certification programs have been developed to improve the quality of stroke care on state and national levels. The Brain Attack Coalition (BAC) proposed 2 levels of stroke hospitals: primary stroke centers (PSCs) and comprehensive stroke centers (CSCs).[3, 4] Although most stroke patients can be cared for at PSCs, CSCs are able to care for more complex stroke patients.[4, 5] Using BAC recommendations the Joint Commission (JC) and American Heart Association/American Stroke Association created a certification program for PSCs. Eight evidence‐based performance measures are currently required for JC PSC certification.[6]
At the state level, New Jersey and Florida began designating PSCs and CSCs.[7, 8] New Jersey PSC and CSC designation criteria incorporate the elements of JC PSC certification, despite preceding them by several years.[6, 9] New Jersey CSC certification consists of more comprehensive requirements (Table 1). All New Jersey‐designated stroke centers submit data in the New Jersey Acute Stroke Registry (NJASR),[7] which closely matches the Centers for Disease Control and Prevention's Paul Coverdell National Acute Stroke Registry and includes the JC‐required core measures.
Primary Stroke Center (N=53) | Comprehensive Stroke Center (N=12)
---|---
Must have acute stroke teams in place at all times that can respond to the bedside within 15 minutes of patient arrival or identification | Must meet all criteria for primary stroke centers
Must maintain neurology and emergency department personnel trained in the treatment of acute stroke | Must maintain a neurosurgical team capable of assessing and treating complex stroke
Must maintain telemetry or critical care beds staffed by physicians and nurses trained in caring for acute stroke patients | Must maintain on staff a neuroradiologist (boarded) and a neurointerventionalist
Must provide for neurosurgical services within 2 hours, either at the hospital or under agreement with a comprehensive stroke center | Must provide comprehensive rehabilitation services either on site or by transfer agreement
Must provide acute care rehabilitation services | Must provide MRI, CTA, and digital subtraction angiography
Must enter into a written transfer agreement with a comprehensive stroke center | Must develop and maintain sophisticated outcomes assessment and performance improvement capability
 | Must provide graduate medical education in stroke and carry out research in stroke
There is a paucity of data comparing state‐designated CSCs and PSCs, largely because few states have state designation programs. Although a recent observational study from Finland showed better outcomes in patients treated at CSCs, measures in that study were limited to mortality and institutionalization at 1 year.[10] In this study, we examined adherence of all New Jersey state‐designated stroke centers to the JC PSC measures and compared CSCs to PSCs in this regard. We posited that better compliance with these evidence‐based measures might translate into better quality of stroke care statewide and could inform future, larger studies made possible by the JC's recent certification of CSCs.
METHODS
The components of the NJASR, the key components of PSCs and CSCs in the BAC reports, and the JC's 8 core stroke measures for PSC certification were assessed.[3, 4, 6]
First responders in New Jersey are required to bring suspected stroke patients to the nearest stroke‐designated hospital, regardless of whether it is a PSC or CSC, unless the patient is too medically unstable and needs to be taken to the nearest hospital. From there, decisions can be made to transfer a patient to a higher level hospital (CSC) depending on the complexity of the patient's condition.
New Jersey state‐designated PSCs and CSCs are required to abstract patient‐level data, evaluate outcomes, and initiate quality improvement activities on all patients evaluated for ischemic stroke, hemorrhagic stroke, transient ischemic attack (TIA), and those who undergo acute interventional therapy. Data are submitted quarterly to the New Jersey Department of Health and Senior Services (NJDHSS).[7] Hospital data are imported into the Acute Stroke Registry Database. The NJASR statewide dataset used for this analysis included all stroke admissions for the calendar years 2010 and 2011 and contains patient demographic information, health history, clinical investigations performed, treatments provided, and outcome measures that allow for risk‐adjusted assessment of outcomes.
The JC core stroke measures apply only to acute ischemic and/or hemorrhagic stroke patients. Of STK‐1 through STK‐10, 8 were assessed: venous thromboembolism (VTE) prophylaxis, discharged on antithrombotic therapy, anticoagulation therapy for atrial fibrillation/flutter, thrombolytic therapy, antithrombotic therapy by the end of hospital day 2, discharged on statin medication, stroke education, and assessment for rehabilitation. STK‐7 (dysphagia screening) and STK‐9 (smoking cessation/advice counseling) were not included.
In our analysis, transferred patients and patients with a diagnosis of TIA, stroke not otherwise specified, or a nonstroke‐related diagnosis were excluded. Hospital identity was kept anonymous through assignment of random numeric codes by the NJDHSS. Hospitals were categorized as CSC or PSC based on NJDHSS designation. Stroke severity on admission was assessed by categorizing National Institutes of Health Stroke Scale (NIHSS) scores as follows: no stroke when NIHSS=0, mild stroke when NIHSS was 1 to 4, and moderate‐to‐severe stroke when NIHSS was ≥5. Median door‐to‐thrombolytic drug times were assessed both for patients who arrived at the hospital within 2 hours of stroke symptom onset and received thrombolytic therapy within 3 hours, and for patients who arrived within 3.5 hours (210 minutes) and received treatment within 4.5 hours (270 minutes).
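The severity categorization described above can be expressed as a simple binning function. This is an illustrative sketch, assuming the conventional NIHSS cut points of 1 to 4 for mild and 5 or greater for moderate to severe; it is not code from the registry itself.

```python
def categorize_nihss(nihss):
    """Bin an admission NIHSS score (valid range 0-42) into the
    three severity categories used in this analysis."""
    if not 0 <= nihss <= 42:
        raise ValueError("NIHSS scores range from 0 to 42")
    if nihss == 0:
        return "no stroke"
    if nihss <= 4:
        return "mild"
    return "moderate-severe"
```

Applied across a registry extract, a function like this yields the three admission‐severity strata reported in Table 2.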
Inclusion and Exclusion Criteria for JC Performance Measures
Excluded from all measures are patients who were discharged to hospice, left against medical advice (AMA), expired, were transferred to another short‐term care facility, had an unknown discharge location, received comfort measures only (CMO), or were enrolled in clinical trials. Other exclusions are listed with each measure. VTE prophylaxis included nonambulatory ischemic and hemorrhagic stroke patients who received VTE prophylaxis by the end of hospital day 2, and excluded patients discharged prior to hospital day 2 and those with length of stay >120 days. Antithrombotics at discharge included ischemic stroke patients discharged on antithrombotics and excluded those with a documented reason for not receiving antithrombotics. Anticoagulation for atrial fibrillation included ischemic stroke patients with documented atrial fibrillation/flutter who received anticoagulation therapy and excluded those with a documented reason for not receiving anticoagulation. Thrombolytic therapy included acute ischemic stroke patients who arrived at the hospital within 2 hours of the time last known well and for whom intravenous (IV) tissue plasminogen activator (tPA) was initiated at that hospital within 1 hour of hospital arrival. Excluded were patients with a valid reason for not receiving tPA, length of stay >120 days, or time last known well >2 hours. We also examined thrombolytic therapy for patients who arrived within 3.5 hours of the time last known well and received IV tPA within 1 hour.
Antithrombotics by the end of hospital day 2 included ischemic stroke patients who received antithrombotic medication by the end of hospital day 2 and excluded patients who were discharged before hospital day 2, had a documented reason for not receiving antithrombotic medication, had a length of stay >120 days, were CMO by day 2, or received IV or intra‐arterial tPA. Statin therapy included ischemic stroke patients with low‐density lipoprotein exceeding 100 mg/dL or not measured, or who were on cholesterol‐reducing medication prior to admission. Excluded were those with length of stay >120 days or a documented reason for not receiving the medication. Stroke education on discharge included all stroke patients discharged home who received education during the hospitalization addressing the following: the patient's stroke‐specific risk factors, warning signs and symptoms, emergency medical services activation, follow‐up, and discharge medications. Those with length of stay >120 days were excluded. Assessment for rehabilitation included ischemic and hemorrhagic stroke patients assessed for rehabilitation services.
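As a sketch, the global exclusions and one measure's denominator logic above might be encoded as predicates over patient records. The field names here are hypothetical illustrations, not the actual NJASR data dictionary.

```python
# Dispositions triggering the exclusions common to all JC measures
# (hospice, AMA, expired, transfer, unknown, comfort measures only).
GLOBAL_EXCLUSION_DISPOSITIONS = {
    "hospice", "ama", "expired", "transfer_short_term", "unknown", "cmo",
}

def globally_excluded(record):
    """Apply the exclusions shared by every measure."""
    return (record["disposition"] in GLOBAL_EXCLUSION_DISPOSITIONS
            or record.get("clinical_trial", False))

def eligible_vte_prophylaxis(record):
    """Denominator test for the VTE-prophylaxis measure: nonambulatory
    ischemic or hemorrhagic stroke patients not discharged before
    hospital day 2, with length of stay no greater than 120 days."""
    return (not globally_excluded(record)
            and record["diagnosis"] in {"ischemic", "hemorrhagic"}
            and not record["ambulatory"]
            and 2 <= record["length_of_stay_days"] <= 120)
```

Compliance for the measure is then the fraction of eligible records documenting prophylaxis by the end of hospital day 2.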
Statistical Analysis
Patient characteristics were summarized using frequencies and percentages for categorical variables and medians with interquartile ranges for continuous variables. Chi‐square (χ2) tests and median 2‐sample tests were used to compare patient characteristics between the 2 hospital levels. The likelihood that a patient received a particular JC core measure service in relation to hospital level (PSC vs CSC) was estimated using multiple logistic regression; both crude/unadjusted and adjusted odds ratios and their 95% confidence intervals were estimated. Gender, age, race, stroke type, medical history (hypertension, atrial fibrillation, diabetes mellitus, and history of smoking), and severity of stroke as measured by the NIHSS were included in the model. Institutional review board approval to evaluate the data for this analysis was obtained from John F. Kennedy Medical Center in Edison, New Jersey. All analyses were performed using SAS software version 9.3 (SAS Institute, Cary, NC).
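The crude odds ratios in Table 3 can be approximately reproduced from the reported counts and percentages. Below is a minimal sketch using the standard Woolf (log‐scale) confidence interval; the eligible denominators are back‐calculated from the reported percentages and are therefore approximations, not figures from the registry.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio for a 2x2 table, where a/b are
    events/non-events at PSCs and c/d are events/non-events at CSCs,
    with a Woolf 95% confidence interval computed on the log scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Thrombolysis within the 3-hour window: 484 treated of ~5,042 eligible
# PSC patients (9.6%) vs 666 of ~3,415 eligible CSC patients (19.5%);
# denominators estimated from the reported percentages.
or_, lo, hi = odds_ratio_ci(484, 5042 - 484, 666, 3415 - 666)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

This reproduces the unadjusted estimate of roughly 0.44 (0.39–0.50) reported for thrombolytic therapy in the 3‐hour window; the adjusted estimates additionally condition on the covariates listed above and cannot be recovered from the table alone.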
RESULTS
There were 36,892 acute stroke cases treated at the 53 New Jersey PSCs and 12 CSCs in calendar years 2010 and 2011 (Table 2). Sixty percent were treated at PSCs and 40% at CSCs. There were significant differences in the distribution of patient characteristics (race, age, and gender) between the 2 hospital levels. At both PSCs and CSCs, the majority of patients were white, distantly followed by blacks. Patients at PSCs were significantly older than those at CSCs. The most prevalent comorbid conditions at both PSCs and CSCs were hypertension, diabetes mellitus, and dyslipidemia. Based on our categorization, we found that 45% of patients admitted to CSCs had moderate‐to‐severe stroke (NIHSS ≥5). The median door‐to‐thrombolytic drug times were significantly shorter at CSCs than at PSCs for both the 3‐hour (65 vs 74 minutes, P<0.0001) and 4.5‐hour (65 vs 76 minutes, P<0.0001) IV tPA time windows.
Variables | PSCs, N=22,305 | CSCs, N=14,587 | P Valuea |
---|---|---|---|
Race, n (%) | | | <0.0001 |
White | 16,586 (74.4) | 10,419 (71.4) | |
Black | 3,930 (17.6) | 2,875 (19.7) | |
Asian | 511 (2.3) | 519 (3.6) | |
All othersb | 1,278 (5.7) | 774 (5.3) | |
Age, y, median (IQR) | 75.0 (22.0) | 73.0 (23.0) | <0.0001c |
Gender, female, n (%) | 12,552 (56.3) | 7,757 (53.2) | <0.0001 |
Comorbidities | |||
Hypertension, n (%) | 17,405 (78.1) | 10,535 (72.2) | <0.0001 |
Atrial fibrillation/flutter, n (%) | 3,762 (16.9) | 2,237 (15.3) | 0.0001 |
Diabetes mellitus, n (%) | 7,219 (32.4) | 4,220 (28.9) | <0.0001 |
History of smoking, n (%) | 2,924 (13.1) | 1,706 (11.7) | <0.0001 |
Heart failure, n (%) | 1,733 (7.8) | 749 (5.1) | <0.0001 |
Myocardial infarction, n (%) | 6,138 (27.5) | 2,945 (20.3) | <0.0001 |
Dyslipidemia, n (%) | 10,106 (45.6) | 5,161 (35.4) | <0.0001 |
Prior stroke/TIA/VBI, n (%) | 7,085 (31.8) | 3,874 (26.6) | <0.0001 |
NIHSS on admission, n (%) | | | <0.0001 |
No stroke (NIHSS=0) | 2,747 (27.4) | 913 (18.3) | |
Mild stroke (NIHSS 1–4) | 4,010 (40.0) | 1,811 (33.3) | |
Moderate–severe (NIHSS ≥5) | 3,272 (32.6) | 2,271 (45.4) | |
Door‐to‐tPA time, min, median (IQR) | |||
Arrived within 120 minutes | 74.0 (35.0) | 65.0 (33.0) | <0.0001c |
Arrived within 210 minutes | 76.0 (37.0) | 65.0 (34.0) | <0.0001c |
Stroke diagnosis, distribution | | | <0.0001 |
Ischemic | 11,145 (50.0) | 8,235 (56.5) | |
Hemorrhagic | 1,587 (7.1) | 1,942 (13.3) | |
Subarachnoid | 219 (13.8) | 397 (20.4) | |
Intracerebral | 1,368 (86.2) | 1,545 (79.6) | |
Transient ischemic attack | 8,116 (36.4) | 4,162 (28.5) | |
Stroke not otherwise specified | 1,145 (5.1) | 130 (0.9) | |
No stroke‐related diagnosis | 293 (1.3) | 118 (0.8) |
The incidences of stroke diagnosis types are also detailed in Table 2. Seventy percent of patients at CSCs had either an ischemic or hemorrhagic stroke diagnosis versus 57.1% of patients admitted at PSCs. Hemorrhagic stroke patients were twice as likely to be admitted at CSCs compared to PSCs.
After excluding 13,964 patients with a diagnosis of TIA, stroke not otherwise specified, or a nonstroke‐related diagnosis, the likelihood of stroke patients receiving the JC's performance measure services at either hospital level was assessed (Table 3). In general, the adjusted odds ratios of patients receiving a JC core performance measure at PSCs were lower than at CSCs, indicating better compliance with the measures at CSCs. For example, 19.5% of eligible patients received thrombolytic therapy at CSCs compared to 9.6% at PSCs. CSCs also were more likely to provide VTE prophylaxis, anticoagulation for atrial fibrillation, and assessment for rehabilitation. Stroke education and antithrombotic therapy by the end of hospital day 2 were more likely to be provided at PSCs, but these results were not statistically significant.
Variables | PSCs, N (%)a | CSCs, N (%)a | Unadjusted Odds Ratio (95% CI) | Adjustedb Odds Ratio (95% CI) |
---|---|---|---|---|
VTE prophylaxis | 4,745 (92.1) | 5,455 (94.2) | 0.72 (0.61–0.83) | 0.47 (0.33–0.67) |
Discharged on antithrombotic therapy | 8,835 (98.1) | 6,873 (99.2) | 0.42 (0.31–0.56) | 0.46 (0.27–0.78) |
Anticoagulation therapy for atrial fibrillation/flutter | 1,464 (95.1) | 1,144 (97.6) | 0.48 (0.31–0.74) | 0.38 (0.17–0.86) |
Thrombolytic therapy | | | | |
Time window=3.0 hours | 484 (9.6) | 666 (19.5) | 0.44 (0.39–0.50) | 0.28 (0.24–0.34) |
Time window=4.5 hours | 564 (11.0) | 792 (22.4) | 0.43 (0.38–0.48) | 0.28 (0.23–0.33) |
Antithrombotic therapy by end of hospital day 2 | 7,575 (97.4) | 5,396 (98.2) | 0.69 (0.54–0.88) | 1.01 (0.60–1.68) |
Discharged on statin medication | 6,035 (97.9) | 4,261 (98.7) | 0.59 (0.43–0.80) | 0.69 (0.42–1.13) |
Stroke education, for home discharge (overall) | 3,823 (97.7) | 3,072 (95.7) | 1.93 (1.47–2.53) | 1.78 (0.92–3.45) |
Risk factors for stroke | 3,480 (88.9) | 3,026 (94.4) | 0.49 (0.41–0.59) | 0.43 (0.28–0.66) |
Warning signs and symptoms | 3,514 (89.8) | 3,019 (94.1) | 0.56 (0.46–0.67) | 0.52 (0.34–0.79) |
Activation of EMS | 3,539 (90.5) | 3,023 (94.2) | 0.59 (0.49–0.70) | 0.44 (0.28–0.69) |
Follow‐up after discharge | 3,807 (97.3) | 3,064 (95.5) | 1.73 (1.34–2.23) | 1.18 (0.65–2.20) |
Medications prescribed at discharge | 3,788 (96.8) | 3,067 (95.5) | 1.42 (1.11–1.82) | 0.44 (0.28–0.70) |
Assessed for rehabilitation | 9,725 (95.2) | 8,199 (97.5) | 0.51 (0.43–0.61) | 0.37 (0.26–0.53) |
DISCUSSION
In New Jersey, CSCs adhered more closely to the JC core performance measures than PSCs. Median door‐to‐thrombolytic drug times were also significantly shorter at CSCs. Such differences may be due to several factors, including that CSCs have generally been state designated for a longer period of time than PSCs. CSCs are likely to have higher volumes of stroke admissions, are more likely to be JC certified, provide more staff education, and have more staff and resources. The New Jersey stroke designation program began in 2006, and 11 of the 12 CSCs were designated by the end of 2007. The PSC designation process, however, has been more gradual, with several PSCs designated in 2010 and 2011, as the data for this study were being collected.
The New York State Stroke Center Designation Project prospectively showed that stroke center designation improved the quality of acute stroke patient care and administration of thrombolytic therapy; however, differing levels of hospital designation were not present in New York at that time.[11] Participation in a data measurement program such as Get With The Guidelines has also been examined. It is evident that the amount of time in a program is predictive of process measure compliance.[12] JC certification as a PSC is also associated with increased thrombolytic rates for acute stroke over time.[13] New Jersey does not require that stroke‐designated hospitals have JC stroke certification. Although 11 New Jersey CSCs have been certified as JC PSCs since 2009, only 21 of the 53 state‐designated PSCs are JC certified. It may be that the highest performing sites pursue state CSC designation and JC PSC certification/recertification repeatedly. CSCs in New Jersey may also have a greater focus on quality measures by virtue of having been in quality programs such as Get With The Guidelines or by having been state designated and JC certified for a longer period of time.
The New Jersey requirements for CSCs, like those of the JC, include a large number of highly trained stroke experts, which ensures more continuous coverage. Although a disparity in mortality on weekends versus weekdays has been reported,[14] such a difference in mortality has not been seen at CSCs in New Jersey.[15] This lack of a weekend effect is felt to be related to the 24/7 availability of stroke specialists, advanced neuroimaging, ongoing training, and surveillance of specialized nursing care available at CSCs.[4, 16]
In our study, New Jersey CSCs had significantly higher rates of thrombolysis than PSCs (19.5% vs 9.6%) in the 3‐hour window, higher overall than the national rate of 3.4% to 5.2%.[17] The number of patients treated in the expanded thrombolytic window was also significantly higher at CSCs, increasing thrombolysis rates to 22.4% at CSCs versus 11% at PSCs. Door‐to‐drug times were also shorter at CSCs than at PSCs in the 3‐ and 4.5‐hour windows (65 vs 74 minutes and 65 vs 76 minutes, respectively). After we excluded transferred patients and those with a diagnosis of TIA, stroke not otherwise specified, or a nonstroke‐related diagnosis, the total number of ischemic and hemorrhagic stroke patients seen at each of the 12 CSCs (n=11,505) was on average 4 times higher than the number seen at each of the 53 PSCs (n=12,732). High annual hospital stroke volume has been shown to be associated with higher rates of thrombolysis and lower stroke mortality.[14] A study of US academic centers found that although the same percentage of patients presented within 2 hours of stroke symptom onset in 2001 and 2004, the use of IV tPA more than doubled over this period.[18] Improved system organization at the prehospital and hospital levels, as well as greater comfort and experience with thrombolytic therapy, likely contribute to all of these findings.[11]
CSCs did not outperform PSCs with regard to stroke education and antithrombotics by the end of hospital day 2, but these differences were not statistically significant. The former measure includes only stroke patients who are discharged home and is considered complete when all 5 of the following are addressed: risk factors for stroke, warning signs and symptoms of stroke, activation of emergency medical services, follow‐up after discharge, and medications prescribed at discharge. CSCs were more likely to provide education for the first 3 components and the last, but less likely for the fourth. These findings should be considered in the context of CSCs having higher volumes of more ill and complex patients, who are more likely to be discharged to a rehabilitation hospital, nursing home, or other facility than to home. In our registry, CSCs discharged 46% of patients to home versus 54% at PSCs. We speculate that CSCs may be less likely than PSCs to habitually address follow‐up care and discharge medications. As for provision of antithrombotics by hospital day 2, it is again possible that because CSCs care for a higher number of complicated stroke patients, many may have had contraindications to antithrombotics in that time period.
Limitations of this study include its design as a retrospective analysis of a database. Although the 2010 and 2011 NJASR dataset was sizeable, it was not possible to capture all potentially confounding variables that may have affected our point estimates. We were not able to perform a hierarchical analysis to account for clustering at the hospital level because of the limited data available in the registry. Errors in recording, coding, and documentation cannot be excluded. The fact that not all PSCs were JC certified may have contributed to the observed differences. Also, because pursuing PSC or CSC status is voluntary, it is not clear whether the hospitals that chose CSC status differed in other unmeasured factors from those that chose PSC status; the difference may have existed even in the absence of the designation program. Over the years, there have been changes in the criteria required by the state and the JC for PSC designation, although the larger differences between hospital levels remained intact; this may have limited our findings as well. The goal for hospitals is to continue strict adherence to policies and measures and thus improve quality of care for stroke patients. Future prospective studies should be conducted to ascertain the validity and generalizability of our findings. The association between stroke measure adherence and functional outcomes would also be of interest; we were not able to measure this consistently in our study because not all patients at PSCs had admission and/or discharge NIHSS or modified Rankin scores. Although some studies have not shown an association between improved outcomes and higher performance on quality measures, we would like to examine this more closely in the stroke population.[19] As our database grows, we would like to reexamine our findings after correcting for more specific characteristics of each hospital.
In the future, if additional states designate centers by level of stroke care, it will be important to compare how such designations compare to nonprofit organization certifications in terms of impacting performance on a larger scale.
CONCLUSION
This study shows better compliance of New Jersey state‐designated CSCs with the JC PSC core stroke measures and shorter median door‐to‐thrombolytic drug times. Because these measures are evidence based, these results may translate into better stroke care and outcomes for patients treated at state‐designated CSCs.
ACKNOWLEDGEMENTS
Disclosures: Jawad Kirmani, MD: Consultant to Joint Commission on Performance Measure Development (modest). Martin Gizzi, MD, PhD: Consultant to Joint Commission on Performance Measure Development (modest), New Jersey Department of Health and Senior Services as chair of the Stroke Advisory Panel (significant). No other potential conflicts to report.
1. Centers for Disease Control and Prevention. Interactive atlas of heart disease and stroke. Available at: http://apps.Nccd.Cdc.Gov/dhdspatlas/reports.Aspx. Accessed May 5, 2012.
2. American Heart Association Statistics Committee and Stroke Statistics Subcommittee. Heart disease and stroke statistics—2012 update: a report from the American Heart Association. Circulation. 2012;125:e2–e220.
3. Recommendations for the establishment of primary stroke centers. JAMA. 2000;283:3102–3109.
4. Brain Attack Coalition. Recommendations for comprehensive stroke centers: a consensus statement from the Brain Attack Coalition. Stroke. 2005;36:1597–1616.
5. Recommendations for the establishment of stroke systems of care. Stroke. 2005;36:690–703.
6. The Joint Commission. Advanced Certification Comprehensive Stroke Centers. Available at: http://www.jointcommission.org/certification/advanced_certification_comprehensive_stroke_centers.aspx. Accessed July 25, 2012.
7. New Jersey Department of Health and Senior Services. The New Jersey Acute Stroke Registry. Available at: http://www.state.nj.us/health/healthcarequality/stroke/documents/njacute_stroke_data_dictionary.pdf. Accessed July 8, 2012.
8. Florida Agency for Health Care Administration. Primary stroke center and comprehensive stroke center designation. Available at: http://ahca.myflorida.com/mchq/Health_Facility_Regulation/Hospital_Outpatient/forms/59A3_2085_FAC_Rule_text.pdf. Accessed July 25, 2012.
9. New Jersey Department of Health and Senior Services. Stroke Center Act (2004). Available at: http://www.njleg.state.nj.us/2004/bills/pl04/136_.pdf. Accessed September 25, 2013.
10. Effectiveness of primary and comprehensive stroke centers. Stroke. 2010;41:1102–1107.
11. NYSDOH Stroke Center Designation Project Workgroup. Quality improvement in acute stroke: the New York State Stroke Center Designation Project. Neurology. 2006;67(1):88–93.
12. Characteristics, performance measures, and in‐hospital outcomes of the first one million stroke and transient ischemic attack admissions in Get With The Guidelines–Stroke. Circ Cardiovasc Qual Outcomes. 2010;3(3):291–302.
13. Intravenous thrombolysis for stroke increases over time at primary stroke centers. Stroke. 2012;43:875–877.
14. Weekends: a dangerous time for having a stroke? Stroke. 2007;38:1211–1215.
15. Myocardial Infarction Data Acquisition System (MIDAS 15) Study Group. Comprehensive stroke centers overcome the weekend versus weekday gap in stroke treatment and mortality. Stroke. 2011;42:2403–2409.
16. Can comprehensive stroke centers erase the "weekend effect"? Cerebrovasc Dis. 2009;27:107–113.
17. Recombinant tissue‐type plasminogen activator use for ischemic stroke in the United States. Stroke. 2011;42:1952–1955.
18. Hospital arrival time and intravenous t‐PA use in US academic medical centers, 2001–2004. Stroke. 2009;40:3845–3850.
19. Association of surgical care improvement project infection‐related process measure compliance with risk‐adjusted outcomes: implications for quality measurement. J Am Coll Surg. 2010;211(6):705–714.
© 2013 Society of Hospital Medicine
Diffuse large B-cell lymphoma of the lung in a 63-year-old man with left flank pain
Diffuse large B-cell lymphoma (DLBCL) of the lung is a rare entity, and although the prognosis is favorable, its biological features, clinical presentation, prognostic markers, and treatment have not been well defined.1,2 It is the second most common primary pulmonary lymphoma (PPL) after mucosa-associated lymphoid tissue (MALT) lymphoma. PPL itself is very rare; it represents 3%-4% of extranodal non-Hodgkin lymphoma (NHL), less than 1% of all NHL, and 0.5%-1.0% of primary pulmonary malignancies.2,3 A review of the literature indicates a lack of data on pulmonary DLBCL. The objective of this case report is to highlight areas in which further research may be pursued to better understand this disease.
A planning and evaluation program for assessing telecommunications applications in community radiation oncology programs
Management-focused scientific evaluation is a useful administrative tool, especially when hospitals implement a new technology. This paper describes the components of a scientific evaluation framework and then illustrates the application and utility of the framework in a hospital-based community oncology setting. The clinical technology, Telesynergy, is an advanced telecommunications and remote medical consultation system developed by the National Cancer Institute to support community hospital-based radiation oncology programs.
Inexpensive solutions to enhance remote cancer care in community hospitals
The rapidly increasing volume and complexity of information used for multidisciplinary cancer treatment require carefully evolving communications, with programmatic planning, detailed evaluation, and new methodologies and technical approaches to enhance the impact and efficacy of medical conferencing systems. We designed, implemented, and evaluated cost-effective and appropriate remote learning techniques to optimize oncology practice in community hospitals. Our experience over more than 7 years demonstrated simple and inexpensive communication solutions for both professional and lay education, satisfying the information-dense needs of multimodality cancer care. We describe how potential complexities may be resolved with inexpensive devices and software programs. Staff teamwork and creativity are always required to implement constantly evolving technologies. We provide both quantitative and qualitative data describing activities and resulting staff responses: 6,520 personnel earned more than 391 aggregate credit hours of continuing medical education and continuing education credit activities, with enhanced participant satisfaction and heightened interaction and professionalism among regional oncology staff. We noted significant cost reductions for communications in all three of our partnered hospitals. We demonstrated both increased satisfaction levels and heightened behavioral change (impact) among participants. Activities must always be cost effective and responsive to changing medical needs. Community-focused efforts with regional partners should be similar, ensuring continued success.
Virtual tumor boards: community–university collaboration to improve quality of care
Objective To develop and implement virtual interactive multidisciplinary cancer tumor boards (VTBs), created through telemedicine links between the University of California, Davis Cancer Center and community-based cancer care providers. The goal of this project was to facilitate communication among community and academic cancer specialists.
Materials and methods Four geographically remote sites were selected to participate with established disease-specific tumor boards of the UC Davis Cancer Center. Telemedicine links were created using dedicated T1 lines, and a PolyCom HDX 9000 was used by the center for teleconference hosting. Participants were then surveyed on their perception of the benefit of VTBs.
Results The results across disease-specific virtual tumor boards show that most of the participants reported that the right amount of clinical information on the cases was presented and that new information was discussed that helped providers manage the care of the patients.
Conclusions Teleconferencing of disease-specific tumor boards allowed a geographically remote group of providers to make prospective, case-based treatment decisions that increased their knowledge of treatment options and facilitated their decision making. This transfer of knowledge and experience speeds up the dissemination of rapidly evolving cancer care, which could lead to higher quality patient outcomes.
Sickle cell crises curtailed with experimental cellular adhesion inhibitor
NEW ORLEANS – An experimental cellular adhesion inhibitor was successful at reducing the severity and duration of vaso-occlusive crises in patients with sickle cell disease.
In a phase II trial of 76 patients with sickle cell disease, patients randomized to receive the pan-selectin inhibitor GMI 1070 early in their hospitalization for a vaso-occlusive crisis (VOC) had shorter lengths of stay and needed significantly lower cumulative doses of narcotics for pain control than did patients randomized to placebo, reported Dr. Marilyn J. Telen, chief of the hematology division at Duke University, Durham, N.C.
"We see somewhere between 75,000 and 90,000 admissions [annually] for acute painful vaso-occlusive crisis among this patient population. Indeed, these crises are the most common and essentially the archetypal presentation of sickle cell disease. Nevertheless, up till this time, treatment for these crises or VOC in sickle cell disease, remain only supportive, focusing largely on using narcotics for symptom relief, and then other measures, some of which are used to counteract the ill effects of narcotics," said Dr. Telen at the annual meeting of the American Society of Hematology.
GMI 1070 (being developed by GlycoMimetics, in partnership with Pfizer) is a synthetic molecule designed to inhibit the glycoprotein cellular-adhesion molecules involved in inflammation. In previous studies, the drug has been shown to be safe, and in a mouse model of VOC, was successful at restoring blood flow, Dr. Telen said. The drug has received both orphan drug and fast-track status from the Food and Drug Administration, according to GlycoMimetics.
Dr. Telen and her colleagues enrolled 76 patients aged 12-51 years with sickle cell disease and randomized them to receive a loading dose of GMI 1070 delivered intravenously (43 patients), followed by up to 14 subsequent doses delivered every 12 hours, or placebo (33 patients), with other treatment left to the discretion of the participating institutions. After an interim pharmacokinetic analysis showed that the drug did not reach target nadir levels, the dose was doubled.
All 76 patients reached the primary endpoint of VOC resolution, defined as a composite of decreased pain, termination of the need for intravenous opioids, patient and physician agreement on the ability to discharge the patient, and actual hospital discharge.
A total of 58 patients continued on the assigned drug until they either reached the primary endpoint criteria or received the maximum number of doses allowed. The remaining 18 patients discontinued the drug either for adverse events, no improvement by day 5 on the assigned drug, or other reasons.
In an analysis pooling all patients assigned to GMI 1070, including those who started out on the lower dose, there was a consistent reduction over placebo in the mean time to resolution of VOC: 103 hours vs. 144 hours for patients treated with placebo. This difference was not statistically significant, however.
A Kaplan-Meier analysis showed a median time to resolution of 69.6 hours for GMI 1070, compared with 139 hours with placebo, a difference that was not significant.
There was an 83% reduction in the secondary endpoint of cumulative opioid analgesics administered during hospitalization, a difference that was statistically significant (P =.010). There was also a reduction by 84 hours in the median time to discharge, and by 55 hours in the mean time to discharge, among patients treated with the active drug, compared with those on placebo. These differences, while large, were not significant, Dr. Telen said.
She noted that although most of the endpoints in this study failed to reach statistical significance, the separation of the curves between the placebo- and GMI 1070–treated patients began early, usually within 2 days of the start of treatment.
Total adverse event rates, including serious events and those deemed to be treatment related, were comparable between the two study arms for all subgroups.
Dr. Telen noted that because the population of patients enrolled was more clinically diverse than the available literature would suggest, the study was underpowered to detect differences, given the size of the sample. She predicted that given the size of the effects seen, statistical significance would emerge in a larger study.
GlycoMimetics is currently working with Pfizer to develop a phase III trial of GMI 1070 for this indication.
The study was supported by GlycoMimetics. Dr. Telen is a consultant to the company, and several coauthors are employees of the company.
AT ASH 2013
Major finding: The mean time to resolution of vaso-occlusive crisis in patients with sickle cell disease was 103 hours for patients treated with GMI 1070 vs. 144 for those treated with placebo.
Data source: A randomized, double-blind, multicenter study of 76 patients aged 12-51 years.
Disclosures: The study was supported by GlycoMimetics. Dr. Telen is a consultant to the company, and several coauthors are employees of the company.
VIDEO: Idelalisib shows promise in refractory non-Hodgkin lymphoma
NEW ORLEANS – When indolent B-cell non-Hodgkin lymphoma becomes refractory to rituximab and alkylating agents, few therapeutic options remain. But the PI3Kδ inhibitor idelalisib may someday offer a new treatment choice. Dr. Ajay Gopal discusses the promising findings from a phase II trial of idelalisib, including a 57% response rate.
Gene therapy for SCID-X1 may successfully reset immune system
NEW ORLEANS – Tweaking experimental gene therapy for X-linked severe combined immunodeficiency may help to restore patient immune function while reducing the risk for subsequent leukemias.
In a small, multinational phase I/II trial, seven of nine children with SCID-X1 showed evidence of T-cell recovery and function, as well as a lower risk for promoting growth of leukemic cells, when a self-inactivating gamma-retroviral vector (SCID-2) was used to promote reconstitution of the child’s immune system without insertional oncogenesis, reported Dr. Sung-Yun Pai at the annual meeting of the American Society of Hematology.
"Outcomes for boys who do not have well-matched donors are suboptimal, and it’s particularly for these boys that we are targeting gene therapy," said Dr. Pai, of Boston Children’s Hospital and the Dana-Farber Cancer Institute, Boston.
In previous gene therapy trials, investigators used the Moloney leukemia virus (MLV)-based gamma-retroviral vector (SCID-1) with strong promoters and enhancers to express an IL-2 receptor that reconstituted the immune system successfully in 18 of 20 boys.
However, 5 of the 20 boys developed T-cell acute lymphoblastic leukemia; 1 of the children died, and the remaining 4 were successfully treated.
The investigators found that in the patients with leukemia, the SCID-1 vector had inserted into a chromosomal region close to proto-oncogenes such as LMO2, and the enhancers were driving expression of the neighboring oncogene, promoting expansion of aberrant T cells. The vector was subsequently modified with the goal of improved safety but similar efficacy to the original, said Dr. Pai. The strong viral enhancers were removed to prevent accidental enhancement should the inserted genes find their way near oncogenes.
The phase I/II study is being conducted in London, Paris, Boston, Cincinnati, and Los Angeles, and to date has enrolled 9 male children of a planned 20.
Of the 9, one child died from a preexisting adenoviral infection before immune recovery was complete, and one child did not have engraftment of the gene-marked cells and went on to transplant.
"The other patients have 9 months to 36 months of follow-up, they have evidence of T-cell recovery, of T-cell function, have cleared SCID-related infections, and are all out of hospital, healthy at home, [and] leading essentially normal lives."
When the investigators looked at the comparative efficacy of the SCID-1 and SCID-2 vectors, they saw that 6 months after gene therapy, there was no significant difference in the median number of T cells generated.
"It’s far too early to comment on whether this vector will truly be safer in terms of leukemia," Dr. Pai said, noting that in the SCID-1 trial the leukemias developed 3-5 years after gene therapy, and the longest follow-up in the SCID-2 study is only 3 years.
The investigators are, however, conducting molecular surrogate safety analyses looking at gene insertion sites from the blood of patients treated with SCID-1 and are comparing those sites with the vector-insertion sites in cells from patients in SCID-2.
Looking at a global genomewide map of integrations, they found no significant differences between SCID-1 and SCID-2. However, when they focused on 38 genes known to be proto-oncogenes in lymphoid cancer, they found that significantly more vector integrations occurred in proximity to the oncogenes in SCID-1 than in SCID-2 (P = .003).
"We hope that these data suggest that the modified SCID vector will show less capacity to drive aberrant cell growth and lead to less leukemogenesis," said Dr. Pai.
SCID-X1 is caused by inherited mutations in the gamma subunit of the interleukin (IL)-2 receptor. As a result, males are born without T lymphocytes or natural killer cells. Without a bone marrow or stem cell transplantation, children with the disease die early from opportunistic or community-acquired infections.
"These are really paradigm-changing results for mortally wounded children," said Dr. Laurence James Neil Cooper of the University of Texas M.D. Anderson Cancer Center in Houston, who moderated the briefing but was not involved in the study.
The trial is being sponsored by Children’s Hospital Boston, Cincinnati Children’s Hospital Medical Center, and the University of California, Los Angeles. Dr. Pai and Dr. Cooper reported having no relevant conflicts of interest.
NEW ORLEANS – Tweaking experimental gene therapy for X-linked severe combined immunodeficiency may help to restore patient immune function while reducing the risk for subsequent leukemias.
In a small, multinational phase I/II trial, seven of nine children with SCID-X1 showed evidence of T-cell recovery and function, as well as a lower risk for promoting growth of leukemic cells, when a self-inactivating gamma-retroviral vector (SCID-2) was used to promote reconstitution of the child’s immune system without insertional oncogenesis, reported Dr. Sung-Yun Pai at the annual meeting of the American Society of Hematology.
"Outcomes for boys who do not have well-matched donors are suboptimal, and it’s particularly for these boys that we are targeting gene therapy," said Dr. Pai, of Boston Children’s Hospital and the Dana-Farber Cancer Institute, Boston.
In previous gene therapy trials, investigators used the Moloney leukemia virus (MLV)-based gamma-retroviral vector (SCID-1) with strong promoters and enhancers to express an IL-2 receptor that reconstituted the immune system successfully in 18 of 20 boys.
However, 5 of the 20 boys developed T-cell acute lymphoblastic leukemia; 1 of the children died, and the remaining 4 were successfully treated.
The investigators found that in the patients with leukemia, the SCID-1 vector had inserted into a chromosomal region close to proto-oncogenes such as LMO2, and the enhancers were driving expression of the neighboring oncogene, promoting expression of aberrant T cells. The vector was subsequently modified with the goal of improved safety but similar efficacy to the original, said Dr. Pai. The strong viral enhancers were removed to prevent accident enhancement should the inserted genes manage to find their way into oncogenes.
The phase I/II study is being conducted in London, Paris, Boston, Cincinnati, and Los Angeles, and to date has enrolled 9 male children of a planned 20.
Of the 9, one child died from a preexisting adenoviral infection before immune recovery was complete, and one child did not have engraftment of the gene-marked cells and went on to transplant.
"The other patients have 9 months to 36 months of follow-up, they have evidence of T-cell recovery, of T-cell function, have cleared SCID-related infections, and are all out of hospital, healthy at home, [and] leading essentially normal lives."
When the investigators looked at the comparative efficacy of the SCID-1 and SCID-2 vectors, they saw that 6 months after gene therapy, there was no significant difference in the median number of T cells generated.
"It’s far too early to comment on whether this vector will truly be safer in terms of leukemia," Dr. Pai said, noting that in the SCID-1 trial the leukemias developed 3-5 years after gene therapy, and the longest follow-up in the SCID-2 study is only 3 years.
The investigators are, however, conducting molecular surrogate safety analyses looking at gene insertion sites from the blood of patients treated with SCID-1 and are comparing those sites with the vector-insertion sites in cells from patients in SCID-2.
Looking at a global genomewide map of integrations, they found no significant differences between SCID-1 and SCID-2. However, when they focused on 38 genes known to be proto-oncogenes in lymphoid cancers, they found that significantly more vector integrations occurred in proximity to those oncogenes with SCID-1 than with SCID-2 (P = .003).
"We hope that these data suggest that the modified SCID vector will show less capacity to drive aberrant cell growth and lead to less leukemogenesis," said Dr. Pai.
SCID-X1 is caused by inherited mutations in the gamma subunit of the interleukin (IL)-2 receptor. As a result, males are born without T lymphocytes or natural killer cells. Without a bone marrow or stem cell transplantation, children with the disease die early from opportunistic or community-acquired infections.
"These are really paradigm-changing results for mortally wounded children," said Dr. Laurence James Neil Cooper of the University of Texas M.D. Anderson Cancer Center in Houston, who moderated the briefing but was not involved in the study.
The trial is being sponsored by Children’s Hospital Boston, Cincinnati Children’s Hospital Medical Center, and the University of California, Los Angeles. Dr. Pai and Dr. Cooper reported having no relevant conflicts of interest.
AT ASH 2013
Major finding: Of nine boys with X-linked severe combined immunodeficiency who were treated with gene therapy, seven patients have evidence of T-cell function, have cleared SCID-related infections, and are out of hospital.
Data source: Preliminary results of a prospective phase I/II clinical trial of nine children.
Disclosures: The trial is being sponsored by Children’s Hospital Boston, Cincinnati Children’s Hospital Medical Center, and the University of California, Los Angeles. Dr. Pai and Dr. Cooper reported having no relevant conflicts of interest.