Robotic Technology Produces More Conservative Tibial Resection Than Conventional Techniques in UKA

Unicompartmental knee arthroplasty (UKA) is considered a less invasive approach for the treatment of unicompartmental knee arthritis when compared with total knee arthroplasty (TKA), with optimal preservation of kinematics.1 Despite excellent functional outcomes, conversion to TKA may be necessary if the UKA fails, or in patients with progressive knee arthritis. Some studies have found UKA conversion to TKA to be comparable with primary TKA,2,3 whereas others have found that conversion often requires bone graft, augments, and stemmed components and has increased complications and inferior results compared to primary TKA.4-7 While some studies report that <10% of UKA conversions to TKA require augments,2 others have found that as many as 76% require augments.4-8

Schwarzkopf and colleagues9 recently demonstrated that UKA conversion to TKA is comparable with primary TKA when a conservative tibial resection is performed during the index procedure. However, they reported increased complexity when greater tibial resection was performed and thicker polyethylene inserts were used at the time of the index UKA. The odds ratio of needing an augment or stem during the conversion to TKA was 26.8 (95% confidence interval, 3.71-194) when an aggressive tibial resection was performed during the UKA.9 Tibial resection thickness may thus be predictive of anticipated complexity of UKA revision to TKA and may aid in preoperative planning.

Robotic assistance has been shown to enhance the accuracy of bone preparation, implant component alignment, and soft tissue balance in UKA.10-15 It has yet to be determined whether this improved accuracy translates to improved clinical performance or longevity of the UKA implant. However, the enhanced accuracy of robotic technology may result in more conservative tibial resection when compared to conventional UKA and may be advantageous if conversion to TKA becomes necessary.

The purpose of this study was to compare the distribution of polyethylene insert sizes implanted during conventional and robotic-assisted UKA. We hypothesized that robotic assistance would demonstrate more conservative tibial resection compared to conventional methods of bone preparation.

Methods

We retrospectively compared the distribution of polyethylene insert sizes implanted during consecutive conventional and robotic-assisted UKA procedures. Several manufacturers were queried to provide a listing of the polyethylene insert sizes utilized, ranging from 8 mm to 14 mm. The analysis included 8421 robotic-assisted UKA cases and 27,989 conventional UKA cases. Data were provided by Zimmer Biomet and Smith & Nephew regarding conventional cases, as well as Blue Belt Technologies (now part of Smith & Nephew) and MAKO Surgical (now part of Stryker) regarding robotic-assisted cases. (Dr. Lonner has an ongoing relationship as a consultant with Blue Belt Technologies, whose data was utilized in this study.) Using tibial insert thickness as a surrogate measure of the extent of tibial resection, an insert size of ≥10 mm was defined as aggressive while <10 mm was considered conservative. This cutoff was established based on its corresponding resection level with primary TKA and the anticipated need for augments. Statistical analysis was performed using a Mann-Whitney-Wilcoxon test. Significance was set at P < .05.
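
For readers who wish to reproduce this type of analysis, the following is a minimal sketch in Python. The per-size counts are illustrative reconstructions consistent with the percentages reported in the Results; they are not the actual manufacturer data.

```python
# A minimal sketch of the distribution comparison described above. Counts
# are illustrative reconstructions consistent with the reported percentages.
import numpy as np
from scipy.stats import mannwhitneyu

# Insert thickness (mm) per case, expanded from hypothetical counts.
robotic = np.repeat([8, 9, 10, 11], [5000, 2882, 515, 24])        # n = 8421
conventional = np.repeat([8, 9, 10, 11, 12, 13, 14],
                         [15000, 8651, 2743, 900, 500, 150, 45])  # n = 27,989

# Two-sided Mann-Whitney-Wilcoxon test on the two ordinal distributions.
stat, p = mannwhitneyu(robotic, conventional, alternative="two-sided")
print(f"U = {stat:.0f}, P = {p:.3g}")

# Classify cases against the study's cutoff: >=10 mm is "aggressive".
print(f"aggressive: robotic {np.mean(robotic >= 10):.1%}, "
      f"conventional {np.mean(conventional >= 10):.1%}")
```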

Results

Tibial resection thickness was most commonly conservative, with 8-mm and 9-mm polyethylene inserts utilized in the majority of both robotic-assisted and conventional UKA cases. However, a significantly greater proportion of 8-mm and 9-mm polyethylene inserts was used in the robotic group (93.6%) than in the conventional group (84.5%) (P < .0001; Figure). Aggressive tibial resection, requiring tibial inserts ≥10 mm, was performed in 6.4% of robotic-assisted cases and 15.5% of conventional cases.

Figure.
Only 0.29% of robotic-assisted cases required tibial inserts >10 mm, whereas 5.7% of patients undergoing conventional UKA had tibial inserts >10 mm. In this analysis, the maximum tibial component thickness was 11 mm in robotic-assisted UKA and 14 mm in conventional UKA. The distribution of tibial resection thicknesses was significantly broader in conventional UKA than in robotic-assisted UKA, which more reproducibly achieved accurate, precise, and conservative resection. No significant differences were noted in the percentages of polyethylene sizes between Blue Belt Technologies and MAKO cases.
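
As a supplementary plausibility check (not an analysis performed in this study, which used the Mann-Whitney-Wilcoxon test), the reported aggressive-resection proportions can be compared with a chi-square test on counts rounded from the group sizes given in the Methods:

```python
# Supplementary check only: chi-square test on the reported proportions of
# aggressive (>=10 mm) resections; counts rounded from group sizes above.
import numpy as np
from scipy.stats import chi2_contingency

n_robotic, n_conventional = 8421, 27989
table = np.array([
    [round(0.064 * n_robotic), round(0.936 * n_robotic)],            # robotic
    [round(0.155 * n_conventional), round(0.845 * n_conventional)],  # conventional
])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, P = {p:.2e}")  # consistent with the reported P < .0001
```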

Discussion

Robotic assistance enhances the accuracy of bone preparation, implant component alignment, and soft tissue balance in UKA.10-15 It has yet to be determined whether this improved accuracy translates to improved clinical performance or longevity of the UKA implant. However, we demonstrate that the enhanced accuracy of robotic technology results in more conservative tibial resection than conventional techniques, which the literature suggests may be beneficial if conversion to TKA becomes necessary.

The findings of this study have important implications for patients undergoing conversion of UKA to TKA, potentially optimizing the ease of revision and clinical outcomes. The outcomes of UKA conversion to TKA are often considered inferior to those of primary TKA, compromised by bone loss, need for augmentation, and challenges of restoring the joint line and rotation.9,16-22 Barrett and Scott18 reported that only 66% of patients had good or excellent results at an average of 4.6 years of follow-up after UKA conversion to TKA. Over 50% required stemmed implants and bone graft or bone cement augmentation to address osseous insufficiency. The authors suggested that the primary determinant of the complexity of the conversion to TKA was the surgical technique used in the index procedure. They concluded that UKA conversion to TKA can be as successful as a primary TKA, and that primary TKA implants can be used without bone augmentation or stems during the revision procedure, if minimal tibial bone is resected at the time of the index UKA.18 Schwarzkopf and colleagues9 supported this conclusion when they found that aggressive tibial resection during UKA resulted in the need for bone graft, stem, wedge, or augment in 70% of cases when converted to TKA. Similarly, Khan and colleagues23 found that 26% of patients required bone grafting and 26% required some form of augmentation, and Springer and colleagues3 reported that 68% required a graft, augment, or stem. Using data from the New Zealand Joint Registry, Pearse and colleagues5 reported that revision TKA components were necessary in 28% of patients and concluded that converting a UKA to TKA gives a less reliable result than primary TKA, with functional results no better than those of revision of a TKA.

Conservative tibial resection during UKA minimizes the complexity and concerns of bone loss upon conversion to TKA. Schwarzkopf and colleagues9 found that 96.6% of patients with conservative tibial resection received a primary TKA implant, without augments or stems. Furthermore, patients with a primary TKA implant showed improved tibial survivorship, with revision as an end point, compared with patients who received a TKA implant that required stems and augments or bone graft for support.9 Also emphasizing the importance of minimal tibial resection, O’Donnell and colleagues8 compared a cohort of patients undergoing conversion of a minimal resection resurfacing onlay-type UKA to TKA with a cohort of patients undergoing primary TKA. They found that 40% of patients required bone grafting for contained defects, 3.6% required metal augments, and 1.8% required stems.8 There was no significant difference between the groups in terms of range of motion, functional outcome, or radiologic outcomes. The authors concluded that revision of minimal resection resurfacing implants to TKA is associated with results similar to those of primary TKA and is superior to revision of UKA with greater bone loss. Prior studies have shown that one of the advantages of robotic-assisted UKA is the accuracy and precision of bone resection. The present study supports this premise by showing, with tibial component thickness as a surrogate for extent of bone resection, that tibial resection is significantly more conservative with robotic-assisted techniques. While our study did not address implant durability or the impact of conservative resection on conversion to TKA, the studies referenced above suggest that the conservative nature of bone preparation would have a relevant impact on revision of the implant to TKA.

Our study is a retrospective case series that reports tibial component thickness as a surrogate for volume of tibial resection during UKA. While the implication is that more conservative tibial resection may optimize durability and ease of conversion to TKA, future studies comparing robotic-assisted and conventional UKA at the time of conversion to TKA will be needed to ascertain whether the more conservative resections of robotic-assisted UKA in fact lead to revisions comparable with primary TKA in terms of bone loss at the time of revision, components utilized, need for bone graft, augments, or stems, and clinical outcomes. Given the method of data collection in this study, we could not control for clinical deformity, selection bias, surgeon experience, or medial vs lateral knee compartments. These potential confounders represent weaknesses of this study.

In conclusion, conversion of UKA to TKA may be associated with significant osseous insufficiency, which may compromise patient outcomes in comparison with primary TKA. Studies have shown that UKA conversion to TKA is comparable to primary TKA when minimal tibial resection is performed during the UKA, and that the need for augmentation, grafting, or stems increases with more aggressive tibial resection. This study has shown that when robotic assistance is utilized, tibial resection is more precise, less variable, and more conservative than with conventional techniques.

Am J Orthop. 2016;45(7):E465-E468. Copyright Frontline Medical Communications Inc. 2016. All rights reserved.

References

1. Patil S, Colwell CW Jr, Ezzet KA, D’Lima DD. Can normal knee kinematics be restored with unicompartmental knee replacement? J Bone Joint Surg Am. 2005;87(2):332-338.

2. Johnson S, Jones P, Newman JH. The survivorship and results of total knee replacements converted from unicompartmental knee replacements. Knee. 2007;14(2):154-157.

3. Springer BD, Scott RD, Thornhill TS. Conversion of failed unicompartmental knee arthroplasty to TKA. Clin Orthop Relat Res. 2006;446:214-220.

4. Järvenpää J, Kettunen J, Miettinen H, Kröger H. The clinical outcome of revision knee replacement after unicompartmental knee arthroplasty versus primary total knee arthroplasty: 8-17 years follow-up study of 49 patients. Int Orthop. 2010;34(5):649-653.

5. Pearse AJ, Hooper GJ, Rothwell AG, Frampton C. Osteotomy and unicompartmental knee arthroplasty converted to total knee arthroplasty: data from the New Zealand Joint Registry. J Arthroplasty. 2012;27(10):1827-1831.

6. Rancourt MF, Kemp KA, Plamondon SM, Kim PR, Dervin GF. Unicompartmental knee arthroplasties revised to total knee arthroplasties compared with primary total knee arthroplasties. J Arthroplasty. 2012;27(8 Suppl):106-110.

7. Sierra RJ, Kassel CA, Wetters NG, Berend KR, Della Valle CJ, Lombardi AV. Revision of unicompartmental arthroplasty to total knee arthroplasty: not always a slam dunk! J Arthroplasty. 2013;28(8 Suppl):128-132.

8. O’Donnell TM, Abouazza O, Neil MJ. Revision of minimal resection resurfacing unicondylar knee arthroplasty to total knee arthroplasty: results compared with primary total knee arthroplasty. J Arthroplasty. 2013;28(1):33-39.

9. Schwarzkopf R, Mikhael B, Li L, Josephs L, Scott RD. Effect of initial tibial resection thickness on outcomes of revision UKA. Orthopedics. 2013;36(4):e409-e414.

10. Conditt MA, Roche MW. Minimally invasive robotic-arm-guided unicompartmental knee arthroplasty. J Bone Joint Surg Am. 2009;91 Suppl 1:63-68.

11. Dunbar NJ, Roche MW, Park BH, Branch SH, Conditt MA, Banks SA. Accuracy of dynamic tactile-guided unicompartmental knee arthroplasty. J Arthroplasty. 2012;27(5):803-808.e1.

12. Karia M, Masjedi M, Andrews B, Jaffry Z, Cobb J. Robotic assistance enables inexperienced surgeons to perform unicompartmental knee arthroplasties on dry bone models with accuracy superior to conventional methods. Adv Orthop. 2013;2013:481039.

13. Lonner JH, John TK, Conditt MA. Robotic arm-assisted UKA improves tibial component alignment: a pilot study. Clin Orthop Relat Res. 2010;468(1):141-146.

14. Lonner JH, Smith JR, Picard F, Hamlin B, Rowe PJ, Riches PE. High degree of accuracy of a novel image-free handheld robot for unicondylar knee arthroplasty in a cadaveric study. Clin Orthop Relat Res. 2015;473(1):206-212.

15. Smith JR, Picard F, Rowe PJ, Deakin A, Riches PE. The accuracy of a robotically-controlled freehand sculpting tool for unicondylar knee arthroplasty. Bone Joint J. 2013;95-B(suppl 28):68.

16. Chakrabarty G, Newman JH, Ackroyd CE. Revision of unicompartmental arthroplasty of the knee. Clinical and technical considerations. J Arthroplasty. 1998;13(2):191-196.

17. Levine WN, Ozuna RM, Scott RD, Thornhill TS. Conversion of failed modern unicompartmental arthroplasty to total knee arthroplasty. J Arthroplasty. 1996;11(7):797-801.

18. Barrett WP, Scott RD. Revision of failed unicondylar unicompartmental knee arthroplasty. J Bone Joint Surg Am. 1987;69(9):1328-1335.

19. Padgett DE, Stern SH, Insall JN. Revision total knee arthroplasty for failed unicompartmental replacement. J Bone Joint Surg Am. 1991;73(2):186-190.

20. Aleto TJ, Berend ME, Ritter MA, Faris PM, Meneghini RM. Early failure of unicompartmental knee arthroplasty leading to revision. J Arthroplasty. 2008;23(2):159-163.

21. McAuley JP, Engh GA, Ammeen DJ. Revision of failed unicompartmental knee arthroplasty. Clin Orthop Relat Res. 2001;(392):279-282.

22. Böhm I, Landsiedl F. Revision surgery after failed unicompartmental knee arthroplasty: a study of 35 cases. J Arthroplasty. 2000;15(8):982-989.

23. Khan Z, Nawaz SZ, Kahane S, Ester C, Chatterji U. Conversion of unicompartmental knee arthroplasty to total knee arthroplasty: the challenges and need for augments. Acta Orthop Belg. 2013;79(6):699-705.

Author and Disclosure Information

Authors’ Disclosure Statement: Dr. Lonner reports that he is a consultant to, and receives royalties from, Zimmer Biomet and Smith & Nephew. Dr. Ponzio reports no actual or potential conflict of interest in relation to this article.


Perceived Leg-Length Discrepancy After Primary Total Knee Arthroplasty: Does Knee Alignment Play a Role?


Leg-length discrepancy (LLD) is common in the general population1 and particularly in patients with degenerative joint diseases of the hip and knee.2 Common complications of LLD include femoral, sciatic, and peroneal nerve palsy; lower back pain; gait abnormalities3; and general dissatisfaction. LLD is a concern for orthopedic surgeons who perform total knee arthroplasty (TKA) because limb lengthening is common after this procedure.4,5 Surgeons are aware of the limb lengthening that occurs during TKA,4,5 and studies have confirmed that LLD usually decreases after TKA.4,5

Despite surgeons’ best efforts, some patients still perceive LLD after surgery, though the incidence of perceived LLD in patients who have had TKA has not been well documented. Aside from actual, objectively measured LLD, there may be other factors that lead patients to perceive LLD. Study results have suggested that preoperative varus–valgus alignment of the knee joint may correlate with how much an operative leg is lengthened after TKA4,5; however, the outcome investigated was objective LLD measurements, not perceived LLD. Understanding the factors that may influence patients’ ability to perceive LLD would allow surgeons to preoperatively identify patients who are at higher risk for postoperative perceived LLD. This information, along with expected time to resolution of postoperative perceived LLD, would allow surgeons to educate their patients accordingly.

We conducted a study to determine the incidence of perceived LLD before and after primary TKA in patients with unilateral osteoarthritis and to determine the correlation between mechanical axis of the knee and perceived LLD before and after surgery. Given that surgery may correct mechanical axis misalignment, we investigated the correlation between this correction and its ability to change patients’ preoperative and postoperative perceived LLD. We hypothesized that a large correction of mechanical axis would lead patients to perceive LLD after surgery. The relationship of body mass index (BMI) and age to patients’ perceived LLD was also assessed. The incidence and time frame of resolution of postoperative perceived LLD were determined.

Methods

Approval for this study was received from the Institutional Review Board at our institution, Rush University Medical Center in Chicago, Illinois. Seventy-three patients undergoing primary TKA performed by 3 surgeons at 2 institutions between February 2010 and January 2013 were prospectively enrolled. Inclusion criteria were age 18 years to 90 years and primary TKA for unilateral osteoarthritis; exclusion criteria were allergy or intolerance to the study materials, operative treatment of the affected joint or its underlying etiology within the prior month, previous surgeries (other than arthroscopy) on the affected joint, previous surgeries (on the unaffected lower extremity) that might influence preoperative and postoperative leg lengths, and any substance abuse or dependence within the past 6 months. Patients provided written informed consent for total knee arthroplasty.

All surgeries were performed by Dr. Levine, Dr. Della Valle, and Dr. Sporer using the medial parapatellar or midvastus approach with tourniquet. Similar standard postoperative rehabilitation protocols with early mobilization were used in all cases.

During clinical evaluation, patient demographic data were collected and LLD surveys administered. Patients were asked, before surgery and 3 to 6 weeks, 3 months, 6 months, and 1 year after surgery, if they perceived LLD. A patient who no longer perceived LLD after surgery was no longer followed for this study.

At the preoperative clinic visit and at the 6-week or 3-month postoperative visit, standing mechanical axis radiographs were reviewed by 2 of the authors (not the primary surgeons) using PACS (picture archiving and communication system) software. The mechanical axis of the operative leg was measured with ImageJ software as the angle formed by lines from the center of the femur and the middle of the ankle joint, with the vertex at the middle of the knee joint.
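
The following is a minimal sketch of this angle measurement, assuming 2-dimensional landmark coordinates (eg, pixel positions) picked from a standing radiograph; the coordinates shown are hypothetical, not study measurements.

```python
# A minimal sketch of the vertex-angle measurement described above.
import numpy as np

def mechanical_axis_angle(femur_pt, knee_pt, ankle_pt):
    """Angle in degrees at the knee-center vertex between the rays toward
    the femoral and ankle landmarks (180 degrees = neutral alignment)."""
    u = np.asarray(femur_pt, dtype=float) - np.asarray(knee_pt, dtype=float)
    v = np.asarray(ankle_pt, dtype=float) - np.asarray(knee_pt, dtype=float)
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Nearly collinear hypothetical landmarks give an angle just under 180 degrees.
print(mechanical_axis_angle((512, 80), (530, 1400), (560, 2700)))
```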

We used a 2-tailed unpaired t test to determine the relationship of preoperative mechanical axis to perceived LLD (or lack thereof) before surgery. The data were also analyzed separately for varus and valgus deformities. We then determined the relationship of postoperative mechanical axis to perceived LLD (or lack thereof) after surgery. The McNemar test was used to determine the effect of surgery on patients’ LLD perceptions.
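
A minimal sketch of these two tests follows. The 2 × 2 McNemar table is reconstructed from counts reported in this article (18 preoperative and 7 postoperative perceivers, 1 patient in both groups, 71 patients in total); the mechanical axis values used for the t test are hypothetical.

```python
# A minimal sketch of the two analyses described above.
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.contingency_tables import mcnemar

# Unpaired 2-tailed t test: preoperative axis (degrees, hypothetical values)
# in patients who did vs did not perceive LLD.
axis_perceived = np.array([172.1, 174.5, 169.8, 176.0, 171.2])
axis_not_perceived = np.array([173.4, 175.1, 170.9, 177.2, 174.8, 172.6])
t_stat, p_t = ttest_ind(axis_perceived, axis_not_perceived)

# McNemar test on paired pre/post perceptions: rows = preop (yes/no),
# columns = postop (yes/no); cells reconstructed from the reported counts.
table = np.array([[1, 17],
                  [6, 47]])
p_mcnemar = mcnemar(table, exact=True).pvalue  # ~.035, matching the Results
print(f"t test P = {p_t:.3f}; McNemar P = {p_mcnemar:.3f}")
```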

To determine the relationship between preoperative-to-postoperative change in mechanical axis and change in LLD perceptions, we divided patients into 4 groups. Group 1 had both preoperative and postoperative perceived LLD, group 2 had no preoperative or postoperative perceived LLD, group 3 had preoperative perceived LLD but no postoperative perceived LLD, and group 4 had postoperative perceived LLD but no preoperative perceived LLD. The absolute value of the difference between the preoperative and postoperative mechanical axes, expressed relative to 180°, was then determined to account for changes from varus to valgus deformity (and vice versa) between the preoperative and postoperative measurements. Analysis of variance (ANOVA) was used to detect differences between groups. This analysis was then stratified based on BMI and age.
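
A minimal sketch of this group comparison follows. The sign convention (varus deviations positive, valgus negative) and all axis values are assumptions for illustration; they are not the study data.

```python
# A minimal sketch of the 4-group ANOVA described above, under an assumed
# sign convention: varus deviations from 180 degrees positive, valgus negative.
import numpy as np
from scipy.stats import f_oneway

def signed_deviation(axis_deg, deformity):
    """Deviation of a measured axis (<= 180 degrees) from neutral, signed so
    that varus and valgus deformities fall on opposite sides of zero."""
    dev = 180.0 - axis_deg
    return dev if deformity == "varus" else -dev

def correction(pre_deg, pre_type, post_deg, post_type):
    """Absolute change in signed deviation, so a varus-to-valgus change
    counts as the full arc traversed rather than cancelling out."""
    return abs(signed_deviation(pre_deg, pre_type)
               - signed_deviation(post_deg, post_type))

# Hypothetical (preop, postop) measurements for the 4 perception groups.
groups = [
    [correction(172, "varus", 179, "varus"),
     correction(176, "valgus", 179, "valgus")],  # group 1: LLD pre and post
    [correction(175, "varus", 180, "varus"),
     correction(174, "valgus", 178, "valgus")],  # group 2: LLD neither
    [correction(171, "varus", 179, "varus"),
     correction(170, "varus", 177, "valgus")],   # group 3: LLD pre only
    [correction(173, "varus", 180, "varus"),
     correction(175, "valgus", 179, "varus")],   # group 4: LLD post only
]
f_stat, p = f_oneway(*groups)
print(f"F = {f_stat:.2f}, P = {p:.3f}")
```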


Results

Of the 73 enrolled patients, 2 were excluded from results analysis because of inadequate data—one did not complete the postoperative LLD survey, and the other did not have postoperative standing mechanical axis radiographs—leaving 71 patients (27 men, 44 women) with adequate data. Mean (SD) age of all patients was 65 (8.4) years (range, 47-89 years). Mean (SD) BMI was 35.1 (9.9) (range, 20.2-74.8).

Of the 71 patients with adequate data, 18 had preoperative perceived LLD and 53 did not; in addition, 7 had postoperative perceived LLD and 64 did not. All 7 patients with postoperative perceived LLD noted resolution of LLD, at a mean of 8.5 weeks (range, 3 weeks-3 months). The decrease from 18 patients with preoperative perceived LLD to 7 with postoperative perceived LLD was statistically significant (P = .035, McNemar test).

Table 1 lists the mean preoperative mechanical axis measurements for patients with and without preoperative perceived LLD.

There was no significant difference between the 2 groups (P = .27). There was also no significant difference in preoperative mechanical axis when cases were separated and analyzed as varus and valgus deformities (varus P = .53, valgus P = .20).

Table 2 lists the mean postoperative mechanical axis measurements for patients with and without postoperative perceived LLD.
There was no significant difference between the 2 groups (P = .42). There was also no significant difference in postoperative mechanical axis for separate varus (P = .29) and valgus (P = .52) deformities.

Table 3 lists the mean absolute values of mechanical axis correction (preoperative to postoperative) for the 4 patient groups described in the Methods section.
ANOVA revealed no statistically significant difference in these values among the groups (P = .9229). There were also no statistically significant differences when the groups were stratified by age (40-59.9 years, P = .5973; 60-69.9 years, P = .6263; 70 years or older, P = .3779) or when ANOVA was used to compare the groups’ mean ages (P = .3183). In addition, the 4 groups did not differ significantly when stratified by BMI: obese (BMI >30; P = .3891) and nonobese (BMI <29.9; P = .9862).

Discussion

In this study, 18 patients (25%) had preoperative perceived LLD, indicating that perceived LLD is common in patients who undergo TKA for unilateral osteoarthritis. Surgeons should give their patients a preoperative survey on perceived LLD, as survey responses may inform and influence surgical decisions and strategies.

Of the 18 patients with preoperative perceived LLD, only 1 had postoperative perceived LLD. That perceived LLD decreased after surgery makes sense given the widely accepted notion that actual LLD is common before primary TKA but in most cases is corrected during surgery.4,5 As LLD correction during surgery is so successful, surgeons should tell their patients with preoperative perceived LLD that in most cases it will be corrected by TKA.

Although the incidence of perceived LLD decreased after TKA (as mentioned earlier), the decrease seemed to be restricted mostly to patients with preoperative perceived LLD, and the underlying LLD was most probably corrected by the surgery. However, surgery introduced perceived LLD in 6 cases, supporting the notion that it is crucial to understand which patients are at higher risk for postoperative perceived LLD and what if any time frame can be expected for resolution in these cases. In our study, all cases of perceived LLD had resolved by a mean follow-up of 8.5 weeks (range, 3 weeks-3 months). This phenomenon of resolution may be attributed to some of the physical, objective LLD corrections that naturally occur throughout the postoperative course,4 though psychological factors may also be involved. Our study results suggest patients should be counseled that, though about 10% of patients perceive LLD after primary TKA, the vast majority of perceived LLD cases resolve within 3 months.

One study goal was to determine the relationship between the mechanical axis of the knee and perceived LLD both before and after surgery. There were no significant relationships. This was also true when cases of varus and valgus deformity were analyzed separately.

Another study goal was to determine whether a surgical change in the mechanical alignment of the knee would influence preoperative-to-postoperative LLD perceptions. In our analysis, patients were divided into 4 groups based on their preoperative and postoperative LLD perceptions (see Methods section). ANOVA revealed no significant differences in absolute values of mechanical axis correction among the 4 groups. Likewise, there were no correlations of BMI or age with mechanical axis correction among the groups, suggesting that LLD perception is unrelated to any of these variables. Ideally, if a relationship between a threshold knee alignment value and perceived LLD existed, surgeons would be able to counsel patients at higher risk for perceived LLD about how their knee alignment may contribute to their perception. Unfortunately, our results did not show any statistically significant relationships in this regard.

The problem of LLD in patients undergoing TKA is not new, and much research is needed to determine the relationship between perceived and actual discrepancies and why they occur. Our study results confirmed that TKA corrects most cases of preoperative perceived LLD but introduces perceived LLD in other cases. Whether preoperative or postoperative LLD is merely perceived or is in fact an actual discrepancy remains to be seen.

One limitation of this study was its lack of leg-length measurements. Although we studied knee alignment specifically, it would have been useful to compare perceived LLD with measured leg lengths, either clinically or radiographically, especially since leg lengths obviously play a role in any perceived LLD. We used mechanical alignment as a surrogate for actual LLD because we hypothesized that alignment may contribute to patients’ perceived discrepancies.

Another limitation was the relatively small sample. Only 24 cases of perceived LLD were analyzed. Given our low rates of perceived LLD (25% before surgery, 10% after surgery), it is difficult to study a large enough TKA group to establish a statistically significant number of cases. Nevertheless, investigators may use larger groups to establish more meaningful relationships.

A third limitation was that alignment was measured on the operative side but not the contralateral side. Because we focused on perceived discrepancy, contralateral knee alignment may also play an important role. Our study involved patients with unilateral osteoarthritis, so it would be reasonable to assume the nonoperative knee was nearly neutral in alignment in most cases. However, given that varus/valgus misalignment is a known risk factor for osteoarthritis,6 many of our patients with unilateral disease may very well have had preexisting misalignment of both knees. The undetermined alignment of the nonoperative side may be a confounding variable in the relationship between operative knee alignment and perceived LLD.

Fourth, not all patients were surveyed 3 weeks after surgery. Some were first surveyed at 6 weeks, so cases of transient postoperative perceived LLD may have resolved before that point. Our reported incidence of postoperative perceived LLD could therefore have missed some cases, and our mean 8.5-week time to resolution may not account for them.


Am J Orthop. 2016;45(7):E429-E433. Copyright Frontline Medical Communications Inc. 2016. All rights reserved.

References

1. O’Brien S, Kernohan G, Fitzpatrick C, Hill J, Beverland D. Perception of imposed leg length inequality in normal subjects. Hip Int. 2010;20(4):505-511.

2. Noll DR. Leg length discrepancy and osteoarthritic knee pain in the elderly: an observational study. J Am Osteopath Assoc. 2013;113(9):670-678.

3. Clark CR, Huddleston HD, Schoch EP 3rd, Thomas BJ. Leg-length discrepancy after total hip arthroplasty. J Am Acad Orthop Surg. 2006;14(1):38-45.

4. Chang MJ, Kang YG, Chang CB, Seong SC, Kim TK. The patterns of limb length, height, weight and body mass index changes after total knee arthroplasty. J Arthroplasty. 2013;28(10):1856-1861.

5. Lang JE, Scott RD, Lonner JH, Bono JV, Hunter DJ, Li L. Magnitude of limb lengthening after primary total knee arthroplasty. J Arthroplasty. 2012;27(3):341-346.

6. Sharma L, Song J, Dunlop D, et al. Varus and valgus alignment and incident and progressive knee osteoarthritis. Ann Rheum Dis. 2010;69(11):1940-1945.

Authors’ Disclosure Statement: The authors report no actual or potential conflict of interest in relation to this article.


Posttraumatic Stress Disorder, Depression, and Other Comorbidities: Clinical and Systems Approaches to Diagnostic Uncertainties

Article Type
Changed
Tue, 01/30/2018 - 15:30
Overlap in the clinical presentation and significant rates of comorbidity complicate effective management of depression and PTSD, each presenting major health burdens for veterans and active-duty service members.

Over the past decade, nationwide attention has focused on mental health conditions associated with military service. Recent legal mandates have led to changes in the DoD, VA, and HHS health systems aimed at increasing access to care, decreasing barriers to care, and expanding research on mental health conditions commonly seen in service members and veterans. On August 31, 2012, President Barack Obama signed the Improving Access to Mental Health Services for Veterans, Service Members, and Military Families executive order, establishing an interagency task force from the VA, DoD, and HHS.1 The task force was charged with addressing quality of care and provider training in the management of commonly comorbid conditions, including (among other conditions) posttraumatic stress disorder (PTSD) and depression.

Depression and PTSD present major health burdens in both military and veteran cohorts. Overlap in clinical presentation and significant rates of comorbidity complicate effective management of these conditions. This article offers a brief review of the diagnostic and epidemiologic complexities associated with PTSD and depression, a summary of research relevant to these issues, and a description of recent system-level developments within the Military Health System (MHS) designed to improve care through better approaches in identification, management, and research of these conditions.

Diagnostic Uncertainty

Both PTSD and major depressive disorder (MDD) have been recognized as mental health disorders since the American Psychiatric Association’s Diagnostic and Statistical Manual (DSM) discarded its previous etiologically based approach to diagnostic classification in 1980 in favor of a system in which diagnosis is based on observable symptoms.2,3 With the release of DSM-5 in 2013, the diagnostic criteria for PTSD underwent a substantial transformation.4 Previously, PTSD was described as an anxiety disorder, and some of its manifestations overlapped descriptively (and in many cases, etiologically) with anxiety and depressive illnesses.5

Clinicians also often described shorter-lived, developmental, formes frustes, or otherwise subsyndromal manifestations of trauma associated with PTSD. In DSM-5, PTSD was removed from the anxiety disorders section and placed in a new category of disorders labeled Trauma and Stressor-Related Disorders. This new category also included reactive attachment disorder (in children), acute stress disorder, adjustment disorders, and unspecified or other trauma and stressor-related disorders. Other major changes to the PTSD diagnostic criteria included modification to the DSM-IV-TR (text revision) trauma definition (making the construct more specific), removal of the requirement for explicit subjective emotional reaction to a traumatic event, and greater emphasis on negative cognitions and mood. Debate surrounds the updated symptom criteria, with critics questioning whether there is any improvement in the clinical utility of the diagnosis, especially in light of the substantial policy and practice implications the change engenders.6

Recently, Hoge and colleagues examined the psychometric implications of the diagnostic changes (between DSM-IV-TR and DSM-5) in the PTSD definition.6 The authors found that although the 2 definitions showed nearly identical association with other psychiatric disorders (including depression) and functional impairment, 30% of soldiers who met DSM-IV-TR criteria for PTSD failed to meet criteria in DSM-5, and another 20% met only DSM-5 criteria. Recognizing discordance in PTSD and associated diagnoses, the U.S. Army Medical Command mandated that its clinicians familiarize themselves with the controversies surrounding the discordant diagnoses and coding of subthreshold PTSD.7

Adding to the problem of diagnostic uncertainty, the clinical presentation of MDD overlaps significantly with that of PTSD. Specifically, symptoms of guilt, diminished interests, problems with concentration, and sleep disturbances are descriptive of both disorders. Furthermore, the criteria sets for several subthreshold forms of MDD show considerable overlap with PTSD symptoms. For example, diagnostic criteria for disruptive mood dysregulation disorder include behavioral outbursts and irritability, and diagnostic criteria for dysthymia include sleep disturbances and concentration problems.

Adjustment disorders are categorized as trauma and stressor-related disorders in DSM-5 and hold many emotional and behavioral symptoms in common with PTSD. The “acute” and “chronic” adjustment disorder specifiers contribute to problems in diagnostic certainty for PTSD. In general, issues pertaining to diagnostic uncertainty and overlap likely reflect the limits of using a diagnostic classification system that relies exclusively on observational and subjective reports of psychological symptoms.8,9

When a veteran or active-duty patient presents for care with these shared symptom sets, clinicians frequently offer initial diagnoses based on perceived etiologic factors derived from the patient’s descriptions of stressors encountered during military service. This tendency likely contributes to considerable inconsistency and potential inaccuracy in diagnosis; much of the variance can be attributed to clinicians’ degree of familiarity with military exposures, their perceptions of what constitutes trauma, and outside pressure to assign or avoid specific diagnoses.

Importantly, the phenomenologic differences between PTSD and depressive disorders increase the likelihood of poorly aligned and inconsistent treatment plans, and this lack of clarity may, in turn, compromise effective patient care. To address some of these diagnostic challenges, the VA and DoD incorporate military culture training into clinician curricula to increase provider familiarity with the common stressors and challenges of military life, mandate the use of validated measures to support diagnostic decision making, and regularly review policies that influence diagnostic practices.


Epidemiology

The prevalence of PTSD is increasing in the military, possibly stemming from the demands placed on service members engaged in years-long wars. Despite increased attention to this phenomenon, research has demonstrated that the majority of service members who deploy do not develop PTSD or significant trauma-related functional impairment.10 Furthermore, many cases of PTSD diagnosed in the MHS stem from traumatic experiences other than combat exposure, including childhood abuse and neglect, sexual and other assaults, accidents and health care exposures, domestic abuse, and bullying. Depression arguably has received less attention despite comparable prevalence rates in military populations, the high co-occurrence of PTSD and depression, and depression’s association with greater odds of mortality, including death by suicide, in military service members.11

Estimates of the prevalence of PTSD from the U.S. Army suggest that it exists in 3% to 6% of military members who have not deployed and in 6% to 25% of service members with combat deployment histories; the frequency and intensity of combat are strong predictors of risk.7 A recent epidemiologic study using inpatient and outpatient encounter records showed that the prevalence of PTSD in the active military component was 2.0% in the middle of calendar year (CY) 2010, a two-thirds increase from 1.2% in CY 2007.12 The incidence of PTSD diagnoses likewise increased by one-fifth, from 0.81% to 0.97%, over the same period.
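As a quick check of the relative changes described here:

\[
\frac{2.0 - 1.2}{1.2} \approx 0.67 \approx \tfrac{2}{3}, \qquad \frac{0.97 - 0.81}{0.81} \approx 0.20 = \tfrac{1}{5}.
\]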

Epidemiologic studies and prevalence/incidence rates derived from administrative data rely on strict case definitions. Consequently, such administrative investigations include data only from service members engaged in or identified by the medical system. Although these rates describe a lower limit for diagnostic prevalence, they serve as a good starting point for ascertaining trends. Keeping in mind the limitations of administrative epidemiology, the MHS has witnessed a steady upward trend in comorbid cases of PTSD and depression since 2010. On average, between 2010 and 2015, patients diagnosed with PTSD were twice as likely to have a comorbid depression spectrum disorder diagnosis (42.4%) as patients with a depression spectrum disorder were to have a comorbid PTSD diagnosis (20.8%). Period prevalence rates for PTSD, depressive spectrum disorders, and comorbid disorders are described in Tables 1-3.
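The asymmetry between these two conditional rates follows directly from the conditions’ base rates: by the definition of conditional probability,

\[
\frac{P(\text{depression} \mid \text{PTSD})}{P(\text{PTSD} \mid \text{depression})} = \frac{P(\text{depression})}{P(\text{PTSD})} \approx \frac{42.4\%}{20.8\%} \approx 2,
\]

implying that, over this period, depression spectrum diagnoses were roughly twice as prevalent in the MHS as PTSD diagnoses.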

PTSD and Depression Treatment

Despite the high rates of PTSD and MDD comorbidity, few treatments have been developed for and tested on exclusively comorbid samples of patients.13 However, psychopharmacologic agents targeting depression have been applied to the treatment of PTSD, and PTSD psychotherapy trials typically include depression response as a secondary outcome. The generalizability of findings to a truly comorbid population may be limited by study sampling frames and the unique characteristics of patients with comorbid PTSD and depression.14-16 Several psychopharmacologic treatments for depression have been evaluated as frontline treatments for PTSD; the 3 pharmacologic treatments that demonstrate efficacy in treating PTSD are fluoxetine, paroxetine, and venlafaxine.17

Although these pharmacologic agents represent good candidate treatments for comorbid patients, the effect sizes of pharmacologic treatments are generally smaller than those of psychotherapeutic treatments for PTSD.17,18 This observation, however, is based on indirect comparisons, and a recent systematic review concluded that the evidence was insufficient to determine the comparative effectiveness of psychotherapy and pharmacotherapy for PTSD.19 Evidence indicates that trauma-focused cognitive behavioral therapies consistently demonstrate efficacy and effectiveness in treating PTSD.19,20 These treatments also have been shown to significantly reduce depressive symptoms among PTSD samples.21

Based on strong bodies of evidence, these pharmacologic and psychological treatments have received the highest level of recommendation in the VA and DoD.22,23 Accordingly, both agencies have invested considerable resources in large-scale efforts to improve patient access to these particular treatments. Despite these impressive implementation efforts, however, the limitations of relying exclusively on these treatments as frontline approaches within large health care systems have become evident.24-26

Penetration of Therapies

Penetration of these evidence-based treatments (EBTs) within the DoD and VHA remains limited. For instance, one study showed that VA clinicians in mental health specialty care clinics may provide only about 4 hours of EBT per week.27

Other reports suggest that only about 60% of treatment-seeking patients in PTSD clinics receive any type of evidence-based therapy and that within-session care quality is questionable based on a systematic review of chart notes.28,29 Attrition in trauma-focused therapy is a recognized limitation, with 1 out of 3 treatment-seeking patients not completing a full dose of evidence-based treatment.30-33 Large-scale analyses of VHA and DoD utilization data suggest that the majority of PTSD patients do not receive enough sessions to constitute an adequate dose of EBT, with the majority of dropouts occurring after just a few sessions.34-37

Hoge and colleagues found that < 50% of soldiers meeting criteria for PTSD received any mental health care within the prior 6 months, with one-quarter of those patients dropping out of care prematurely.38 Among a large cohort of soldiers engaged in care for the treatment of PTSD, only about 40% received a number of EBT sessions that could qualify as an adequate dose.38 Thus, although major advancements in the development and implementation of effective treatments for PTSD and depression have occurred, the penetration of these treatments is limited, and the majority of patients in need of treatment potentially receive inadequate care.39

System-level approaches that integrate behavioral health services into the primary care system have been proposed to address these care gaps for service members and veterans.40-42 Fundamentally, such approaches seek to improve the reach and effectiveness of care through large-scale screening efforts, a greater emphasis on the quality of patient care, and enhanced care continuity across episodes of treatment.


Primary Care

With the primary care setting considered the de facto mental health system, integrated approaches enhance the reach of care by incorporating uniform mental health screening and referral for patients coming through primary care. Specific evidence-based treatments can be integrated into this approach within a stepped-care framework that aims to match patients strategically to the right type of care and leverage specialty care resources as needed. Integrated care approaches for the treatment of PTSD and depression have been developed and evaluated inside and outside of the MHS. Findings indicate that integrated treatment approaches can improve care access, care continuity, patient satisfaction, quality of care, and, in several trials, PTSD and depression outcomes.43-47

Recently, an integrated care approach targeting U.S. Army soldiers who screened positive for PTSD or depression in primary care was evaluated in a multisite effectiveness trial.48 Patients randomized to the treatment approach experienced significant improvements in both PTSD and depression symptoms relative to patients in usual care.43 In addition, patients treated in this care model received significantly more mental health services; the patterns of care indicated that patients with comorbid PTSD and depression were more likely to be triaged to specialty care, whereas patients with a single diagnosis were more likely to be managed in primary care.49 This trial suggests that integrated care models can feasibly be implemented in the U.S. Army care system, yielding increased uptake of mental health care, more efficiently matched care based on patient comorbidities, and improved PTSD and depression outcomes.

Treatment Research

The MHS supports a large portfolio of research in PTSD and depression through DoD/VA research consortia (eg, the Congressionally Directed Medical Research Program, the Consortium to Alleviate PTSD, the Injury and Traumatic Stress Clinical Consortium). The U.S. Army Medical Research and Materiel Command (USAMRMC) executes and manages the portfolio of research, relying on a joint program committee of DoD and non-DoD experts to make funding recommendations based on identified research priorities, policy guidance, and knowledge translation needs.

Health systems research on PTSD and MDD in federal health care settings is expanding. For example, the RAND Corporation recently evaluated a candidate set of quality measures for PTSD and MDD, using an operational definition of an episode of care.37 This work is intended to inform efforts to measure and improve the quality of care for PTSD and depression across the enterprise.

The DoD Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury is simultaneously completing an inferential assessment of adjunctive mental health care services, many of them focused on PTSD and depression, throughout the health care enterprise. Along with the substantial resources devoted to research on PTSD and depression, the MHS is implementing strategies to improve the system of care for service members with mental health conditions.

Army Care System Innovations

The U.S. Army is engaged in a variety of strategies to improve the identification of patients with mental health conditions, increase access to mental health services, and enhance the quality of care that soldiers receive for PTSD and depression. To improve the coordination of mental health care, the U.S. Army Medical Command implemented a wide-scale innovative transformation of its mental health care system through the establishment of the Behavioral Health Service Line program management office.

This move eliminated separate departments of psychiatry, psychology, and social work in favor of integrated behavioral health departments that are now responsible for all mental health care delivered to soldiers, including inpatient, outpatient, partial hospitalization, residential, embedded garrison, and primary care settings. This transformation ensured coordination of care for soldiers, eliminating potential miscommunication with patients, commands, and other clinicians while clearly defining both process performance indicators (eg, productivity, scheduling, access to care, and patient satisfaction) and outcome measures.49 In conjunction with the development of its service line, the U.S. Army created the Behavioral Health Data Portal (BHDP), an electronic, standardized means of assessing clinical outcomes for common conditions.

To promote higher quality mental health care, the Office of the Surgeon General of the U.S. Army provided direct guidance on the treatment of PTSD and depression. U.S. Army policy mandates that providers treating mental health conditions adhere to the VA/DoD clinical practice guidelines (CPGs), that soldiers with PTSD and depression be offered treatments with the highest level of scientific support, and that outcome measures be routinely administered. In line with the CPGs, U.S. Army policy also recommends the use of both integrated and embedded mental health care approaches to address PTSD, depression, and other common physical and psychological health conditions.

To reduce stigma and improve mental health care access, the U.S. Army began implementing integrated care approaches in 2007 with its Re-Engineering Systems of Primary Care Treatment in the Military (RESPECT-Mil) program, an evidence-based collaborative care model.51-55 This approach included structured screening and diagnostic procedures, predictable follow-up schedules for patients, and coordinated divisions of responsibility among primary care providers, paraprofessionals, and behavioral health care providers. From 2007 to 2013, this collaborative care model was rolled out across 96 clinics worldwide and provided PTSD and depression screening at more than 1 million encounters per year.52,53

More recently, the U.S. Army led the DoD in integrating behavioral health personnel into patient-centered medical homes (PCMHs) in compliance with DoD Instruction 6490.15.56 This hybrid integrated care model combines collaborative care elements developed in the RESPECT-Mil program with elements of the U.S. Air Force Behavioral Health Optimization project, colocating behavioral health providers in primary care settings to provide brief consultative services.


MHS Care Enhancements

Many of the innovations deployed throughout the U.S. Army system of behavioral health care have driven changes across the MHS as a whole. The DoD and the VA have made substantive systemwide policy and practice changes to improve care for beneficiaries with PTSD, depression, and comorbid PTSD and depression. In particular, significant implementation efforts have addressed population screening strategies, outcome monitoring to support measurement-based care, increased access to effective care, and revision of the disability evaluation system.

To improve the identification and referral of soldiers with deployment-related mental health concerns, the DoD implemented a comprehensive program that screens service members prior to deployment, immediately on redeployment, and again 6 months after returning from deployment. Additionally, annual primary care-based screening requirements have been instituted as part of the DoD PCMH initiative. Both deployment-related and primary care-based screenings include instruments to detect symptoms of PTSD and depression and extend the reach of mental health screening to the entire MHS population.

Building on the success of BHDP, former Assistant Secretary of Defense for Health Affairs Jonathan Woodson mandated BHDP use across the MHS for all patients in DoD behavioral health clinics and the use of outcome measures for the treatment of PTSD, anxiety, depression, and alcohol use disorders.57 A DoD-wide requirement to use the PTSD Checklist and the Patient Health Questionnaire to monitor PTSD and depression symptoms at mental health intakes, and regularly at follow-up visits, is being implemented. The Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury, through its Practice-Based Implementation Network (underwritten by a Joint Incentive Fund managed between the DoD and VA), has worked across the MHS and the VA to facilitate the implementation, uptake, and adoption of this initiative.
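To make the measurement-based care workflow concrete, the following is a minimal sketch of total-score computation and threshold flagging for the two instruments named above. It assumes the PCL-5 (20 items scored 0-4) and PHQ-9 (9 items scored 0-3) versions and commonly cited provisional screening cutoffs (33 and 10, respectively); the official DoD/BHDP implementation, item content, and cutoffs are governed by policy and are not reproduced here.

from typing import Sequence

# Instrument parameters: (item_count, max_item_score, provisional_cutoff).
# The versions and cutoffs below are assumptions for illustration,
# not the official DoD/BHDP configuration.
INSTRUMENTS = {
    "PCL-5": (20, 4, 33),
    "PHQ-9": (9, 3, 10),
}

def score(instrument: str, responses: Sequence[int]) -> dict:
    """Sum item responses and flag whether the total meets the screening cutoff."""
    n_items, max_item, cutoff = INSTRUMENTS[instrument]
    if len(responses) != n_items:
        raise ValueError(f"{instrument} expects {n_items} responses")
    if any(r < 0 or r > max_item for r in responses):
        raise ValueError(f"{instrument} items must be scored 0-{max_item}")
    total = sum(responses)
    return {"instrument": instrument, "total": total, "positive_screen": total >= cutoff}

# Example: a follow-up visit at which PHQ-9 symptoms fall below the cutoff.
print(score("PHQ-9", [1, 1, 2, 1, 0, 1, 0, 1, 0]))  # total 7, negative screen

In a system such as BHDP, scores of this kind would be trended across intake and follow-up visits to support treatment decisions.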

The DoD established the Center for Deployment Psychology (CDP) in 2006 to promote clinician training in EBTs, with the aim of increasing service members’ access to effective psychological treatments. Since its inception, the CDP has provided EBT training to more than 40,000 behavioral health providers. Although the impact of these and other efforts on the quality of care that patients receive is unknown, a recent study documented widespread self-reported use of EBT components in U.S. Army clinics and found that providers formally trained in EBTs were more likely to deliver them.58

Finally, systemwide changes to the VA Schedule for Rating Disabilities (VASRD) and integration of the DoD and VA disability evaluation systems have led to shifts in diagnosis toward PTSD, which usually merits a minimum 50% disability rating. Legal mandates require military clinicians to evaluate previously deployed patients for PTSD and TBI prior to taking any actions associated with administrative separation. The practice of attributing PTSD symptoms to character pathology or personality disorders, even when these symptoms did not clearly manifest or worsen with military service, has likely been eliminated from practice in military and veteran populations.

Robust policy changes to limit personality disorder discharges started in fiscal year 2007, when there were 4,127 personality disorder separations across the DoD; within 5 years, that number had fallen to 300. Policy changes regarding separation not only seem to have affected discharges but also may have shaped diagnostic practice: the incidence rate of personality disorder diagnoses declined from 513 per 100,000 person-years in 2007 to 284 per 100,000 person-years by 2011.59 The VASRD recognizes chronic adjustment disorder as a disability, and the National Defense Authorization Act of 2008 mandated that the DoD follow disability guidelines promulgated by the VA.
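In relative terms, computed from the figures above:

\[
1 - \frac{300}{4127} \approx 93\% \text{ decline in separations}, \qquad 1 - \frac{284}{513} \approx 45\% \text{ decline in diagnosis incidence},
\]

consistent with the policy having curtailed discharges even more sharply than it shifted diagnostic practice.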

As stated in the memorandum Clinical Policy Guidance for Assessment and Treatment of Post-Traumatic Stress Disorders (August 24, 2012), the DoD recognizes chronic adjustment disorder as an unfitting condition that merits referral to its disability evaluation system.60 Acute adjustment disorders may still lead to administrative separations, as many service members manifest emotional symptoms stemming from a failure to adjust to the routine vicissitudes of military life. Finally, many court jurisdictions, including veterans courts, military courts, and commanders empowered to adjudicate nonjudicial infractions under the Uniform Code of Military Justice, have recognized PTSD as grounds for mitigating penalties associated with a wide array of criminal and administrative infractions.

Conclusion

In response to the increased mental health burden following a decade of war and the associated pressures stemming from federal mandates, the MHS has invested unprecedented resources in improving care for military service members. The U.S. Army has played a prominent role in this endeavor by investing in clinical research to accelerate discovery of the causes of, and cures for, these conditions, enacting policies that mandate best practices, and implementing evidence-based care approaches across the system of care. Despite this progress, however, understanding and effectively treating the most prevalent mental health conditions remain a challenge across the DoD and VHA health care systems. Many service members and veterans still do not receive timely, high-quality care for PTSD, depression, and other common comorbidities associated with military experience, and controversies in diagnostic classification abound.

In short, great strides have been made, yet a large distance remains. The vision of an effective, efficient, comprehensive care system for mental health conditions will continue to be pursued through collaborations across key agencies and the scientific community, implementation of health system approaches that support population care, and the sustained efforts of the dedicated clinicians, staff, and clinic leaders who deliver care to our service members and veterans.

References

1. The White House, Office of the Press Secretary. Executive Order 13625: Improving Access to Mental Health Services for Veterans, Service Members, and Military Families. https://www.whitehouse.gov/the-press-office/2012/08/31/executive-order-improving-access-mental-health-services-veterans-service. Published August 31, 2012. Accessed September 20, 2016.

2. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 3rd ed. Arlington, VA: American Psychiatric Association Press; 1980.

3. Mayes R, Horwitz AV. DSM-III and the revolution in the classification of mental illness. J Hist Behav Sci. 2005;41(3):249-267.

4. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 5th ed. Arlington, VA: American Psychiatric Association Press; 2013.

5. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 4th ed., text rev. Arlington, VA: American Psychiatric Association Press; 2000.

6. Hoge CW, Riviere LA, Wilk JE, Herrell RK, Weathers FW. The prevalence of post-traumatic stress disorder (PTSD) in US combat soldiers: a head-to-head comparison of DSM-5 versus DSM-IV-TR symptom criteria with the PTSD checklist. Lancet Psychiatry. 2014;1(4):269-277.

7. OTSG-MEDCOM. Policy Memo 14-094: Policy Guidance on the Assessment and Treatment of Posttraumatic Stress Disorder (PTSD). Published December 18, 2014.

8. Insel T, Cuthbert B, Garvey M, et al. Research domain criteria (RDoC): toward a new classification framework for research on mental disorders. Am J Psychiatry. 2010;167(7):748-751.

9. National Institute of Mental Health. NIMH strategic plan for research. http://www.nimh.nih.gov/about/strategic-planning-reports/index.shtml. Revised 2015. Accessed September 20, 2016.

10. Colston M, Hocter W. Forensic aspects of posttraumatic stress disorder. In: Ritchie EC, ed. Forensic and Ethical Issues in Military Behavioral Health. Washington, DC: U.S. Department of the Army; 2015:97-110.

11. Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury. National Center for Telehealth and Technology. Department of Defense suicide event report: calendar year 2013 annual report. http://t2health.dcoe.mil/programs/dodser. Published January 13, 2015. Accessed September 20, 2016.

12. Otto JL, O’Donnell FL, Ford SA, Ritschard HV. Selected mental health disorders among active component members, US Armed Forces, 2007-2010. MSMR. 2010;17(11):2-5.

13. Gutner CA, Galovski T, Bovin MJ, Schnurr PP. Emergence of transdiagnostic treatments for PTSD and posttraumatic distress. Curr Psychiatry Rep. 2016;18(10):95-101.

14. Campbell DG, Felker BL, Liu CF, et al. Prevalence of depression-PTSD comorbidity: implications for clinical practice guidelines and primary care-based interventions. J Gen Intern Med. 2007;22(6):711-718.

15. Chan D, Cheadle AD, Reiber G, Unützer J, Chaney EF. Health care utilization and its costs for depressed veterans with and without comorbid PTSD symptoms. Psychiatr Serv. 2009;60(12):1612-1617.

16. Maguen S, Cohen B, Cohen G, Madden E, Bertenthal D, Seal K. Gender differences in health service utilization among Iraq and Afghanistan veterans with posttraumatic stress disorder. J Womens Health (Larchmt). 2012;21(6):666-673.

17. Hoskins M, Pearce J, Bethell A, et al. Pharmacotherapy for post-traumatic stress disorder: systematic review and meta-analysis. Br J Psychiatry. 2015;206(2):93-100.

18. Puetz TW, Youngstedt SD, Herring MP. Effects of pharmacotherapy on combat-related PTSD, anxiety, and depression: a systematic review and meta-regression analysis. PLoS One. 2015;10(5):e0126529.

19. Jonas DE, Cusack K, Forneris CA, et al. Psychological and pharmacological treatments for adults with posttraumatic stress disorder (PTSD). Comparative effectiveness review no. 92. https://effectivehealthcare.ahrq.gov/ehc/products/347/1435/PTSD-adult-treatment-report-130403.pdf. Published April 3, 2013. Accessed September 20, 2016.

20. Haagen JFG, Smid GE, Knipscheer JW, Kleber RJ. The efficacy of recommended treatments for veterans with PTSD: a metaregression analysis. Clin Psychol Rev. 2015;40:184-194.

21. Tran K, Moulton K, Santesso N, Rabb D. Cognitive processing therapy for post-traumatic stress disorder: a systematic review and meta-analysis. https://www.cadth.ca/cognitive-processing-therapy-post-traumatic-stress-disorder-systematic-review-and-meta-analysis. Published August 11, 2015. Accessed September 20, 2016.

22. VA/DoD Management of Post-Traumatic Stress Working Group. VA/DoD Clinical Practice Guideline for Management of Post-Traumatic Stress. Version 2. http://www.healthquality.va.gov/guidelines/MH/ptsd/. Published October 2010. Accessed September 20, 2016.

23. VA/DoD Management of Major Depressive Disorder Working Group. VA/DoD Clinical Practice Guideline for the Management of Major Depressive Disorder. Version 3. http://www.healthquality.va.gov/guidelines/mh/mdd/index.asp. Published April 2016. Accessed September 20, 2016.

24. Zatzick DF, Galea S. An epidemiologic approach to the development of early trauma focused intervention. J Trauma Stress. 2007;20(4):401-412.

25. Zatzick DF, Koepsell T, Rivara FP. Using target population specification, effect size, and reach to estimate and compare the population impact of two PTSD preventive interventions. Psychiatry. 2009;72(4):346-359.

26. Glasgow RE, Nelson CC, Strycker LA, King DK. Using RE-AIM metrics to evaluate diabetes self-management support interventions. Am J Prev Med. 2006;30(1):67-73.

27. Finley EP, Garcia HA, Ketchum NS, et al. Utilization of evidence-based psychotherapies in Veterans Affairs posttraumatic stress disorder outpatient clinics. Psychol Serv. 2015;12(1):73-82.

28. Mott JM, Mondragon S, Hundt NE, Beason-Smith M, Grady RH, Teng EJ. Characteristics of U.S. veterans who begin and complete prolonged exposure and cognitive processing therapy for PTSD. J Trauma Stress. 2014;27(3):265-273.

29. Shiner B, D’Avolio LW, Nguyen TM, et al. Measuring use of evidence based psychotherapy for PTSD. Adm Policy Ment Health. 2013;40(4):311-318.

30. Schnurr PP, Friedman MJ, Engel CC, et al. Cognitive behavioral therapy for posttraumatic stress disorder in women: a randomized controlled trial. JAMA. 2007;297(8):820-830.

31. Tuerk PW, Yoder M, Grubaugh A, Myrick H, Hamner M, Acierno R. Prolonged exposure therapy for combat-related posttraumatic stress disorder: an examination of treatment effectiveness for veterans of the wars in Afghanistan and Iraq. J Anxiety Disord. 2011;25(3):397-403.

32. Chard KM, Schumm JA, Owens GP, Cottingham SM. A comparison of OEF and OIF veterans and Vietnam veterans receiving cognitive processing therapy. J Trauma Stress. 2010;23(1):25-32.

33. Monson CM, Schnurr PP, Resick PA, Friedman MJ, Young-Xu Y, Stevens SP. Cognitive processing therapy for veterans with military-related posttraumatic stress disorder. J Consult Clin Psychol. 2006;74(5):898-907.

34. Mott JM, Hundt NE, Sansgiry S, Mignogna J, Cully JA. Changes in psychotherapy utilization among veterans with depression, anxiety, and PTSD. Psychiatr Serv. 2014;65(1):106-112.

35. Seal KH, Maguen S, Cohen B, et al. VA mental health services utilization in Iraq and Afghanistan veterans in the first year of receiving new mental health diagnoses. J Trauma Stress. 2010;23(1):5-16.

36. Russell M, Silver SM. Training needs for the treatment of combat-related posttraumatic stress disorder: a survey of Department of Defense clinicians. Traumatology. 2007;13(3):4-10.

37. Schell TL, Marshall GN. Survey of individuals previously deployed for OEF/OIF. In: Tanielian T, Jaycox LH, eds. Invisible Wounds of War: Psychological and Cognitive Injuries, Their Consequences, and Services to Assist Recovery. Santa Monica, CA: RAND Corporation; 2008:87-118.

38. Hoge CW, Grossman SH, Auchterlonie JL, Riviere LA, Milliken CS, Wilk JE. PTSD treatment for soldiers after combat deployment: low utilization of mental health care and reasons for dropout. Psychiatr Serv. 2014;65(8):997-1004.

39. Committee on the Assessment of Ongoing Efforts in the Treatment of Posttraumatic Stress Disorder, Board on the Health of Select Populations, Institute of Medicine. Treatment for Posttraumatic Stress Disorder in Military and Veteran Populations: Final Assessment. Washington, DC: National Academies Press; 2014.

40. Schnurr PP. Extending collaborative care for posttraumatic mental health. JAMA Intern Med. 2016;176(7):956-957.

41. Hoge CW. Interventions for war-related posttraumatic stress disorder: meeting veterans where they are. JAMA. 2011;306(5):549-551.

42. Engel CC. Improving primary care for military personnel and veterans with posttraumatic stress disorder: the road ahead. Gen Hosp Psychiatry. 2005;27(3):158-160.

43. Engel CC, Jaycox LH, Freed MC, et al. Centrally assisted collaborative telecare management for posttraumatic stress disorder and depression in military primary care: a randomized controlled trial. JAMA Intern Med. 2016;176(7):948-956.

44. Fortney JC, Pyne JM, Kimbrell TA, et al. Telemedicine-based collaborative care for posttraumatic stress disorder: a randomized clinical trial. JAMA Psychiatry. 2015;72(1):58-67.

45. Schnurr PP, Friedman MJ, Oxman TE, et al. RESPECT-PTSD: re-engineering systems for the primary care treatment of PTSD, a randomized controlled trial. J Gen Intern Med. 2013;28(1):32-40.

46. Zatzick D, Roy-Byrne P, Russo J, et al. A randomized effectiveness trial of stepped collaborative care for acutely injured trauma survivors. Arch Gen Psychiatry. 2004;61(5):498-506.

47. Zatzick D, O’Connor SS, Russo J, et al. Technology-enhanced stepped collaborative care targeting posttraumatic stress disorder and comorbidity after injury: a randomized controlled trial. J Trauma Stress. 2015;28(5):391-400.

48. Engel CC, Bray RM, Jaycox LH, et al. Implementing collaborative primary care for depression and posttraumatic stress disorder: design and sample for a randomized trial in the U.S. Military Health System. Contemp Clin Trials. 2014;39(2):310-319.

49. Belsher BE, Jaycox LH, Freed MC, et al. Mental health utilization patterns during a stepped, collaborative care effectiveness trial for PTSD and depression in the military health system. Med Care. 2016;54(7):706-713.

50. Hepner KA, Roth CP, Farris C, et al. Measuring the Quality of Care for Psychological Health Conditions in the Military Health System: Candidate Quality Measures for Posttraumatic Stress Disorder and Major Depressive Disorder. Santa Monica, CA: RAND Corporation; 2015.

51. Engel C, Oxman T, Yamamoto C, et al. RESPECT-Mil: feasibility of a systems-level collaborative care approach to depression and post-traumatic stress disorder in military primary care. Mil Med. 2008;173(10):935-940.

52. Belsher BE, Curry J, McCutchan P, et al. Implementation of a collaborative care initiative for PTSD and depression in the Army primary care system. Soc Work Ment Health. 2014;12(5-6):500-522.

53. Wong EC, Jaycox LH, Ayer L, et al. Evaluating the Implementation of the Re-Engineering Systems of Primary Care Treatment in the Military (RESPECT-Mil). Santa Monica, CA: RAND Corporation; 2015.

54. Archer J, Bower P, Gilbody S, et al. Collaborative care for depression and anxiety problems. Cochrane Database Syst Rev. 2012;10:CD006525.

55. Woltmann E, Grogan-Kaylor A, Perron B, Georges H, Kilbourne AM, Bauer MS. Comparative effectiveness of collaborative chronic care models for mental health conditions across primary, specialty, and behavioral health care settings: systematic review and meta-analysis. Am J Psychiatry. 2012;169(8):790-804.

56. Wright JL. DoD Directive 6490.15. www.dtic.mil/whs/directives/corres/pdf/649015p.pdf. Revised November 20, 2014. Accessed October 3, 2016.

57. Woodson J. Military treatment facility mental health clinical outcomes guidance. http://dcoe.mil/Libraries/Documents/MentalHealthClinicalOutcomesGuidance_Woodson.pdf. Published September 9, 2013. Accessed October 4, 2016.

58. Wilk JE, West JC, Duffy FF, Herrell RK, Rae DS, Hoge CW. Use of evidence-based treatment for posttraumatic stress disorder in Army behavioral healthcare. Psychiatry. 2013;76(4):336-348.

59. Stockton PN, Olsen ET, Hayford S, et al. Security from within: independent review of the Washington Navy Yard shooting. http://archive.defense.gov/pubs/Independent-Review-of-the-WNY-Shooting-14-Nov-2013.pdf. Published November 2013. Accessed September 20, 2016.

60. Woodson J. ASD(HA) Memorandum: Clinical Policy Guidance for Assessment and Treatment of Posttraumatic Stress Disorder. August 24, 2012.

Author and Disclosure Information

CAPT Colston, Dr. Belsher, Ms. Beech, Dr. Curry, Mr. Tyberg, Mr. Melmed, Dr. McGraw, and Dr. Stoltz are all affiliated with the Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury in Silver Spring, Maryland.

Author disclosures
The authors report no actual or potential conflicts of interest with regard to this article.

Disclaimer
The opinions expressed herein are those of the authors and do not necessarily reflect those of Federal Practitioner, Frontline Medical Communications Inc., the U.S. Government, or any of its agencies. This article may discuss unlabeled or investigational use of certain drugs. Please review the complete prescribing information for specific drugs or drug combinations—including indications, contraindications, warnings, and adverse effects—before administering pharmacologic therapy to patients.

Issue
Federal Practitioner - 33(11)
Publications
Topics
Page Number
37-45
Sections
Author and Disclosure Information

CAPT Colston, Dr. Belsher, Ms. Beech, Dr. Curry, Mr. Tyberg, Mr. Melmed, Dr. McGraw, and Dr. Stoltz are all affiliated with the Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury in Silver Spring, Maryland.

Author disclosures
The authors report no actual or potential conflicts of interest with regard to this article.

Disclaimer
The opinions expressed herein are those of the authors and do not necessarily reflect those of Federal Practitioner, Frontline Medical Communications Inc., the U.S. Government, or any of its agencies. This article may discuss unlabeled or investigational use of certain drugs. Please review the complete prescribing information for specific drugs or drug combinations—including indications, contraindications, warnings, and adverse effects—before administering pharmacologic therapy to patients.

Author and Disclosure Information

CAPT Colston, Dr. Belsher, Ms. Beech, Dr. Curry, Mr. Tyberg, Mr. Melmed, Dr. McGraw, and Dr. Stoltz are all affiliated with the Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury in Silver Spring, Maryland.

Author disclosures
The authors report no actual or potential conflicts of interest with regard to this article.

Disclaimer
The opinions expressed herein are those of the authors and do not necessarily reflect those of Federal Practitioner, Frontline Medical Communications Inc., the U.S. Government, or any of its agencies. This article may discuss unlabeled or investigational use of certain drugs. Please review the complete prescribing information for specific drugs or drug combinations—including indications, contraindications, warnings, and adverse effects—before administering pharmacologic therapy to patients.

Article PDF
Article PDF
Related Articles
Overlap in the clinical presentation and significant rates of comorbidity complicate effective management of depression and PTSD, each presenting major health burdens for veterans and active-duty service members.
Overlap in the clinical presentation and significant rates of comorbidity complicate effective management of depression and PTSD, each presenting major health burdens for veterans and active-duty service members.

Over the past decade, nationwide attention has focused on mental health conditions associated with military service. Recent legal mandates have led to changes in the DoD, VA, and HHS health systems aimed at increasing access to care, decreasing barriers to care, and expanding research on mental health conditions commonly seen in service members and veterans. On August 31, 2012, President Barack Obama signed the Improving Access to Mental Health Services for Veterans, Service Members, and Military Families executive order, establishing an interagency task force from the VA, DoD, and HHS.1 The task force was charged with addressing quality of care and provider training in the management of commonly comorbid conditions, including (among other conditions) posttraumatic stress disorder (PTSD) and depression.

Depression and PTSD present major health burdens in both military and veteran cohorts. Overlap in clinical presentation and significant rates of comorbidity complicate effective management of these conditions. This article offers a brief review of the diagnostic and epidemiologic complexities associated with PTSD and depression, a summary of research relevant to these issues, and a description of recent system-level developments within the Military Health System (MHS) designed to improve care through better approaches in identification, management, and research of these conditions.

Diagnostic Uncertainty

Both PTSD and major depressive disorder (MDD) have been recognized as mental health disorders since the American Psychiatric Association’s Diagnostic and Statistical Manual (DSM) discarded its previous etiologically based approach to diagnostic classification in 1980 in favor of a system in which diagnosis is based on observable symptoms.2,3 With the release of DSM-5 in 2013, the diagnostic criteria for PTSD underwent a substantial transformation.4 Previously, PTSD was described as an anxiety disorder, and some of its manifestations overlapped descriptively (and in many cases, etiologically) with anxiety and depressive illnesses.5

Clinicians also often described shorter-lived, developmental, formes fruste, or otherwise subsyndromal manifestations of trauma associated with PTSD. In DSM-5, PTSD was removed from the anxiety disorders section and placed in a new category of disorders labeled Trauma and Stressor-Related Disorders. This new category also included reactive attachment disorder (in children), acute stress disorder, adjustment disorders, and unspecified or other trauma and stressor-related disorders. Other major changes to the PTSD diagnostic criteria included modification to the DSM-IV-TR (text revision) trauma definition (making the construct more specific), removal of the requirement for explicit subjective emotional reaction to a traumatic event, and greater emphasis on negative cognitions and mood. Debate surrounds the updated symptom criteria with critics questioning whether there is any improvement in the clinical utility of the diagnosis, especially in light of the substantial policy and practice implications the change engenders.6

Recently, Hoge and colleagues examined the psychometric implications of the diagnostic changes (between DSM-IV-TR and DSM-5) in the PTSD definition.6 The authors found that although the 2 definitions showed nearly identical association with other psychiatric disorders (including depression) and functional impairment, 30% of soldiers who met DSM-IV-TR criteria for PTSD failed to meet criteria in DSM-5, and another 20% met only DSM-5 criteria. Recognizing discordance in PTSD and associated diagnoses, the U.S. Army Medical Command mandated that its clinicians familiarize themselves with the controversies surrounding the discordant diagnoses and coding of subthreshold PTSD.7

Adding to the problem of diagnostic uncertainty, the clinical presentation of MDD includes significant overlap with that of PTSD. Specifically, symptoms of guilt, diminished interests, problems with concentration, and sleep disturbances are descriptive of both disorders. Furthermore, the criteria set for several subthreshold forms of MDD evidence considerable overlap with PTSD symptoms. For example, diagnostic criteria for disruptive mood dysregulation disorder include behavioral outbursts and irritability, and diagnostic criteria for dysthymia include sleep disturbances and concentration problems.

Adjustment disorders are categorized as trauma and stressor-related disorders in DSM-5 and hold many emotional and behavioral symptoms in common with PTSD. The “acute” and “chronic” adjustment disorder specifiers contribute to problems in diagnostic certainty for PTSD. In general, issues pertaining to diagnostic uncertainty and overlap likely reflect the limits of using a diagnostic classification system that relies exclusively on observational and subjective reports of psychological symptoms.8,9

In a treatment environment where a veteran or active-duty patient has presented for care, in the face of these shared symptom sets, clinicians frequently offer initial diagnoses. These diagnoses are often based on perceived etiologic factors derived from patients’ descriptions of stressors encountered during military service. This tendency likely contributes to considerable inconsistencies and potential inaccuracies in diagnoses, and much of the variance can be attributed to the clinicians’ degree of familiarity with military exposures, perceptions of what constitutes trauma, and outside pressure to assign or avoid specific diagnoses.

Importantly, the phenomenologic differences between PTSD and depressive disorders increase the likelihood of poorly aligned and inconsistent treatment plans, and this lack of clarity may, in turn, compromise effective patient care. To address some of these diagnostic challenges, the VA and DoD incorporate military culture training into clinicians’ curriculum to increase provider familiarity with the common stressors and challenges of military life, mandate the use of validated measures to support diagnostic decision making, and regularly review policies that influence diagnostic practices.

 

 

Epidemiology

The prevalence rates for PTSD are increasing in the military, possibly stemming from the demands on service members engaged in years’ long wars. Despite the increased attention on this phenomenon, research has demonstrated that the majority of service members who deploy do not develop PTSD or significant trauma-related functional impairment.10 Furthermore, many cases of PTSD diagnosed in the MHS stem from traumatic experiences other than combat exposure, including childhood abuse and neglect, sexual and other assaults, accidents and health care exposures, domestic abuse, and bullying. Depression arguably has received less attention despite comparable prevalence rates in military populations, high co-occurrence of PTSD and depression, and depression being associated with a greater odds ratio for mortality that includes death by suicide in military service members.11

Estimates of the prevalence of PTSD from the U.S. Army suggest that it exists in 3% to 6% of military members who have not deployed and in 6% to 25% of service members with combat deployment histories. The frequency and intensity of combat are strong predictors of risk.7 A recent epidemiologic study using inpatient and outpatient encounter records showed that the prevalence of PTSD in the active military component was 2.0% in the middle of calendar year (CY) 2010; a two-thirds increase from 1.2% in CY 2007.12 The incidence of PTSD

diagnoses likewise increased by one-fifth, from 0.81% to 0.97%, over the same period.

Epidemiologic studies and prevalence/incidence rates derived from administrative data rely on strict case definitions. Consequently, such administrative investigations include data only from service members

engaged in or identified by the medical system. Although these rates describe a lower limit for diagnostic prevalence, they serve as a good starting point to ascertain trends. Keeping in mind the limitations of administrative epidemiology, the MHS has witnessed a steady upward trend in comorbid cases of PTSD and depression since 2010. On average, between 2010 and 2015, patients diagnosed with PTSD were twice as likely to have a comorbid depression spectrum disorder diagnosis (42.4%) than depression spectrum disorder patients were to have a comorbid PTSD dia
gnosis (20.8%). Period prevalence for PTSD, depressive spectrum disorders, and comorbid disorders are described in Tables 1-3.

PTSD and Depression Treatment

Despite the high rates of PTSD and MDD comorbidity, few treatments have been developed for and tested on an exclusively comorbid sample of patients.13 However, psychopharmacologic agents targeting depression have been applied to the treatment of PTSD, and PTSD psychotherapy trials typically include depression response as a secondary outcome. The generalizability of findings to a truly comorbid population may be limited based on study sampling frames and the unique characteristics of patients with comorbid PTSD and depression.14-16 Several psychopharmacologic treatments for depression have been evaluated as frontline treatments for PTSD. The 3 pharmacologic treatments that demonstrate efficacy in treating PTSD include fluoxetine, paroxetine, and venlafaxine.17

Although these pharmacologic agents represent good candidate treatments for comorbid patients, the effect size of pharmacologic treatments are generally smaller than those of psychotherapeutic treatments for PTSD.17,18 This observation, however, is based on indirect comparisons, and a recent systematic review concluded that the evidence was insufficient to determine the comparative effectiveness between psychotherapy and pharmacotherapy for PTSD.19 Evidence indicates that trauma-focused cognitive behavioral therapies consistently demonstrate efficacy and effectiveness in treating PTSD.19,20 These treatments also have been shown to significantly reduce depressive symptoms among PTSD samples.21

Based on strong bodies of evidence, these pharmacologic and psychological treatments have received the highest level of recommendation in the VA and DoD.22,23 Accordingly, both agencies have invested considerable resources in large-scale efforts to improve patient access to these particular treatments. Despite these impressive implementation efforts, however, the limitations of relying exclusively on these treatments as frontline approaches within large health care systems have become evident.24-26

Penetration of Therapies

Penetration of these evidence-based treatments (EBTs) within the DoD and VHA remains limited. For instance, one study showed that VA clinicians in mental health specialty care clinics may provide only about 4 hours of EBT per week.27

Other reports suggest that only about 60% of treatment-seeking patients in PTSD clinics receive any type of evidence-based therapy and that within-session care quality is questionable based on a systematic review of chart notes.28,29 Attrition in trauma-focused therapy is a recognized limitation, with 1 out of 3 treatment-seeking patients not completing a full dose of evidence-based treatment.30-33 Large-scale analyses of VHA and DoD utilization data suggest that the majority of PTSD patients do not receive a sufficient number of sessions to be characterized as an adequate dose of EBT, with a majority of dropouts occur- ring after just a few sessions.34-37

Hoge and colleagues found that < 50% of soldiers meeting criteria for PTSD received any mental health care within the prior 6 months with one-quarter of those patients dropping out of care prematurely.38 Among a large cohort of soldiers engaged in care for the treatment of PTSD, only about 40% received a number of EBT treatment sessions that could qualify as an adequate dose.38 Thus, although major advancements in the development and implementation of effective treatments for PTSD and depression have occurred, the penetration of these treatments is limited, and the majority of patients in need of treatment potentially receive inadequate care.39

System level approaches that integrate behavioral health services into the primary care system have been proposed to address these care gaps for service members and veterans.40-42 Fundamentally, system-level approaches seek to improve the reach and effectiveness of care through large-scale screening efforts, a greater emphasis on the quality of patient care, and enhanced care continuity across episodes of treatment.

 

 

Primary Care

With the primary care setting considered the de facto mental health system, integrated approaches enhance the reach of care by incorporating uniform mental health screening and referral for patients coming through primary care. Specific evidence-based treatments can be integrated into this approach within a stepped-care framework that aims to match patients strategically to the right type of care and leverage specialty care resources as needed. Integrated care approaches for the treatment of PTSD and depression have been developed and evaluated inside and outside of the MHS. Findings indicate that integrated treatment approaches can improve care access, care continuity, patient satisfaction, quality of care,and in several trials, PTSD and depression outcomes.43-47

Recently, an integrated care approach targeting U.S. Army soldiers who screened positive for PTSD or depression in primary care was evaluated in a multisite effectiveness trial.48 Patients randomized to the treatment approach experienced significant improvements in both PTSD and depression symptoms relative to patients in usual care.43 In addition, patients treated in this care model received significantly more mental health services; the patterns of care indicated that patients with comorbid PTSD and depression were more likely to be triaged to specialty care, whereas patients with a single diagnosis were more likely to be managed in primary care.49 This trial suggests that integrated care models feasibly can be implemented in the U.S. Army care system, yielding increased uptake of mental health care, more efficiently matched care based on patient comorbidities, and improved PTSD and depression outcomes.

Treatment Research

The MHS supports a large portfolio of research in PTSD and depression through DoD/VA research consortia (eg, the Congressionally Directed Medical Research Program, the Consortium to Alleviate PTSD, the Injury and Traumatic Stress Clinical Consortium). The U.S. Army Medical Research and Materiel Command (USAMRMC) executes and manages the portfolio of research, relying on a joint program committee of DoD and non-DoD experts to make funding recommendations based on identified research priorities, policy guidance, and knowledge translation needs.

Health systems research on PTSD and MDD in federal health care settings is expanding. For example, the RAND Corporation recently evaluated a candidate set of quality measures for PTSD and MDD, using an operational definition of an episode of care.37 This work is intended to inform efforts to measure and improve the quality of care for PTSD and depression across the enterprise.

The DoD Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury is simultaneously completing an inferential assessment of adjunctive mental health care services, many focused on PTSD and depression, throughout the health care enterprise. Along with the substantial resources devoted to research on PTSD and depression, the MHS is implementing strategies to improve the system of care for service members with mental health conditions.

Army Care System Innovations

The U.S. Army is engaged in a variety of strategies to improve the identification of patients with mental health conditions, increase access to mental health services, and enhance the quality of care that soldiers receive for PTSD and depression. To improve the coordination of mental health care, the U.S. Army Medical Command implemented a wide-scale innovative transformation of its mental health care system through the establishment of the Behavioral Health Service Line program management office.

This move eliminated separate departments of psychiatry, psychology, and social work in favor of integrated behavioral health departments that are now responsible for all mental health care delivered to soldiers, including inpatient, outpatient, partial hospitalization, residential, embedded care in garrison, and primary care settings. This transformation ensured coordination of care for soldiers, eliminating potential miscommunication with patients, commands, and other clinicians while clearly defining performance indicators in process (eg, productivity, scheduling, access to care, and patient satisfaction) and outcome measures.49 In conjunction with the development of its service line, the U.S. Army created a Behavioral Health Data Portal (BHDP), an electronic and standardized means to assess clinical outcomes for common conditions.

To promote higher quality mental health care, the Office of the Surgeon General of the U.S. Army provided direct guidance on the treatment of PTSD and depression. U.S. Army policy mandates that providers treating mental health conditions adhere to the VA/DoD clinical practice guidelines (CPGs) and that soldiers with PTSD and depression be offered treatments with the highest level of scientific support and that outcome measures be routinely administered. In line with the CPGs, U.S. Army policy also recommends the use of both integrated and embedded mental health care approaches to address PTSD, depression, and other common physical and psychological health conditions.

To reduce stigma and improve mental health care access, the U.S. Army began implementing integrated care approaches in 2007 with its Re-Engineering Systems of Primary Care Treatment in the Military (RESPECT-Mil) program, an evidence-based collaborative care model.51-55 This approach included structured screening and diagnostic procedures, predictable follow-up schedules for patients, and the coordination of the divisions of responsibility among and between primary care providers, paraprofessionals, and behavioral health care providers. From 2007 to 2013, this collaborative care model was rolled out across 96 clinics worldwide and provided PTSD and depression screening to more than 1 million encounters per year.52,53

More recently, the U.S. Army led DoD in integrating behavioral health personnel in patient centered medical homes (PCMH) in compliance with DoD Instruction 6490.15.56 This hybrid integrated care model combines collaborative care elements developed in the RESPECT-Mil program with elements of the U.S. Air Force Behavioral Health Optimization project colocating behavioral health providers in primary care settings to provide brief consultative services.

 

 

MHS Care Enhancements

Many of the innovations deployed throughout the U.S. Army system of behavioral health care have driven changes across the MHS as a whole. The DoD and the VA have made substantive systemwide policy and practice changes to improve care for beneficiaries with PTSD, depression, and comorbid PTSD and depression. In particular, significant implementation efforts have addressed population screening strategies, outcome monitoring to support measurement-based care, increased access to effective care, and revision of the disability evaluation system.

To improve the identification and referral of soldiers with deployment-related mental health concerns, the DoD implemented a comprehensive program that screens service members prior to deployment, immediately on redeployment, and then again 6 months after returning from deployment. Additionally, annual primary care- based screening requirements have been instituted as part of the DoD PCMH initiative. Both deployment-related and primary care-based screenings include an instrumentation to detect symptoms of PTSD and depression and extend the reach of mental health screening to the entire MHS population.

Building on the success of BHDP, former Assistant Secretary of Defense for Health Affairs Jonathan Woodson mandated BHDP use across the MHS for all patients in DoD behavioral health clinics and the use of outcome measures for the treatment of PTSD, anxiety, depression, and alcohol use disorders.57 A DoD-wide requirement to use the PTSD checklist and patient health questionnaire to monitor PTSD and depression symptoms at mental health intakes and regularly at follow-up visits is being implemented. The Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury, through its Practice-Based Implementation Network (underwritten by a Joint Incentive Fund managed between DoD and VA), has worked across the MHS and the VA to facilitate the implementation, uptake, and adoption of this initiative.

The DoD established the Center for Deployment Psychology (CDP) in 2006 to promote clinician training in EBTs with the aim of increasing service members’ access to effective psychological treatments. Since its inception, the CDP has provided EBT training to more than 40,000 behavioral health providers. Although the impact of these and other efforts on improving the quality of care that patients receive is unknown, a recent study documented widespread self-reported usage of EBT components in U.S. Army clinics and that providers formally trained in EBTs were more likely to deliver EBTs.58

Finally, systemwide changes to the VA Schedule of Ratings for Disability (VASRD) and integration of DoD and VA disability evaluation systems have led to shifts in diagnosis toward PTSD that usually merit a minimum 50% disability rating. Mandates in law require military clinicians to evaluate patients who have deployed for PTSD and TBI prior to taking any actions associated with administrative separation. The practice of attributing PTSD symptoms to character pathology or personality disorders, even when these symptoms did not clearly manifest or worsen with military service, has likely been eliminated from practice in military and veteran populations.

Robust policy changes to limit personality disorder discharges started in fiscal year 2007, when there were 4,127 personality disorder separations across DoD. This number was reduced to 300 within 5 years. Policy changes regarding separation not only seem to have affected discharges, but also may have shaped diagnostic practice. The incidence rate of personality disorder diagnoses declined from 513 per 100,000 person-years in 2007 to 284 per 100,000 person-years by 2011.59 The VASRD recognizes chronic adjustment disorder as a disability, and the National Defense Authorization Act of 2008 mandated that DoD follow disability guidelines promulgated by VA.

As stated in the memorandum Clinical Policy Guidance for Assessment and Treatment of Post-Traumatic Stress Disorders (August 24, 2012), DoD recognizes chronic adjustment disorder as an unfitting condition that merits referral to its disability evaluation system.60 Acute adjustment disorders may still lead to administrative separations, as many service members manifest emotional symptoms stemming from the failure to adjust to the routine vicissitudes of military life. Finally, many court jurisdictions, including veteran’s courts, military courts, and commanders empowered to adjudicate nonjudicial infractions under the Uniform Code of Military Justice, have recognized PTSD as grounds for the mitigation of penalties associated with a wide array of criminal and administrative infractions.

Conclusion

In response to the increased mental health burden following a decade of war and the associated pressures stemming from federal mandates, the MHS has invested unprecedented resources into improving care for military service members. The U.S. Army has played a prominent role in this endeavor by investing in clinical research efforts to accelerate discovery on the causes and cures for these conditions, enacting policies that mandate best practices, and implementing evidence-based care approaches across the system of care. Despite this progress, however, understanding and effectively treating the most prevalent mental health conditions remain a challenge across the DoD and VHA health care systems. Many service members and veterans still do not receive timely, high-quality care for PTSD, depression, and other common comorbidities associated with military experience, and controversies in diagnostic clarification abound.

In short, great strides have been made, yet there is still a large distance to go. The vision of an effective, efficient, comprehensive care system for mental health conditions will continue to be pursued and achieved through collaborations across key agencies and the scientific community, implementation of health system approaches that support population care, and the sustained efforts of dedicated clinicians, staff, and clinic leaders who deliver the care to our service members and veterans.

Over the past decade, nationwide attention has focused on mental health conditions associated with military service. Recent legal mandates have led to changes in the DoD, VA, and HHS health systems aimed at increasing access to care, decreasing barriers to care, and expanding research on mental health conditions commonly seen in service members and veterans. On August 31, 2012, President Barack Obama signed the Improving Access to Mental Health Services for Veterans, Service Members, and Military Families executive order, establishing an interagency task force from the VA, DoD, and HHS.1 The task force was charged with addressing quality of care and provider training in the management of commonly comorbid conditions, including (among other conditions) posttraumatic stress disorder (PTSD) and depression.

Depression and PTSD present major health burdens in both military and veteran cohorts. Overlap in clinical presentation and significant rates of comorbidity complicate effective management of these conditions. This article offers a brief review of the diagnostic and epidemiologic complexities associated with PTSD and depression, a summary of research relevant to these issues, and a description of recent system-level developments within the Military Health System (MHS) designed to improve care through better approaches in identification, management, and research of these conditions.

Diagnostic Uncertainty

Both PTSD and major depressive disorder (MDD) have been recognized as mental health disorders since the American Psychiatric Association’s Diagnostic and Statistical Manual (DSM) discarded its previous etiologically based approach to diagnostic classification in 1980 in favor of a system in which diagnosis is based on observable symptoms.2,3 With the release of DSM-5 in 2013, the diagnostic criteria for PTSD underwent a substantial transformation.4 Previously, PTSD was described as an anxiety disorder, and some of its manifestations overlapped descriptively (and in many cases, etiologically) with anxiety and depressive illnesses.5

Clinicians also often described shorter-lived, developmental, formes fruste, or otherwise subsyndromal manifestations of trauma associated with PTSD. In DSM-5, PTSD was removed from the anxiety disorders section and placed in a new category of disorders labeled Trauma and Stressor-Related Disorders. This new category also included reactive attachment disorder (in children), acute stress disorder, adjustment disorders, and unspecified or other trauma and stressor-related disorders. Other major changes to the PTSD diagnostic criteria included modification to the DSM-IV-TR (text revision) trauma definition (making the construct more specific), removal of the requirement for explicit subjective emotional reaction to a traumatic event, and greater emphasis on negative cognitions and mood. Debate surrounds the updated symptom criteria with critics questioning whether there is any improvement in the clinical utility of the diagnosis, especially in light of the substantial policy and practice implications the change engenders.6

Recently, Hoge and colleagues examined the psychometric implications of the diagnostic changes (between DSM-IV-TR and DSM-5) in the PTSD definition.6 The authors found that although the 2 definitions showed nearly identical association with other psychiatric disorders (including depression) and functional impairment, 30% of soldiers who met DSM-IV-TR criteria for PTSD failed to meet criteria in DSM-5, and another 20% met only DSM-5 criteria. Recognizing discordance in PTSD and associated diagnoses, the U.S. Army Medical Command mandated that its clinicians familiarize themselves with the controversies surrounding the discordant diagnoses and coding of subthreshold PTSD.7

Adding to the problem of diagnostic uncertainty, the clinical presentation of MDD includes significant overlap with that of PTSD. Specifically, symptoms of guilt, diminished interests, problems with concentration, and sleep disturbances are descriptive of both disorders. Furthermore, the criteria set for several subthreshold forms of MDD evidence considerable overlap with PTSD symptoms. For example, diagnostic criteria for disruptive mood dysregulation disorder include behavioral outbursts and irritability, and diagnostic criteria for dysthymia include sleep disturbances and concentration problems.

Adjustment disorders are categorized as trauma and stressor-related disorders in DSM-5 and hold many emotional and behavioral symptoms in common with PTSD. The “acute” and “chronic” adjustment disorder specifiers contribute to problems in diagnostic certainty for PTSD. In general, issues pertaining to diagnostic uncertainty and overlap likely reflect the limits of using a diagnostic classification system that relies exclusively on observational and subjective reports of psychological symptoms.8,9

In a treatment environment where a veteran or active-duty patient has presented for care, in the face of these shared symptom sets, clinicians frequently offer initial diagnoses. These diagnoses are often based on perceived etiologic factors derived from patients’ descriptions of stressors encountered during military service. This tendency likely contributes to considerable inconsistencies and potential inaccuracies in diagnoses, and much of the variance can be attributed to the clinicians’ degree of familiarity with military exposures, perceptions of what constitutes trauma, and outside pressure to assign or avoid specific diagnoses.

Importantly, the phenomenologic differences between PTSD and depressive disorders increase the likelihood of poorly aligned and inconsistent treatment plans, and this lack of clarity may, in turn, compromise effective patient care. To address some of these diagnostic challenges, the VA and DoD incorporate military culture training into clinicians’ curriculum to increase provider familiarity with the common stressors and challenges of military life, mandate the use of validated measures to support diagnostic decision making, and regularly review policies that influence diagnostic practices.

 

 

Epidemiology

The prevalence rates for PTSD are increasing in the military, possibly stemming from the demands on service members engaged in years’ long wars. Despite the increased attention on this phenomenon, research has demonstrated that the majority of service members who deploy do not develop PTSD or significant trauma-related functional impairment.10 Furthermore, many cases of PTSD diagnosed in the MHS stem from traumatic experiences other than combat exposure, including childhood abuse and neglect, sexual and other assaults, accidents and health care exposures, domestic abuse, and bullying. Depression arguably has received less attention despite comparable prevalence rates in military populations, high co-occurrence of PTSD and depression, and depression being associated with a greater odds ratio for mortality that includes death by suicide in military service members.11

Estimates of the prevalence of PTSD from the U.S. Army suggest that it exists in 3% to 6% of military members who have not deployed and in 6% to 25% of service members with combat deployment histories. The frequency and intensity of combat are strong predictors of risk.7 A recent epidemiologic study using inpatient and outpatient encounter records showed that the prevalence of PTSD in the active military component was 2.0% in the middle of calendar year (CY) 2010; a two-thirds increase from 1.2% in CY 2007.12 The incidence of PTSD

diagnoses likewise increased by one-fifth, from 0.81% to 0.97%, over the same period.

Epidemiologic studies and prevalence/incidence rates derived from administrative data rely on strict case definitions. Consequently, such administrative investigations include data only from service members

engaged in or identified by the medical system. Although these rates describe a lower limit for diagnostic prevalence, they serve as a good starting point to ascertain trends. Keeping in mind the limitations of administrative epidemiology, the MHS has witnessed a steady upward trend in comorbid cases of PTSD and depression since 2010. On average, between 2010 and 2015, patients diagnosed with PTSD were twice as likely to have a comorbid depression spectrum disorder diagnosis (42.4%) than depression spectrum disorder patients were to have a comorbid PTSD dia
gnosis (20.8%). Period prevalence for PTSD, depressive spectrum disorders, and comorbid disorders are described in Tables 1-3.

PTSD and Depression Treatment

Despite the high rates of PTSD and MDD comorbidity, few treatments have been developed for and tested on an exclusively comorbid sample of patients.13 However, psychopharmacologic agents targeting depression have been applied to the treatment of PTSD, and PTSD psychotherapy trials typically include depression response as a secondary outcome. The generalizability of findings to a truly comorbid population may be limited based on study sampling frames and the unique characteristics of patients with comorbid PTSD and depression.14-16 Several psychopharmacologic treatments for depression have been evaluated as frontline treatments for PTSD. The 3 pharmacologic treatments that demonstrate efficacy in treating PTSD include fluoxetine, paroxetine, and venlafaxine.17

Although these pharmacologic agents represent good candidate treatments for comorbid patients, the effect size of pharmacologic treatments are generally smaller than those of psychotherapeutic treatments for PTSD.17,18 This observation, however, is based on indirect comparisons, and a recent systematic review concluded that the evidence was insufficient to determine the comparative effectiveness between psychotherapy and pharmacotherapy for PTSD.19 Evidence indicates that trauma-focused cognitive behavioral therapies consistently demonstrate efficacy and effectiveness in treating PTSD.19,20 These treatments also have been shown to significantly reduce depressive symptoms among PTSD samples.21

Based on strong bodies of evidence, these pharmacologic and psychological treatments have received the highest level of recommendation in the VA and DoD.22,23 Accordingly, both agencies have invested considerable resources in large-scale efforts to improve patient access to these particular treatments. Despite these impressive implementation efforts, however, the limitations of relying exclusively on these treatments as frontline approaches within large health care systems have become evident.24-26

Penetration of Therapies

Penetration of these evidence-based treatments (EBTs) within the DoD and VHA remains limited. For instance, one study showed that VA clinicians in mental health specialty care clinics may provide only about 4 hours of EBT per week.27

Other reports suggest that only about 60% of treatment-seeking patients in PTSD clinics receive any type of evidence-based therapy and that within-session care quality is questionable based on a systematic review of chart notes.28,29 Attrition in trauma-focused therapy is a recognized limitation, with 1 out of 3 treatment-seeking patients not completing a full dose of evidence-based treatment.30-33 Large-scale analyses of VHA and DoD utilization data suggest that the majority of PTSD patients do not receive a sufficient number of sessions to be characterized as an adequate dose of EBT, with a majority of dropouts occur- ring after just a few sessions.34-37

Hoge and colleagues found that < 50% of soldiers meeting criteria for PTSD received any mental health care within the prior 6 months with one-quarter of those patients dropping out of care prematurely.38 Among a large cohort of soldiers engaged in care for the treatment of PTSD, only about 40% received a number of EBT treatment sessions that could qualify as an adequate dose.38 Thus, although major advancements in the development and implementation of effective treatments for PTSD and depression have occurred, the penetration of these treatments is limited, and the majority of patients in need of treatment potentially receive inadequate care.39

System level approaches that integrate behavioral health services into the primary care system have been proposed to address these care gaps for service members and veterans.40-42 Fundamentally, system-level approaches seek to improve the reach and effectiveness of care through large-scale screening efforts, a greater emphasis on the quality of patient care, and enhanced care continuity across episodes of treatment.

 

 

Primary Care

With the primary care setting considered the de facto mental health system, integrated approaches enhance the reach of care by incorporating uniform mental health screening and referral for patients coming through primary care. Specific evidence-based treatments can be integrated into this approach within a stepped-care framework that aims to match patients strategically to the right type of care and leverage specialty care resources as needed. Integrated care approaches for the treatment of PTSD and depression have been developed and evaluated inside and outside of the MHS. Findings indicate that integrated treatment approaches can improve care access, care continuity, patient satisfaction, quality of care,and in several trials, PTSD and depression outcomes.43-47

Recently, an integrated care approach targeting U.S. Army soldiers who screened positive for PTSD or depression in primary care was evaluated in a multisite effectiveness trial.48 Patients randomized to the treatment approach experienced significant improvements in both PTSD and depression symptoms relative to patients in usual care.43 In addition, patients treated in this care model received significantly more mental health services; the patterns of care indicated that patients with comorbid PTSD and depression were more likely to be triaged to specialty care, whereas patients with a single diagnosis were more likely to be managed in primary care.49 This trial suggests that integrated care models feasibly can be implemented in the U.S. Army care system, yielding increased uptake of mental health care, more efficiently matched care based on patient comorbidities, and improved PTSD and depression outcomes.

Treatment Research

The MHS supports a large portfolio of research in PTSD and depression through DoD/VA research consortia (eg, the Congressionally Directed Medical Research Program, the Consortium to Alleviate PTSD, the Injury and Traumatic Stress Clinical Consortium). The U.S. Army Medical Research and Materiel Command (USAMRMC) executes and manages the portfolio of research, relying on a joint program committee of DoD and non-DoD experts to make funding recommendations based on identified research priorities, policy guidance, and knowledge translation needs.

Health systems research on PTSD and MDD in federal health care settings is expanding. For example, the RAND Corporation recently evaluated a candidate set of quality measures for PTSD and MDD, using an operational definition of an episode of care.37 This work is intended to inform efforts to measure and improve the quality of care for PTSD and depression across the enterprise.

The DoD Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury is simultaneously completing an inferential assessment of adjunctive mental health care services, many focused on PTSD and depression, throughout the health care enterprise. Along with the substantial resources devoted to research on PTSD and depression, the MHS is implementing strategies to improve the system of care for service members with mental health conditions.

Army Care System Innovations

The U.S. Army is engaged in a variety of strategies to improve the identification of patients with mental health conditions, increase access to mental health services, and enhance the quality of care that soldiers receive for PTSD and depression. To improve the coordination of mental health care, the U.S. Army Medical Command implemented a wide-scale innovative transformation of its mental health care system through the establishment of the Behavioral Health Service Line program management office.

This move eliminated separate departments of psychiatry, psychology, and social work in favor of integrated behavioral health departments that are now responsible for all mental health care delivered to soldiers, including inpatient, outpatient, partial hospitalization, residential, embedded care in garrison, and primary care settings. This transformation ensured coordination of care for soldiers, eliminating potential miscommunication with patients, commands, and other clinicians while clearly defining performance indicators in process (eg, productivity, scheduling, access to care, and patient satisfaction) and outcome measures.49 In conjunction with the development of its service line, the U.S. Army created a Behavioral Health Data Portal (BHDP), an electronic and standardized means to assess clinical outcomes for common conditions.

To promote higher quality mental health care, the Office of the Surgeon General of the U.S. Army provided direct guidance on the treatment of PTSD and depression. U.S. Army policy mandates that providers treating mental health conditions adhere to the VA/DoD clinical practice guidelines (CPGs), that soldiers with PTSD and depression be offered treatments with the highest level of scientific support, and that outcome measures be routinely administered. In line with the CPGs, U.S. Army policy also recommends the use of both integrated and embedded mental health care approaches to address PTSD, depression, and other common physical and psychological health conditions.

To reduce stigma and improve mental health care access, the U.S. Army began implementing integrated care approaches in 2007 with its Re-Engineering Systems of Primary Care Treatment in the Military (RESPECT-Mil) program, an evidence-based collaborative care model.51-55 This approach included structured screening and diagnostic procedures, predictable follow-up schedules for patients, and coordinated divisions of responsibility among primary care providers, paraprofessionals, and behavioral health care providers. From 2007 to 2013, this collaborative care model was rolled out across 96 clinics worldwide and provided PTSD and depression screening at more than 1 million encounters per year.52,53

More recently, the U.S. Army led DoD in integrating behavioral health personnel in patient-centered medical homes (PCMH) in compliance with DoD Instruction 6490.15.56 This hybrid integrated care model combines collaborative care elements developed in the RESPECT-Mil program with elements of the U.S. Air Force Behavioral Health Optimization project, colocating behavioral health providers in primary care settings to provide brief consultative services.

MHS Care Enhancements

Many of the innovations deployed throughout the U.S. Army system of behavioral health care have driven changes across the MHS as a whole. The DoD and the VA have made substantive systemwide policy and practice changes to improve care for beneficiaries with PTSD, depression, and comorbid PTSD and depression. In particular, significant implementation efforts have addressed population screening strategies, outcome monitoring to support measurement-based care, increased access to effective care, and revision of the disability evaluation system.

To improve the identification and referral of soldiers with deployment-related mental health concerns, the DoD implemented a comprehensive program that screens service members prior to deployment, immediately on redeployment, and then again 6 months after returning from deployment. Additionally, annual primary care-based screening requirements have been instituted as part of the DoD PCMH initiative. Both deployment-related and primary care-based screenings include instruments to detect symptoms of PTSD and depression and extend the reach of mental health screening to the entire MHS population.

Building on the success of BHDP, former Assistant Secretary of Defense for Health Affairs Jonathan Woodson mandated BHDP use across the MHS for all patients in DoD behavioral health clinics and the use of outcome measures for the treatment of PTSD, anxiety, depression, and alcohol use disorders.57 A DoD-wide requirement to use the PTSD Checklist and Patient Health Questionnaire to monitor PTSD and depression symptoms at mental health intakes and regularly at follow-up visits is being implemented. The Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury, through its Practice-Based Implementation Network (underwritten by a Joint Incentive Fund managed between DoD and VA), has worked across the MHS and the VA to facilitate the implementation, uptake, and adoption of this initiative.
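
Both mandated instruments are brief self-report scales whose scoring is a simple item sum, which is part of what makes them practical for routine outcome monitoring. As an illustration only, a minimal sketch of PHQ-9 scoring (nine items rated 0-3, total 0-27, with the conventional severity bands); the function names are ours, not part of BHDP or any DoD system:

```python
def phq9_total(item_scores):
    """Sum the nine PHQ-9 depression items, each scored 0-3 (total 0-27)."""
    if len(item_scores) != 9 or any(not 0 <= s <= 3 for s in item_scores):
        raise ValueError("PHQ-9 expects nine item scores, each 0-3")
    return sum(item_scores)

# Conventional PHQ-9 severity bands (descending cutoffs)
PHQ9_BANDS = [(20, "severe"), (15, "moderately severe"),
              (10, "moderate"), (5, "mild"), (0, "minimal")]

def phq9_severity(total):
    """Map a PHQ-9 total to its conventional severity label."""
    return next(label for cutoff, label in PHQ9_BANDS if total >= cutoff)
```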

The DoD established the Center for Deployment Psychology (CDP) in 2006 to promote clinician training in EBTs with the aim of increasing service members’ access to effective psychological treatments. Since its inception, the CDP has provided EBT training to more than 40,000 behavioral health providers. Although the impact of these and other efforts on the quality of care that patients receive is unknown, a recent study documented widespread self-reported use of EBT components in U.S. Army clinics and found that providers formally trained in EBTs were more likely to deliver them.58

Finally, systemwide changes to the VA Schedule of Ratings for Disability (VASRD) and integration of the DoD and VA disability evaluation systems have led to shifts in diagnosis toward PTSD, which usually merits a minimum 50% disability rating. Mandates in law require military clinicians to evaluate patients who have deployed for PTSD and TBI prior to taking any actions associated with administrative separation. The practice of attributing PTSD symptoms to character pathology or personality disorders, even when these symptoms did not clearly manifest or worsen with military service, has likely been eliminated in military and veteran populations.

Robust policy changes to limit personality disorder discharges started in fiscal year 2007, when there were 4,127 personality disorder separations across DoD. This number was reduced to 300 within 5 years. Policy changes regarding separation not only seem to have affected discharges, but also may have shaped diagnostic practice. The incidence rate of personality disorder diagnoses declined from 513 per 100,000 person-years in 2007 to 284 per 100,000 person-years by 2011.59 The VASRD recognizes chronic adjustment disorder as a disability, and the National Defense Authorization Act of 2008 mandated that DoD follow disability guidelines promulgated by VA.
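
In relative terms, simple arithmetic on the figures above puts the size of these shifts at roughly a 93% reduction in personality disorder separations and a 45% reduction in diagnostic incidence:

\[
\frac{4127 - 300}{4127} \approx 92.7\%, \qquad \frac{513 - 284}{513} \approx 44.6\%.
\]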

As stated in the memorandum Clinical Policy Guidance for Assessment and Treatment of Post-Traumatic Stress Disorders (August 24, 2012), DoD recognizes chronic adjustment disorder as an unfitting condition that merits referral to its disability evaluation system.60 Acute adjustment disorders may still lead to administrative separations, as many service members manifest emotional symptoms stemming from the failure to adjust to the routine vicissitudes of military life. Finally, many adjudicative bodies, including veterans courts, military courts, and commanders empowered to adjudicate nonjudicial infractions under the Uniform Code of Military Justice, have recognized PTSD as grounds for the mitigation of penalties associated with a wide array of criminal and administrative infractions.

Conclusion

In response to the increased mental health burden following a decade of war and the associated pressures stemming from federal mandates, the MHS has invested unprecedented resources into improving care for military service members. The U.S. Army has played a prominent role in this endeavor by investing in clinical research efforts to accelerate discovery of causes and cures for these conditions, enacting policies that mandate best practices, and implementing evidence-based care approaches across the system of care. Despite this progress, however, understanding and effectively treating the most prevalent mental health conditions remain a challenge across the DoD and VHA health care systems. Many service members and veterans still do not receive timely, high-quality care for PTSD, depression, and other common comorbidities associated with military experience, and controversies in diagnostic classification abound.

In short, great strides have been made, yet there is still a long way to go. The vision of an effective, efficient, comprehensive care system for mental health conditions will continue to be pursued through collaborations across key agencies and the scientific community, implementation of health system approaches that support population care, and the sustained efforts of dedicated clinicians, staff, and clinic leaders who deliver care to our service members and veterans.

References

1. The White House, Office of the Press Secretary. Executive Order 13625: Improving Access to Mental Health Services for Veterans, Service Members, and Military Families. https://www.whitehouse.gov/the-press-office/2012/08/31/executive-order-improving-access-mental-health-services-veterans-service. Published August 31, 2012. Accessed September 20, 2016.

2. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 3rd ed. Arlington, VA: American Psychiatric Association Press; 1980.

3. Mayes R, Horwitz AV. DSM-III and the revolution in the classification of mental illness. J Hist Behav Sci. 2005;41(3):249-267.

4. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 5th ed. Arlington, VA: American Psychiatric Association Press; 2013.

5. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 4th ed., text rev. Arlington, VA: American Psychiatric Association Press; 2000.

6. Hoge CW, Riviere LA, Wilk JE, Herrell RK, Weathers FW. The prevalence of post-traumatic stress disorder (PTSD) in US combat soldiers: a head-to-head comparison of DSM-5 versus DSM-IV-TR symptom criteria with the PTSD checklist. Lancet Psychiatry. 2014;1(4):269-277.

7. OTSG-MEDCOM. Policy Memo 14-094: Policy Guidance on the Assessment and Treatment of Posttraumatic Stress Disorder (PTSD). Published December 18, 2014.

8. Insel T, Cuthbert B, Garvey M, et al. Research domain criteria (RDoC): toward a new classification framework for research on mental disorders. Am J Psychiatry. 2010;167(7):748-751.

9. National Institute of Mental Health. NIMH strategic plan for research. http://www.nimh.nih.gov/about/strategic-planning-reports/index.shtml. Revised 2015. Accessed September 20, 2016.

10. Colston M, Hocter W. Forensic aspects of posttraumatic stress disorder. In: Ritchie EC, ed. Forensic and Ethical Issues in Military Behavioral Health. Washington, DC: U.S. Department of the Army; 2015:97-110.

11. Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury. National Center for Telehealth and Technology. Department of Defense suicide event report: calendar year 2013 annual report. http://t2health.dcoe.mil/programs/dodser. Published January 13, 2015. Accessed September 20, 2016.

12. Otto JL, O’Donnell FL, Ford SA, Ritschard HV. Selected mental health disorders among active component members, US Armed Forces, 2007-2010. MSMR. 2010;17(11):2-5.

13. Gutner CA, Galovski T, Bovin MJ, Schnurr PP. Emergence of transdiagnostic treatments for PTSD and posttraumatic distress. Curr Psychiatry Rep. 2016;18(10):95-101.

14. Campbell DG, Felker BL, Liu CF, et al. Prevalence of depression-PTSD comorbidity: implications for clinical practice guidelines and primary care-based interventions. J Gen Intern Med. 2007;22(6):711-718.

15. Chan D, Cheadle AD, Reiber G, Unützer J, Chaney EF. Health care utilization and its costs for depressed veterans with and without comorbid PTSD symptoms. Psychiatr Serv. 2009;60(12):1612-1617.

16. Maguen S, Cohen B, Cohen G, Madden E, Bertenthal D, Seal K. Gender differences in health service utilization among Iraq and Afghanistan veterans with posttraumatic stress disorder. J Womens Health (Larchmt). 2012;21(6):666-673.

17. Hoskins M, Pearce J, Bethell A, et al. Pharmacotherapy for post-traumatic stress disorder: systematic review and meta-analysis. Br J Psychiatry. 2015;206(2):93-100.

18. Puetz TW, Youngstedt SD, Herring MP. Effects of pharmacotherapy on combat-related PTSD, anxiety, and depression: a systematic review and meta-regression analysis. PLoS One. 2015;10(5):e0126529.

19. Jonas DE, Cusack K, Forneris CA, et al. Psychological and pharmacological treatments for adults with posttraumatic stress disorder (PTSD). Comparative effectiveness review no. 92. https://effectivehealthcare.ahrq.gov/ehc/products/347/1435/PTSD-adult-treatment-report-130403.pdf. Published April 3, 2013. Accessed September 20, 2016.

20. Haagen JFG, Smid GE, Knipscheer JW, Kleber RJ. The efficacy of recommended treatments for veterans with PTSD: a metaregression analysis. Clin Psychol Rev. 2015;40:184-194.

21. Tran K, Moulton K, Santesso N, Rabb D. Cognitive processing therapy for post-traumatic stress disorder: a systematic review and meta-analysis. https://www.cadth.ca/cognitive-processing-therapy-post-traumatic-stress-disorder-systematic-review-and-meta-analysis. Published August 11, 2015. Accessed September 20, 2016.

22. VA/DoD Management of Post-Traumatic Stress Working Group. VA/DoD Clinical Practice Guideline for Management of Post-Traumatic Stress. Version 2. http://www.healthquality.va.gov/guidelines/MH/ptsd/. Published October 2010. Accessed September 20, 2016.

23. VA/DoD Management of Major Depressive Disorder Working Group. VA/DoD Clinical Practice Guideline for the Management of Major Depressive Disorder. Version 3. http://www.healthquality.va.gov/guidelines/mh/mdd/index.asp. Published April 2016. Accessed September 20, 2016.

24. Zatzick DF, Galea S. An epidemiologic approach to the development of early trauma focused intervention. J Trauma Stress. 2007;20(4):401-412.

25. Zatzick DF, Koepsell T, Rivara FP. Using target population specification, effect size, and reach to estimate and compare the population impact of two PTSD preventive interventions. Psychiatry. 2009;72(4):346-359.

26. Glasgow RE, Nelson CC, Strycker LA, King DK. Using RE-AIM metrics to evaluate diabetes self-management support interventions. Am J Prev Med. 2006;30(1):67-73.

27. Finley EP, Garcia HA, Ketchum NS, et al. Utilization of evidence-based psychotherapies in Veterans Affairs posttraumatic stress disorder outpatient clinics. Psychol Serv. 2015;12(1):73-82.

28. Mott JM, Mondragon S, Hundt NE, Beason-Smith M, Grady RH, Teng EJ. Characteristics of U.S. veterans who begin and complete prolonged exposure and cognitive processing therapy for PTSD. J Trauma Stress. 2014;27(3):265-273.

29. Shiner B, D’Avolio LW, Nguyen TM, et al. Measuring use of evidence based psychotherapy for PTSD. Adm Policy Ment Health. 2013;40(4):311-318.

30. Schnurr PP, Friedman MJ, Engel CC, et al. Cognitive behavioral therapy for posttraumatic stress disorder in women: a randomized controlled trial. JAMA. 2007;297(8):820-830.

31. Tuerk PW, Yoder M, Grubaugh A, Myrick H, Hamner M, Acierno R. Prolonged exposure therapy for combat-related posttraumatic stress disorder: an examination of treatment effectiveness for veterans of the wars in Afghanistan and Iraq. J Anxiety Disord. 2011;25(3):397-403.

32. Chard KM, Schumm JA, Owens GP, Cottingham SM. A comparison of OEF and OIF veterans and Vietnam veterans receiving cognitive processing therapy. J Trauma Stress. 2010;23(1):25-32.

33. Monson CM, Schnurr PP, Resick PA, Friedman MJ, Young-Xu Y, Stevens SP. Cognitive processing therapy for veterans with military-related posttraumatic stress disorder. J Consult Clin Psychol. 2006;74(5):898-907.

34. Mott JM, Hundt NE, Sansgiry S, Mignogna J, Cully JA. Changes in psychotherapy utilization among veterans with depression, anxiety, and PTSD. Psychiatr Serv. 2014;65(1):106-112.

35. Seal KH, Maguen S, Cohen B, et al. VA mental health services utilization in Iraq and Afghanistan veterans in the first year of receiving new mental health diagnoses. J Trauma Stress. 2010;23(1):5-16.

36. Russell M, Silver SM. Training needs for the treatment of combat-related posttraumatic stress disorder: a survey of Department of Defense clinicians. Traumatology. 2007;13(3):4-10.

37. Schell TL, Marshall GN. Survey of individuals previously deployed for OEF/OIF. In: Tanielian T, Jaycox LH, eds. Invisible Wounds of War: Psychological and Cognitive Injuries, Their Consequences, and Services to Assist Recovery. Santa Monica, CA: RAND Corporation; 2008:87-118.

38. Hoge CW, Grossman SH, Auchterlonie JL, Riviere LA, Milliken CS, Wilk JE. PTSD treatment for soldiers after combat deployment: low utilization of mental health care and reasons for dropout. Psychiatr Serv. 2014;65(8):997-1004.

39. Committee on the Assessment of Ongoing Efforts in the Treatment of Posttraumatic Stress Disorder, Board on the Health of Select Populations, Institute of Medicine. Treatment for Posttraumatic Stress Disorder in Military and Veteran Populations: Final Assessment. Washington, DC: National Academies Press; 2014.

40. Schnurr PP. Extending collaborative care for posttraumatic mental health. JAMA Intern Med. 2016;176(7):956-957.

41. Hoge CW. Interventions for war-related posttraumatic stress disorder: meeting veterans where they are. JAMA. 2011;306(5):549-551.

42. Engel CC. Improving primary care for military personnel and veterans with posttraumatic stress disorder: the road ahead. Gen Hosp Psychiatry. 2005;27(3):158-160.

43. Engel CC, Jaycox LH, Freed MC, et al. Centrally assisted collaborative telecare management for posttraumatic stress disorder and depression in military primary care: a randomized controlled trial. JAMA Intern Med. 2016;176(7):948-956.

44. Fortney JC, Pyne JM, Kimbrell TA, et al. Telemedicine-based collaborative care for posttraumatic stress disorder: a randomized clinical trial. JAMA Psychiatry. 2015;72(1):58-67.

45. Schnurr PP, Friedman MJ, Oxman TE, et al. RESPECT-PTSD: re-engineering systems for the primary care treatment of PTSD, a randomized controlled trial. J Gen Intern Med. 2013;28(1):32-40.

46. Zatzick D, Roy-Byrne P, Russo J, et al. A randomized effectiveness trial of stepped collaborative care for acutely injured trauma survivors. Arch Gen Psychiatry. 2004;61(5):498-506.

47. Zatzick D, O’Connor SS, Russo J, et al. Technology-enhanced stepped collaborative care targeting posttraumatic stress disorder and comorbidity after injury: a randomized controlled trial. J Trauma Stress. 2015;28(5):391-400.

48. Engel CC, Bray RM, Jaycox LH, et al. Implementing collaborative primary care for depression and posttraumatic stress disorder: design and sample for a randomized trial in the U.S. Military Health System. Contemp Clin Trials. 2014;39(2):310-319.

49. Belsher BE, Jaycox LH, Freed MC, et al. Mental health utilization patterns during a stepped, collaborative care effectiveness trial for PTSD and depression in the military health system. Med Care. 2016;54(7):706-713.

50. Hepner KA, Roth CP, Farris C, et al. Measuring the Quality of Care for Psychological Health Conditions in the Military Health System: Candidate Quality Measures for Posttraumatic Stress Disorder and Major Depressive Disorder. Santa Monica, CA: RAND Corporation; 2015.

51. Engel C, Oxman T, Yamamoto C, et al. RESPECT-Mil: feasibility of a systems-level collaborative care approach to depression and post-traumatic stress disorder in military primary care. Mil Med. 2008;173(10):935-940.

52. Belsher BE, Curry J, McCutchan P, et al. Implementation of a collaborative care initiative for PTSD and depression in the Army primary care system. Soc Work Ment Health. 2014;12(5-6):500-522.

53. Wong EC, Jaycox LH, Ayer L, et al. Evaluating the Implementation of the Re-Engineering Systems of Primary Care Treatment in the Military (RESPECT-Mil). Santa Monica, CA: RAND Corporation; 2015.

54. Archer J, Bower P, Gilbody S, et al. Collaborative care for depression and anxiety problems. Cochrane Database Syst Rev. 2012;10:CD006525.

55. Woltmann E, Grogan-Kaylor A, Perron B, Georges H, Kilbourne AM, Bauer MS. Comparative effectiveness of collaborative chronic care models for mental health conditions across primary, specialty, and behavioral health care settings: systematic review and meta-analysis. Am J Psychiatry. 2012;169(8):790-804.

56. Wright JL. DoD Instruction 6490.15. www.dtic.mil/whs/directives/corres/pdf/649015p.pdf. Revised November 20, 2014. Accessed October 3, 2016.

57. Woodson J. Military treatment facility mental health clinical outcomes guidance. http://dcoe.mil/Libraries/Documents/MentalHealthClinicalOutcomesGuidance_Woodson.pdf. Published September 9, 2013. Accessed October 4, 2016.

58. Wilk JE, West JC, Duffy FF, Herrell RK, Rae DS, Hoge CW. Use of evidence-based treatment for posttraumatic stress disorder in Army behavioral healthcare. Psychiatry. 2013;76(4):336-348.

59. Stockton PN, Olsen ET, Hayford S, et al. Security from within: independent review of the Washington Navy Yard shooting. http://archive.defense.gov/pubs/Independent-Review-of-the-WNY-Shooting-14-Nov-2013.pdf. Published November 2013. Accessed September 20, 2016.

60. Woodson J. ASD(HA) Memorandum: Clinical Policy Guidance for Assessment and Treatment of Posttraumatic Stress Disorder. August 24, 2012.


Instability After Reverse Total Shoulder Arthroplasty: Which Patients Dislocate?

Risk factors for dislocation after reverse total shoulder arthroplasty (RTSA) are not clearly defined. Prosthetic dislocation can result in poor patient satisfaction, worse functional outcomes, and return to the operating room.1-3 As a result, identification of modifiable risk factors for complications represents an important research initiative for shoulder surgeons.

There is a paucity of literature devoted to the study of dislocation after RTSA. Chalmers and colleagues4 found a 2.9% (11/385) incidence of early dislocation within 3 months after index surgery—an improvement over the 15.8% reported for early instability over the period 2004–2006.5 As prosthesis design has improved and surgeons have become more comfortable with the RTSA prosthesis, surgical indications have expanded,6,7 and dislocation rates appear to have decreased. Although the most common indication for RTSA continues to be cuff tear arthropathy (CTA),6 there has been increased use in rheumatoid arthritis8-10; proximal humerus fractures, especially in cases of poor bone quality and unreliable fixation of tuberosities11-13; and failed previous shoulder reconstruction.14,15 As RTSA is performed more often, limiting the complications will become more important for both patient care and economics.

We conducted a study to analyze dislocation rates at our institution and to identify both modifiable and nonmodifiable risk factors for dislocation after RTSA. By identifying risk factors for dislocation, we will be able to implement additional perioperative clinical measures to reduce the incidence of dislocation.

Materials and Methods

This retrospective study of dislocation after RTSA was conducted at the Rothman Institute of Orthopedics and Methodist Hospital (Thomas Jefferson University Hospitals, Philadelphia, PA). After obtaining Institutional Review Board approval for the study, we searched our institution’s electronic database of shoulder arthroplasties to identify all RTSAs performed at our 2 large-volume urban institutions between September 27, 2010 and December 31, 2013. For the record search, International Classification of Diseases, Ninth Revision (ICD-9) codes were used (Table 1).

Because these procedure codes are used by the hospital system for billing and are not always specific to the assigned procedure, the operative reports identified by the search were reviewed to confirm which patients actually underwent RTSA. Within this subpopulation, we then searched individual medical records to identify patients who had a dislocation after RTSA. This information was cross-referenced with ICD-9 codes for shoulder dislocation (831.0, 831.01, 831.02, 831.03) to ensure that all patients were identified.
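
As a sketch of this two-stage cohort build, the logic resembles the following; the table layout, column names, and the RTSA procedure codes shown are our illustrative assumptions (the study's actual codes appear in its Table 1):

```python
import pandas as pd

# Illustrative code sets; the study's actual ICD-9 procedure codes are in Table 1.
RTSA_PROC_CODES = {"81.80", "81.88"}                         # assumed for illustration
DISLOCATION_CODES = {"831.0", "831.01", "831.02", "831.03"}  # as listed in the text

def build_cohort(procedures: pd.DataFrame,
                 diagnoses: pd.DataFrame,
                 chart_confirmed_ids: set) -> pd.DataFrame:
    """Billing-code screen, chart-review confirmation, then dislocation cross-reference."""
    # Stage 1: candidate arthroplasties by billing code (sensitive but not specific)
    candidates = procedures[procedures["icd9"].isin(RTSA_PROC_CODES)]
    # Stage 2: keep only cases confirmed as RTSA on operative-report review
    rtsa = candidates[candidates["patient_id"].isin(chart_confirmed_ids)].copy()
    # Cross-reference against dislocation diagnosis codes
    dislocated_ids = set(diagnoses.loc[diagnoses["icd9"].isin(DISLOCATION_CODES),
                                       "patient_id"])
    rtsa["dislocated"] = rtsa["patient_id"].isin(dislocated_ids)
    return rtsa
```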

The medical records of each patient were used to identify independent variables that could be associated with dislocation rate. Demographic variables included sex, age, and race. Preoperative clinical data included body mass index (BMI), etiology of shoulder disease leading to RTSA, individual comorbidities, and the Charlson Comorbidity Index (CCI)16 modified for use with ICD-9 codes.17 In addition, prior shoulder surgery history and arthroplasty type (primary or revision) were determined. Postoperative considerations were time to dislocation, mechanism of dislocation, and intervention(s) needed for dislocation. Although the institutional database did not include operative variables such as prosthesis type and surgical approach, all 6 surgeons in this study used a standard deltopectoral approach in the beach-chair position with a Grammont-style prosthesis for RTSA cases.
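
The Deyo adaptation of the CCI maps ICD-9-CM codes to 17 weighted comorbidity categories and sums the weights. A minimal sketch, with only a handful of categories shown and code prefixes abbreviated for illustration:

```python
# Partial, illustrative Deyo-style mapping; the full adaptation spans 17 categories.
CHARLSON = {
    "congestive heart failure":      (("428",), 1),
    "chronic pulmonary disease":     (("490", "491", "492", "493", "494", "495", "496"), 1),
    "diabetes with complications":   (("2504", "2505", "2506"), 2),
    "moderate/severe liver disease": (("4560", "4561", "4562"), 3),
    "metastatic solid tumor":        (("196", "197", "198", "199"), 6),
}

def charlson_index(icd9_codes):
    """Sum the weight of each comorbidity category present at least once."""
    normalized = [c.replace(".", "") for c in icd9_codes]
    return sum(weight for prefixes, weight in CHARLSON.values()
               if any(code.startswith(prefixes) for code in normalized))
```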

Descriptive statistics for RTSA patients and the dislocation subpopulation were compiled. Bivariate analysis was used to evaluate which of the previously described variables influenced dislocation rates. Last, multivariate logistic regression analysis was performed to evaluate which factors were independent predictors of dislocation. We included demographic variables (age, sex, ethnicity), clinical variables (BMI, individual comorbidities, CCI), and surgical variables (primary vs revision, diagnosis at time of surgery). All statistical analyses were performed with Excel 2013 (Microsoft) and SPSS Statistics Version 20.0 (SPSS Inc.).
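
The authors fit their model in SPSS; an equivalent specification in Python with statsmodels, using illustrative column names, would look roughly like this:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_dislocation_model(df: pd.DataFrame):
    """Multivariate logistic regression of dislocation on demographic,
    clinical, and surgical covariates, as described in the text."""
    covariates = df[["age", "sex", "ethnicity", "bmi", "cci", "revision", "diagnosis"]]
    X = pd.get_dummies(covariates, columns=["sex", "ethnicity", "diagnosis"],
                       drop_first=True, dtype=float)
    X = sm.add_constant(X)
    results = sm.Logit(df["dislocated"].astype(float), X).fit()
    print(np.exp(results.params))  # exponentiated coefficients are odds ratios
    return results
```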

Results

From the database, we identified 487 patients who underwent 510 RTSAs during the study period. These surgeries were performed by 6 shoulder and elbow fellowship–trained surgeons. Of the 510 RTSAs, 393 (77.1%) were primary cases, and 117 (22.9%) were revision cases.

Of the 510 shoulders that underwent RTSA, 15 (2.9%; 14 patients) dislocated. Of these 15 cases, 5 were primary (1.3% of all primary cases) and 10 were revision (8.5% of all revision cases). Mean time from index surgery to diagnosis of dislocation was 58.2 days (range, 0-319 days). One dislocation occurred immediately after surgery, 2 after falls, 4 from patient-identified low-energy mechanisms of injury, and 8 without known inciting events. Nine dislocations (60%) did not have a subscapularis repair (7 were irreparable, 2 underwent subscapularis peel without repair), and the other 6 were repaired primarily (Table 2).

In addition, 11 of the dislocated shoulders (73.3%) had previously undergone open or arthroscopic shoulder surgery. All patients who had a dislocation after RTSA returned to the operating room at least once; no dislocation was successfully treated with closed reduction in the clinic. The 15 dislocated shoulders underwent 17 surgeries: 7 isolated polyethylene exchanges, 2 isolated closed reductions, 1 hematoma aspiration with closed reduction, 1 open reduction, 2 humeral component revisions with polyethylene exchange, 1 humeral augmentation with polyethylene exchange, 2 glenosphere exchanges with polyethylene exchange, and 1 polyethylene exchange with concurrent subscapularis repair.

Male patients accounted for 32.2% of the study population but 60.0% of the dislocations (P = .019) (Table 3). In addition, mean BMI was 33.2 for patients with dislocation and 29.5 for patients without dislocation (P = .039) (Table 3). Revision arthroplasty was found to be a risk factor for dislocation in univariate analysis: 66.7% of the dislocations occurred after revision RTSA, whereas only 21.6% of nondislocated shoulders were revision cases (P < .001) (Table 4). Patients who underwent RTSA for CTA had a very low incidence of dislocation (0.35%, 1/285), accounting for 6.7% of the dislocated group and 57.6% of the nondislocated group (P < .001) (Table 4). The 1 patient with a dislocation after primary RTSA for CTA had an indolent infection at the time of surgery after dislocation.

Multivariate logistic regression analysis revealed revision arthroplasty (OR = 7.515; P = .042) and increased BMI (OR = 1.09; P = .047) to be independent risk factors for dislocation after RTSA. Analysis also found a diagnosis of primary CTA to be independently associated with lower risk of dislocation after RTSA (OR = 0.025; P = .008). Last, the previously described risk factor of male sex was found not to be a significant independent risk factor, though it did trend positively (OR = 3.011; P = .071).
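
For readers less familiar with logistic regression output, each reported OR is the exponentiated model coefficient, and per-unit ORs compound multiplicatively. Using the figures above (our arithmetic):

\[
\mathrm{OR} = e^{\beta}, \qquad \beta_{\text{revision}} = \ln(7.515) \approx 2.02, \qquad 1.09^{5} \approx 1.54,
\]

so, under the model's assumptions, a 5-unit increase in BMI corresponds to roughly a 54% increase in the odds of dislocation.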

Discussion

With more RTSAs being performed, evaluation of their common complications becomes increasingly important.18 We found a 2.9% rate of dislocation after RTSA, which is consistent with the most recently reported incidence4 and falls within the previously described range of 0% to 8.6%.19-26 Of the clinical risk factors identified in this study, those previously described were prior surgery, subscapularis insufficiency, higher BMI, and male sex.4 However, our finding of a lower risk of dislocation after RTSA for primary rotator cuff pathology was not previously described. Although Chalmers and colleagues4 did not report this lower risk, 3 (27.3%) of their 11 patients with dislocation had primary CTA, compared with 1 (6.7%) of 15 patients in the present study.4 Our literature review did not identify any studies that independently reported the dislocation rate in patients who underwent RTSA for rotator cuff failure.

The risk factors of subscapularis irreparability and revision surgery suggest the importance of the soft-tissue envelope and bony anatomy in dislocation prevention. Previous analyses have suggested implant malpositioning,27,28 poor subscapularis quality,29 and inadequate muscle tensioning5,30-32 as risk factors for instability after RTSA. Patients with an irreparable subscapularis tendon have often had multiple surgeries, with compromise of the muscle/soft-tissue envelope or bony anatomy of the shoulder. A biomechanical study by Gutiérrez and colleagues31 found the compressive forces of the soft tissue at the glenohumeral joint to be the most important contributor to stability of the RTSA prosthesis. In clinical studies, the role of the subscapularis in preventing instability after RTSA remains unclear. Edwards and colleagues29 prospectively compared dislocation rates in patients with reparable and irreparable subscapularis tendons during RTSA and found a higher rate of dislocation in the irreparable subscapularis group. Of note, patients in the irreparable subscapularis group also had more complex diagnoses, including proximal humeral nonunion, fixed glenohumeral dislocation, and failed prior arthroplasty. Clark and colleagues33 retrospectively analyzed subscapularis repair in 2 RTSA groups and found no appreciable effect on complication rate, dislocation events, range-of-motion gains, or pain relief.

Our finding that higher BMI is an independent risk factor was previously described.4 The association is unclear but could be related to implant positioning, difficulty in intraoperative assessment of muscle tensioning, or a body habitus that may generate a lever arm for impingement and dislocation when the arm is in adduction. Last, our finding that male sex is a risk factor for dislocation approached significance, and this relationship was previously reported.4 This could be attributable to a higher rate of activity or of indolent infection in male patients.34,35

Besides studying risk factors for dislocation after RTSA, we investigated treatment. None of our patients were treated successfully and definitively with closed reduction in the clinic. This finding diverges from findings in studies by Teusink and colleagues2 and Chalmers and colleagues,4 who respectively reported 62% and 44% rates of success with closed reduction. Our cohort of 14 patients with 15 dislocations required a total of 17 trips to the operating room after dislocation. This significantly higher rate of return to the operating room suggests that dislocation after RTSA may be a more costly and morbid problem than has been previously described.

This study had several weaknesses. Despite its large consecutive series of patients, the study was retrospective, and several variables that would be documented and controlled in a prospective study could not be measured here. Specifically, neither preoperative physical examination nor patient-specific assessments of pain or function were consistently obtained. Similarly, postoperative patient-specific instruments of outcomes evaluation were not obtained consistently, so results of patients with dislocation could not be compared with those of a control group. In addition, preoperative and postoperative radiographs were not consistently present in our electronic medical records, so the influence of preoperative bony anatomy, intraoperative limb lengthening, and any implant malpositioning could not be determined. Furthermore, operative details, such as reparability of the subscapularis, were not fully available for the control group and could not be included in statistical analysis. In addition, that the known dislocation risk factor of male sex4 was identified here but was not significant in multivariate regression analysis suggests that this study may not have been adequately powered to identify a significant difference in dislocation rate between the sexes. Last, though our results suggested associations between the aforementioned variables and dislocation after RTSA, a truly causative relationship could not be confirmed with this study design or analysis. Therefore, our study findings are hypothesis-generating and may indicate a benefit to greater deltoid tensioning, use of retentive liners, or more conservative rehabilitation protocols for high-risk patients.

Conclusion

Dislocation after RTSA is an uncommon complication that often requires a return to the operating room. This study identified a modifiable risk factor (higher BMI) and 3 nonmodifiable risk factors (male sex, subscapularis insufficiency, revision surgery) for dislocation after RTSA. In contrast, patients who undergo RTSA for primary rotator cuff pathology are unlikely to dislocate after surgery. This low risk of dislocation after RTSA for primary cuff pathology was not previously described. Patients in the higher risk category may benefit from preoperative lifestyle modification, intraoperative techniques for increasing stability, and more conservative therapy after surgery. In addition, unlike previous investigations, this study did not find closed reduction in the clinic alone to be successful in definitively treating this patient population.


Am J Orthop. 2016;45(7):E444-E450. Copyright Frontline Medical Communications Inc. 2016. All rights reserved.

References

1. Aldinger PR, Raiss P, Rickert M, Loew M. Complications in shoulder arthroplasty: an analysis of 485 cases. Int Orthop. 2010;34(4):517-524.

2. Teusink MJ, Pappou IP, Schwartz DG, Cottrell BJ, Frankle MA. Results of closed management of acute dislocation after reverse shoulder arthroplasty. J Shoulder Elbow Surg. 2015;24(4):621-627.

3. Fink Barnes LA, Grantham WJ, Meadows MC, Bigliani LU, Levine WN, Ahmad CS. Sports activity after reverse total shoulder arthroplasty with minimum 2-year follow-up. Am J Orthop. 2015;44(2):68-72.

4. Chalmers PN, Rahman Z, Romeo AA, Nicholson GP. Early dislocation after reverse total shoulder arthroplasty. J Shoulder Elbow Surg. 2014;23(5):737-744.

5. Gallo RA, Gamradt SC, Mattern CJ, et al; Sports Medicine and Shoulder Service at the Hospital for Special Surgery, New York, NY. Instability after reverse total shoulder replacement. J Shoulder Elbow Surg. 2011;20(4):584-590.

6. Walch G, Bacle G, Lädermann A, Nové-Josserand L, Smithers CJ. Do the indications, results, and complications of reverse shoulder arthroplasty change with surgeon’s experience? J Shoulder Elbow Surg. 2012;21(11):1470-1477.

7. Smith CD, Guyver P, Bunker TD. Indications for reverse shoulder replacement: a systematic review. J Bone Joint Surg Br. 2012;94(5):577-583.

8. Young AA, Smith MM, Bacle G, Moraga C, Walch G. Early results of reverse shoulder arthroplasty in patients with rheumatoid arthritis. J Bone Joint Surg Am. 2011;93(20):1915-1923.

9. Hedtmann A, Werner A. Shoulder arthroplasty in rheumatoid arthritis [in German]. Orthopade. 2007;36(11):1050-1061.

10. Rittmeister M, Kerschbaumer F. Grammont reverse total shoulder arthroplasty in patients with rheumatoid arthritis and nonreconstructible rotator cuff lesions. J Shoulder Elbow Surg. 2001;10(1):17-22.

11. Acevedo DC, Vanbeek C, Lazarus MD, Williams GR, Abboud JA. Reverse shoulder arthroplasty for proximal humeral fractures: update on indications, technique, and results. J Shoulder Elbow Surg. 2014;23(2):279-289.

12. Bufquin T, Hersan A, Hubert L, Massin P. Reverse shoulder arthroplasty for the treatment of three- and four-part fractures of the proximal humerus in the elderly: a prospective review of 43 cases with a short-term follow-up. J Bone Joint Surg Br. 2007;89(4):516-520.

13. Cuff DJ, Pupello DR. Comparison of hemiarthroplasty and reverse shoulder arthroplasty for the treatment of proximal humeral fractures in elderly patients. J Bone Joint Surg Am. 2013;95(22):2050-2055.

14. Walker M, Willis MP, Brooks JP, Pupello D, Mulieri PJ, Frankle MA. The use of the reverse shoulder arthroplasty for treatment of failed total shoulder arthroplasty. J Shoulder Elbow Surg. 2012;21(4):514-522.

15. Valenti P, Kilinc AS, Sauzières P, Katz D. Results of 30 reverse shoulder prostheses for revision of failed hemi- or total shoulder arthroplasty. Eur J Orthop Surg Traumatol. 2014;24(8):1375-1382.

16. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40(5):373-383.

17. Deyo RA, Cherkin DC, Ciol MA. Adapting a clinical comorbidity index for use with ICD-9-CM administrative databases. J Clin Epidemiol. 1992;45(6):613-619.

18. Kim SH, Wise BL, Zhang Y, Szabo RM. Increasing incidence of shoulder arthroplasty in the United States. J Bone Joint Surg Am. 2011;93(24):2249-2254.

19. Boileau P, Watkinson D, Hatzidakis AM, Hovorka I. Neer Award 2005: the Grammont reverse shoulder prosthesis: results in cuff tear arthritis, fracture sequelae, and revision arthroplasty. J Shoulder Elbow Surg. 2006;15(5):527-540.

20. Cuff D, Pupello D, Virani N, Levy J, Frankle M. Reverse shoulder arthroplasty for the treatment of rotator cuff deficiency. J Bone Joint Surg Am. 2008;90(6):1244-1251.

21. Frankle M, Siegal S, Pupello D, Saleem A, Mighell M, Vasey M. The reverse shoulder prosthesis for glenohumeral arthritis associated with severe rotator cuff deficiency. A minimum two-year follow-up study of sixty patients. J Bone Joint Surg Am. 2005;87(8):1697-1705.

22. Guery J, Favard L, Sirveaux F, Oudet D, Mole D, Walch G. Reverse total shoulder arthroplasty. Survivorship analysis of eighty replacements followed for five to ten years. J Bone Joint Surg Am. 2006;88(8):1742-1747.

23. Mulieri P, Dunning P, Klein S, Pupello D, Frankle M. Reverse shoulder arthroplasty for the treatment of irreparable rotator cuff tear without glenohumeral arthritis. J Bone Joint Surg Am. 2010;92(15):2544-2556.

24. Sirveaux F, Favard L, Oudet D, Huquet D, Walch G, Molé D. Grammont inverted total shoulder arthroplasty in the treatment of glenohumeral osteoarthritis with massive rupture of the cuff. Results of a multicentre study of 80 shoulders. J Bone Joint Surg Br. 2004;86(3):388-395.

25. Wall B, Nové-Josserand L, O’Connor DP, Edwards TB, Walch G. Reverse total shoulder arthroplasty: a review of results according to etiology. J Bone Joint Surg Am. 2007;89(7):1476-1485.

26. Werner CM, Steinmann PA, Gilbart M, Gerber C. Treatment of painful pseudoparesis due to irreparable rotator cuff dysfunction with the Delta III reverse-ball-and-socket total shoulder prosthesis. J Bone Joint Surg Am. 2005;87(7):1476-1486.

27. Cazeneuve JF, Cristofari DJ. The reverse shoulder prosthesis in the treatment of fractures of the proximal humerus in the elderly. J Bone Joint Surg Br. 2010;92(4):535-539.

28. Stephenson DR, Oh JH, McGarry MH, Rick Hatch GF 3rd, Lee TQ. Effect of humeral component version on impingement in reverse total shoulder arthroplasty. J Shoulder Elbow Surg. 2011;20(4):652-658.

29. Edwards TB, Williams MD, Labriola JE, Elkousy HA, Gartsman GM, O’Connor DP. Subscapularis insufficiency and the risk of shoulder dislocation after reverse shoulder arthroplasty. J Shoulder Elbow Surg. 2009;18(6):892-896.

30. Affonso J, Nicholson GP, Frankle MA, et al. Complications of the reverse prosthesis: prevention and treatment. Instr Course Lect. 2012;61:157-168.

31. Gutiérrez S, Keller TS, Levy JC, Lee WE 3rd, Luo ZP. Hierarchy of stability factors in reverse shoulder arthroplasty. Clin Orthop Relat Res. 2008;466(3):670-676.

32. Boileau P, Watkinson DJ, Hatzidakis AM, Balg F. Grammont reverse prosthesis: design, rationale, and biomechanics. J Shoulder Elbow Surg. 2005;14(1 suppl S):147S-161S.

33. Clark JC, Ritchie J, Song FS, et al. Complication rates, dislocation, pain, and postoperative range of motion after reverse shoulder arthroplasty in patients with and without repair of the subscapularis. J Shoulder Elbow Surg. 2012;21(1):36-41.

34. Richards J, Inacio MC, Beckett M, et al. Patient and procedure-specific risk factors for deep infection after primary shoulder arthroplasty. Clin Orthop Relat Res. 2014;472(9):2809-2815.

35. Singh JA, Sperling JW, Schleck C, Harmsen WS, Cofield RH. Periprosthetic infections after total shoulder arthroplasty: a 33-year perspective. J Shoulder Elbow Surg. 2012;21(11):1534-1541.

Article PDF
Author and Disclosure Information

Instability After Reverse Total Shoulder Arthroplasty: Which Patients Dislocate?

Authors’ Disclosure Statement: Dr. Abboud reports that he receives royalties from Integra Life Sciences and Lippincott Williams & Wilkins; and is an unpaid consultant for Integra Life Sciences, DePuy Synthes, Tornier, and DJO Global. Dr. Lazarus reports that he receives royalties from and is a paid consultant for Tornier on the subject of shoulder arthroplasty. Dr. Ramsey reports that he receives royalties from and is a paid consultant for Zimmer Biomet and Integra Life Sciences on the subject of shoulder arthroplasty. Dr. Williams reports that he receives research funding from DePuy Synthes and Tornier, receives royalties from DePuy Synthes and IMDS/Cleveland Clinic, and is a paid consultant for DePuy Synthes on the subject of shoulder arthroplasty. Dr. Namdari reports that he receives research funding from DePuy Synthes, Zimmer Biomet, Tornier, Integra Life Sciences, and Arthrex; is a paid consultant for DonJoy Orthopedics, Integra Life Sciences, and Miami Device Solutions; and receives product design royalties from DonJoy Orthopedics, Miami Device Solutions, and Elsevier. The other authors report no actual or potential conflict of interest in relation to this article.

Risk factors for dislocation after reverse total shoulder arthroplasty (RTSA) are not clearly defined. Prosthetic dislocation can result in poor patient satisfaction, worse functional outcomes, and return to the operating room.1-3 As a result, identification of modifiable risk factors for complications represents an important research initiative for shoulder surgeons.

There is a paucity of literature devoted to the study of dislocation after RTSA. Chalmers and colleagues4 found a 2.9% (11/385) incidence of early dislocation within 3 months after index surgery—an improvement over the 15.8% reported for early instability over the period 2004–2006.5 As prosthesis design has improved and surgeons have become more comfortable with the RTSA prosthesis, surgical indications have expanded,6,7 and dislocation rates appear to have decreased. Although the most common indication for RTSA continues to be cuff tear arthropathy (CTA),6 there has been increased use in rheumatoid arthritis8-10; proximal humerus fractures, especially in cases of poor bone quality and unreliable fixation of tuberosities11-13; and failed previous shoulder reconstruction.14,15 As RTSA is performed more often, limiting the complications will become more important for both patient care and economics.

We conducted a study to analyze dislocation rates at our institution and to identify both modifiable and nonmodifiable risk factors for dislocation after RTSA. By identifying risk factors for dislocation, we will be able to implement additional perioperative clinical measures to reduce the incidence of dislocation.

Materials and Methods

This retrospective study of dislocation after RTSA was conducted at the Rothman Institute of Orthopedics and Methodist Hospital (Thomas Jefferson University Hospitals, Philadelphia, PA). After obtaining Institutional Review Board approval for the study, we searched our institution’s electronic database of shoulder arthroplasties to identify all RTSAs performed at our 2 large-volume urban institutions between September 27, 2010 and December 31, 2013. For the record search, International Classification of Diseases, Ninth Revision (ICD-9) codes were used (Table 1).

These unique procedure codes are used by the hospital system for billing, but are not always specific to assigned procedures. Therefore, the individual operative reports identified were reviewed to identify the patients who actually underwent RTSA. From this database, all patients who underwent RTSA were selected. Using the subpopulation of patients who underwent RTSA, we searched individual medical records to identify patients who had a dislocation after RTSA. This information was cross-referenced with ICD-9 codes for shoulder dislocation (831.0, 831.01, 831.02, 831.03) to ensure that all patients were identified.
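
For readers who script this kind of chart-review cross-referencing, the following is a minimal sketch in Python. The file name and column names (rtsa_encounters.csv, confirmed_rtsa, icd9_code, patient_id) are hypothetical placeholders, not the institution's actual schema; only the four dislocation codes come from the text.

```python
import pandas as pd

# ICD-9 codes named in the text for shoulder dislocation.
DISLOCATION_CODES = {"831.0", "831.01", "831.02", "831.03"}

# Hypothetical export of the institutional database: one row per encounter,
# with a boolean flag set during operative-report review and an ICD-9 code.
records = pd.read_csv("rtsa_encounters.csv", dtype={"icd9_code": str})

# Restrict to shoulders confirmed as RTSA by operative-report review,
# then flag encounters coded for dislocation.
rtsa = records[records["confirmed_rtsa"]]
dislocated_ids = set(
    rtsa.loc[rtsa["icd9_code"].isin(DISLOCATION_CODES), "patient_id"]
)
print(f"{len(dislocated_ids)} patients with a coded dislocation after RTSA")
```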

The medical records of each patient were used to identify independent variables that could be associated with dislocation rate. Demographic variables included sex, age, and race. Preoperative clinical data included body mass index (BMI), etiology of shoulder disease leading to RTSA, individual comorbidities, and Charlson Comorbidity Index (CCI)16 modified to be used with ICD-9 codes.17 In addition, prior shoulder surgery history and arthroplasty type (primary or revision) were determined. Postoperative considerations were time to dislocation, mechanism of dislocation, and intervention(s) needed for dislocation. Although the institutional database did not include operative variables such as prosthesis type and surgical approach, all 6 surgeons in this study were using a standard deltopectoral approach in beach-chair position with a Grammont style prosthesis for RTSA cases.
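
As context for the comorbidity scoring, below is a small illustrative sketch of a Deyo-style CCI computation. The prefix-to-weight mapping shows only a few example categories with their standard Charlson weights; it is not the full published index (reference 17), and the prefixes are abbreviations of the complete code lists.

```python
# Illustrative, abbreviated Deyo-style mapping: ICD-9 prefix -> Charlson weight.
CHARLSON_WEIGHTS = {
    "410": 1,  # myocardial infarction
    "428": 1,  # congestive heart failure
    "250": 1,  # diabetes
    "342": 2,  # hemiplegia
    "585": 2,  # renal disease
    "196": 6,  # metastatic solid tumor (196-199 in the full index)
}

def charlson_index(icd9_codes: list[str]) -> int:
    """Sum the weight of each comorbidity category present at least once."""
    categories_hit = set()
    for code in icd9_codes:
        for prefix, weight in CHARLSON_WEIGHTS.items():
            if code.startswith(prefix):
                categories_hit.add((prefix, weight))
    return sum(weight for _, weight in categories_hit)

# MI plus diabetes; the two diabetes codes count the category only once -> 2.
print(charlson_index(["410.1", "250.00", "250.02"]))
```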

Descriptive statistics for RTSA patients and the dislocation subpopulation were compiled. Bivariate analysis was used to evaluate which of the previously described variables influenced dislocation rates. Last, multivariate logistic regression analysis was performed to evaluate which factors were independent predictors of dislocation. We included demographic variables (age, sex, ethnicity), clinical variables (BMI, individual comorbidities, CCI), and surgical variables (primary vs revision, diagnosis at time of surgery). All statistical analyses were performed with Excel 2013 (Microsoft) and SPSS Statistics Version 20.0 (SPSS Inc.).
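
The multivariate model can be sketched as follows, assuming a hypothetical analysis table (rtsa_cohort.csv) with one row per shoulder and indicator columns for the variables the text names; statsmodels is assumed. Exponentiating the fitted coefficients yields adjusted odds ratios of the kind reported in the Results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical analysis table: one row per shoulder.
df = pd.read_csv("rtsa_cohort.csv")

predictors = ["age", "male", "bmi", "cci", "revision", "cta_diagnosis"]
X = sm.add_constant(df[predictors].astype(float))
y = df["dislocated"].astype(int)

fit = sm.Logit(y, X).fit(disp=False)

# Exponentiated coefficients are the adjusted odds ratios.
summary = pd.DataFrame({"OR": np.exp(fit.params), "p": fit.pvalues})
print(summary.drop(index="const"))
```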

Results

From the database, we identified 487 patients who underwent 510 RTSAs during the study period. These surgeries were performed by 6 shoulder and elbow fellowship–trained surgeons. Of the 510 RTSAs, 393 (77.1%) were primary cases, and 117 (22.9%) were revision cases.

Of the 510 shoulders that underwent RTSA, 15 (2.9%; 14 patients) dislocated. Of these 15 cases, 5 were primary (1.3% of all primary cases) and 10 were revision (8.5% of all revision cases). Mean time from index surgery to diagnosis of dislocation was 58.2 days (range, 0-319 days). One dislocation occurred immediately after surgery, 2 occurred after falls, 4 followed patient-identified low-energy mechanisms of injury, and 8 had no known inciting event. In 9 of the dislocated shoulders (60%), the subscapularis was not repaired (7 tendons were irreparable; 2 underwent subscapularis peel without repair); in the other 6, it was repaired primarily (Table 2).

In addition, 11 of the dislocated shoulders (73.3%) had previously undergone open or arthroscopic shoulder surgery. All patients who had a dislocation after RTSA returned to the operating room at least once; no dislocation was successfully treated with closed reduction in the clinic. The 15 dislocated shoulders underwent 17 surgeries: 7 isolated polyethylene exchanges, 2 isolated closed reductions, 1 hematoma aspiration with closed reduction, 1 open reduction, 2 humeral component revisions with polyethylene exchange, 1 humeral augmentation with polyethylene exchange, 2 glenosphere exchanges with polyethylene exchange, and 1 polyethylene exchange with concurrent subscapularis repair.

Male patients accounted for 32.2% of the study population but 60.0% of the dislocations (P = .019) (Table 3). In addition, mean BMI was 33.2 for patients with dislocation and 29.5 for patients without dislocation (P = .039) (Table 3). Revision arthroplasty was a risk factor for dislocation in univariate analysis: 66.7% of the dislocations occurred after revision RTSA, whereas only 21.6% of nondislocated shoulders were revision cases (P < .001) (Table 4). Patients who underwent RTSA for CTA had a very low incidence of dislocation (0.35%, 1/285), accounting for 6.7% of the dislocated group versus 57.6% of the nondislocated group (P < .001) (Table 4). The 1 patient who dislocated after primary RTSA for CTA was found to have an indolent infection at the surgery performed after the dislocation.

Multivariate logistic regression analysis revealed revision arthroplasty (OR = 7.515; P = .042) and increased BMI (OR = 1.09; P = .047) to be independent risk factors for dislocation after RTSA. Analysis also found a diagnosis of primary CTA to be independently associated with lower risk of dislocation after RTSA (OR = 0.025; P = .008). Last, the previously described risk factor of male sex was found not to be a significant independent risk factor, though it did trend positively (OR = 3.011; P = .071).

Discussion

With more RTSAs being performed, evaluation of their common complications becomes increasingly important.18 We found a 2.9% rate of dislocation after RTSA, which is consistent with the most recently reported incidence4 and falls within the previously described range of 0% to 8.6%.19-26 Of the clinical risk factors identified in this study, those previously described were prior surgery, subscapularis insufficiency, higher BMI, and male sex.4 However, our finding of a lower risk of dislocation after RTSA for primary rotator cuff pathology was not previously described. Although Chalmers and colleagues4 did not report this lower risk, 3 (27.3%) of their 11 patients with dislocation had primary CTA, compared with 1 (6.7%) of 15 cases in the present study. Our literature review did not identify any studies that independently reported the dislocation rate in patients who underwent RTSA for rotator cuff failure.

The risk factors of subscapularis irreparability and revision surgery suggest the importance of the soft-tissue envelope and bony anatomy in dislocation prevention. Previous analyses have suggested implant malpositioning,27,28 poor subscapularis quality,29 and inadequate muscle tensioning5,30-32 as risk factors for instability after RTSA. Patients with an irreparable subscapularis tendon have often had multiple surgeries that compromised the muscle/soft-tissue envelope or bony anatomy of the shoulder. A biomechanical study by Gutiérrez and colleagues31 found the compressive forces of the soft tissue at the glenohumeral joint to be the most important contributor to stability of the RTSA prosthesis. In clinical studies, the role of the subscapularis in preventing instability after RTSA remains unclear. Edwards and colleagues29 prospectively compared dislocation rates in patients with reparable and irreparable subscapularis tendons during RTSA and found a higher rate of dislocation in the irreparable group. Of note, patients in the irreparable subscapularis group also had more complex diagnoses, including proximal humeral nonunion, fixed glenohumeral dislocation, and failed prior arthroplasty. Clark and colleagues33 retrospectively analyzed subscapularis repair in 2 RTSA groups and found no appreciable effect on complication rate, dislocation events, range-of-motion gains, or pain relief.

Our finding that higher BMI is an independent risk factor was previously described.4 The association is unclear but could be related to implant positioning, difficulty in intraoperative assessment of muscle tensioning, or a body habitus that generates a lever arm for impingement and dislocation when the arm is in adduction. Last, our finding that male sex is a risk factor for dislocation approached significance, and this relationship was previously reported.4 It could be attributable to a higher rate of activity or of indolent infection in male patients.34,35

Besides studying risk factors for dislocation after RTSA, we investigated treatment. None of our patients were treated successfully and definitively with closed reduction in the clinic. This finding diverges from the findings of Teusink and colleagues2 and Chalmers and colleagues,4 who reported 62% and 44% rates of success with closed reduction, respectively. Our cohort of 14 patients with 15 dislocations required a total of 17 trips to the operating room after dislocation. This substantially higher rate of return to the operating room suggests that dislocation after RTSA may be a more costly and morbid problem than previously described.

This study had several limitations. Despite its large consecutive series of patients, the study was retrospective, and several variables that would be documented and controlled in a prospective study could not be measured here. Specifically, neither preoperative physical examination findings nor patient-specific assessments of pain or function were consistently obtained. Similarly, postoperative patient-reported outcome instruments were not obtained consistently, so results of patients with dislocation could not be compared with those of a control group. In addition, preoperative and postoperative radiographs were not consistently present in our electronic medical records, so the influence of preoperative bony anatomy, intraoperative limb lengthening, and any implant malpositioning could not be determined. Furthermore, operative details, such as reparability of the subscapularis, were not fully available for the control group and could not be included in statistical analysis. The fact that male sex, a known dislocation risk factor,4 was identified here but was not significant in multivariate regression analysis suggests that this study may have been underpowered to detect a significant difference in dislocation rate between the sexes. Last, though our results suggested associations between the aforementioned variables and dislocation after RTSA, a truly causative relationship could not be confirmed with this study design or analysis. Our findings are therefore hypothesis-generating and may indicate a benefit to greater deltoid tensioning, use of retentive liners, or more conservative rehabilitation protocols for high-risk patients.

Conclusion

Dislocation after RTSA is an uncommon complication that often requires a return to the operating room. This study identified a modifiable risk factor (higher BMI) and 3 nonmodifiable risk factors (male sex, subscapularis insufficiency, revision surgery) for dislocation after RTSA. In contrast, patients who undergo RTSA for primary rotator cuff pathology are unlikely to dislocate after surgery. This low risk of dislocation after RTSA for primary cuff pathology was not previously described. Patients in the higher risk category may benefit from preoperative lifestyle modification, intraoperative techniques for increasing stability, and more conservative therapy after surgery. In addition, unlike previous investigations, this study did not find closed reduction in the clinic alone to be successful in definitively treating this patient population.


Am J Orthop. 2016;45(7):E444-E450. Copyright Frontline Medical Communications Inc. 2016. All rights reserved.

References

1. Aldinger PR, Raiss P, Rickert M, Loew M. Complications in shoulder arthroplasty: an analysis of 485 cases. Int Orthop. 2010;34(4):517-524.

2. Teusink MJ, Pappou IP, Schwartz DG, Cottrell BJ, Frankle MA. Results of closed management of acute dislocation after reverse shoulder arthroplasty. J Shoulder Elbow Surg. 2015;24(4):621-627.

3. Fink Barnes LA, Grantham WJ, Meadows MC, Bigliani LU, Levine WN, Ahmad CS. Sports activity after reverse total shoulder arthroplasty with minimum 2-year follow-up. Am J Orthop. 2015;44(2):68-72.

4. Chalmers PN, Rahman Z, Romeo AA, Nicholson GP. Early dislocation after reverse total shoulder arthroplasty. J Shoulder Elbow Surg. 2014;23(5):737-744.

5. Gallo RA, Gamradt SC, Mattern CJ, et al; Sports Medicine and Shoulder Service at the Hospital for Special Surgery, New York, NY. Instability after reverse total shoulder replacement. J Shoulder Elbow Surg. 2011;20(4):584-590.

6. Walch G, Bacle G, Lädermann A, Nové-Josserand L, Smithers CJ. Do the indications, results, and complications of reverse shoulder arthroplasty change with surgeon’s experience? J Shoulder Elbow Surg. 2012;21(11):1470-1477.

7. Smith CD, Guyver P, Bunker TD. Indications for reverse shoulder replacement: a systematic review. J Bone Joint Surg Br. 2012;94(5):577-583.

8. Young AA, Smith MM, Bacle G, Moraga C, Walch G. Early results of reverse shoulder arthroplasty in patients with rheumatoid arthritis. J Bone Joint Surg Am. 2011;93(20):1915-1923.

9. Hedtmann A, Werner A. Shoulder arthroplasty in rheumatoid arthritis [in German]. Orthopade. 2007;36(11):1050-1061.

10. Rittmeister M, Kerschbaumer F. Grammont reverse total shoulder arthroplasty in patients with rheumatoid arthritis and nonreconstructible rotator cuff lesions. J Shoulder Elbow Surg. 2001;10(1):17-22.

11. Acevedo DC, Vanbeek C, Lazarus MD, Williams GR, Abboud JA. Reverse shoulder arthroplasty for proximal humeral fractures: update on indications, technique, and results. J Shoulder Elbow Surg. 2014;23(2):279-289.

12. Bufquin T, Hersan A, Hubert L, Massin P. Reverse shoulder arthroplasty for the treatment of three- and four-part fractures of the proximal humerus in the elderly: a prospective review of 43 cases with a short-term follow-up. J Bone Joint Surg Br. 2007;89(4):516-520.

13. Cuff DJ, Pupello DR. Comparison of hemiarthroplasty and reverse shoulder arthroplasty for the treatment of proximal humeral fractures in elderly patients. J Bone Joint Surg Am. 2013;95(22):2050-2055.

14. Walker M, Willis MP, Brooks JP, Pupello D, Mulieri PJ, Frankle MA. The use of the reverse shoulder arthroplasty for treatment of failed total shoulder arthroplasty. J Shoulder Elbow Surg. 2012;21(4):514-522.

15. Valenti P, Kilinc AS, Sauzières P, Katz D. Results of 30 reverse shoulder prostheses for revision of failed hemi- or total shoulder arthroplasty. Eur J Orthop Surg Traumatol. 2014;24(8):1375-1382.

16. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40(5):373-383.

17. Deyo RA, Cherkin DC, Ciol MA. Adapting a clinical comorbidity index for use with ICD-9-CM administrative databases. J Clin Epidemiol. 1992;45(6):613-619.

18. Kim SH, Wise BL, Zhang Y, Szabo RM. Increasing incidence of shoulder arthroplasty in the United States. J Bone Joint Surg Am. 2011;93(24):2249-2254.

19. Boileau P, Watkinson D, Hatzidakis AM, Hovorka I. Neer Award 2005: the Grammont reverse shoulder prosthesis: results in cuff tear arthritis, fracture sequelae, and revision arthroplasty. J Shoulder Elbow Surg. 2006;15(5):527-540.

20. Cuff D, Pupello D, Virani N, Levy J, Frankle M. Reverse shoulder arthroplasty for the treatment of rotator cuff deficiency. J Bone Joint Surg Am. 2008;90(6):1244-1251.

21. Frankle M, Siegal S, Pupello D, Saleem A, Mighell M, Vasey M. The reverse shoulder prosthesis for glenohumeral arthritis associated with severe rotator cuff deficiency. A minimum two-year follow-up study of sixty patients. J Bone Joint Surg Am. 2005;87(8):1697-1705.

22. Guery J, Favard L, Sirveaux F, Oudet D, Mole D, Walch G. Reverse total shoulder arthroplasty. Survivorship analysis of eighty replacements followed for five to ten years. J Bone Joint Surg Am. 2006;88(8):1742-1747.

23. Mulieri P, Dunning P, Klein S, Pupello D, Frankle M. Reverse shoulder arthroplasty for the treatment of irreparable rotator cuff tear without glenohumeral arthritis. J Bone Joint Surg Am. 2010;92(15):2544-2556.

24. Sirveaux F, Favard L, Oudet D, Huquet D, Walch G, Molé D. Grammont inverted total shoulder arthroplasty in the treatment of glenohumeral osteoarthritis with massive rupture of the cuff. Results of a multicentre study of 80 shoulders. J Bone Joint Surg Br. 2004;86(3):388-395.

25. Wall B, Nové-Josserand L, O’Connor DP, Edwards TB, Walch G. Reverse total shoulder arthroplasty: a review of results according to etiology. J Bone Joint Surg Am. 2007;89(7):1476-1485.

26. Werner CM, Steinmann PA, Gilbart M, Gerber C. Treatment of painful pseudoparesis due to irreparable rotator cuff dysfunction with the Delta III reverse-ball-and-socket total shoulder prosthesis. J Bone Joint Surg Am. 2005;87(7):1476-1486.

27. Cazeneuve JF, Cristofari DJ. The reverse shoulder prosthesis in the treatment of fractures of the proximal humerus in the elderly. J Bone Joint Surg Br. 2010;92(4):535-539.

28. Stephenson DR, Oh JH, McGarry MH, Rick Hatch GF 3rd, Lee TQ. Effect of humeral component version on impingement in reverse total shoulder arthroplasty. J Shoulder Elbow Surg. 2011;20(4):652-658.

29. Edwards TB, Williams MD, Labriola JE, Elkousy HA, Gartsman GM, O’Connor DP. Subscapularis insufficiency and the risk of shoulder dislocation after reverse shoulder arthroplasty. J Shoulder Elbow Surg. 2009;18(6):892-896.

30. Affonso J, Nicholson GP, Frankle MA, et al. Complications of the reverse prosthesis: prevention and treatment. Instr Course Lect. 2012;61:157-168.

31. Gutiérrez S, Keller TS, Levy JC, Lee WE 3rd, Luo ZP. Hierarchy of stability factors in reverse shoulder arthroplasty. Clin Orthop Relat Res. 2008;466(3):670-676.

32. Boileau P, Watkinson DJ, Hatzidakis AM, Balg F. Grammont reverse prosthesis: design, rationale, and biomechanics. J Shoulder Elbow Surg. 2005;14(1 suppl S):147S-161S.

33. Clark JC, Ritchie J, Song FS, et al. Complication rates, dislocation, pain, and postoperative range of motion after reverse shoulder arthroplasty in patients with and without repair of the subscapularis. J Shoulder Elbow Surg. 2012;21(1):36-41.

34. Richards J, Inacio MC, Beckett M, et al. Patient and procedure-specific risk factors for deep infection after primary shoulder arthroplasty. Clin Orthop Relat Res. 2014;472(9):2809-2815.

35. Singh JA, Sperling JW, Schleck C, Harmsen WS, Cofield RH. Periprosthetic infections after total shoulder arthroplasty: a 33-year perspective. J Shoulder Elbow Surg. 2012;21(11):1534-1541.


Arthroscopic Transosseous and Transosseous-Equivalent Rotator Cuff Repair: An Analysis of Cost, Operative Time, and Clinical Outcomes

Article Type
Changed
Thu, 09/19/2019 - 13:24
Display Headline
Arthroscopic Transosseous and Transosseous-Equivalent Rotator Cuff Repair: An Analysis of Cost, Operative Time, and Clinical Outcomes

The rate of medical visits for rotator cuff pathology and the US incidence of arthroscopic rotator cuff repair (RCR) have increased over the past 10 years.1 The increased use of RCR has been justified by improved patient outcomes.2,3 Advances in surgical techniques and instrumentation have contributed to better outcomes for patients with rotator cuff pathology.3-5 Several studies have validated RCR with functional outcome measures, cost–benefit analysis, and health-related quality-of-life measurements.6-9

Healthcare reimbursement models are being changed to include capitated care, pay for performance, and penalties.10 Given the changing healthcare climate and the increasing incidence of RCR, it is becoming increasingly important for orthopedic surgeons to critically evaluate and modify their practice and procedures to decrease costs without compromising outcomes.11 RCR outcome studies have focused on comparing open/mini-open with arthroscopic techniques, and single-row with double-row techniques, among others.4,12-18 Furthermore, several studies on the cost-effectiveness of these surgical techniques have been conducted.19-21

Arthroscopic anchorless (transosseous [TO]) RCR, which is increasingly popular,22 combines the minimal invasiveness of arthroscopic procedures with the biomechanical strength of open TO repair. In addition, this technique avoids the potential complications and costs associated with suture anchors, such as anchor pullout and greater tuberosity osteolysis.22,23 Several studies have documented the effectiveness of this technique.24-26 Biomechanical and clinical outcome data supporting arthroscopic TO-RCR have been published, but no study has analyzed the cost savings associated with this technique.

In this study, we compared implant costs associated with arthroscopic TO-RCR and arthroscopic TO-equivalent (TOE) RCR. We also evaluated these techniques’ operative time and outcomes. Our hypothesis was that arthroscopic TO-RCR can be performed at lower cost and without increasing operative time or compromising outcomes.

Materials and Methods

Our Institutional Review Board approved this study. Between February 2013 and January 2014, participating surgeons performed 43 arthroscopic TO-RCRs that met the study’s inclusion criteria. Twenty-one of the 43 patients enrolled and became the study group. The control group of 21 patients, who underwent arthroscopic TOE-RCR the preceding year (between January 2012 and January 2013), was matched to the study group on tear size and concomitant procedures, including biceps treatment, labral treatment, acromioplasty, and distal clavicle excision (Table 1).

Males and nonpregnant females, age 18 years or older, with a full-thickness rotator cuff tear treated with arthroscopic RCR at one regional healthcare system were eligible for the study. Exclusion criteria were revision repair, irreparable tear, workers' compensation claim, and subscapularis repair.

The primary outcome measure was implant cost (the amount paid by the institution). Cost was determined and reported by an independent third party using Cerner SurgiNet as the operating room documentation system and the McKesson Pathways Materials Management System for item pricing.

All arthroscopic RCRs were performed by 1 of 3 orthopedic surgeons fellowship-trained in either sports medicine or shoulder and elbow surgery. Using the Cofield classification,27 the treating surgeon recorded the size of the rotator cuff tear: small (<1 cm), medium (1-3 cm), large (3-5 cm), massive (>5 cm). The surgeon also recorded the number of suture anchors used, repair technique, biceps treatment, execution of subacromial decompression, execution of distal clavicle excision, and intraoperative complications. TO repair surgical technique is described in the next section. TOE repair was double-row repair with suture anchors. The number of suture anchors varied by tear size: small (3 anchors), medium (2-5 anchors), large (4-6 anchors), massive (4-5 anchors).

Secondary outcome measures were operative time (time from cut to close) and scores on pain VAS (visual analog scale), SANE (Single Assessment Numeric Evaluation), and SST (Simple Shoulder Test). Demographic information was also obtained: age, sex, body mass index, smoking status (Table 1). All patients were asked to fill out questionnaires before surgery and 3, 6, and >12 months after surgery. Outcome surveys were scored by a single research coordinator, who recorded each patient’s outcome scores at the preoperative and postoperative intervals. Follow-up of >12 months was reached by 17 (81%) of the 21 TO patients and 14 (67%) of the 21 TOE patients. For >12 months, the overall rate of follow-up was 74%.

All patients followed the same postoperative rehabilitation protocol: sling immobilization for 6 weeks, with pendulum exercises starting at 2 weeks; passive range of motion starting at 6 weeks; and active range of motion starting at 8 weeks. At 3 months, patients were allowed progressive resistance exercises with a 10-pound limit, progressing to a 20-pound limit at 4.5 months. At 6 months, they were cleared for discharge.

Surgical Technique: Arthroscopic Transosseous Repair

Surgery was performed with the patient in either the beach-chair position or the lateral decubitus position, based on surgeon preference. Our technique is similar to what has been described in the past.22,28 The glenohumeral joint is accessed through a standard posterior portal, followed by an anterior accessory portal through the rotator interval. Standard diagnostic arthroscopy is performed and intra-articular pathology addressed. Next, the scope is placed in the subacromial space through the posterior portal. A lateral subacromial portal is established and cannulated, and a bursectomy performed. The scope is then placed in a posterolateral portal for better visualization of the rotator cuff tear. The greater tuberosity is débrided with a curette to prepare the bed for repair. An ArthroTunneler (Tornier) is used to pass sutures through the greater tuberosity. For standard 2-tunnel repair, 3 sutures are placed through each tunnel. All 6 sutures are next passed (using a suture passer) through the rotator cuff. The second and fifth suture ends that are passed through the cuff are brought out through the cannula and tied together. They are then brought into the shoulder by pulling on the opposite ends and tied alongside the greater tuberosity to create a box stitch. The box stitch acts as a medial row fixation and as a rip stitch that strengthens the vertical mattress sutures against pullout. The other 4 sutures are tied in vertical mattress configuration.

Statistical Analysis

After obtaining the TO and TOE implant costs, we compared them using a generalized linear model with negative binomial distribution and an identity link function so returned parameters were in additive dollars. This comparison included evaluation of tear size and concomitant procedures. Operative times for TO and TOE were obtained and evaluated, and then compared using time-to-event analysis and the log-rank test. Outcome scores were obtained from patients at baseline and 3, 6, and >12 months after surgery and were compared using a linear mixed model that identified change in outcome scores over time, and difference in outcome scores between the TO and TOE groups.
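
The three analyses can be sketched as follows, assuming hypothetical input tables (rcr_cases.csv with one row per repair; rcr_scores.csv with one row per patient-visit) and the Python statsmodels and lifelines packages. This mirrors the described approach rather than reproducing the authors' actual scripts.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from lifelines.statistics import logrank_test

cases = pd.read_csv("rcr_cases.csv")    # hypothetical: one row per repair
scores = pd.read_csv("rcr_scores.csv")  # hypothetical: one row per visit

# 1) Implant cost: negative-binomial GLM with an identity link, so the
#    coefficient on the TOE indicator is read directly in additive dollars.
cost_fit = smf.glm(
    "implant_cost ~ toe + C(tear_size) + n_concomitant",
    data=cases,
    family=sm.families.NegativeBinomial(link=sm.families.links.Identity()),
).fit()
print(cost_fit.params["toe"])  # estimated extra dollars for TOE repair

# 2) Operative time: treat cut-to-close time as a fully observed "event"
#    and compare the two groups with a log-rank test.
to_grp = cases[cases["toe"] == 0]
toe_grp = cases[cases["toe"] == 1]
lr = logrank_test(
    to_grp["op_minutes"], toe_grp["op_minutes"],
    event_observed_A=[1] * len(to_grp), event_observed_B=[1] * len(toe_grp),
)
print(lr.p_value)

# 3) Outcomes: linear mixed model with a random intercept per patient,
#    testing change over time and any TO-vs-TOE group difference.
mixed_fit = smf.mixedlm(
    "sane ~ toe * months", data=scores, groups=scores["patient_id"]
).fit()
print(mixed_fit.summary())
```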

Results

Table 1 lists patient demographics, including age, sex, body mass index, smoking status, and concomitant procedures. The TO and TOE groups had identical tear-size distributions. In addition, they had similar numbers of concomitant procedures, though our study was underpowered to confirm equivalence. Treatment techniques differed: more biceps tenodesis cases in the TO group (n = 12) than in the TOE group (n = 2) and more biceps tenotomy cases in the TOE group (n = 8) than in the TO group (n = 1).

TO implant cost was significantly lower than TOE implant cost for all tear sizes and independent of concomitant procedures (Figure 1).

Mean (SD) implant cost was $563.10 ($29.65) for the TO group and $1489.00 ($331.05) for the TOE group. With all other factors controlled, implant cost was a mean (SD) $946.91 ($100.70) higher for the TOE group (P < .0001).

Operative time was not significantly different between the TO and TOE groups. Mean (SD) operative time was 82.38 (24.09) minutes for the TO group and 81.71 (17.27) minutes for the TOE group. With all other factors controlled, mean operative time was 5.96 minutes shorter for the TOE group, but the difference was not significant (P = .549).

There was no significant difference in preoperative pain VAS (P = .93), SANE (P = .35), or SST (P = .36) scores between the TO and TOE groups. At all postoperative follow-ups (3, 6, and >12 months), there was significant (P < .0001) improvement in outcome scores (VAS, SANE, SST) for both groups (Table 2). There was no significant difference in pain VAS (P = .688), SANE (P = .882), or SST (P = .272) scores between the groups across all time points (Figure 2).

Discussion

RCR is one of the most common orthopedic surgical procedures, and its use has increased over the past decade.9,21 This increase coincides with the emergence of new repair techniques and implants, and these advancements come at a cost. Given the increasingly cost-conscious healthcare environment and its changing reimbursement models, surgeons must now evaluate the economics of their surgical procedures in an attempt to decrease costs without compromising outcomes. We hypothesized that arthroscopic TO-RCR can be performed at lower cost than arthroscopic TOE-RCR without increasing operative time or compromising short-term outcomes.

Studies on the cost-effectiveness of different RCR techniques have been conducted.19-21 Adla and colleagues19 found that open RCR was more cost-effective than arthroscopic RCR, with most of the difference attributable to disposables and suture anchors. Genuario and colleagues21 found that double-row RCR was not as cost-effective as single-row RCR in treating tears of any size. They attributed the difference to 2 more anchors and about 15 more minutes in the operating room.

The increased interest in healthcare costs and the understanding that a substantial part of the cost of arthroscopic RCR is attributable to implants (suture anchors, specifically) led to recent efforts to eliminate the need for anchors. Newly available instrumentation was designed to assist in arthroscopic anchorless repair constructs using the concepts of traditional TO repair.22 Although still considered to be the RCR gold standard, TO fixation has been used less often in recent years, owing to the shift from open to arthroscopic surgery.24 Arthroscopic TO-RCR allows for all the benefits of arthroscopic surgery, plus the biological and mechanical benefits of traditional open or mini-open TO repair. In addition, this technique eliminates the cost of anchors. Kummer and colleagues25 confirmed with biomechanical testing that arthroscopic TO repair and double-row TOE repair are similar in strength, with a trend of less tendon displacement in the TO group.

Our study results support the hypothesis that arthroscopic TO repair provides significant cost savings over tear size–matched arthroscopic TOE repair. Implant cost was substantially higher for TOE repair than for TO repair: a mean (SD) savings of $946.91 ($100.70) per case (P < .0001) can be realized by performing TO rather than TOE repair. In the United States, where about 250,000 RCRs are performed each year,6 universal use of TO repair would yield annual implant savings approaching $250 million (250,000 repairs × roughly $947 per repair ≈ $237 million).

Operative time was analyzed as well. Running an operating room in the United States costs an estimated $62 per minute (range, $22-$133 per minute).29 Much of this cost is indirect and unrelated to the surgery itself (eg, capital investment, personnel, insurance) and accrues even when the operating room is not in use. Therefore, for the hospital's bottom line, operative time savings are less important than direct cost savings (supplies, implants). Operative time matters more for the surgeon's bottom line, however, because longer procedures reduce the number of surgeries that can be performed and billed. We found no significant difference in operative time between TO and TOE repairs: after adjustment, operative time was 5.96 minutes shorter for TOE repairs, but the difference was not significant (P = .549).

Our study results showed no significant difference in clinical outcomes between TO and TOE repair patients. Both groups’ outcome scores improved. At all follow-ups, both groups’ VAS, SANE, and SST scores were significantly improved. Overall, this is the first study to validate the proposed cost benefit of arthroscopic TO repair and confirm no compromise in patient outcomes.

This study had limitations. First, it enrolled relatively few patients, particularly those with small tears. In addition, although patients were matched on tear size and concomitant procedures, the groups differed in their biceps pathology treatments. Of the 13 TO patients who had biceps treatment, 12 underwent tenodesis (1 had tenotomy); in contrast, of the 10 TOE patients who had biceps treatment, only 2 underwent tenodesis (8 had tenotomy). The difference is explained by the sequential timing of the two cohorts and the increasing popularity of tenodesis over tenotomy: the TOE group underwent surgery before the TO group did, at a time when the involved surgeons were routinely performing tenotomy more often than tenodesis. We did not include the costs of implants related to biceps treatment in our analysis, as our focus was the implant cost of RCR. As for operative time, biceps tenodesis would be expected to lengthen surgery and potentially affect the comparison of operative times between the TO and TOE groups; however, even though 12 of the 13 treated TO patients underwent biceps tenodesis, there was no significant difference in overall operative time. Last, regarding the effect of biceps treatment on clinical outcomes, there are no data showing improved outcomes with tenodesis over tenotomy in the setting of RCR.

A final limitation is lack of data from longer term (>12 months) follow-up for all patients. Our analysis included cost and operative time data for all 42 enrolled patients, but our clinical outcome data represent only 74% of the patients enrolled. Eleven of the 42 patients were lost to follow-up at >12 months, and outcome scores could not be obtained, despite multiple attempts at contact (phone, mail, email). The study design and primary outcome variable focused on cost analysis rather than clinical outcomes. Nevertheless, our data support our hypothesis that there is no difference in clinical outcomes between TO and TOE repairs.

Conclusion

Arthroscopic TO-RCR provides significant cost savings over arthroscopic TOE-RCR without increasing operative time or compromising outcomes. Arthroscopic TO-RCR may have an important role in the evolving healthcare environment and its changing reimbursement models.

Am J Orthop. 2016;45(7):E415-E420. Copyright Frontline Medical Communications Inc. 2016. All rights reserved.

References

1. Colvin AC, Egorova N, Harrison AK, Moskowitz A, Flatow EL. National trends in rotator cuff repair. J Bone Joint Surg Am. 2012;94(3):227-233.

2. Pedowitz RA, Yamaguchi K, Ahmad CS, et al. American Academy of Orthopaedic Surgeons Clinical Practice Guideline on: optimizing the management of rotator cuff problems. J Bone Joint Surg Am. 2012;94(2):163-167.

3. Wolf BR, Dunn WR, Wright RW. Indications for repair of full-thickness rotator cuff tears. Am J Sports Med. 2007;35(6):1007-1016.

4. Yamaguchi K, Ball CM, Galatz LM. Arthroscopic rotator cuff repair: transition from mini-open to all-arthroscopic. Clin Orthop Relat Res. 2001;(390):83-94.

5. Yamaguchi K, Levine WN, Marra G, Galatz LM, Klepps S, Flatow EL. Transitioning to arthroscopic rotator cuff repair: the pros and cons. Instr Course Lect. 2003;52:81-92.

6. Mather RC 3rd, Koenig L, Acevedo D, et al. The societal and economic value of rotator cuff repair. J Bone Joint Surg Am. 2013;95(22):1993-2000.

7. Milne JC, Gartsman GM. Cost of shoulder surgery. J Shoulder Elbow Surg. 1994;3(5):295-298.

8. Savoie FH 3rd, Field LD, Jenkins RN. Costs analysis of successful rotator cuff repair surgery: an outcome study. Comparison of gatekeeper system in surgical patients. Arthroscopy. 1995;11(6):672-676.

9. Vitale MA, Vitale MG, Zivin JG, Braman JP, Bigliani LU, Flatow EL. Rotator cuff repair: an analysis of utility scores and cost-effectiveness. J Shoulder Elbow Surg. 2007;16(2):181-187.

10. Ihejirika RC, Sathiyakumar V, Thakore RV, et al. Healthcare reimbursement models and orthopaedic trauma: will there be change in patient management? A survey of orthopaedic surgeons. J Orthop Trauma. 2015;29(2):e79-e84.

11. Black EM, Higgins LD, Warner JJ. Value-based shoulder surgery: practicing outcomes-driven, cost-conscious care. J Shoulder Elbow Surg. 2013;22(7):1000-1009.

12. Barber FA, Hapa O, Bynum JA. Comparative testing by cyclic loading of rotator cuff suture anchors containing multiple high-strength sutures. Arthroscopy. 2010;26(9 suppl):S134-S141.

13. Barros RM, Matos MA, Ferreira Neto AA, et al. Biomechanical evaluation on tendon reinsertion by comparing trans-osseous suture and suture anchor at different stages of healing: experimental study on rabbits. J Shoulder Elbow Surg. 2010;19(6):878-883.

14. Cole BJ, ElAttrache NS, Anbari A. Arthroscopic rotator cuff repairs: an anatomic and biomechanical rationale for different suture-anchor repair configurations. Arthroscopy. 2007;23(6):662-669.

15. Ghodadra NS, Provencher MT, Verma NN, Wilk KE, Romeo AA. Open, mini-open, and all-arthroscopic rotator cuff repair surgery: indications and implications for rehabilitation. J Orthop Sports Phys Ther. 2009;39(2):81-89.

16. Pietschmann MF, Fröhlich V, Ficklscherer A, et al. Pullout strength of suture anchors in comparison with transosseous sutures for rotator cuff repair. Knee Surg Sports Traumatol Arthrosc. 2008;16(5):504-510.

17. van der Zwaal P, Thomassen BJ, Nieuwenhuijse MJ, Lindenburg R, Swen JW, van Arkel ER. Clinical outcome in all-arthroscopic versus mini-open rotator cuff repair in small to medium-sized tears: a randomized controlled trial in 100 patients with 1-year follow-up. Arthroscopy. 2013;29(2):266-273.

18. Wang VM, Wang FC, McNickle AG, et al. Medial versus lateral supraspinatus tendon properties: implications for double-row rotator cuff repair. Am J Sports Med. 2010;38(12):2456-2463.

19. Adla DN, Rowsell M, Pandey R. Cost-effectiveness of open versus arthroscopic rotator cuff repair. J Shoulder Elbow Surg. 2010;19(2):258-261.

20. Churchill RS, Ghorai JK. Total cost and operating room time comparison of rotator cuff repair techniques at low, intermediate, and high volume centers: mini-open versus all-arthroscopic. J Shoulder Elbow Surg. 2010;19(5):716-721.

21. Genuario JW, Donegan RP, Hamman D, et al. The cost-effectiveness of single-row compared with double-row arthroscopic rotator cuff repair. J Bone Joint Surg Am. 2012;94(15):1369-1377.

22. Garofalo R, Castagna A, Borroni M, Krishnan SG. Arthroscopic transosseous (anchorless) rotator cuff repair. Knee Surg Sports Traumatol Arthrosc. 2012;20(6):1031-1035.

23. Benson EC, MacDermid JC, Drosdowech DS, Athwal GS. The incidence of early metallic suture anchor pullout after arthroscopic rotator cuff repair. Arthroscopy. 2010;26(3):310-315.

24. Baudi P, Rasia Dani E, Campochiaro G, Rebuzzi M, Serafini F, Catani F. The rotator cuff tear repair with a new arthroscopic transosseous system: the Sharc-FT®. Musculoskelet Surg. 2013;97(suppl 1):57-61.

25. Kummer FJ, Hahn M, Day M, Meislin RJ, Jazrawi LM. A laboratory comparison of a new arthroscopic transosseous rotator cuff repair to a double row transosseous equivalent rotator cuff repair using suture anchors. Bull Hosp Joint Dis. 2013;71(2):128-131.

26. Kuroda S, Ishige N, Mikasa M. Advantages of arthroscopic transosseous suture repair of the rotator cuff without the use of anchors. Clin Orthop Relat Res. 2013;471(11):3514-3522.

27. Cofield RH. Subscapular muscle transposition for repair of chronic rotator cuff tears. Surg Gynecol Obstet. 1982;154(5):667-672.

28. Paxton ES, Lazarus MD. Arthroscopic transosseous rotator cuff repair. Orthop Knowledge Online J. 2014;12(2). http://orthoportal.aaos.org/oko/article.aspx?article=OKO_SHO052#article. Accessed October 4, 2016.

29. Macario A. What does one minute of operating room time cost? J Clin Anesth. 2010;22(4):233-236.

Author and Disclosure Information

Authors’ Disclosure Statement: The authors report no actual or potential conflict of interest in relation to this article.


Liposomal Bupivacaine vs Interscalene Nerve Block for Pain Control After Shoulder Arthroplasty: A Retrospective Cohort Analysis

Article Type
Changed
Thu, 09/19/2019 - 13:24
Display Headline
Liposomal Bupivacaine vs Interscalene Nerve Block for Pain Control After Shoulder Arthroplasty: A Retrospective Cohort Analysis

The annual number of total shoulder arthroplasties (TSAs) is rising with the growing elderly population and development of new technologies such as reverse shoulder arthroplasty.1 In 2008, 47,000 shoulder arthroplasties were performed in the US compared with 19,000 in 1998.1 As of 2011, there were 53,000 shoulder arthroplasties performed annually.2 Pain control after shoulder procedures, particularly TSA, is challenging.3

Several modalities exist to manage pain after shoulder arthroplasty. The interscalene brachial plexus nerve block is considered the “gold standard” for shoulder analgesia. A new approach is the periarticular injection method, in which the surgeon administers a local anesthetic intraoperatively. Liposomal bupivacaine (Exparel, Pacira Pharmaceuticals, Inc.) is a nonopioid anesthetic that has been shown to improve pain control, shorten hospital stays, and decrease costs for total knee and hip arthroplasty compared with nerve blocks.4-6 Patients who were treated with liposomal bupivacaine consumed less opioid medication than a placebo group.7

Our purpose was to compare intraoperative local liposomal bupivacaine injection with preoperative single-shot interscalene nerve block (ISNB) in terms of pain control, opioid use, and length of hospital stay (LOS) after shoulder arthroplasty. We hypothesized that patients in the liposomal bupivacaine group would have lower pain scores, less opioid use, and shorter LOS compared with patients in the ISNB group.

Methods

A retrospective cohort analysis was conducted with 58 patients who underwent shoulder arthroplasty by 1 surgeon at our academically affiliated community hospital from January 2012 through January 2015. ISNBs were the standard at the beginning of the study period and were used until Exparel became available on the hospital formulary in 2013. We began using Exparel for all shoulder arthroplasties in November 2013. No other changes were made in the perioperative management of our arthroplasty patients during this period. Patients who underwent TSA, reverse TSA, or hemiarthroplasty of the shoulder were included. Patients who underwent revision TSA were excluded. Twenty-one patients received ISNBs and 37 received liposomal bupivacaine injections. This study was approved by our Institutional Review Board.

Baseline data for each patient were age, sex, body mass index, and the American Society of Anesthesiologists (ASA) Physical Status Classification. The primary outcome measure was the numeric rating scale (NRS) pain score at 4 postoperative time intervals. The NRS pain score has a range of 0 to 10, with 10 representing severe pain. Data were gathered from nursing and physical therapy notes in patient charts. The postoperative time intervals were 0 to 1 hour, 8 to 14 hours, 18 to 24 hours, and 27 to 36 hours. Available NRS scores for these time intervals were averaged. Patients were included if they had pain scores for at least 3 of the postoperative time intervals documented in their charts. Secondary outcome measures were LOS and opioid consumption during hospital admission. Intravenous acetaminophen use was also measured in both groups. All data on opioids were converted to oral morphine equivalents using the method described by Schneider and colleagues.8
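
To illustrate what such a conversion involves, the sketch below totals mixed opioid doses as oral morphine equivalents. The conversion factors are common published equianalgesic ratios included only as assumptions for the example; they are not necessarily the exact factors of Schneider and colleagues.8

```python
# Oral morphine equivalent (OME) conversion sketch. The factors below are
# illustrative equianalgesic ratios, not the study's exact conversion table.
OME_FACTORS = {
    ("morphine", "oral"): 1.0,
    ("morphine", "iv"): 3.0,
    ("oxycodone", "oral"): 1.5,
    ("hydromorphone", "oral"): 4.0,
    ("hydromorphone", "iv"): 20.0,
}

def to_ome(doses):
    """Total a list of (drug, route, milligrams) tuples in mg OME."""
    return sum(mg * OME_FACTORS[(drug, route)] for drug, route, mg in doses)

# Example: one hospital day of mixed analgesia
day = [("oxycodone", "oral", 20), ("hydromorphone", "iv", 2)]
print(to_ome(day))  # 20*1.5 + 2*20.0 = 70.0 mg OME
```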

A board-certified, fellowship-trained anesthesiologist, experienced in regional anesthesia, administered the single-shot ISNB before surgery. The block was administered under ultrasound guidance using a 44-mm, 22-gauge needle with the patient in the supine position. No indwelling catheter was used. The medication consisted of 30 mL of 0.5% ropivacaine (5 mg/mL). The surgeon injected liposomal bupivacaine (266 mg diluted into 40 mL of injectable saline) near the end of the procedure throughout the pericapsular area and multiple layers of the wound, per manufacturer guidelines.9 A 60-mL syringe with a 20-gauge needle was used. All operations were performed by 1 board-certified, fellowship-trained surgeon using a standard deltopectoral approach with the same surgical equipment. The same postoperative pain protocol was used for all patients, including intravenous acetaminophen and patient-controlled analgesia. Additional oral pain medication was provided as needed for all patients. Physical therapy protocols were identical between groups.

Statistical Analysis

Mean patient ages in the 2 treatment groups were compared using the Student t test. Sex distribution and the ASA scores were compared using a χ2 test and a Fisher exact test, respectively. Arthroplasty types were compared using a Fisher exact test. The medians and interquartile ranges of the NRS scores at each time point measured were tabulated by treatment group, and at each time point the difference between groups was tested using nonparametric rank sum tests.
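
These baseline comparisons correspond to standard routines in any statistics library. A minimal sketch in Python with SciPy, run on synthetic data (all values and variable names are illustrative assumptions, not the study dataset), shows the analogous tests.

```python
# Baseline comparisons on synthetic data: t test for age, chi-square for
# sex, Fisher exact for a 2x2 split, rank sum test for NRS scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
age_isnb = rng.normal(68, 8, 21)          # synthetic ages, ISNB group
age_lb = rng.normal(67, 8, 37)            # synthetic ages, liposomal group
t_stat, p_age = stats.ttest_ind(age_isnb, age_lb)

sex_table = np.array([[10, 11],           # ISNB: male, female
                      [18, 19]])          # liposomal: male, female
chi2, p_sex, dof, expected = stats.chi2_contingency(sex_table)

asa_table = [[8, 13],                     # hypothetical ASA split by group
             [15, 22]]
odds_ratio, p_asa = stats.fisher_exact(asa_table)

nrs_isnb = rng.integers(2, 9, 21)         # synthetic NRS scores, one interval
nrs_lb = rng.integers(1, 8, 37)
u_stat, p_nrs = stats.mannwhitneyu(nrs_isnb, nrs_lb)

print(p_age, p_sex, p_asa, p_nrs)
```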

We tested the longitudinal trajectory of NRS scores over time, accounting for repeated measurements in the same patients using linear mixed model analysis. Treatment group, time period as a categorical variable, and the interaction between treatment and time period were included as fixed effects, and patient identification number was included as the random effect. An initial omnibus test was performed for all treatment and treatment-by-time interaction effects. Subsequently, the treatment-by-time interaction was tested for each of the time periods. The association of day of discharge (as a categorical variable) with treatment was tested using the Fisher exact test. All analyses were conducted using Stata, version 13, software (StataCorp LP). P values <.05 were considered significant.
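
As an illustration of this model specification, here is a minimal sketch in Python with statsmodels rather than Stata, run on toy data; the column names and scores are assumptions made for the example.

```python
# Linear mixed model sketch: NRS score modeled with fixed effects for
# treatment group, time period, and their interaction, plus a random
# intercept per patient. Toy data only; not the study dataset.
import pandas as pd
import statsmodels.formula.api as smf

periods = ["0-1h", "8-14h", "18-24h", "27-36h"]
scores = {
    ("LB", 1): [5, 4, 2, 2], ("LB", 2): [6, 5, 3, 2], ("LB", 3): [4, 5, 2, 1],
    ("ISNB", 4): [5, 6, 6, 4], ("ISNB", 5): [6, 5, 5, 4], ("ISNB", 6): [4, 6, 5, 3],
}
rows = [
    {"pid": pid, "group": group, "period": period, "nrs": nrs}
    for (group, pid), vals in scores.items()
    for period, nrs in zip(periods, vals)
]
df = pd.DataFrame(rows)

model = smf.mixedlm("nrs ~ C(group) * C(period)", df, groups=df["pid"])
print(model.fit().summary())
```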

Sample Size Analysis

We calculated the minimum detectable effect size with 80% power at an alpha level of 0.05 for the nonparametric rank sum test. The effect size is expressed as the proportion of all possible between-group patient pairs in which the patient treated with liposomal bupivacaine has a lower pain score than the patient treated with ISNB. For the pain score at 18 to 24 hours, with 33 patients treated with liposomal bupivacaine and 20 treated with ISNB, the minimum detectable effect size is 73%.
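
This pairwise-probability effect size lends itself to a quick Monte Carlo check. The sketch below estimates the power of the rank sum test under an assumed normal shift model, in which a pairwise probability p corresponds to a mean shift of sqrt(2) * Phi^-1(p); because the authors' distributional assumptions are not reported, the estimate approximates rather than reproduces the 73% figure.

```python
# Monte Carlo power estimate for the Mann-Whitney rank sum test, with the
# effect expressed as P(liposomal score < ISNB score). The normal shift
# model is an assumption made for this sketch, not the authors' method.
import numpy as np
from scipy import stats

def rank_sum_power(p_pair=0.73, n_lb=33, n_isnb=20, alpha=0.05,
                   sims=5000, seed=1):
    rng = np.random.default_rng(seed)
    # Under X ~ N(0,1), Y ~ N(shift,1): P(X < Y) = Phi(shift / sqrt(2))
    shift = np.sqrt(2) * stats.norm.ppf(p_pair)
    rejections = 0
    for _ in range(sims):
        lb = rng.normal(0.0, 1.0, n_lb)        # lower pain scores
        isnb = rng.normal(shift, 1.0, n_isnb)  # higher pain scores
        _, p = stats.mannwhitneyu(lb, isnb, alternative="two-sided")
        rejections += p < alpha
    return rejections / sims

print(rank_sum_power())  # approximate power at a 73% pairwise probability
```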

Results

Fifty-eight patient charts (21 in the ISNB group and 37 in the liposomal bupivacaine group) were reviewed for the study. Patient sex distribution, mean age, mean body mass index, and mean baseline ASA scores were not statistically different (Table 1).

In the ISNB group, 5 patients had hemiarthroplasty, 12 had TSA, and 4 had reverse TSA. In the liposomal bupivacaine group, 1 patient had hemiarthroplasty, 23 had TSA, and 13 had reverse TSA. Frequency of procedure types was significantly different between groups (P = .039), with the liposomal bupivacaine group undergoing fewer hemiarthroplasties.

The primary outcome measure, NRS pain score, showed no significant differences between groups at 0 to 1 hour after surgery (P = .99) or 8 to 14 hours after surgery (P = .208).

At 18 to 24 hours after surgery, the liposomal bupivacaine group had a lower mean NRS score than the ISNB group (P = .001), a difference that remained significant when the repeated measurements were taken into account (Figure 1). Mean NRS score was also lower in the liposomal bupivacaine group at 27 to 36 hours after surgery (P = .029), and this difference likewise remained significant in the repeated-measures analysis (Table 2).

There was no difference in the amount of intravenous acetaminophen given during the hospital stay between groups. There was no significant difference in opioid consumption on postoperative day 1 in the hospital (P = .59) (Figure 2). However, there were significant differences between groups on postoperative days 2 and 3.
On postoperative day 2, the ISNB group required significantly more opioids (mean, 112 mg morphine equivalents) than the liposomal bupivacaine group (mean, 37 mg morphine equivalents) (P = .001). The ISNB group also required significantly more opioids (mean, 25 mg morphine equivalents) on postoperative day 3 than the liposomal bupivacaine group (mean, 5 mg) (P = .002).

Sixteen of 37 patients in the liposomal bupivacaine group and 2 of 21 in the ISNB group were discharged on the day after surgery (P = .010) (Table 3).
The mean LOS was 46 ± 20 hours for the liposomal bupivacaine group and 57 ± 14 hours for the ISNB group (P = .012).

There were no major cardiac or respiratory events in either group. No long-term paresthesias or neuropathies were noted. There were no readmissions for either group.

Discussion

Postoperative pain control after shoulder arthroplasty can be challenging, and several modalities have been tried in various combinations to minimize pain and decrease adverse effects of opioid medications. The most common method for pain relief after shoulder arthroplasty is the ISNB. Several studies of ISNBs have shown improved pain control after shoulder arthroplasty with associated decreased opioid consumption and related side effects.10 Patient rehabilitation and satisfaction have improved with the increasing use of peripheral nerve blocks.11

Despite the well-established benefits of ISNBs, several limitations exist. Although the superior portion of the shoulder is well covered by an ISNB, the inferior portion of the brachial plexus can remain uncovered or only partially covered.12 Complications of ISNBs include hemidiaphragmatic paresis, rebound pain 24 hours after surgery,13 chronic neurologic complications,14 and substantial respiratory and cardiovascular events.15 Nerve blocks also require additional time and resources in the perioperative period, including an anesthesiologist with specialized training, assistants, and ultrasonography or nerve stimulation equipment, and they may be contraindicated in patients taking blood thinners.16

Periarticular injections of local anesthetics have also shown promise in reducing pain after arthroplasty.4 Benefits include an enhanced safety profile, because local injection avoids concurrent blockade of the phrenic and recurrent laryngeal nerves and has not been associated with the risk of peripheral neuropathies. Further, local injection is a simple technique that can be performed during surgery without additional personnel or expertise. Limitations of this approach are the relatively short duration of effectiveness of the local anesthetic and uncertainty regarding the best agent and the ideal volume of injection.6 Liposomal bupivacaine is a newer agent, approved by the US Food and Drug Administration in 2011,17 with sustained release over 72 to 96 hours.18 The most common adverse effects of liposomal bupivacaine are nausea, vomiting, constipation, pyrexia, dizziness, and headache.19 Chondrotoxicity and granulomatous inflammation are more serious, yet rare, complications of liposomal bupivacaine.20

We found that liposomal bupivacaine injections were associated with lower pain scores compared with ISNB at 18 to 24 hours after surgery. This correlated with less opioid consumption in the liposomal bupivacaine group than in the ISNB group on the second postoperative day. These differences in pain values are consistent with the known pharmacokinetics of liposomal bupivacaine.18 Peak plasma levels normally occur approximately 24 hours after injection, leaving the early postoperative period relatively uncovered by anesthetic agent. This relatively poor early pain control has also been noted in patients undergoing knee arthroplasty.5 On the basis of these findings, we now add a standard bupivacaine injection alongside the liposomal bupivacaine injection to cover early postoperative pain. Opioid consumption was significantly lower in the liposomal bupivacaine group than in the ISNB group on postoperative days 2 and 3. We did not measure adverse events related to opioid consumption, so we cannot comment on whether the decreased opioid consumption was associated with a lower rate of adverse events; however, other studies21,22 have established this relationship.

We found the liposomal bupivacaine group to have earlier discharges to home. Sixteen of 37 patients in the liposomal bupivacaine group, compared with 2 of 21 patients in the ISNB group, were discharged on the day after surgery. The mean reduction in LOS of 11 hours for the liposomal bupivacaine group was statistically significant (P = .012). This reduction in LOS has important implications for hospitals and value analysis committees considering whether to keep a new, more expensive local anesthetic on formulary. Savings from reduced LOS and improvements in patient satisfaction may justify the expense of Exparel (approximately $300 per 266-mg vial).

From a societal cost perspective, liposomal bupivacaine is more economical than ISNB, which adds approximately $1500 to the cost of anesthesia per patient.23 Eliminating the costs associated with ISNB administration in shoulder arthroplasty could yield substantial savings for the healthcare system. More research examining the time savings and exact costs of each procedure is needed to determine the true cost effectiveness of each approach.

Limitations of our study include the retrospective design, relatively small numbers of patients in each group, missing data for some patients at various time points, variation in the types of procedures in each group, and lack of long-term outcome measures. It is important to note that we did not confirm the success of the nerve block after administration. However, this study reflects the effectiveness of each modality under actual clinical conditions (as opposed to a controlled experimental setting). The actual effectiveness of a nerve block varies, even when performed by an experienced anesthesiologist with ultrasound guidance. Furthermore, immediate postoperative pain scores in the nerve block group are consistent with prior research reporting pain values ranging from 4 to 5 and a mean duration of effect ranging from 9 to 14 hours.23,24 Additionally, the patients, surgeon, and nursing team were not blinded to the treatment group. Although we did note a significant difference in the types of procedures between groups, this finding reflects the greater number of hemiarthroplasties performed in the ISNB group (N = 5) compared with the liposomal bupivacaine group (N = 1); because hemiarthroplasty is the less invasive procedure, any resulting bias works against the liposomal bupivacaine group. Finally, our primary outcome variable was pain, which is a subjective, self-reported measure. However, our opioid consumption and LOS data corroborate the improved pain scores in the liposomal bupivacaine group.

Restricting the study to a single surgeon may limit external validity. Another limitation is the lack of data on adverse events related to opioid medication use. There was no additional comparison group to determine whether less expensive local anesthetics injected locally would perform similarly to liposomal bupivacaine; in total knee arthroplasty, periarticular injections of liposomal bupivacaine were not as effective as less expensive periarticular injections.25 It remains unclear which agents (and in what doses or combinations) should be used for periarticular injections. Finally, we acknowledge that our retrospective design cannot account for all potential factors affecting discharge time.

This is the first comparative study of liposomal bupivacaine and ISNB in TSA. The study design allowed us to control for variables such as surgical technique, postoperative protocols (including use and type of sling), and use of other pain modalities, such as patient-controlled analgesia and intravenous acetaminophen, that are likely to affect postoperative pain and LOS. This study provides preliminary data supporting relative equipoise between liposomal bupivacaine and ISNB, which is needed for the ethical conduct of a randomized controlled trial. Such a trial would allow a more robust comparison, and this retrospective study provides appropriate pilot data on which to base such a design, as well as the clinical information needed to counsel patients during enrollment.

Our results suggest that liposomal bupivacaine may provide similar or superior pain relief compared with ISNB after shoulder arthroplasty. Additionally, the use of liposomal bupivacaine was associated with decreased opioid consumption and earlier discharge to home compared with ISNB. These findings have important implications for pain control after TSA, because pain represents a major concern for patients and providers after surgery. In addition to these clinical benefits, use of liposomal bupivacaine may save time and eliminate the costs associated with administering nerve blocks. Local injection may also be an option for patients in whom ISNB is contraindicated or carries elevated risk, such as those with obesity, pulmonary disease, or peripheral neuropathy. Although we cannot definitively conclude that liposomal bupivacaine is superior to the current gold standard, ISNB, for pain control after shoulder arthroplasty, our results suggest relative clinical equipoise between these modalities. Larger analytical studies, including randomized trials, should be performed to explore the potential benefits of liposomal bupivacaine injections for pain control after shoulder arthroplasty.

Am J Orthop. 2016;45(7):424-430. Copyright Frontline Medical Communications Inc. 2016. All rights reserved.

References

1. Kim SH, Wise BL, Zhang Y, Szabo RM. Increasing incidence of shoulder arthroplasty in the United States. J Bone Joint Surg Am. 2011;93(24):2249-2254.

2. American Academy of Orthopaedic Surgeons. Shoulder joint replacement. http://orthoinfo.aaos.org/topic.cfm?topic=A00094. Accessed June 3, 2015.

3. Desai VN, Cheung EV. Postoperative pain associated with orthopedic shoulder and elbow surgery: a prospective study. J Shoulder Elbow Surg. 2012;21(4):441-450.

4. Springer BD. Transition from nerve blocks to periarticular injections and emerging techniques in total joint arthroplasty. Am J Orthop. 2014;43(10 Suppl):S6-S9.

5. Surdam JW, Licini DJ, Baynes NT, Arce BR. The use of exparel (liposomal bupivacaine) to manage postoperative pain in unilateral total knee arthroplasty patients. J Arthroplasty. 2015;30(2):325-329.

6. Tong YC, Kaye AD, Urman RD. Liposomal bupivacaine and clinical outcomes. Best Pract Res Clin Anaesthesiol. 2014;28(1):15-27.

7. Chahar P, Cummings KC 3rd. Liposomal bupivacaine: a review of a new bupivacaine formulation. J Pain Res. 2012;5:257-264.

8. Schneider C, Yale SH, Larson M. Principles of pain management. Clin Med Res. 2003;1(4):337-340.

9. Pacira Pharmaceuticals, Inc. Highlights of prescribing information. http://www.exparel.com/pdf/EXPAREL_Prescribing_Information.pdf. Accessed May 7, 2015.

10. Gohl MR, Moeller RK, Olson RL, Vacchiano CA. The addition of interscalene block to general anesthesia for patients undergoing open shoulder procedures. AANA J. 2001;69(2):105-109.

11. Ironfield CM, Barrington MJ, Kluger R, Sites B. Are patients satisfied after peripheral nerve blockade? Results from an International Registry of Regional Anesthesia. Reg Anesth Pain Med. 2014;39(1):48-55.

12. Srikumaran U, Stein BE, Tan EW, Freehill MT, Wilckens JH. Upper-extremity peripheral nerve blocks in the perioperative pain management of orthopaedic patients: AAOS exhibit selection. J Bone Joint Surg Am. 2013;95(24):e197(1-13).

13. DeMarco JR, Componovo R, Barfield WR, Liles L, Nietert P. Efficacy of augmenting a subacromial continuous-infusion pump with a preoperative interscalene block in outpatient arthroscopic shoulder surgery: a prospective, randomized, blinded, and placebo-controlled study. Arthroscopy. 2011;27(5):603-610.

14. Misamore G, Webb B, McMurray S, Sallay P. A prospective analysis of interscalene brachial plexus blocks performed under general anesthesia. J Shoulder Elbow Surg. 2011;20(2):308-314.

15. Lenters TR, Davies J, Matsen FA 3rd. The types and severity of complications associated with interscalene brachial plexus block anesthesia: local and national evidence. J Shoulder Elbow Surg. 2007;16(4):379-387.

16. Park SK, Choi YS, Choi SW, Song SW. A comparison of three methods for postoperative pain control in patients undergoing arthroscopic shoulder surgery. Korean J Pain. 2015;28(1):45-51.

17. Pacira Pharmaceuticals, Inc. Pacira Pharmaceuticals, Inc. announces U.S. FDA approval of EXPAREL™ for postsurgical pain management. http://investor.pacira.com/phoenix.zhtml?c=220759&p=irol-newsArticle_print&ID=1623529. Published October 31, 2011. Accessed June 3, 2015.

18. White PF, Ardeleanu M, Schooley G, Burch RM. Pharmacokinetics of depobupivacaine following infiltration in patients undergoing two types of surgery and in normal volunteers. Paper presented at: Annual Meeting of the International Anesthesia Research Society; March 14, 2009; San Diego, CA.

19. Bramlett K, Onel E, Viscusi ER, Jones K. A randomized, double-blind, dose-ranging study comparing wound infiltration of DepoFoam bupivacaine, an extended-release liposomal bupivacaine, to bupivacaine HCl for postsurgical analgesia in total knee arthroplasty. Knee. 2012;19(5):530-536.

20. Lambrechts M, O’Brien MJ, Savoie FH, You Z. Liposomal extended-release bupivacaine for postsurgical analgesia. Patient Prefer Adherence. 2013;7:885-890.

21. American Society of Anesthesiologists Task Force on Acute Pain Management. Practice guidelines for acute pain management in the perioperative setting: an updated report by the American Society of Anesthesiologists Task Force on Acute Pain Management. Anesthesiology. 2012;116(2):248-273.

22. Candiotti KA, Sands LR, Lee E, et al. Liposome bupivacaine for postsurgical analgesia in adult patients undergoing laparoscopic colectomy: results from prospective phase IV sequential cohort studies assessing health economic outcomes. Curr Ther Res Clin Exp. 2013;76:1-6.

23. Weber SC, Jain R. Scalene regional anesthesia for shoulder surgery in a community setting: an assessment of risk. J Bone Joint Surg Am. 2002;84-A(5):775-779.

24. Beaudet V, Williams SR, Tétreault P, Perrault MA. Perioperative interscalene block versus intra-articular injection of local anesthetics for postoperative analgesia in shoulder surgery. Reg Anesth Pain Med. 2008;33(2):134-138.

25. Bagsby DT, Ireland PH, Meneghini RM. Liposomal bupivacaine versus traditional periarticular injection for pain control after total knee arthroplasty. J Arthroplasty. 2014;29(8):1687-1690.

Author and Disclosure Information

Authors’ Disclosure Statement: The authors report no actual or potential conflict of interest in relation to this article. This article was made possible by The Johns Hopkins Institute for Clinical and Translational Research (ICTR), which is funded in part by grant number UL1 TR 001079 from the National Center for Advancing Translational Sciences (NCATS), a component of the National Institutes of Health (NIH), and the NIH Roadmap for Medical Research. Its contents are solely the responsibility of the authors and do not necessarily represent the official view of The Johns Hopkins ICTR, NCATS, or NIH.


The annual number of total shoulder arthroplasties (TSAs) is rising with the growing elderly population and development of new technologies such as reverse shoulder arthroplasty.1 In 2008, 47,000 shoulder arthroplasties were performed in the US compared with 19,000 in 1998.1 As of 2011, there were 53,000 shoulder arthroplasties performed annually.2 Pain control after shoulder procedures, particularly TSA, is challenging. 3

Several modalities exist to manage pain after shoulder arthroplasty. The interscalene brachial plexus nerve block is considered the “gold standard” for shoulder analgesia. A new approach is the periarticular injection method, in which the surgeon administers a local anesthetic intraoperatively. Liposomal bupivacaine (Exparel, Pacira Pharmaceuticals, Inc.) is a nonopioid anesthetic that has been shown to improve pain control, shorten hospital stays, and decrease costs for total knee and hip arthroplasty compared with nerve blocks.4-6 Patients who were treated with liposomal bupivacaine consumed less opioid medication than a placebo group.7

Our purpose was to compare intraoperative local liposomal bupivacaine injection with preoperative single-shot interscalene nerve block (ISNB) in terms of pain control, opioid use, and length of hospital stay (LOS) after shoulder arthroplasty. We hypothesized that patients in the liposomal bupivacaine group would have lower pain scores, less opioid use, and shorter LOS compared with patients in the ISNB group.

Methods

A retrospective cohort analysis was conducted with 58 patients who underwent shoulder arthroplasty by 1 surgeon at our academically affiliated community hospital from January 2012 through January 2015. ISNBs were the standard at the beginning of the study period and were used until Exparel became available on the hospital formulary in 2013. We began using Exparel for all shoulder arthroplasties in November 2013. No other changes were made in the perioperative management of our arthroplasty patients during this period. Patients who underwent TSA, reverse TSA, or hemiarthroplasty of the shoulder were included. Patients who underwent revision TSA were excluded. Twenty-one patients received ISNBs and 37 received liposomal bupivacaine injections. This study was approved by our Institutional Review Board.

Baseline data for each patient were age, sex, body mass index, and the American Society of Anesthesiologists (ASA) Physical Status Classification. The primary outcome measure was the numeric rating scale (NRS) pain score at 4 post-operative time intervals. The NRS pain score has a range of 0 to 10, with 10 representing severe pain. Data were gathered from nursing and physical therapy notes in patient charts. The postoperative time intervals were 0 to 1 hour, 8 to 14 hours, 18 to 24 hours, and 27 to 36 hours. Available NRS scores for these time intervals were averaged. Patients were included if they had pain scores for at least 3 of the postoperative time intervals documented in their charts. Secondary outcome measures were LOS and opioid consumption during hospital admission. Intravenous acetaminophen use was also measured in both groups. All data on opioids were converted to oral morphine equivalents using the method described by Schneider and colleagues.8

A board-certified, fellowship-trained anesthesiologist, experienced in regional anesthesia, administered the single-shot ISNB before surgery. The block was administered under ultrasound guidance using a 44-mm, 22-gauge needle with the patient in the supine position. No indwelling catheter was used. The medication consisted of 30 mL of 5% ropivacaine (5 mg/mL). The surgeon injected liposomal bupivacaine (266 mg diluted into 40 mL of injectable saline) near the end of the procedure throughout the pericapsular area and multiple layers of the wound, per manufacturer guidelines.9 A 60-mL syringe with a 20-gauge needle was used. All operations were performed by 1 board-certified, fellowship-trained surgeon using a standard deltopectoral approach with the same surgical equipment. The same postoperative pain protocol was used for all patients, including intravenous acetaminophen and patient-controlled analgesia. Additional oral pain medication was provided as needed for all patients. Physical therapy protocols were identical between groups.

Statistical Analysis

Mean patient ages in the 2 treatment groups were compared using the Student t test. Sex distribution and the ASA scores were compared using a χ2 test and a Fisher exact test, respectively. Arthroplasty types were compared using a Fisher exact test. The medians and interquartile ranges of the NRS scores at each time point measured were tabulated by treatment group, and at each time point the difference between groups was tested using nonparametric rank sum tests.

We tested the longitudinal trajectory of NRS scores over time, accounting for repeated measurements in the same patients using linear mixed model analysis. Treatment group, time period as a categorical variable, and the interaction between treatment and time period were included as fixed effects, and patient identification number was included as the random effect. An initial omnibus test was performed for all treatment and treatment-by-time interaction effects. Subsequently, the treatment-by-time interaction was tested for each of the time periods. The association of day of discharge (as a categorical variable) with treatment was tested using the Fisher exact test. All analyses were conducted using Stata, version 13, software (StataCorp LP). P values <.05 were considered significant.

 

 

Sample Size Analysis

We calculated the minimum detectable effect size with 80% power at an alpha level of 0.05 for the nonparametric rank sum test in terms of the proportion of every possible pair of patients treated with the 2 treatments, where the patient treated with liposomal bupivacaine has a lower pain score than the patient treated with ISNB. For pain score at 18 to 24 hours, the sample sizes of 33 patients treated with liposomal bupivacaine and 20 treated with ISNB, the minimum detectable effect size is 73%.

Results

Fifty-eight patient charts (21 in the ISNB group and 37 in the liposomal bupivacaine group) were reviewed for the study. Patient sex distribution, mean age, mean body mass index, and mean baseline ASA scores were not statistically different (Table 1).

In the ISNB group, 5 patients had hemiarthroplasty, 12 had TSA, and 4 had reverse TSA. In the liposomal bupivacaine group, 1 patient had hemiarthroplasty, 23 had TSA, and 13 had reverse TSA. Frequency of procedure types was significantly different between groups (P = .039), with the liposomal bupivacaine group undergoing fewer hemiarthroplasties.

The primary outcome measure, NRS pain score, showed no significant differences between groups at 0 to 1 hour after surgery (P = .99) or 8 to 14 hours after surgery (P = .208).

At 18 to 24 hours after surgery, the liposomal bupivacaine group had a lower mean NRS score than the ISNB group (P = .001). This was statistically significant when taking repeated measures of variance into account (Figure 1). Mean NRS score was also lower for the liposomal bupivacaine group at 27 to 36 hours after surgery (P = .029).
This was a significant difference when repeated measures of variance was considered (Table 2).

There was no difference in the amount of intravenous acetaminophen given during the hospital stay between groups. There was no significant difference in opioid consumption on postoperative day 1 in the hospital (P = .59) (Figure 2). However, there were significant differences between groups on postoperative days 2 and 3.
On postoperative day 2, the ISNB group required significantly more opioids (mean, 112 mg morphine equivalents) than the liposomal bupivacaine group (mean, 37 mg morphine equivalents) (P = .001). The ISNB group also required significantly more opioids (mean, 25 mg morphine equivalents) on postoperative day 3 than the liposomal bupivacaine group (mean, 5 mg) (P = .002).

Sixteen of 37 patients in the liposomal bupivacaine group and 2 of 21 in the ISNB group were discharged on the day after surgery (P = .010) (Table 3).
The mean LOS was 46 ± 20 hours for the liposomal bupivacaine group and 57 ± 14 hours for the ISNB group (P = .012).

There were no major cardiac or respiratory events in either group. No long-term paresthesias or neuropathies were noted. There were no readmissions for either group.

Discussion

Postoperative pain control after shoulder arthroplasty can be challenging, and several modalities have been tried in various combinations to minimize pain and decrease adverse effects of opioid medications. The most common method for pain relief after shoulder arthroplasty is the ISNB. Several studies of ISNBs have shown improved pain control after shoulder arthroplasty with associated decreased opioid consumption and related side effects.10 Patient rehabilitation and satisfaction have improved with the increasing use of peripheral nerve blocks.11

Despite the well-established benefits of ISNBs, several limitations exist. Although the superior portion of the shoulder is well covered by an ISNB, the inferior portion of the brachial plexus can remain uncovered or only partially covered.12 Complications of ISNBs include hemidiaphragmatic paresis, rebound pain 24 hours after surgery,13 chronic neurologic complications,14 and substantial respiratory and cardiovascular events.15 Nerve blocks also require additional time and resources in the perioperative period, including an anesthesiologist with specialized training, assistants, and ultrasonography or nerve stimulation equipment contraindicated in patients taking blood thinners.16

Periarticular injections of local anesthetics have also shown promise in reducing pain after arthroplasty.4 Benefits include an enhanced safety profile because local injection avoids the concurrent blockade of the phrenic nerve and recurrent laryngeal nerve and has not been associated with the risk of peripheral neuropathies. Further, local injection is a simple technique that can be performed during surgery without additional personnel or expertise. A limitation of this approach is the relatively short duration of effectiveness of the local anesthetic and uncertainty regarding the best agent and the ideal volume of injection.6 Liposomal bupivacaine is a new agent (approved by the US Food and Drug Administration in 201117) with a sustained release over 72 to 96 hours.18 The most common adverse effects of liposomal bupivacaine are nausea, vomiting, constipation, pyrexia, dizziness, and headache.19 Chondrotoxicity and granulomatous inflammation are more serious, yet rare, complications of liposomal bupivacaine.20

We found that liposomal bupivacaine injections were associated with lower pain scores compared with ISNB at 18 to 24 hours after surgery. This correlated with less opioid consumption in the liposomal bupivacaine group than in the ISNB group on the second postoperative day. These differences in pain values are consistent with the known pharmacokinetics of liposomal bupivacaine.18 Peak plasma levels normally occur approximately 24 hours after injection, leaving the early postoperative period relatively uncovered by anesthetic agent. This finding of relatively poor pain control early after surgery has also been noted in patients undergoing knee arthroplasty.5 On the basis of the findings of this study, we have added standard bupivacaine injections to our separate liposomal bupivacaine injection to cover early postoperative pain. Opioid consumption was significantly lower in the liposomal bupivacaine group than in the ISNB group on postoperative days 2 and 3. We did not measure adverse events related to opioid consumption, so we cannot comment on whether the decreased opioid consumption was associated with the rate of adverse events. However, other studies21,22 have established this relationship.

We found the liposomal bupivacaine group to have earlier discharges to home. Sixteen of 37 patients in the liposomal bupivacaine group compared with 2 of 21 patients in the ISNB group were discharged on the day after surgery. A mean reduction in LOS of 18 hours for the liposomal bupivacaine group was statistically significant (P = .012). This reduction in LOS has important implications for hospitals and value analysis committees considering whether to keep a new, more expensive local anesthetic on formulary. Savings from reduced LOS and improvements in patient satisfaction may justify the expense (approximately $300 per 266-mg vial) of Exparel.

From a societal cost perspective, liposomal bupivacaine is more economical compared with ISNB, which adds approximately $1500 to the cost of anesthesia per patient.23 Eliminating the costs associated with ISNB administration in shoulder arthroplasties could result in substantial savings to our healthcare system. More research examining time savings and exact costs of each procedure is needed to determine the true cost effectiveness of each approach.

Limitations of our study include the retrospective design, relatively small numbers of patients in each group, missing data for some patients at various time points, variation in the types of procedures in each group, and lack of long-term outcome measures. It is important to note that we did not confirm the success of the nerve block after administration. However, this study reflects the effectiveness of each of the modalities in actual clinical conditions (as opposed to a controlled experimental setting). The actual effectiveness of a nerve block varies, even when performed by an experienced anesthesiologist with ultrasound guidance. Furthermore, immediate postoperative pain scores in the nerve block group are consistent with those of prior research reporting pain values ranging from 4 to 5 and a mean duration of effect ranging from 9 to 14 hours.23,24 Additionally, the patients, surgeon, and nursing team were not blinded to the treatment group. Although we did note a significant difference in the types of procedures between groups, this finding is related to the greater number of hemiarthroplasties performed in the ISNB group (N = 5) compared with the liposomal group (N = 1). Because of this variation and the decreased invasiveness of hemiarthroplasties, the bias is against the liposomal group. Finally, our primary outcome variable was pain, which is a subjective, self-reported measure. However, our opioid consumption data and LOS data corroborate the improved pain scores in the liposomal bupivacaine group.

Limiting the study to a single surgeon may limit external validity. Another limitation is the lack of data on adverse events related to opioid medication use. There was no additional experimental group to determine whether less expensive local anesthetics injected locally would perform similarly to liposomal bupivacaine. In total knee arthroplasty, periarticular injections of liposomal bupivacaine were not as effective as less expensive periarticular injections.25 It is unclear which agents (and in what doses or combinations) should be used for periarticular injections. Finally, we acknowledge that our retrospective study design cannot account for all potential factors affecting discharge time.

This is the first comparative study of liposomal bupivacaine and ISNB in TSA. The study design allowed us to control for variables such as surgical technique, postoperative protocols (including use and type of sling), and use of other pain modalities such as patient-controlled analgesia and intravenous acetaminophen that are likely to affect postoperative pain and LOS. This study provides preliminary data that confirm relative equipoise between liposomal bupivacaine and ISNB, which is needed for the ethical conduct of a randomized controlled trial. Such a trial would allow for a more robust comparison, and this retrospective study provides appropriate pilot data on which to base this design and the clinical information needed to counsel patients during enrollment.

Our results suggest that liposomal bupivacaine may provide superior or similar pain relief compared with ISNB after shoulder arthroplasty. Additionally, the use of liposomal bupivacaine was associated with decreased opioid consumption and earlier discharge to home compared with ISNB. These findings have important implications for pain control after TSA because pain represents a major concern for patients and providers after surgery. In addition to clinical improvements, use of liposomal bupivacaine may save time and eliminate costs associated with administering nerve blocks. Local injection may also be used in patients who are contraindicated for ISNB such as those with obesity, pulmonary disease, or peripheral neuropathy. Although we cannot definitively suggest that liposomal bupivacaine is superior to the current gold standard ISNB for pain control after shoulder arthroplasty, our results suggest a relative clinical equipoise between these modalities. Larger analytical studies, including randomized trials, should be performed to explore the potential benefits of liposomal bupivacaine injections for pain control after shoulder arthroplasty.

Am J Orthop. 2016;45(7):424-430. Copyright Frontline Medical Communications Inc. 2016. All rights reserved.

The annual number of total shoulder arthroplasties (TSAs) is rising with the growing elderly population and development of new technologies such as reverse shoulder arthroplasty.1 In 2008, 47,000 shoulder arthroplasties were performed in the US compared with 19,000 in 1998.1 As of 2011, there were 53,000 shoulder arthroplasties performed annually.2 Pain control after shoulder procedures, particularly TSA, is challenging. 3

Several modalities exist to manage pain after shoulder arthroplasty. The interscalene brachial plexus nerve block is considered the “gold standard” for shoulder analgesia. A new approach is the periarticular injection method, in which the surgeon administers a local anesthetic intraoperatively. Liposomal bupivacaine (Exparel, Pacira Pharmaceuticals, Inc.) is a nonopioid anesthetic that has been shown to improve pain control, shorten hospital stays, and decrease costs for total knee and hip arthroplasty compared with nerve blocks.4-6 Patients who were treated with liposomal bupivacaine consumed less opioid medication than a placebo group.7

Our purpose was to compare intraoperative local liposomal bupivacaine injection with preoperative single-shot interscalene nerve block (ISNB) in terms of pain control, opioid use, and length of hospital stay (LOS) after shoulder arthroplasty. We hypothesized that patients in the liposomal bupivacaine group would have lower pain scores, less opioid use, and shorter LOS compared with patients in the ISNB group.

Methods

A retrospective cohort analysis was conducted with 58 patients who underwent shoulder arthroplasty by 1 surgeon at our academically affiliated community hospital from January 2012 through January 2015. ISNBs were the standard at the beginning of the study period and were used until Exparel became available on the hospital formulary in 2013. We began using Exparel for all shoulder arthroplasties in November 2013. No other changes were made in the perioperative management of our arthroplasty patients during this period. Patients who underwent TSA, reverse TSA, or hemiarthroplasty of the shoulder were included. Patients who underwent revision TSA were excluded. Twenty-one patients received ISNBs and 37 received liposomal bupivacaine injections. This study was approved by our Institutional Review Board.

Baseline data for each patient were age, sex, body mass index, and the American Society of Anesthesiologists (ASA) Physical Status Classification. The primary outcome measure was the numeric rating scale (NRS) pain score at 4 post-operative time intervals. The NRS pain score has a range of 0 to 10, with 10 representing severe pain. Data were gathered from nursing and physical therapy notes in patient charts. The postoperative time intervals were 0 to 1 hour, 8 to 14 hours, 18 to 24 hours, and 27 to 36 hours. Available NRS scores for these time intervals were averaged. Patients were included if they had pain scores for at least 3 of the postoperative time intervals documented in their charts. Secondary outcome measures were LOS and opioid consumption during hospital admission. Intravenous acetaminophen use was also measured in both groups. All data on opioids were converted to oral morphine equivalents using the method described by Schneider and colleagues.8

A board-certified, fellowship-trained anesthesiologist, experienced in regional anesthesia, administered the single-shot ISNB before surgery. The block was administered under ultrasound guidance using a 44-mm, 22-gauge needle with the patient in the supine position. No indwelling catheter was used. The medication consisted of 30 mL of 5% ropivacaine (5 mg/mL). The surgeon injected liposomal bupivacaine (266 mg diluted into 40 mL of injectable saline) near the end of the procedure throughout the pericapsular area and multiple layers of the wound, per manufacturer guidelines.9 A 60-mL syringe with a 20-gauge needle was used. All operations were performed by 1 board-certified, fellowship-trained surgeon using a standard deltopectoral approach with the same surgical equipment. The same postoperative pain protocol was used for all patients, including intravenous acetaminophen and patient-controlled analgesia. Additional oral pain medication was provided as needed for all patients. Physical therapy protocols were identical between groups.

Statistical Analysis

Mean patient ages in the 2 treatment groups were compared using the Student t test. Sex distribution and the ASA scores were compared using a χ2 test and a Fisher exact test, respectively. Arthroplasty types were compared using a Fisher exact test. The medians and interquartile ranges of the NRS scores at each time point measured were tabulated by treatment group, and at each time point the difference between groups was tested using nonparametric rank sum tests.

We tested the longitudinal trajectory of NRS scores over time, accounting for repeated measurements in the same patients using linear mixed model analysis. Treatment group, time period as a categorical variable, and the interaction between treatment and time period were included as fixed effects, and patient identification number was included as the random effect. An initial omnibus test was performed for all treatment and treatment-by-time interaction effects. Subsequently, the treatment-by-time interaction was tested for each of the time periods. The association of day of discharge (as a categorical variable) with treatment was tested using the Fisher exact test. All analyses were conducted using Stata, version 13, software (StataCorp LP). P values <.05 were considered significant.

 

 

Sample Size Analysis

We calculated the minimum detectable effect size with 80% power at an alpha level of 0.05 for the nonparametric rank sum test in terms of the proportion of every possible pair of patients treated with the 2 treatments, where the patient treated with liposomal bupivacaine has a lower pain score than the patient treated with ISNB. For pain score at 18 to 24 hours, the sample sizes of 33 patients treated with liposomal bupivacaine and 20 treated with ISNB, the minimum detectable effect size is 73%.

Results

Fifty-eight patient charts (21 in the ISNB group and 37 in the liposomal bupivacaine group) were reviewed for the study. Patient sex distribution, mean age, mean body mass index, and mean baseline ASA scores were not statistically different (Table 1).

In the ISNB group, 5 patients had hemiarthroplasty, 12 had TSA, and 4 had reverse TSA. In the liposomal bupivacaine group, 1 patient had hemiarthroplasty, 23 had TSA, and 13 had reverse TSA. Frequency of procedure types was significantly different between groups (P = .039), with the liposomal bupivacaine group undergoing fewer hemiarthroplasties.

The primary outcome measure, NRS pain score, showed no significant differences between groups at 0 to 1 hour after surgery (P = .99) or 8 to 14 hours after surgery (P = .208).

At 18 to 24 hours after surgery, the liposomal bupivacaine group had a lower mean NRS score than the ISNB group (P = .001). This was statistically significant when taking repeated measures of variance into account (Figure 1). Mean NRS score was also lower for the liposomal bupivacaine group at 27 to 36 hours after surgery (P = .029).
This was a significant difference when repeated measures of variance was considered (Table 2).

There was no difference in the amount of intravenous acetaminophen given during the hospital stay between groups. There was no significant difference in opioid consumption on postoperative day 1 in the hospital (P = .59) (Figure 2). However, there were significant differences between groups on postoperative days 2 and 3.
On postoperative day 2, the ISNB group required significantly more opioids (mean, 112 mg morphine equivalents) than the liposomal bupivacaine group (mean, 37 mg morphine equivalents) (P = .001). The ISNB group also required significantly more opioids (mean, 25 mg morphine equivalents) on postoperative day 3 than the liposomal bupivacaine group (mean, 5 mg) (P = .002).

Sixteen of 37 patients in the liposomal bupivacaine group and 2 of 21 in the ISNB group were discharged on the day after surgery (P = .010) (Table 3).
The mean LOS was 46 ± 20 hours for the liposomal bupivacaine group and 57 ± 14 hours for the ISNB group (P = .012).

There were no major cardiac or respiratory events in either group. No long-term paresthesias or neuropathies were noted. There were no readmissions for either group.

Discussion

Postoperative pain control after shoulder arthroplasty can be challenging, and several modalities have been tried in various combinations to minimize pain and decrease adverse effects of opioid medications. The most common method for pain relief after shoulder arthroplasty is the ISNB. Several studies of ISNBs have shown improved pain control after shoulder arthroplasty with associated decreased opioid consumption and related side effects.10 Patient rehabilitation and satisfaction have improved with the increasing use of peripheral nerve blocks.11

Despite the well-established benefits of ISNBs, several limitations exist. Although the superior portion of the shoulder is well covered by an ISNB, the inferior portion of the brachial plexus can remain uncovered or only partially covered.12 Complications of ISNBs include hemidiaphragmatic paresis, rebound pain 24 hours after surgery,13 chronic neurologic complications,14 and substantial respiratory and cardiovascular events.15 Nerve blocks also require additional time and resources in the perioperative period, including an anesthesiologist with specialized training, assistants, and ultrasonography or nerve stimulation equipment contraindicated in patients taking blood thinners.16

Periarticular injections of local anesthetics have also shown promise in reducing pain after arthroplasty.4 Benefits include an enhanced safety profile because local injection avoids the concurrent blockade of the phrenic nerve and recurrent laryngeal nerve and has not been associated with the risk of peripheral neuropathies. Further, local injection is a simple technique that can be performed during surgery without additional personnel or expertise. A limitation of this approach is the relatively short duration of effectiveness of the local anesthetic and uncertainty regarding the best agent and the ideal volume of injection.6 Liposomal bupivacaine is a new agent (approved by the US Food and Drug Administration in 201117) with a sustained release over 72 to 96 hours.18 The most common adverse effects of liposomal bupivacaine are nausea, vomiting, constipation, pyrexia, dizziness, and headache.19 Chondrotoxicity and granulomatous inflammation are more serious, yet rare, complications of liposomal bupivacaine.20

We found that liposomal bupivacaine injections were associated with lower pain scores compared with ISNB at 18 to 24 hours after surgery. This correlated with less opioid consumption in the liposomal bupivacaine group than in the ISNB group on the second postoperative day. These differences in pain values are consistent with the known pharmacokinetics of liposomal bupivacaine.18 Peak plasma levels normally occur approximately 24 hours after injection, leaving the early postoperative period relatively uncovered by anesthetic agent. This finding of relatively poor pain control early after surgery has also been noted in patients undergoing knee arthroplasty.5 On the basis of the findings of this study, we have added standard bupivacaine injections to our separate liposomal bupivacaine injection to cover early postoperative pain. Opioid consumption was significantly lower in the liposomal bupivacaine group than in the ISNB group on postoperative days 2 and 3. We did not measure adverse events related to opioid consumption, so we cannot comment on whether the decreased opioid consumption was associated with the rate of adverse events. However, other studies21,22 have established this relationship.

We found the liposomal bupivacaine group to have earlier discharges to home. Sixteen of 37 patients in the liposomal bupivacaine group compared with 2 of 21 patients in the ISNB group were discharged on the day after surgery. A mean reduction in LOS of 18 hours for the liposomal bupivacaine group was statistically significant (P = .012). This reduction in LOS has important implications for hospitals and value analysis committees considering whether to keep a new, more expensive local anesthetic on formulary. Savings from reduced LOS and improvements in patient satisfaction may justify the expense (approximately $300 per 266-mg vial) of Exparel.

From a societal cost perspective, liposomal bupivacaine is more economical compared with ISNB, which adds approximately $1500 to the cost of anesthesia per patient.23 Eliminating the costs associated with ISNB administration in shoulder arthroplasties could result in substantial savings to our healthcare system. More research examining time savings and exact costs of each procedure is needed to determine the true cost effectiveness of each approach.

Limitations of our study include the retrospective design, relatively small numbers of patients in each group, missing data for some patients at various time points, variation in the types of procedures in each group, and lack of long-term outcome measures. It is important to note that we did not confirm the success of the nerve block after administration. However, this study reflects the effectiveness of each of the modalities in actual clinical conditions (as opposed to a controlled experimental setting). The actual effectiveness of a nerve block varies, even when performed by an experienced anesthesiologist with ultrasound guidance. Furthermore, immediate postoperative pain scores in the nerve block group are consistent with those of prior research reporting pain values ranging from 4 to 5 and a mean duration of effect ranging from 9 to 14 hours.23,24 Additionally, the patients, surgeon, and nursing team were not blinded to the treatment group. Although we did note a significant difference in the types of procedures between groups, this finding is related to the greater number of hemiarthroplasties performed in the ISNB group (N = 5) compared with the liposomal group (N = 1). Because of this variation and the decreased invasiveness of hemiarthroplasties, the bias is against the liposomal group. Finally, our primary outcome variable was pain, which is a subjective, self-reported measure. However, our opioid consumption data and LOS data corroborate the improved pain scores in the liposomal bupivacaine group.

Limiting the study to a single surgeon may limit external validity. Another limitation is the lack of data on adverse events related to opioid medication use. There was no additional experimental group to determine whether less expensive local anesthetics injected locally would perform similarly to liposomal bupivacaine. In total knee arthroplasty, periarticular injections of liposomal bupivacaine were not as effective as less expensive periarticular injections.25 It is unclear which agents (and in what doses or combinations) should be used for periarticular injections. Finally, we acknowledge that our retrospective study design cannot account for all potential factors affecting discharge time.

This is the first comparative study of liposomal bupivacaine and ISNB in TSA. The study design allowed us to control for variables such as surgical technique, postoperative protocols (including use and type of sling), and use of other pain modalities such as patient-controlled analgesia and intravenous acetaminophen that are likely to affect postoperative pain and LOS. This study provides preliminary data that confirm relative equipoise between liposomal bupivacaine and ISNB, which is needed for the ethical conduct of a randomized controlled trial. Such a trial would allow for a more robust comparison, and this retrospective study provides appropriate pilot data on which to base this design and the clinical information needed to counsel patients during enrollment.

Our results suggest that liposomal bupivacaine may provide superior or similar pain relief compared with ISNB after shoulder arthroplasty. Additionally, the use of liposomal bupivacaine was associated with decreased opioid consumption and earlier discharge to home compared with ISNB. These findings have important implications for pain control after TSA because pain represents a major concern for patients and providers after surgery. In addition to clinical improvements, use of liposomal bupivacaine may save time and eliminate costs associated with administering nerve blocks. Local injection may also be used in patients who are contraindicated for ISNB such as those with obesity, pulmonary disease, or peripheral neuropathy. Although we cannot definitively suggest that liposomal bupivacaine is superior to the current gold standard ISNB for pain control after shoulder arthroplasty, our results suggest a relative clinical equipoise between these modalities. Larger analytical studies, including randomized trials, should be performed to explore the potential benefits of liposomal bupivacaine injections for pain control after shoulder arthroplasty.

Am J Orthop. 2016;45(7):424-430. Copyright Frontline Medical Communications Inc. 2016. All rights reserved.

References

1. Kim SH, Wise BL, Zhang Y, Szabo RM. Increasing incidence of shoulder arthroplasty in the United States. J Bone Joint Surg Am. 2011;93(24):2249-2254.

2. American Academy of Orthopaedic Surgeons. Shoulder joint replacement. http://orthoinfo.aaos.org/topic.cfm?topic=A00094. Accessed June 3, 2015.

3. Desai VN, Cheung EV. Postoperative pain associated with orthopedic shoulder and elbow surgery: a prospective study. J Shoulder Elbow Surg. 2012;21(4):441-450.

4. Springer BD. Transition from nerve blocks to periarticular injections and emerging techniques in total joint arthroplasty. Am J Orthop. 2014;43(10 Suppl):S6-S9.

5. Surdam JW, Licini DJ, Baynes NT, Arce BR. The use of exparel (liposomal bupivacaine) to manage postoperative pain in unilateral total knee arthroplasty patients. J Arthroplasty. 2015;30(2):325-329.

6. Tong YC, Kaye AD, Urman RD. Liposomal bupivacaine and clinical outcomes. Best Pract Res Clin Anaesthesiol. 2014;28(1):15-27.

7. Chahar P, Cummings KC 3rd. Liposomal bupivacaine: a review of a new bupivacaine formulation. J Pain Res. 2012;5:257-264.

8. Schneider C, Yale SH, Larson M. Principles of pain management. Clin Med Res. 2003;1(4):337-340.

9. Pacira Pharmaceuticals, Inc. Highlights of prescribing information. http://www.exparel.com/pdf/EXPAREL_Prescribing_Information.pdf. Accessed May 7, 2015.

10. Gohl MR, Moeller RK, Olson RL, Vacchiano CA. The addition of interscalene block to general anesthesia for patients undergoing open shoulder procedures. AANA J. 2001;69(2):105-109.

11. Ironfield CM, Barrington MJ, Kluger R, Sites B. Are patients satisfied after peripheral nerve blockade? Results from an International Registry of Regional Anesthesia. Reg Anesth Pain Med. 2014;39(1):48-55.

12. Srikumaran U, Stein BE, Tan EW, Freehill MT, Wilckens JH. Upper-extremity peripheral nerve blocks in the perioperative pain management of orthopaedic patients: AAOS exhibit selection. J Bone Joint Surg Am. 2013;95(24):e197(1-13).

13. DeMarco JR, Componovo R, Barfield WR, Liles L, Nietert P. Efficacy of augmenting a subacromial continuous-infusion pump with a preoperative interscalene block in outpatient arthroscopic shoulder surgery: a prospective, randomized, blinded, and placebo-controlled study. Arthroscopy. 2011;27(5):603-610.

14. Misamore G, Webb B, McMurray S, Sallay P. A prospective analysis of interscalene brachial plexus blocks performed under general anesthesia. J Shoulder Elbow Surg. 2011;20(2):308-314.

15. Lenters TR, Davies J, Matsen FA 3rd. The types and severity of complications associated with interscalene brachial plexus block anesthesia: local and national evidence. J Shoulder Elbow Surg. 2007;16(4):379-387.

16. Park SK, Choi YS, Choi SW, Song SW. A comparison of three methods for postoperative pain control in patients undergoing arthroscopic shoulder surgery. Korean J Pain. 2015;28(1):45-51.

17. Pacira Pharmaceuticals, Inc. Pacira Pharmaceuticals, Inc. announces U.S. FDA approval of EXPAREL™ for postsurgical pain management. http://investor.pacira.com/phoenix.zhtml?c=220759&p=irol-newsArticle_print&ID=1623529. Published October 31, 2011. Accessed June 3, 2015.

18. White PF, Ardeleanu M, Schooley G, Burch RM. Pharmocokinetics of depobupivacaine following infiltration in patients undergoing two types of surgery and in normal volunteers. Paper presented at: Annual Meeting of the International Anesthesia Research Society; March 14, 2009; San Diego, CA.

19. Bramlett K, Onel E, Viscusi ER, Jones K. A randomized, double-blind, dose-ranging study comparing wound infiltration of DepoFoam bupivacaine, an extended-release liposomal bupivacaine, to bupivacaine HCl for postsurgical analgesia in total knee arthroplasty. Knee. 2012;19(5):530-536.

20. Lambrechts M, O’Brien MJ, Savoie FH, You Z. Liposomal extended-release bupivacaine for postsurgical analgesia. Patient Prefer Adherence. 2013;7:885-890.

21. American Society of Anesthesiologists Task Force on Acute Pain Management. Practice guidelines for acute pain management in the perioperative setting: an updated report by the American Society of Anesthesiologists Task Force on Acute Pain Management. Anesthesiology. 2012;116(2):248-273.

22. Candiotti KA, Sands LR, Lee E, et al. Liposome bupivacaine for postsurgical analgesia in adult patients undergoing laparoscopic colectomy: results from prospective phase IV sequential cohort studies assessing health economic outcomes. Curr Ther Res Clin Exp. 2013;76:1-6.

23. Weber SC, Jain R. Scalene regional anesthesia for shoulder surgery in a community setting: an assessment of risk. J Bone Joint Surg Am. 2002;84-A(5):775-779.

24. Beaudet V, Williams SR, Tétreault P, Perrault MA. Perioperative interscalene block versus intra-articular injection of local anesthetics for postoperative analgesia in shoulder surgery. Reg Anesth Pain Med. 2008;33(2):134-138.

25. Bagsby DT, Ireland PH, Meneghini RM. Liposomal bupivacaine versus traditional periarticular injection for pain control after total knee arthroplasty. J Arthroplasty. 2014;29(8):1687-1690.


Issue
The American Journal of Orthopedics - 45(7)

Page Number
424-430

Accuracy and Sources of Images From Direct Google Image Searches for Common Dermatology Terms

Article Type
Changed
Thu, 01/10/2019 - 13:35
Display Headline
Accuracy and Sources of Images From Direct Google Image Searches for Common Dermatology Terms

To the Editor:

Prior studies have assessed the quality of text-based dermatology information on the Internet using traditional search engine queries.1 However, little is understood about the sources, accuracy, and quality of online dermatology images derived from direct image searches. Previous work has shown that direct search engine image queries were largely accurate for 3 pediatric dermatology diagnosis searches: atopic dermatitis, lichen striatus, and subcutaneous fat necrosis.2 We assessed images obtained for common dermatologic conditions from a Google image search (GIS) compared to a traditional text-based Google web search (GWS).

Image results for 32 unique dermatologic search terms were analyzed (Table 1). These search terms were selected using the results of a prior study that identified the most common dermatologic diagnoses that led users to the 2 most popular dermatology-specific websites worldwide: the American Academy of Dermatology (www.aad.org) and DermNet New Zealand (www.dermnetnz.org).3 The Alexa directory (www.alexa.com), a large publicly available Internet analytics resource, was used to determine the most common dermatology search terms that led a user to either www.dermnetnz.org or www.aad.org. In addition, searches for the 3 most common types of skin cancer—melanoma, squamous cell carcinoma, and basal cell carcinoma—were included. Each term was entered into a GIS and a GWS. The first 10 results, which represent 92% of the websites ultimately visited by users,4 were analyzed. The source, diagnostic accuracy, and Fitzpatrick skin type of the images were determined. Website sources were organized into 11 categories. All data collection occurred within a 1-week period in August 2015.

A total of 320 images were analyzed. In the GIS, private websites (36%), dermatology association websites (28%), and general health information websites (10%) were the 3 most common sources. In the GWS, health information websites (35%), private websites (21%), and dermatology association websites (20%) accounted for the most common sources (Table 2). The majority of images were of Fitzpatrick skin types I and II (89%), and nearly all images were diagnostically accurate (98%). There was no statistically significant difference in diagnostic accuracy between physician-associated websites (100%) and nonphysician-associated sites (98%; P=.25).

Our results showed high diagnostic accuracy among the top GIS results for common dermatology search terms. Diagnostic accuracy did not vary between websites that were physician associated versus those that were not. Our results are comparable to the reported accuracy of online dermatologic health information.1 In GIS results, the majority of images were provided by private websites, whereas the top websites in GWS results were health information websites.

Only 1% of images were of Fitzpatrick skin types V and VI. Presentation of skin diseases is remarkably different based on the patient’s skin type.5 The shortage of readily accessible images of skin of color is in line with the lack of familiarity physicians and trainees have with dermatologic conditions in ethnic skin.6

Based on the results from this analysis, providers and patients searching for dermatologic conditions via a direct GIS should be cognizant of several considerations. Although our results showed that GIS was accurate, the searcher should note that image-based searches are not accompanied by relevant text that can help confirm relevance and accuracy. Image searches depend on textual tags added by the source website. Websites that represent dermatological associations and academic centers can add an additional layer of confidence for users. Patients and clinicians also should be aware that consideration of a patient’s Fitzpatrick skin type is critical when assessing the relevance of a GIS result. In conclusion, search results via GIS queries are accurate for the dermatological diagnoses tested but may be lacking in skin of color variations, suggesting a potential unmet need given our growing ethnic skin population.

References
  1. Jensen JD, Dunnick CA, Arbuckle HA, et al. Dermatology information on the Internet: an appraisal by dermatologists and dermatology residents. J Am Acad Dermatol. 2010;63:1101-1103.
  2. Cutrone M, Grimalt R. Dermatological image search engines on the Internet: do they work? J Eur Acad Dermatol Venereol. 2007;21:175-177.
  3. Xu S, Nault A, Bhatia A. Search and engagement analysis of association websites representing dermatologists—implications and opportunities for web visibility and patient education: website rankings of dermatology associations. Pract Dermatol. In press.
  4. comScore releases July 2015 U.S. desktop search engine rankings [press release]. Reston, VA: comScore, Inc; August 14, 2015. http://www.comscore.com/Insights/Market-Rankings/comScore-Releases-July-2015-U.S.-Desktop-Search-Engine-Rankings. Accessed October 18, 2016.
  5. Kundu RV, Patterson S. Dermatologic conditions in skin of color: part I. special considerations for common skin disorders. Am Fam Physician. 2013;87:850-856.
  6. Nijhawan RI, Jacob SE, Woolery-Lloyd H. Skin of color education in dermatology residency programs: does residency training reflect the changing demographics of the United States? J Am Acad Dermatol. 2008;59:615-618.
Author and Disclosure Information

Dr. Nault is from University of Wisconsin School of Medicine and Public Health, Madison. Drs. Bhatia and Xu are from the Department of Dermatology, Northwestern University, Feinberg School of Medicine, Chicago, Illinois. Dr. Bhatia also is from Dupage Medical Group, Naperville, Illinois.

The authors report no conflict of interest.

Correspondence: Shuai Xu, MD, MSc, 676 N St Clair St, Ste 1600, Chicago, IL 60611 ([email protected]).



Practice Points

  • Direct Google image searches largely deliver accurate results for common dermatological diagnoses.
  • Greater effort should be made to include more publicly available images for dermatological diseases in darker skin types.
Issue
Cutis - 98(5)

Page Number
E6-E8

Critical Illness Outside the ICU

Article Type
Changed
Mon, 01/30/2017 - 11:14
Display Headline
Early detection of critical illness outside the intensive care unit: Clarifying treatment plans and honoring goals of care using a supportive care team

The likelihood of meaningful survival after cardiopulmonary arrest is low, and it is even lower the longer the patient has been in the hospital[1, 2]; realization of this[3] played a major role in the development of rapid response teams (RRTs).[4] As noted elsewhere in this journal, the limited success of these teams[5, 6, 7] has inspired efforts to develop systems that identify patients at risk of deterioration much earlier.

Whereas a number of recent reports have described end‐of‐life care issues in the context of RRT operations,[8, 9, 10, 11, 12, 13, 14, 15, 16] descriptions of how one might incorporate respect for patient preferences into the development of a response arm, particularly one meant to scale up to a multihospital system, are largely absent from the literature. In this article, we describe the implementation process for integrating palliative care and the honoring of patient choices, which we refer to as supportive care, with an automated early warning system (EWS) and an RRT.

The context of this work is a pilot project conducted at 2 community hospitals, the Kaiser Permanente Northern California (KPNC) Sacramento (200 beds) and South San Francisco (100 beds) medical centers. Our focus was to develop an approach that could serve as the basis for future dissemination to the remaining 19 KPNC hospitals, regardless of their size. Our work incorporated the Respecting Choices model,[17] which has been endorsed by KPNC for all its hospitals and clinics. We describe the workflow we developed to embed the reactive and proactive components of the supportive care team (SCT) into the EWS response arm. We also provide a granular description of how our approach worked in practice, as evidenced by the combined patient and provider experiences captured in 5 vignettes as well as some preliminary data obtained by chart review.

When patients arrive in the hospital, they may or may not have had a discussion about their care escalation and resuscitation preferences. As noted by Escobar and Dellinger[18] elsewhere in this issue of the Journal of Hospital Medicine, patients with documented restricted resuscitation preferences (eg, do not resuscitate [DNR] or partial code) at the time of admission to the hospital account for slightly more than half of the hospital deaths at 30 days after admission. In general, these stated preferences are honored.

A substantial proportion of patients are unstable at the time of admission or have an underlying chronic illness burden that predisposes them to unexpected deterioration. Often these patients lose decision‐making capacity when their condition worsens, so we need to ensure that we honor their wishes and identify the correct surrogate.

To make sure a patient's wishes are clear, we developed a workflow with 2 components. The first is meant to ensure that patient preferences are honored following an EWS alert. It allows for contingencies, including the likelihood that a physician will not be available to discuss patient wishes because of clinical demands. Although it may appear that the role of the hospitalist is supplanted, this is not the case: the only person with the authority to change a patient's code status is the hospitalist, who always talks to the patient or the surrogate. The teams described in this report provide backup, particularly when the hospitalist is tied up elsewhere (eg, the emergency department). Our workflows also facilitate integration of the clinical response with the palliative care response. The second component employs the EWS's ancillary elements (provision of a severity of illness score and a longitudinal comorbidity score in real time) to screen patients who might need the SCT. This allows us to identify patients who are at high risk for deterioration but in whom an alert has not yet been issued, whether because of acute instability, comorbid burden (leading to a high probability of unexpected deterioration), or both, and who do not have stated goals of care and/or an identified surrogate.

IMPLEMENTATION APPROACH

We developed our workflow using the Institute for Healthcare Improvement's Plan‐Do‐Study‐Act approach.[19, 20] Our first finding was that most alerts did not require a rapid intervention by the SCT. Both sites reserved time in the SCT physicians' schedules and considered changing staffing levels (the smaller site had funding for physician support only 20 hours per week), but neither had to make such changes. One reason for this was that we increased social worker availability, particularly during off hours (to cover the contingency in which an alert was issued in the middle of the night while the on‐call hospitalist was handling an admission in the emergency department). The second was that, as described by Escobar et al.,[21] the EWS provides a risk of deterioration in the next 12 hours (as opposed to a code blue or regular RRT call, which indicates the need for immediate action) and thus provides an opportunity to spend time with patients without the constraints of an ongoing resuscitation.

We also found that, of the patients who triggered an alert, approximately half would have been flagged for a palliative care referral using our own internal screening tool. Furthermore, having longitudinal comorbidity (Comorbidity Point Score, version 2 [COPS2]) and severity of illness (Laboratory‐Based Acute Physiology Score, version 2 [LAPS2]) scores[22] facilitated the identification of patients who needed review of their preferences with respect to escalation of care. Currently, our primary case‐finding criterion for proactive SCT consultation is a COPS2 >65, which is associated with a 10.8% 30‐day mortality risk. Overall, the SCT was asked to see about 25% of patients in whom an alert was triggered.
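To make the case-finding rule concrete, the fragment below sketches how such a screen might look in code. This is a minimal illustration under stated assumptions, not the production logic: the field names (cops2, mrn) and the function name are invented for this sketch, and only the threshold and its associated mortality risk come from the text above.

```python
# Minimal sketch of the proactive SCT case-finding rule described above.
# Field and function names are illustrative, not from the production system.

COPS2_THRESHOLD = 65  # COPS2 >65 ~ 10.8% 30-day mortality risk (per text)


def needs_proactive_sct_review(patient: dict) -> bool:
    """Flag an admission for morning SCT review based on chronic disease burden."""
    return patient["cops2"] > COPS2_THRESHOLD


admissions = [
    {"mrn": "A001", "cops2": 72},  # would be flagged: high comorbidity burden
    {"mrn": "A002", "cops2": 31},  # would not be flagged
]
worklist = [p for p in admissions if needs_proactive_sct_review(p)]
```

Patients who trigger an alert enter the workflow through the response arm instead, so the same patient can reach the SCT by either route.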

The workflows we developed were employed at the first site to go live (South San Francisco, 7000 annual discharges, Figure 1) and then modified at Sacramento (14,000 annual discharges, Figure 2). Because the hospitals differ in several respects, from size and patient population to staffing, the workflows are slightly different.

Figure 1
Workflow for integrating the Respecting Choices model with a real‐time early warning system at Kaiser Permanente South San Francisco. See text for additional details. Abbreviations: EWS, early warning system; EMR, electronic medical record; pt, patient; LCP, life care planning; HBS, hospital‐based specialist; RN, registered nurse; RRT, rapid response team; SCT, supportive care team; SW, social worker.
Figure 2
Workflow for integrating the Respecting Choices model with a real‐time early warning system at Kaiser Permanente Sacramento. See text for additional details. Abbreviations: EWS, early warning system; EMR, electronic medical record; RN, registered nurse; RRT, rapid response team; SCT, supportive care team.

The EWS provides deterioration probabilities every 6 hours, and first responders (RRT nurses) intervene when this probability is ≥8%. The RRT nurse can activate the clinical response arm, the Respecting Choices pathway, or both. In South San Francisco, which lacked the resources to staff supportive care 24 hours a day/7 days a week, the RRT contacts a medical social worker (MSW), who performs an immediate record review. If this review identifies something meriting urgent communication (eg, conflicting or absent information regarding a patient's surrogate), the MSW alerts the hospitalist. The MSW documents findings and ensures that a regular MSW consult occurs the next day. If the MSW feels the patient needs an SCT consult, the MSW alerts the team (this does not preclude a hospitalist or RRT nurse from initiating SCT consultation). At the Sacramento site, where the SCT is staffed 24 hours a day/7 days a week, it is possible to bypass the MSW step. Each morning, the SCT reviews all alerts issued during the previous 24 hours to determine whether an SCT consult is needed; it also proactively reviews the COPS2 scores of all admissions to identify patients who could benefit from a consult. Although surrogate identification and clarifying goals of care are essential, the SCT also helps patients in other ways, as is evident from the following case studies.
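The routing logic differs by site mainly in who provides backup when the SCT is not immediately available. The sketch below restates that branching in code; it is a hedged illustration only (the real workflow is carried out by RRT nurses, MSWs, and hospitalists, not by software), and all identifiers and step descriptions are our own.

```python
# Hedged sketch of the site-specific response routing described above.
# All identifiers and step descriptions are illustrative.

ALERT_THRESHOLD = 0.08  # EWS deterioration probability, recomputed every 6 hours


def route_alert(deterioration_prob: float, site: str) -> list:
    """Return the response steps an EWS score would set in motion at a site."""
    if deterioration_prob < ALERT_THRESHOLD:
        return []  # no alert; the next probability arrives in 6 hours
    steps = ["RRT nurse responds (clinical arm, Respecting Choices pathway, or both)"]
    if site == "South San Francisco":
        # Supportive care is not staffed 24/7 here, so the MSW provides backup.
        steps += [
            "MSW performs an immediate record review",
            "MSW alerts the hospitalist if an urgent issue is found",
            "regular MSW consult occurs the next day; SCT consult if indicated",
        ]
    else:  # Sacramento: SCT staffed 24 hours a day/7 days a week
        steps += ["SCT engaged directly (the MSW step may be bypassed)"]
    return steps
```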

The major difference between the palliative care team and the SCT is that the SCT includes the inpatient social worker as part of the team. The SCT also has a more focused role: its efforts center on aligning patient goals and priorities with the care that will actually be provided. In contrast, the palliative care team has other functions (eg, pain and symptom management) that are not necessarily associated with life care planning or the alert response.

Considerable overlap exists between patients who trigger an alert and those who would have met screening criteria established prior to EWS deployment. Although this is evolving, both sites are, in general, moving to an "or" criterion for involving the SCT (the patient meets the traditional criteria of the screening tool or triggers an alert). Further, as KPNC begins adding more sites to the system, serious consideration is being given to employing only the COPS2 score as the primary screening criterion.

CASE STUDY 1: SURROGATE IDENTIFICATION

Mr. Smith, a 78‐year‐old man with congestive heart failure (CHF), atrial fibrillation, severe chronic obstructive pulmonary disease, and a history of stroke, was admitted due to a CHF exacerbation. The morning after admission, he experienced uncontrolled tachycardia associated with low oxygen saturation, triggering an alert. The hospitalist stabilized him and documented the treatment plan as follows: "If worsening signs (shortness of breath/wheezing) or decreased saturation on current oxygen supplement, check chest x‐ray/arterial blood gas and call MD for possible bilevel positive airway pressure and repeating the echo. Intensive care unit (ICU) transfer as needed." According to his sister, his resuscitation preference was full code.

Given the new protocol instituted since deployment of the EWS, the MSW reviewed the chart and found that the patient's sister, who lived locally and was the emergency contact, had been incorrectly identified as the surrogate. In a prior hospitalization, Mr. Smith had named his brother as his surrogate because he felt strongly that his sister would not make good decisions for him. The following day, the SCT met with Mr. Smith, who articulated his desire to change his care directive to DNR. He also asked for a full palliative consult when his brother could come in (3 days later). During the consult, his brother learned, for the first time, exactly what heart failure was and what to anticipate over the next months and years. The 2 brothers completed an advance directive granting Mr. Smith's brother durable power of attorney, including a request for a palliative approach to end‐stage illness. They also completed a physician order for life‐sustaining treatment specifying DNR and limited intervention. Mr. Smith stated, "When I go, I'm gone," and recalled that his mother and uncle had protracted illnesses, adding, "I don't want to stay alive if I'm disabled like that."

In this example, the SCT was able to identify the correct surrogate and clarify the patient's resuscitation preference. Without the SCT, had this patient deteriorated unexpectedly, his sister might have insisted on treatment that was inconsistent with Mr. Smith's wishes. The interventions that followed the alert also led the patient and his brother to begin discussing the medical goals of treatment openly and to reach an understanding about the patient's chronic and progressive conditions.

CASE STUDY 2: TRANSITION TO HOME‐BASED HOSPICE

Mr. North was a 71‐year‐old man admitted for sepsis due to pneumonia. He had a history of temporal arteritis, systemic lupus erythematosus, prostate cancer, squamous cell lung cancer, and chronic leg ulcers. Delirious at the time of admission, he triggered an alert at 6 am, shortly after admission to the ward. He was hypotensive and was transferred to the ICU.

The SCT reviewed the case and judged that he met criteria for consultation. His wife readily agreed to meet to discuss goals and plan of care. She had been taking care of him at home and was overwhelmed by his physical needs as well as his worsening memory loss and agitation. She had not been able to bring him to the clinic for almost 2 years, and he had refused entry to the home health nurse. During the palliative consult, Mr. North was lucid enough to state his preference for comfort‐focused care and his desire not to return to the hospital. Mrs. North accepted a plan for home hospice, with increased attendant care at home.

This case illustrates the benefit of the EWS in identifying patients whose chronic condition has progressed and who would benefit from a palliative consult to clarify goals of care. Practice variation, the complexity of multiple medical problems, and the urgency of the acute presentation may obscure or delay recognition of the need to clarify goals of care. A structured approach provided by the EWS workflow, as in this case, helps to ensure that these discussions occur with the appropriate patients at the appropriate times.

CASE STUDY 3: RESOLVING MD‐TO‐MD MISCOMMUNICATION

Mr. Joseph was an 89‐year‐old man hospitalized for a hip fracture. He had a history of atrial fibrillation, prostate cancer with bone metastases, radiation‐induced lung fibrosis, stroke, and advanced dementia. His initial admission order was DNR, but this was changed to full code after surgery and remained so. The next few days were relatively uneventful until an alert was triggered. By then, the attending hospitalist had changed 3 times. The social worker reviewed Mr. Joseph's records and determined that a palliative consult had taken place previously at another Kaiser Permanente facility and that the prior code status was DNR. Although Mr. Joseph's admission care directive was DNR, it was switched to full code for surgery. However, the care directive was never changed back, nor was a discussion held about his preferences in case of a complication related to surgery. Meanwhile, he was having increasing respiratory problems due to aspiration and required noninvasive ventilation.

Consequently, the SCT reviewed the alerts from the previous 24 hours and determined that further investigation and discussion were required. When the hospitalist was called, the SCT discovered that the hospitalist had assumed the change to full code had been made by 1 of the previous attending physicians; he also informed the SCT that Mr. Joseph would likely need intubation. The SCT decided to see the patient and, on approaching the room, saw Mr. Joseph's son waiting outside. Asked how things were going, the son replied, "We all knew that 1 day he would deteriorate; we just want to make sure he is comfortable." Clearly, the full code status did not reflect Mr. Joseph's wishes, so this was clarified, and the hospitalist was called immediately to change the care directive. The SCT met with the son and Mr. Joseph's wife, educating them about aspiration and what to expect. They wished for a gentle approach, and it was decided to continue current care, without escalation, until the morning, to allow the other son to be informed of his father's condition and to see whether his status would improve. The next morning, the SCT met with the family at the bedside, and the patient was placed on comfort measures.

This case illustrates 3 points. First, Mr. Joseph's status was changed to full code during surgery without addressing his preferences should he develop a postoperative complication. Second, when the hospitalist saw the full code order in the electronic record, it was assumed that someone else had had a discussion with the patient and his family. Lastly, although a social worker performed a chart review, the full picture emerged only after the entire SCT became involved. Therefore, even with an EWS and associated protocols in place, important details can be missed, highlighting the need to build redundancy into workflows.

CASE STUDY 4: RELUCTANCE TO INVOLVE PALLIATIVE CARE TEAM

Mrs. Wood, a bed‐bound 63‐year‐old woman with end‐stage heart failure, was admitted to the hospital with respiratory failure. She had previously met with a life care planning facilitator as well as a palliative physician but refused to discuss end‐of‐life options; she felt she would always do well, and her husband felt the same way. During this admission, a routine palliative referral was made, but she and her husband refused. The chaplain visited often. The patient then took a turn for the worse, triggering an alert, and was transferred to the ICU.

The hospitalist did not feel an SCT consult was indicated based on the prior discussions. However, the SCT reviewed the records and felt an intervention was needed. The patient, now obtunded, had worsening renal failure and required continuous pressor infusions. The chaplain spoke with Mr. Wood, who now felt a consult was appropriate. Mrs. Wood was no longer able to make decisions, and her husband needed more information about what to expect. At the end of the discussion, he decided on comfort care, and his wife died peacefully in the hospital.

This case illustrates that although the primary attending may initially feel a palliative consult is not helpful, and possibly even detrimental to the patient's care under usual circumstances, decisions may change as the patient's condition changes. The EWS alert helped the SCT recognize the drastic change in the patient's condition and the need to support the patient's family. The family had been resistant, but the SCT was able to help them transition to a palliative approach through gentle contact and by being clear that its role was to provide support regardless of their decision.

CASE STUDY 5: ALERT FACILITATES TRANSITION TO OUTPATIENT PALLIATIVE CARE

Mr. Jones was an 82‐year‐old man who had a recent episode of gastrointestinal bleeding while on vacation. He was transferred by air ambulance to the hospital and developed delirium and agitation. His evaluation revealed polycythemia vera and a recent diagnosis of mild dementia.

In this case, the SCT reviewed the chart not because of an alert, but because the hospitalist noted that Mr. Jones had a very high severity of illness score on admission. When the SCT arrived at Mr. Jones's room, 3 family members were present. His wife appeared very frail and was too emotional to make decisions. The children at the bedside were new to the problems at hand but wanted to help. The SCT educated the family about his current disease state, the general disease trajectory, and what to expect, and explored the patient's values and any indicators of what his care preference would be if he could communicate it. The SCT established a life care plan at that visit. Based on Mr. Jones's own wishes and values, he was made DNR with limited interventions. He survived the hospitalization and was followed by the outpatient palliative care clinic as well as by hematology.

This case illustrates 2 facets. First, a high severity of illness score led to consultation even without an alert. Second, the SCT could take on a task that is difficult and time‐consuming for a busy hospitalist: arriving at a life care plan by exploring the patient's values. The case also illustrates that patients may elect other options, in this case outpatient palliative care.

FUTURE DIRECTIONS

Our team has also started a quantitative evaluation process. The major limitation we face in this effort is that, unlike physiologic or health services measures (eg, tachycardia, hospital length of stay, mortality), the key measures for assessing the quality of palliative and end‐of‐life care must be extracted by manual chart review. Our approach is based on the palliative and end‐of‐life care measures endorsed by the National Quality Forum,[23] which are described in greater detail in Appendix 1. As is the case with other outcomes, and as described in the article by Escobar et al.,[21] we will be employing a difference‐in‐differences approach as well as multivariate matching[24, 25, 26] to evaluate the effectiveness of the intervention. Because of the high cost of manual chart review, we will review randomly selected charts of patients who triggered an alert at the 2 pilot sites as well as matched comparison patients' charts at the remaining 19 KPNC hospitals. Table 1 provides preliminary data gathered to pilot the brief chart review instrument that will be used to evaluate changes in supportive care in the regional rollout. Data are from a randomly selected cohort of 150 patients who reached the alert threshold at the 2 pilot sites between November 13, 2013 and June 30, 2014. After removing 3 records with substantial missing data, we were able to find 146 matched patients at the remaining 19 KPNC hospitals during the same time period. Matched patients were selected from those who had a virtual alert based on retrospective data. Table 1 shows that, compared with the other KPNC hospitals, the quality of these 6 aspects of supportive care was better at the pilot sites.
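For readers less familiar with the evaluation design, the difference-in-differences contrast takes the standard textbook form below; this is the generic estimator, not a prespecified model from our analysis plan:

$$
\hat{\delta}_{\mathrm{DiD}} = \left(\bar{Y}_{\text{pilot}}^{\text{post}} - \bar{Y}_{\text{pilot}}^{\text{pre}}\right) - \left(\bar{Y}_{\text{comparison}}^{\text{post}} - \bar{Y}_{\text{comparison}}^{\text{pre}}\right)
$$

where the means are pre- and postdeployment values of a given quality measure; the second difference nets out secular trends shared by the pilot and comparison hospitals.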

Table 1. Matched Analyses of Six Supportive Care Quality Measures*

Measure | Hospital 1 | Hospital 2 | Hospitals 1+2 Combined | Remaining 19 Hospitals | P(1) | P(2) | P(1+2)
N | 73 | 74 | 147 | 146 | | |
Age (y) | 69.3 ± 14.4 | 66.4 ± 15.3 | 67.8 ± 14.8 | 67.4 ± 14.7 | 0.37 | 0.62 | 0.82
Male (%) | 39 (53.4%) | 43 (58.1%) | 82 (55.8%) | 82 (56.2%) | 0.70 | 0.78 | 0.95
Deterioration risk (%)† | 20.0 ± 14.3 | 17.4 ± 11.6 | 18.7 ± 13.0 | 18.8 ± 13.6 | 0.54 | 0.44 | 0.94
LAPS2‡ | 113 ± 38 | 102 ± 39 | 107 ± 39 | 107 ± 38 | 0.28 | 0.38 | 0.9
COPS2§ | 69 ± 52 | 66 ± 52 | 67 ± 52 | 66 ± 51 | 0.75 | 1.00 | 0.85
Died (%)‖ | 17 (23.3%) | 15 (20.3%) | 32 (21.8%) | 24 (16.4%) | 0.22 | 0.48 | 0.25
Agent identified prior¶ | 28 (38.4%) | 18 (24.3%) | 46 (31.3%) | 21 (14.4%) | <0.001 | 0.07 | 0.001
Agent identified after# | 46 (63.0%) | 39 (52.7%) | 85 (57.8%) | 28 (19.4%) | <0.001 | <0.001 | <0.001
Updating within 24 hours** | 32 (43.8%) | 45 (60.8%) | 77 (52.4%) | 59 (40.4%) | 0.63 | 0.00 | 0.04
Goals of care discussion†† | 20 (27.4%) | 37 (50.0%) | 57 (38.8%) | 32 (21.9%) | 0.37 | 0.001 | 0.002
Palliative care consult‡‡ | 19 (26.0%) | 49 (66.2%) | 68 (46.3%) | 35 (24.0%) | 0.74 | <0.001 | <0.001
Spiritual support offered | 27 (37.0%) | 30 (40.5%) | 57 (38.8%) | 43 (29.4%) | 0.26 | 0.10 | 0.09

NOTE: *See text for additional details. The patients at the remaining 19 hospitals were identified based on their retrospective (virtual) deterioration probabilities and then matched to the patients at the pilot sites. The matching algorithm specified exact matches for these variables: alert threshold reached or not; sex; Kaiser Permanente membership status; whether the patient had been in the intensive care unit prior to the first alert; and care directive prior to the alert (full code vs not full code). Once potential matches were found using the above, the algorithm found the closest match for the following variables: deterioration probability, age, comorbidity burden, and admission illness severity. Statistical comparisons are as follows: P(1), pilot hospital 1 versus the remaining 19 Kaiser Permanente Northern California (KPNC) hospitals; P(2), pilot hospital 2 versus the remaining 19 KPNC hospitals; P(1+2), both pilot hospitals' data combined versus the remaining 19 KPNC hospitals. For continuous variables, numbers shown are mean ± standard deviation. †Deterioration risk is generated by the early warning system; it is the probability that a patient will require transfer to the intensive care unit within the next 12 hours, and interventions are initiated when this risk is ≥8%. ‡LAPS2, admission Laboratory‐Based Acute Physiology Score, version 2, a measure of acute instability; the higher the score, the greater the degree of physiologic derangement (patients with LAPS2 ≥110 are very unstable; see citation 20 for additional details). §COPS2, Comorbidity Point Score, version 2, a measure of chronic disease burden over the preceding 12 months that is assigned to all Kaiser Permanente Northern California members on a monthly basis; the higher the score, the greater the chronic illness burden (patients with COPS2 ≥65 have a significant comorbid illness burden; see citation 20 for additional details). ‖Refers to 30‐day mortality. ¶Indicates whether documentation preceding an alert clearly specified who the patient's agent (decision‐maker or surrogate) was. #Indicates whether documentation immediately following an alert clearly specified who the patient's agent (decision‐maker or surrogate) was. **Refers to whether chart documentation indicated that the patient's family or agent was updated about the patient's condition within 24 hours after an alert. ††Refers to whether chart documentation indicated that a discussion regarding the patient's goals of care occurred within 24 hours after an alert. ‡‡Indicates whether a palliative care consultation occurred within 24 hours after an alert.
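The two-stage matching summarized in the note to Table 1 (exact matching on categorical variables, then closest match on continuous ones) can be sketched as follows. This is an illustrative reconstruction under stated assumptions: the field names and the unstandardized distance metric are ours, not the study's actual code.

```python
# Illustrative sketch of the two-stage matching in the Table 1 note:
# exact match on categorical variables, then nearest match on continuous ones.

EXACT = ["alert_reached", "sex", "kp_member", "icu_before_alert", "full_code"]
CLOSEST = ["deterioration_prob", "age", "cops2", "laps2"]


def find_match(case: dict, pool: list):
    """Return the comparison patient most similar to one pilot-site case."""
    candidates = [p for p in pool if all(p[k] == case[k] for k in EXACT)]
    if not candidates:
        return None
    # Sum of absolute differences; a real analysis would standardize each
    # variable (or use a Mahalanobis-type distance) before summing.
    return min(
        candidates,
        key=lambda p: sum(abs(p[k] - case[k]) for k in CLOSEST),
    )
```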

CONCLUSION

Although we continue to review our care processes, we feel that our overall effort has been successful. Nonetheless, it is important to consider a number of limitations to the generalizability of our approach. First, our work has taken place in a highly integrated care delivery system in which both information transfer and referral from the inpatient to the outpatient setting occur easily. Second, because the pilot sites were among the first KPNC hospitals to begin implementing the Respecting Choices model, they undoubtedly had less ground to cover than hospitals beginning with less infrastructure. Third, because of resource limitations, our ability to capture process data is limited. Lastly, both sites were able to obtain resources to expand necessary coverage, which might not be possible in many settings.

In conclusion, we made a conscious decision to incorporate palliative care into the planning for the deployment of the alert system. Further, we made this decision explicit, informing all caregivers that providing palliative care that adheres to the Respecting Choices model would be essential. We have found that integration of the SCT, the EWS, and routine hospital operations can be achieved, and clinician and patient acceptance of the Respecting Choices component has been excellent. We consider 3 elements critical to this process, and these elements form an integral component of the expansion of the early warning system to the remaining 19 KPNC hospitals. The first is careful planning, which includes instructing RRT first responders on their role in ensuring that patient preferences are respected. The second is having social workers available 24 hours a day/7 days a week as backup for busy hospitalists. Finally, as described by Dummett et al.,[27] including reminders regarding patient preferences in the documentation process (by embedding them in an automated note template) is also very important.

Acknowledgements

The authors thank Dr. Michelle Caughey, Dr. Philip Madvig, Dr. Patricia Conolly, Ms. Barbara Crawford, and Ms. Melissa Stern for their administrative support, Dr. Tracy Lieu for reviewing the manuscript, and Ms. Rachel Lesser for formatting the manuscript.

Disclosures: This work was supported by a grant from the Gordon and Betty Moore Foundation (Early Detection, Prevention, and Mitigation of Impending Physiologic Deterioration in Hospitalized Patients Outside Intensive Care: Phase 3, pilot), The Permanente Medical Group, Inc., and Kaiser Foundation Hospitals, Inc. As part of our agreement with the Gordon and Betty Moore Foundation, we made a commitment to disseminate our findings in articles such as this one. However, the Foundation played no role in how we actually structured our articles, nor did they review or preapprove any of the manuscripts submitted as part of the dissemination component. None of the other sponsors had any involvement in our decision to submit this manuscript or in the determination of its contents. None of the authors have any conflicts of interest to declare of relevance to this work.

APPENDIX 1

Key measures to assess the quality of supportive care, extracted by manual chart review, are listed below. Each measure is described by the chart review question, the outcome desired, the outcome measured, and the rationale for selecting that outcome.

1. Chart review question: Was the patient's decision‐maker documented following the alert? If yes: time/date of documentation.
Outcome desired: Timely identification and documentation of the patient's decision‐maker immediately following the alert.
Outcome measured: Whether the patient's decision‐maker was clearly identified and documented by a member of the treatment team (nurse, physician, and/or rapid response team) following the alert. This outcome is measured independently of whether the patient's decision‐maker was already documented prior to the alert.
Rationale: Clear documentation facilitates notifying a patient's family/decision‐maker in a timely manner, enhancing communication and clinical decision‐making and making sure that the patient's wishes and preferences are honored.

2. Chart review question: Was the patient's decision‐maker/family notified, or was there an attempt to notify the patient's decision‐maker, regarding the changes in the patient's condition following the alert? If yes: time/date of notification/attempted contact.
Outcome desired: Providing the patient's family members/decision‐maker with an update on the patient's clinical condition following the alert.
Outcome measured: Whether the medical team notified or attempted to contact the patient's family/decision‐maker to provide an update on the patient's clinical condition following the alert.
Rationale: Providing timely updates when a patient's clinical status changes enhances communication and helps to proactively involve patients and families in the decision‐making process.

3. Chart review question: Was there a goals of care discussion following the alert? If yes: time/date of discussion.
Outcome desired: To clarify and to honor each patient's goals of care.
Outcome measured: Whether a goals of care discussion was initiated after the alert was issued. Criteria for a goals of care discussion included any/all of the following:
  • specific language in the documentation that stated verbatim "Goals of Care Discussion";
  • providing prognosis and treatment options, eliciting preferences, AND documenting decisions made and preferences expressed as a result of the discussion.
Rationale: Goals of care discussions actively involve patients and families in the decision‐making process to ensure that their wishes and preferences are clearly documented and followed.

4. Chart review question: Was there a palliative care consultation during the patient's hospitalization?
Outcome desired: To provide comprehensive supportive care to patients and their families/loved ones.
Outcome measured: Whether palliative care was consulted during the patient's hospitalization.
Rationale: The palliative care team plays an important role in helping patients/families make decisions, providing support, and ensuring that patients' symptoms are addressed and properly managed.

5. Chart review question: Was spiritual support offered to the patient and/or their family/loved ones during the patient's hospitalization?
Outcome desired: To offer and to provide spiritual support to patients and their families/loved ones.
Outcome measured: Whether the patient/family was offered spiritual support during the patient's hospitalization.
Rationale: Spiritual support has been recognized as an important aspect of quality end‐of‐life care.

 

APPENDIX 2

Respecting Choices, A Staged Approach to Advance Care Planning

Respecting Choices is a staged approach to advance care planning, where conversations begin when people are healthy and continue to occur throughout life.

Our Life Care Planning service consists of three distinct steps.

  1. My Values: First Steps is appropriate for all adults, but should definitely be initiated as a component of routine healthcare for those over the age of 55. The goals of First Steps are to motivate individuals to learn more about the importance of Life Care Planning, select a healthcare decision maker, and complete a basic written advance directive.
  2. My Choices: Next Steps is for patients with chronic, progressive illness who have begun to experience a decline in functional status or frequent hospitalizations. The goals of this stage of planning are to assist patients in understanding a) the progression of their illness, b) potential complications, and c) specific life‐sustaining treatments that may be required if their illness progresses. Understanding life‐sustaining treatments includes each treatment's benefits, burdens, and alternatives. With this understanding, members will be better able to express what situations (e.g., complications or bad outcomes) would cause them to want to change their plan of care. Additionally, the individual's healthcare agent(s) and other loved ones are involved in the planning process so that they can be prepared to make decisions, if necessary, and to support the plan of care developed.
  3. My Care: Advanced Steps is intended for frail elders or others whose death in the next 12 months would not be surprising. It helps patients and their agent make specific and timely life‐sustaining treatment decisions that can be converted to medical orders to guide the actions of healthcare providers and be consistent with the goals of the individual.

 

(Reference: http://www.gundersenhealth.org/respecting-choices).

APPENDIX 3

Pilot Site Palliative Care Referral Criteria

Automatic palliative care consults for adults at the Sacramento pilot site are as follows:

  1. 30‐day readmits, or >3 ED or acute readmissions in the past year, for CHF or COPD in patients who have no advance directive and are not followed by Chronic Care Management
  2. Aspiration
  3. CVA with poor prognosis for regaining independence
  4. Hip fracture patients not weight bearing on post‐operative day 2
  5. Code blue survivor
  6. Skilled nursing facility resident with sepsis and/or dementia
  7. Active hospice patients
  8. Sepsis patients with 10 or more ICD codes in the problem list

 

Potential palliative care consults for adults at Sacramento pilot site are as follows:

  1. Morbid obesity complicated by organ damage (e.g., congestive heart failure, refractory liver disease, chronic renal disease)
  2. Severe chronic kidney disease and/or congestive heart failure with poor functional status (chair or bed bound)
  3. Patients with preoperative arteriovenous fistulas and poor functional status, congestive heart failure, or age >80
  4. End stage liver disease with declining functional status, poor odds of transplant

 

 

Files
References
  1. Institute of Medicine of the National Academies. Dying in America: Improving Quality and Honoring Individual Preferences Near the End of Life. Washington, DC: Institute of Medicine of the National Academies; 2014.
  2. Lake Research Partners. Final chapter: Californians' attitudes and experiences with death and dying. California HealthCare Foundation website. Available at: http://www.chcf.org/publications/2012/02/final‐chapter‐death‐dying. Published February 2012. Accessed July 14, 2015.
  3. Rozenbaum EA, Shenkman L. Predicting outcome of inhospital cardiopulmonary resuscitation. Crit Care Med. 1988;16(6):583-586.
  4. Hourihan F, Bishop G, Hillman KM, Daffurn K, Lee A. The medical emergency team: a new strategy to identify and intervene in high‐risk surgical patients. Clin Intensive Care. 1995;6:269-272.
  5. Winters BD, Pham J, Pronovost PJ. Rapid response teams—walk, don't run. JAMA. 2006;296(13):1645-1647.
  6. Winters BD, Pham JC, Hunt EA, Guallar E, Berenholtz S, Pronovost PJ. Rapid response systems: a systematic review. Crit Care Med. 2007;35(5):1238-1243.
  7. Litvak E, Pronovost PJ. Rethinking rapid response teams. JAMA. 2010;304(12):1375-1376.
  8. Jones DA, McIntyre T, Baldwin I, Mercer I, Kattula A, Bellomo R. The medical emergency team and end‐of‐life care: a pilot study. Crit Care Resusc. 2007;9(2):151-156.
  9. Chen J, Flabouris A, Bellomo R, Hillman K, Finfer S. The Medical Emergency Team System and not‐for‐resuscitation orders: results from the MERIT study. Resuscitation. 2008;79(3):391-397.
  10. Vazquez R, Gheorghe C, Grigoriyan A, Palvinskaya T, Amoateng‐Adjepong Y, Manthous CA. Enhanced end‐of‐life care associated with deploying a rapid response team: a pilot study. J Hosp Med. 2009;4(7):449-452.
  11. Knott CI, Psirides AJ, Young PJ, Sim D. A retrospective cohort study of the effect of medical emergency teams on documentation of advance care directives. Crit Care Resusc. 2011;13(3):167-174.
  12. Coventry C, Flabouris A, Sundararajan K, Cramey T. Rapid response team calls to patients with a pre‐existing not for resuscitation order. Resuscitation. 2013;84(8):1035-1039.
  13. Downar J, Barua R, Rodin D, et al. Changes in end of life care 5 years after the introduction of a rapid response team: a multicentre retrospective study. Resuscitation. 2013;84(10):1339-1344.
  14. Smith RL, Hayashi VN, Lee YI, Navarro‐Mariazeta L, Felner K. The medical emergency team call: a sentinel event that triggers goals of care discussion. Crit Care Med. 2014;42(2):322-327.
  15. Sundararajan K, Flabouris A, Keeshan A, Cramey T. Documentation of limitation of medical therapy at the time of a rapid response team call. Aust Health Rev. 2014;38(2):218-222.
  16. Visser P, Dwyer A, Moran J, et al. Medical emergency response in a sub‐acute hospital: improving the model of care for deteriorating patients. Aust Health Rev. 2014;38(2):169-176.
  17. Respecting Choices advance care planning. Gundersen Health System website. Available at: http://www.gundersenhealth.org/respecting‐choices. Accessed March 28, 2015.
  18. Escobar G, Dellinger RP. Early detection, prevention, and mitigation of critical illness outside intensive care settings. J Hosp Med. 2016;11:000-000.
  19. Langley GL, Moen R, Nolan KM, Nolan TW, Norman CL, Provost LP. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. 2nd ed. San Francisco, CA: Jossey‐Bass; 2009.
  20. Nadeem E, Olin SS, Hill LC, Hoagwood KE, Horwitz SM. Understanding the components of quality improvement collaboratives: a systematic literature review. Milbank Q. 2013;91(2):354-394.
  21. Escobar G, Turk B, Ragins A, et al. Piloting electronic medical record-based early detection of inpatient deterioration in community hospitals. J Hosp Med. 2016;11:000-000.
  22. Escobar GJ, Gardner M, Greene JG, Draper D, Kipnis P. Risk‐adjusting hospital mortality using a comprehensive electronic record in an integrated healthcare delivery system. Med Care. 2013;51(5):446-453.
  23. Department of Health and Human Services. Palliative care and end‐of‐life care—a consensus report. National Quality Forum website. Available at: http://www.qualityforum.org/projects/palliative_care_and_end‐of‐life_care.aspx. Accessed April 1, 2015.
  24. Gu XS, Rosenbaum PR. Comparison of multivariate matching methods: structures, distances, and algorithms. J Comput Graph Stat. 1993;2(4):405-420.
  25. Feng WW, Jun Y, Xu R. A method/macro based on propensity score and Mahalanobis distance to reduce bias in treatment comparison in observational study: Eli Lilly working paper. Available at: http://www.lexjansen.com/pharmasug/2006/publichealthresearch/pr05.pdf. Accessed January 24, 2013.
  26. Stuart EA. Matching methods for causal inference: a review and a look forward. Stat Sci. 2010;25(1):1-21.
  27. Dummett BA, Adams C, Scruth E, Liu V, Guo M, Escobar G. Incorporating an early detection system into routine clinical practice in two community hospitals. J Hosp Med. 2016;11:000-000.

The likelihood of meaningful survival after cardiopulmonary arrest is low and even lower the longer the patient has been in the hospital[1, 2]; realization of this[3] played a major role in the development of rapid response teams (RRTs).[4] As noted elsewhere in this journal, the limited success of these teams[5, 6, 7] has inspired efforts to develop systems to identify patients at risk of deterioration much earlier.

Whereas a number of recent reports have described end‐of‐life care issues in the context of RRT operations,[8, 9, 10, 11, 12, 13, 14, 15, 16] descriptions of how one might incorporate respect for patient preferences into the development of a response arm, particularly one meant to scale up to a multihospital system, are largely absent from the literature. In this article, we describe the implementation process for integrating palliative care and the honoring of patient choices, which we refer to as supportive care, with an automated early warning system (EWS) and an RRT.

The context of this work is a pilot project conducted at 2 community hospitals, the Kaiser Permanente Northern California (KPNC) Sacramento (200 beds) and South San Francisco (100 beds) medical centers. Our focus was to develop an approach that could serve as the basis for future dissemination to the remaining 19 KPNC hospitals, regardless of their size. Our work incorporated the Respecting Choices model,[17] which has been endorsed by KPNC for all its hospitals and clinics. We describe the workflow we developed to embed the supportive care team's (SCT) reactive and proactive components into the EWS response arm. We also provide a granular description of how our approach worked in practice, as evidenced by the combined patient and provider experiences captured in 5 vignettes as well as some preliminary data obtained by chart review.

When patients arrive in the hospital, they may or may not have had a discussion about their care escalation and resuscitation preferences. As noted by Escobar and Dellinger[18] elsewhere in this issue of the Journal of Hospital Medicine, patients with documented restricted resuscitation preferences (eg, do not resuscitate [DNR] or partial code) at the time of admission to the hospital account for slightly more than half of the hospital deaths at 30 days after admission. In general, these stated preferences are honored.

A significant proportion of patients are unstable at the time of admission or have a significant underlying chronic illness burden predisposing them to unexpected deterioration. Often these patients lose decision‐making capacity when their condition worsens. We need to ensure that we honor their wishes and identify the correct surrogate.

To make sure a patient's wishes are clear, we developed a workflow that included 2 components. One component is meant to ensure that patient preferences are honored following an EWS alert. This allows for contingencies, including the likelihood that a physician will not be available to discuss patient wishes due to clinical demands. Although it may appear that the role of the hospitalist is supplanted, in fact this is not the case. The only person who has the authority to change a patient's code status is the hospitalist, and they always talk to the patient or their surrogate. The purpose of the teams described in this report is to provide backup, particularly in those instances where the hospitalist is tied up elsewhere (eg, the emergency department). Our workflows also facilitate the integration of the clinical response with the palliative care response. The other component employs the EWS's ancillary elements (provision of a severity of illness score and a longitudinal comorbidity score in real time) to screen for patients who might need the SCT. This allows us to identify patients who are at high risk for deterioration, in whom an alert has not yet been issued, due to acute instability, comorbid burden (leading to a high probability of unexpected deterioration), or both, and who do not have stated goals of care and/or an identified surrogate.

IMPLEMENTATION APPROACH

We developed our workflow using the Institute for Healthcare Improvement's Plan‐Do‐Study‐Act approach.[19, 20] Our first finding was that most alerts did not require a rapid intervention by the SCT. Both sites reserved time in the SCT physicians' schedule and considered changing staffing levels (the smaller site only had funding for physician support 20 hours per week), but neither had to make such changes. One reason for this was that we increased social worker availability, particularly for off hours (to cover the contingency where an alert was issued in the middle of the night while the on‐call hospitalist was handling an admission in the emergency department). The second was that, as is described by Escobar et al.,[21] the EWS provides a risk of deterioration in the next 12 hours (as opposed to a code blue or regular RRT call, which indicate the need for immediate action) and provides an opportunity for spending time with patients without the constraints of an ongoing resuscitation.

We also found that, of the patients who triggered an alert, approximately half would have been flagged for a palliative care referral using our own internal screening tool. Furthermore, having longitudinal comorbidity (Comorbidity Point Score, version 2 [COPS2]) and severity of illness (Laboratory‐Based Acute Physiology Score, version 2) scores[22] facilitated the identification of patients who needed review of their preferences with respect to escalation of care. Currently, our primary case‐finding criterion for proactive SCT consultation is a COPS2 >65, which is associated with a 10.8% 30‐day mortality risk. Overall, the SCT was asked to see about 25% of patients in whom an alert was triggered.
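
As a concrete illustration, the proactive case-finding rule described above can be expressed in a few lines of logic. This is a minimal sketch under our stated assumptions, not the production KPNC implementation; the Admission class and its field names are hypothetical.

```python
# Minimal sketch of the proactive case-finding rule described above.
# Not the production KPNC logic; class and field names are hypothetical.
from dataclasses import dataclass

COPS2_THRESHOLD = 65  # COPS2 >65 corresponds to ~10.8% 30-day mortality risk


@dataclass
class Admission:
    patient_id: str
    cops2: int               # longitudinal comorbidity score
    has_goals_of_care: bool  # goals of care documented in the chart
    has_surrogate: bool      # decision-maker/surrogate documented


def needs_proactive_sct_review(adm: Admission) -> bool:
    """Flag high-comorbidity admissions lacking documented preferences."""
    high_risk = adm.cops2 > COPS2_THRESHOLD
    missing_planning = not (adm.has_goals_of_care and adm.has_surrogate)
    return high_risk and missing_planning
```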

The workflows we developed were employed at the first site to go live (South San Francisco, 7000 annual discharges, Figure 1) and then modified at Sacramento (14,000 annual discharges, Figure 2). Because the hospitals differ in several respects, from size and patient population to staffing, the workflows are slightly different.

Figure 1
Workflow for integrating Respecting Choices model with a real‐time early warning system at Kaiser Permanente South San Francisco. See text for additional details. Abbreviations: EWS, early warning system; EMR, electronic medical record; pt, patient; LCP, life care planning; HBS, hospital based specialist; RN, registered nurse; RRT, rapid response team; SCT, supportive care team; SW, social worker.
Figure 2
Workflow for integrating Respecting Choices model with a real‐time early warning system at Kaiser Permanente Sacramento. See text for additional details. Abbreviations: EWS, early warning system; EMR, electronic medical record; RN, registered nurse; RRT, rapid response team; SCT, supportive care team.

The EWS provides deterioration probabilities every 6 hours, and first responders (RRT nurses) intervene when this probability is ≥8%. The RRT nurse can activate the clinical response arm, the Respecting Choices pathway, or both. In South San Francisco, which lacked the resources to staff supportive care 24 hours a day/7 days a week, the RRT contacts a medical social worker (MSW), who performs an immediate record review. If this review identifies something meriting urgent communication (eg, conflicting or absent information regarding a patient's surrogate), the MSW alerts the hospitalist. The MSW documents findings and ensures that a regular MSW consult occurs the next day. If the MSW feels the patient needs an SCT consult, the MSW alerts the team (this does not preclude a hospitalist or RRT nurse from initiating SCT consultation). At the Sacramento site, where the SCT is staffed 24 hours a day/7 days a week, it is possible to bypass the MSW step. Each morning, the SCT reviews all alerts issued during the previous 24 hours to determine whether an SCT consult is needed; it also proactively reviews the COPS2 scores of all admissions to identify patients who could benefit from one. Although surrogate identification and clarifying goals of care are essential, the SCT also helps patients in other ways, as is evident from the following case studies.
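
The site-dependent routing just described lends itself to a simple decision rule. The sketch below is illustrative only; the function and variable names are ours, not KPNC's, and it simply encodes the ≥8% threshold and the MSW step under those assumptions.

```python
# Illustrative sketch of the site-dependent alert routing described above.
# Names and structure are hypothetical; this is not KPNC production code.
ALERT_THRESHOLD = 0.08  # deterioration probability that triggers a response


def route_alert(probability: float, sct_staffed_24x7: bool) -> list[str]:
    """Return the response steps for one 6-hourly EWS evaluation."""
    if probability < ALERT_THRESHOLD:
        return []  # below threshold: no alert this cycle
    steps = ["RRT nurse first response (clinical arm and/or Respecting Choices)"]
    if sct_staffed_24x7:
        # e.g., Sacramento: the MSW step can be bypassed
        steps.append("direct SCT consult if indicated")
    else:
        # e.g., South San Francisco: MSW record review precedes escalation
        steps.append("MSW immediate record review")
        steps.append("MSW alerts hospitalist and/or SCT if indicated")
    return steps
```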

The major difference between the palliative care team and the SCT is that the SCT includes the inpatient social worker as part of the team. The SCT has a more focused role (its efforts center on aligning patient goals and priorities with the care that will actually be provided). In contrast, the palliative care team has other functions (eg, pain and symptom management) that are not necessarily associated with life care planning or the alert response.

Considerable overlap exists between patients who trigger an alert and those who would have met screening criteria established prior to EWS deployment. Although this is evolving, we can say that, in general, both sites are moving to an "or" criterion for involving the SCT (the patient meets the traditional criteria of the screening tool or triggers an alert). Further, as KPNC begins adding more sites to the system, serious consideration is being given to employing the COPS2 score alone as the primary screening criterion.

CASE STUDY 1: SURROGATE IDENTIFICATION

Mr. Smith, a 78‐year‐old man with congestive heart failure (CHF), atrial fibrillation, severe chronic obstructive pulmonary disease, and a history of stroke, was admitted due to a CHF exacerbation. The morning after admission, he experienced uncontrolled tachycardia associated with low oxygen saturation, triggering an alert. The hospitalist stabilized him and documented the treatment plan as follows: "If worsening signs (shortness of breath/wheezing) or decreased saturation on current oxygen supplement, check chest x‐ray/arterial blood gas and call MD for possible bilevel positive airway pressure and repeating the echo. Intensive care unit (ICU) transfer as needed." According to his sister, his resuscitation preference was full code.

Given the new protocol instituted since the deployment of the EWS, the MSW reviewed the chart and found that the patient's sister, who lived locally and was the emergency contact, had been incorrectly identified as the surrogate. In a prior hospitalization, Mr. Smith had named his brother as his surrogate, as the patient felt strongly that his sister would not make good decisions for him. The following day, the SCT met with Mr. Smith, who articulated his desire to change his care directive to DNR. He also asked for a full palliative consult when his brother could come in (3 days later). During the consult, his brother learned, for the first time, exactly what heart failure was and what to anticipate over the next months and years. The 2 brothers completed an advance directive granting Mr. Smith's brother a durable power of attorney, including a request for a palliative approach to end‐stage illness. They also completed a physician order for life‐sustaining treatment, for DNR and limited intervention. Mr. Smith stated, "When I go, I'm gone," and recalled that his mother and uncle had protracted illnesses, adding, "I don't want to stay alive if I'm disabled like that."

In this example, the SCT was able to identify the correct surrogate and clarify the patient's resuscitation preference. Without the SCT's involvement, if this patient had deteriorated unexpectedly, his sister might have insisted on treatment that was inconsistent with Mr. Smith's wishes. The interventions prompted by the alert also led the patient and his brother to begin discussing the medical goals of treatment openly and to reach an understanding about the patient's chronic and progressive conditions.

CASE STUDY 2: TRANSITION TO HOME‐BASED HOSPICE

Mr. North was a 71‐year‐old man admitted for sepsis due to pneumonia. He had a history of temporal arteritis, systemic lupus erythematosus, prostate cancer, squamous cell lung cancer, and chronic leg ulcers. Delirious at the time of admission, he triggered an alert at 6 am, shortly after admission to the ward. He was hypotensive and was transferred to the ICU.

The SCT reviewed the case and judged that he met criteria for consultation. His wife readily agreed to meet to discuss goals and plan of care. She had been taking care of him at home, and was overwhelmed by his physical needs as well as his worsening memory loss and agitation. She had not been able to bring him to the clinic for almost 2 years, and he had refused entry to the home health nurse. During the palliative consult, Mr. North was lucid enough to state his preference for comfort‐focused care, and his desire not to return to the hospital. Mrs. North accepted a plan for home hospice, with increased attendant care at home.

This case illustrates the benefit of the EWS in identifying patients whose chronic condition has progressed, and who would benefit from a palliative consult to clarify goals of care. Practice variation, the complexity of multiple medical problems, and the urgency of the acute presentation may obscure or delay the need for clarifying goals of care. A structured approach provided by the EWS workflow, as it did in this case, helps to ensure that these discussions are occurring with the appropriate patients at the appropriate times.

CASE STUDY 3: RESOLVING MD‐TO‐MD MISCOMMUNICATION

Mr. Joseph was an 89‐year‐old man hospitalized for a hip fracture. He had a history of atrial fibrillation, prostate cancer with bone metastases, radiation‐induced lung fibrosis, stroke, and advanced dementia. His initial admission order was DNR, but this was changed to full code after surgery and remained so. The next few days were relatively uneventful until an alert triggered. By then, the hospitalist attending him had changed 3 times. The social worker reviewed Mr. Joseph's records and determined that a palliative consult had taken place previously at another Kaiser Permanente facility, and that the prior code status was DNR. Although Mr. Joseph's admission care directive was DNR, this was switched to full code for surgery. However, the care directive was not changed back, nor was a discussion held regarding his preferences in case of a complication related to surgery. Meanwhile, he was having increasing respiratory problems due to aspiration and required noninvasive ventilation.

Consequently, when the SCT reviewed the alerts from the previous 24 hours, it determined that further investigation and discussion were required. When the hospitalist was called, the SCT discovered that the hospitalist had assumed the change to full code had been made by 1 of the previous attending physicians; he also informed the SCT that Mr. Joseph would likely need intubation. The SCT decided to see the patient and, on approaching the room, saw Mr. Joseph's son waiting outside. Asked how things were going, the son replied, "We all knew that 1 day he would deteriorate; we just want to make sure he is comfortable." Clearly, the full code status did not reflect Mr. Joseph's wishes, so this was clarified, and the hospitalist was called immediately to change the care directive. The SCT met with the son and Mr. Joseph's wife, educating them about aspiration and what to expect. They definitely wished a gentle approach for Mr. Joseph, and it was decided to continue current care, without escalation, until the morning. This was to allow the other son to be informed of his father's condition and to see if his status would improve. The next morning the SCT met with the family at the room, and the patient was placed on comfort measures.

This case illustrates 3 points. One, Mr. Joseph's status was changed to full code during surgery without addressing his preferences should he develop a complication during the postoperative period. Two, when the hospitalist saw the full code order in the electronic record, it was assumed someone else had had a discussion with the patient and his family. Lastly, although a social worker performed a chart review, the full picture only emerged after the entire SCT became involved. Therefore, even in the presence of an EWS with associated protocols, important details can be missed, highlighting the need to build redundancy into workflows.

CASE STUDY 4: RELUCTANCE TO INVOLVE PALLIATIVE CARE TEAM

Mrs. Wood, a bed‐bound 63‐year‐old with end‐stage heart failure, was admitted to the hospital with respiratory failure. She had previously met with a life care planning facilitator as well as a palliative physician but refused to discuss end‐of‐life options. She felt she would always do well, and her husband felt the same way. During this admission a routine palliative referral was made, but she and her husband refused it. The chaplain visited often; then the patient took a turn for the worse, triggering an alert, and was transferred to the ICU.

The hospitalist did not feel an SCT consult was indicated based on prior discussions. However, the SCT reviewed the records and felt an intervention was needed. The patient, now obtunded, had worsening renal failure and required continuous pressor infusions. The chaplain spoke with Mr. Wood, who felt a consult was appropriate. Mrs. Wood was no longer able to make decisions, and her husband needed more information about what to expect. At the end of the discussion, he decided on comfort care, and his wife expired peacefully in the hospital.

This case illustrates that, although the primary attending may initially feel a palliative consult is not helpful and possibly even detrimental to the patient's care, decisions may change as the patient's condition changes. The EWS alert helped the SCT recognize the drastic change in the patient's condition and the need to support the patient's family. The family had been resistant, but the SCT was able to help them transition to a palliative approach through gentle contact and by being clear that its role was to provide support regardless of their decision.

CASE STUDY 5: ALERT FACILITATES TRANSITION TO OUTPATIENT PALLIATIVE CARE

Mr. Jones was an 82‐year‐old man who had a recent episode of gastrointestinal bleeding while on vacation. He was transferred by air ambulance to the hospital and developed delirium and agitation. His evaluation revealed that he had polycythemia vera and a recent diagnosis of mild dementia.

In this case, the SCT reviewed the chart not because of an alert, but because the hospitalist noted that Mr. Jones had a very high severity of illness score on admission. When the SCT arrived at Mr. Jones's room, 3 family members were present. His wife appeared to be very frail and was too emotional to make decisions. The children present at the bedside were new to the problems at hand but wanted to help. The SCT educated the family about his current disease state, the general disease trajectory, and what to expect. They explored the patient's values and any indicators of what his care preferences would be if he could communicate them. The SCT established a life care plan at that visit. Based upon Mr. Jones's own wishes and values, he was made DNR with limited interventions. He survived the hospitalization and was followed by the outpatient palliative care clinic as well as by hematology.

This case illustrates 2 facets. First, a high severity of illness score led to consultation even without an alert; the SCT could then take on a task (arriving at a life care plan by exploring values) that is difficult and time consuming for a busy hospitalist. Second, patients may elect to pursue other options, in this case outpatient palliative care.

FUTURE DIRECTIONS

Our team has also started a quantitative evaluation process. The major limitation we face in this effort is that, unlike physiologic or health services measures (eg, tachycardia, hospital length of stay, mortality), the key measures for assessing the quality of palliative and end‐of‐life care must be extracted by manual chart review. Our approach is based on the palliative and end‐of‐life care measures endorsed by the National Quality Forum,[23] which are described in greater detail in the appendix. As is the case with other outcomes, and as described in the article by Escobar et al.,[21] we will be employing a difference‐in‐differences approach as well as multivariate matching[24, 25, 26] to evaluate the effectiveness of the intervention (a sketch of the matching logic follows Table 1 below). Because of the high costs of manual chart review, we will be reviewing randomly selected charts of patients who triggered an alert at the 2 pilot sites as well as matched comparison patient charts at the remaining 19 KPNC hospitals. Table 1 provides preliminary data we gathered to pilot the brief chart review instrument that will be used for evaluating changes in supportive care in the regional rollout. Data are from a randomly selected cohort of 150 patients who reached the alert threshold at the 2 pilot sites between November 13, 2013 and June 30, 2014. After removing 3 records with substantial missing data, we were able to find 146 matched patients at the remaining 19 KPNC hospitals during the same time period. Matched patients were selected from those who had a virtual alert based on retrospective data. Table 1 shows that, compared to the other KPNC hospitals, the quality of these 6 aspects of supportive care was better at the pilot sites.

Matched Analyses of Six Supportive Care Quality Measures
| Hospital* | 1 | 2 | 1+2 combined | Remaining 19 | P(1) | P(2) | P(1+2) |
| N | 73 | 74 | 147 | 146 | | | |
| Age (y) | 69.3 ± 14.4 | 66.4 ± 15.3 | 67.8 ± 14.8 | 67.4 ± 14.7 | 0.37 | 0.62 | 0.82 |
| Male (%) | 39 (53.4%) | 43 (58.1%) | 82 (55.8%) | 82 (56.2%) | 0.70 | 0.78 | 0.95 |
| Deterioration risk (%)† | 20.0 ± 14.3 | 17.4 ± 11.6 | 18.7 ± 13.0 | 18.8 ± 13.6 | 0.54 | 0.44 | 0.94 |
| LAPS2‡ | 113 ± 38 | 102 ± 39 | 107 ± 39 | 107 ± 38 | 0.28 | 0.38 | 0.9 |
| COPS2§ | 69 ± 52 | 66 ± 52 | 67 ± 52 | 66 ± 51 | 0.75 | 1.00 | 0.85 |
| Died (%)‖ | 17 (23.3%) | 15 (20.3%) | 32 (21.8%) | 24 (16.4%) | 0.22 | 0.48 | 0.25 |
| Agent identified prior¶ | 28 (38.4%) | 18 (24.3%) | 46 (31.3%) | 21 (14.4%) | <0.001 | 0.07 | 0.001 |
| Agent identified after# | 46 (63.0%) | 39 (52.7%) | 85 (57.8%) | 28 (19.4%) | <0.001 | <0.001 | <0.001 |
| Updating within 24 hours** | 32 (43.8%) | 45 (60.8%) | 77 (52.4%) | 59 (40.4%) | 0.63 | 0.00 | 0.04 |
| Goals of care discussion†† | 20 (27.4%) | 37 (50.0%) | 57 (38.8%) | 32 (21.9%) | 0.37 | 0.001 | 0.002 |
| Palliative care consult‡‡ | 19 (26.0%) | 49 (66.2%) | 68 (46.3%) | 35 (24.0%) | 0.74 | <0.001 | <0.001 |
| Spiritual support offered | 27 (37.0%) | 30 (40.5%) | 57 (38.8%) | 43 (29.4%) | 0.26 | 0.10 | 0.09 |

NOTE: *See text for additional details. The patients at the remaining 19 hospitals were identified based on their retrospective (virtual) deterioration probabilities and then matched to the patients at the pilot sites. The matching algorithm specified exact matches for these variables: alert threshold reached or not; sex; Kaiser Permanente membership status; whether the patient had been in the intensive care unit prior to the first alert; and care directive prior to the alert (full code vs not full code). Once potential matches were found using the above, the algorithm found the closest match for the following variables: deterioration probability, age, comorbidity burden, and admission illness severity. Statistical comparisons are as follows: P(1), P value for comparison of pilot hospital 1 versus the remaining 19 Kaiser Permanente Northern California hospitals; P(2), as per P(1) but for pilot hospital 2; P(1+2), both pilot hospitals' data combined. For continuous variables, numbers shown are mean ± standard deviation. Numbers in bold italics are those that were significantly different. †Deterioration risk is generated by the early warning system; it is the probability that a patient will require transfer to the intensive care unit within the next 12 hours. Interventions are initiated when this risk is ≥8%. ‡LAPS2 = admission Laboratory‐based Acute Physiology Score, version 2, a measure of acute instability; the higher the score, the greater the degree of physiologic derangement. Patients with LAPS2 ≥110 are very unstable. See citation 22 for additional details. §COPS2 = Comorbidity Point Score, version 2, a measure of chronic disease burden over the preceding 12 months that is assigned to all Kaiser Permanente Northern California members on a monthly basis; the higher the score, the greater the chronic illness burden. Patients with COPS2 ≥65 have a significant comorbid illness burden. See citation 22 for additional details. ‖Refers to 30‐day mortality. ¶Indicates whether documentation preceding an alert clearly specified who the patient's agent (decision‐maker or surrogate) was. #Indicates whether documentation immediately following an alert clearly specified who the patient's agent (decision‐maker or surrogate) was. **Refers to whether chart documentation indicated that the patient's family or agent were updated about the patient's condition within 24 hours after an alert. ††Refers to whether chart documentation indicated that a discussion regarding the patient's goals of care occurred within 24 hours after an alert. ‡‡Indicates whether a palliative care consultation occurred within 24 hours after an alert.
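
To make the matching strategy in the table note concrete, the sketch below pairs each pilot-site patient with a comparison patient by exact matching on the categorical variables and then taking the nearest neighbor on the continuous covariates. It is a simplified stand-in (a scaled Euclidean distance rather than the Mahalanobis/propensity-score method of citation 25), and all variable names are hypothetical.

```python
# Hedged sketch of the matching described in the Table 1 note: exact matching
# on categorical variables, then nearest neighbor on continuous covariates.
# Simplification: scaled Euclidean distance stands in for the Mahalanobis/
# propensity-score approach of citation 25. Variable names are hypothetical.
import math
from typing import Optional

EXACT_KEYS = ["alert_reached", "sex", "kp_member", "icu_before_alert", "full_code"]
CONTINUOUS_KEYS = ["deterioration_prob", "age", "cops2", "laps2"]


def scaled_distance(a: dict, b: dict, scale: dict) -> float:
    """Euclidean distance after dividing each covariate by its scale (eg, SD)."""
    return math.sqrt(sum(((a[k] - b[k]) / scale[k]) ** 2 for k in CONTINUOUS_KEYS))


def match_patient(case: dict, controls: list[dict], scale: dict) -> Optional[dict]:
    """Exact-match blocking, then closest continuous match; None if no block."""
    block = [c for c in controls if all(c[k] == case[k] for k in EXACT_KEYS)]
    if not block:
        return None
    return min(block, key=lambda c: scaled_distance(case, c, scale))
```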

CONCLUSION

Although we continue to review our care processes, we feel that our overall effort has been successful. Nonetheless, it is important to consider a number of limitations to the generalizability of our approach. First, our work has taken place in the context of a highly integrated care delivery system in which both information transfer and referral from the inpatient to the outpatient setting can occur easily. Second, because the pilot sites were among the first KPNC hospitals to begin implementing the Respecting Choices model, they undoubtedly had less ground to cover than hospitals beginning with less infrastructure. Third, resource constraints limited our ability to capture process data. Lastly, both sites were able to obtain resources to expand necessary coverage, which might not be possible in many settings.

In conclusion, we made a conscious decision to incorporate palliative care into the planning for the deployment of the alert system. Further, we made this decision explicit, informing all caregivers that providing palliative care that adheres to the Respecting Choices model would be essential. We have found that integration of the SCT, the EWS, and routine hospital operations can be achieved. Clinician and patient acceptance of the Respecting Choices component has been excellent. We consider 3 elements to be critical for this process, and these elements form an integral component of the expansion of the early warning system to the remaining 19 KPNC hospitals. The first is careful planning, which includes instructing RRT first responders on their role in ensuring that patient preferences are respected. Second, having social workers available 24 hours a day/7 days a week as backup for busy hospitalists is essential. Finally, as is described by Dummett et al.,[27] including reminders regarding patient preferences in the documentation process (by embedding them in an automated note template) is also very important.

Acknowledgements

The authors thank Dr. Michelle Caughey, Dr. Philip Madvig, Dr. Patricia Conolly, Ms. Barbara Crawford, and Ms. Melissa Stern for their administrative support, Dr. Tracy Lieu for reviewing the manuscript, and Ms. Rachel Lesser for formatting the manuscript.

Disclosures: This work was supported by a grant from the Gordon and Betty Moore Foundation (Early Detection, Prevention, and Mitigation of Impending Physiologic Deterioration in Hospitalized Patients Outside Intensive Care: Phase 3, pilot), The Permanente Medical Group, Inc., and Kaiser Foundation Hospitals, Inc. As part of our agreement with the Gordon and Betty Moore Foundation, we made a commitment to disseminate our findings in articles such as this one. However, the Foundation played no role in how we actually structured our articles, nor did they review or preapprove any of the manuscripts submitted as part of the dissemination component. None of the other sponsors had any involvement in our decision to submit this manuscript or in the determination of its contents. None of the authors has any conflict of interest of relevance to this work to declare.

APPENDIX 1

Key measures to assess the quality of supportive care extracted by manual chart review

Measure 1
Chart review question: Was the patient's decision‐maker documented following the alert? If yes: time/date of documentation.
Outcome desired: Timely identification and documentation of the patient's decision‐maker immediately following the alert.
Outcome measured: Whether the patient's decision‐maker was clearly identified and documented by a member of the treatment team (nurse, physician, and/or rapid response team) following the alert. This outcome is measured independently of whether the patient's decision‐maker was already documented prior to the alert.
Rationale: Clear documentation facilitates the notification of a patient's family/decision‐maker in a timely manner to enhance communication and clinical decision‐making and to make sure that the patient's wishes and preferences are honored.

Measure 2
Chart review question: Was the patient's decision‐maker/family notified, or was there an attempt to notify them, regarding the changes in the patient's condition following the alert? If yes: time/date of notification/attempted contact.
Outcome desired: Providing the patient's family members/decision‐maker with an update on the patient's clinical condition following the alert.
Outcome measured: Whether the medical team notified or attempted to contact the patient's family/decision‐maker to provide an update on the patient's clinical condition following the alert.
Rationale: Providing timely updates when a patient's clinical status changes enhances communication and helps to proactively involve patients and families in the decision‐making process.

Measure 3
Chart review question: Was there a goals of care discussion following the alert? If yes: time/date of discussion.
Outcome desired: To clarify and to honor individual patients' goals of care.
Outcome measured: Whether a goals of care discussion was initiated after the alert was issued. Criteria for a goals of care discussion included any/all of the following: specific language in the documentation that stated verbatim "goals of care discussion"; or providing prognosis and treatment options, eliciting preferences, and documenting the decisions and preferences resulting from the discussion.
Rationale: Goals of care discussions actively involve patients and families in the decision‐making process to ensure that their wishes and preferences are clearly documented and followed.

Measure 4
Chart review question: Was there a palliative care consultation during the patient's hospitalization?
Outcome desired: To provide comprehensive supportive care to patients and their families/loved ones.
Outcome measured: Whether palliative care was consulted during the patient's hospitalization.
Rationale: The palliative care team plays an important role in helping patients/families make decisions, providing support, and ensuring that patients' symptoms are addressed and properly managed.

Measure 5
Chart review question: Was spiritual support offered to the patient and/or their family/loved ones during the patient's hospitalization?
Outcome desired: To offer and to provide spiritual support to patients and their families/loved ones.
Outcome measured: Whether the patient/family was offered spiritual support during the patient's hospitalization.
Rationale: Spiritual support has been recognized as an important aspect of quality end‐of‐life care.

 

APPENDIX 2

Respecting Choices, A Staged Approach to Advance Care Planning

Respecting Choices is a staged approach to advance care planning, where conversations begin when people are healthy and continue to occur throughout life.

Our Life Care Planning service consists of three distinct steps.

  1. My Values: First Steps is appropriate for all adults, but should definitely be initiated as a component of routine healthcare for those over the age of 55. The goals of First Steps are to motivate individuals to learn more about the importance of Life Care Planning, select a healthcare decision maker, and complete a basic written advance directive.
  2. My Choices: Next Steps is for patients with chronic, progressive illness who have begun to experience a decline in functional status or frequent hospitalizations. The goals of this stage of planning are to assist patients in understanding a) the progression of their illness, b) potential complications, and c) specific life‐sustaining treatments that may be required if their illness progresses. Understanding life‐sustaining treatments includes each treatment's benefits, burdens, and alternatives. With this understanding, members will be better able to express what situations (e.g., complications or bad outcomes) would cause them to want to change their plan of care. Additionally, the individual's healthcare agent(s) and other loved ones are involved in the planning process so that they can be prepared to make decisions, if necessary, and to support the plan of care developed.
  3. My Care: Advanced Steps is intended for frail elders or others whose death in the next 12 months would not be surprising. It helps patients and their agent make specific and timely life‐sustaining treatment decisions that can be converted to medical orders to guide the actions of healthcare providers and be consistent with the goals of the individual.

 

(Reference: http://www.gundersenhealth.org/respecting-choices).

APPENDIX 3

Pilot site Palliative Care Referral Criteria

Automatic palliative care consults for adults at Sacramento site are as follows:

  1. 30‐day readmits, or >3 ED visits or acute readmissions in the past year, for CHF or COPD patients who have no advance directive and are not followed by Chronic Care Management
  2. Aspiration
  3. CVA with poor prognosis for regaining independence
  4. Hip fracture patients not weight bearing on post‐operative day 2
  5. Code blue survivor
  6. Skilled Nursing Facility resident with sepsis and/or dementia
  7. Active hospice patients
  8. Sepsis patients with 10 or more ICD codes in the problem list

 

Potential palliative care consults for adults at Sacramento pilot site are as follows:

  1. Morbid obesity complicated by organ damage (e.g., congestive heart failure, refractory liver disease, chronic renal disease)
  2. Severe chronic kidney disease and/or congestive heart failure with poor functional status (chair or bed bound)
  3. Patient with pre‐operative arteriovenous fistulas and poor functional status, congestive heart failure, or age >80
  4. End stage liver disease with declining functional status, poor odds of transplant

 

 

The likelihood of meaningful survival after cardiopulmonary arrest is low and even lower the longer the patient has been in the hospital[1, 2]; realization of this[3] played a major role in the development of rapid response teams (RRTs).[4] As noted elsewhere in this journal, the limited success of these teams[5, 6, 7] has inspired efforts to develop systems to identify patients at risk of deterioration much earlier.

Whereas a number of recent reports have described end‐of‐life care issues in the context of RRT operations,[8, 9, 10, 11, 12, 13, 14, 15, 16] descriptions of how one might incorporate respecting patient preferences into development of a response arm, particularly one meant to scale up to a multiple hospital system, are largely absent from the literature. In this article, we describe the implementation process for integrating palliative care and the honoring of patient choices, which we refer to as supportive care, with an automated early warning system (EWS) and an RRT.

The context of this work is a pilot project conducted at 2 community hospitals, the Kaiser Permanente Northern California (KPNC) Sacramento (200 beds) and South San Francisco (100 beds) medical centers. Our focus was to develop an approach that could serve as the basis for future dissemination to the remaining 19 KPNC hospitals, regardless of their size. Our work incorporated the Respecting Choices model,[17] which has been endorsed by KPNC for all its hospitals and clinics. We describe the workflow we developed to embed the supportive care team's (SCT) reactive and proactive components into the EWS response arm. We also provide a granular description of how our approach worked in practice, as evidenced by the combined patient and provider experiences captured in 5 vignettes as well as some preliminary data obtained by chart review

When patients arrive in the hospital, they may or may not have had a discussion about their care escalation and resuscitation preferences. As noted by Escobar and Dellinger[18] elsewhere in this issue of the Journal of Hospital Medicine, patients with documented restricted resuscitation preferences (eg, do not resuscitate [DNR] or partial code) at the time of admission to the hospital account for slightly more than half of the hospital deaths at 30 days after admission. In general, these stated preferences are honored.

Significant proportions of patients are unstable at the time of admission or have a significant underlying chronic illness burden predisposing them to unexpected deterioration. Often these patients lose decision‐making capacity when their condition worsens. We need to ensure we honor their wishes and identify the correct surrogate.

To make sure a patient's wishes are clear, we developed a workflow that included 2 components. One component is meant to ensure that patient preferences are honored following a EWS alert. This allows for contingencies, including the likelihood that a physician will not be available to discuss patient wishes due to clinical demands. Although it may appear that the role of the hospitalist is supplanted, in fact this is not the case. The only person who has authority to change a patient's code status is the hospitalist, and they always talk to the patient or their surrogate. The purpose of the teams described in this report is to provide backup, particularly in those instances where the hospitalist is tied up elsewhere (eg, the emergency department). Our workflows also facilitate the integration of the clinical with the palliative care response. The other component employs the EWS's ancillary elements (provision of a severity of illness score and longitudinal comorbidity score in real time) to screen patients who might need the SCT. This allows us to identify patients who are at high risk for deterioration in whom an alert has not yet been issued due to acute instability or comorbid burden (leading to high probability of unexpected deterioration) or both and who do not have stated goals of care and/or an identified surrogate.

IMPLEMENTATION APPROACH

We developed our workflow using the Institute for Healthcare Improvement's Plan‐Do‐Study‐Act approach.[19, 20] Our first finding was that most alerts did not require a rapid intervention by the SCT. Both sites reserved time in the SCT physicians' schedule and considered changing staffing levels (the smaller site only had funding for physician support 20 hours per week), but neither had to make such changes. One reason for this was that we increased social worker availability, particularly for off hours (to cover the contingency where an alert was issued in the middle of the night while the on‐call hospitalist was handling an admission in the emergency department). The second was that, as is described by Escobar et al.,[21] the EWS provides a risk of deterioration in the next 12 hours (as opposed to a code blue or regular RRT call, which indicate the need for immediate action) and provides an opportunity for spending time with patients without the constraints of an ongoing resuscitation.

We also found that of the patients who triggered an alert, approximately half would have been flagged for a palliative care referral using our own internal screening tool. Furthermore, having longitudinal comorbidity (Comorbidity Point Score, version 2 [COPS2]) and severity of illness (Laboratory‐Based Acute Physiology Score, version 2) scores[22] facilitated the identification of patients who needed review of their preferences with respect to escalation of care. Currently, our primary case‐finding criterion for proactive SCT consultation is a COPS2 >65, which is associated with a 10.8%, 30‐day mortality risk. Overall, the SCT was asked to see about 25% of patients in whom an alert was triggered.

The workflows we developed were employed at the first site to go live (South San Francisco, 7000 annual discharges, Figure 1) and then modified at Sacramento (14,000 annual discharges, Figure 2). Because the hospitals differ in several respects, from size and patient population to staffing, the workflows are slightly different.

Figure 1
Workflow for integrating Respecting Choices model with a real‐time early warning system at Kaiser Permanente South San Francisco. See text for additional details. Abbreviations: EWS, early warning system, EMR, electronic medical record; pt, patient; LCP, life care planning; HBS, hospital based specialist; RN, registered nurse; RRT, rapid response team; SCT, supportive care team; SW, social worker.
Figure 2
Workflow for integrating Respecting Choices model with a real‐time early warning system at Kaiser Permanente Sacramento. See text for additional details. Abbreviations: EWS, early warning system, EMR, electronic medical record; RN, registered nurse; RRT, rapid response team; SCT, supportive care team.

The EWS provides deterioration probabilities every 6 hours, and first responders (RRT nurses) intervene when this probability is 8%. The RRT nurse can activate the clinical response arm, the Respecting Choices pathway, or both. In South San Francisco, which lacked the resources to staff supportive care 24 hours a day/7 days a week, the RRT contacts a medical social worker (MSW) who performs an immediate record review. If this identifies something meriting urgent communication (eg, conflicting or absent information regarding a patient's surrogate), the MSW alerts the hospitalist. The MSW documents findings and ensures that a regular MSW consult occurs the next day. If the MSW feels the patient needs an SCT consult, the MSW alerts the team (this does not preclude a hospitalist or RRT nurse from initiating SCT consultation). At the Sacramento site, where the SCT team is staffed 24 hours a day/7 days a week, it is possible to bypass the MSW step. In addition, each morning the SCT reviews all alerts issued during the previous 24 hours to determine if an SCT consult is needed. In addition, the SCT also proactively reviews the COPS2 scores on all admissions to identify patients who could benefit from an SCT consult. Although surrogate identification and clarifying goals of care are essential, the SCT also helps patients in other ways, as is evident from the following case studies.

The major difference between the palliative care team and the SCT is that the SCT includes the inpatient social worker as part of the team. The SCT has a more focused role (its efforts center on aligning patient goals and priorities with the care that will actually be provided). In contrast, the palliative care team has other functions (eg, pain and symptom management) that are not necessarily associated with life care planning or the alert response.

Considerable overlap exists between patients who trigger an alert and those who would have met screening criteria established prior to EWS deployment. Although this is evolving, we can say that, in general, both sites are moving to an or criterion for involving the SCT (patient meets traditional criteria of the screening tool or triggers alert). Further, as KPNC begins adding more sites to the system, serious consideration is being given to only employing the COPS2 score as the primary screening criterion.

CASE STUDY 1: SURROGATE IDENTIFICATION

Mr. Smith, a 78‐year‐old man with congestive heart failure (CHF), atrial fibrillation, severe chronic obstructive pulmonary disease, and history of stroke, was admitted due to CHF exacerbation. The morning after admission, he experienced uncontrolled tachycardia associated with low oxygen saturation, triggering an alert. The hospitalist stabilized him and documented the treatment plan as follows: If worsening signs (shortness of breath/wheezing) or decreased saturation on current oxygen supplement, check chest film and arterial blood gas chest x‐ray/ arterial blood gas and call MD for possible bilevel positive airway pressure and repeating the echo. Intensive care unit (ICU) transfer as needed. According to his sister, his resuscitation preference was full code.

Given the new protocol instituted since the deployment of the EWS, the MSW reviewed the chart and found that the patient's sister, who lived locally and was the emergency contact, had been incorrectly identified as the surrogate. In a prior hospitalization, Mr. Smith had named his brother as his surrogate, as the patient felt strongly that his sister would not make good decisions for him. The following day, the SCT met with Mr. Smith, who articulated his desire to change his care directive to DNR. He also asked for a full palliative consult when his brother could come in (3 days later). During the consult, his brother learned, for the first time, exactly what heart failure was, and what to anticipate over the next months and years. The 2 brothers completed an advance directive granting Mr. Smith's brother a durable power of attorney including a request for a palliative approach to end‐stage illness. They also completed a physician order for life sustaining treatment, for DNR and limited intervention. Mr. Smith stated, When I go, I'm gone, and recalled that his mother and uncle had protracted illnesses, adding that I don't want to stay alive if I'm disabled like that.

In this example, the SCT was able to identify the correct surrogate and clarify the patient's resuscitation preference. Without SCT, if this patient had deteriorated unexpectedly, the sister would have insisted on treatment that was inconsistent with Mr. Smith's wishes. The interventions as a result of the alert also led the patient and his brother to begin discussing the medical goals of treatment openly and reach understanding about the patient's chronic and progressive conditions.

CASE STUDY 2: TRANSITION TO HOME‐BASED HOSPICE

Mr. North was a 71‐year‐old man admitted for sepsis due to pneumonia. He had a history of temporal arteritis, systemic lupus erythematosus, prostate cancer, squamous cell lung cancer, and chronic leg ulcers. Delirious at the time of admission, he triggered an alert at 6 am, shortly after admission to the ward. He was hypotensive and was transferred to the ICU.

The SCT reviewed the case and judged that he met criteria for consultation. His wife readily agreed to meet to discuss goals and plan of care. She had been taking care of him at home, and was overwhelmed by his physical needs as well as his worsening memory loss and agitation. She had not been able to bring him to the clinic for almost 2 years, and he had refused entry to the home health nurse. During the palliative consult, Mr. North was lucid enough to state his preference for comfort‐focused care, and his desire not to return to the hospital. Mrs. North accepted a plan for home hospice, with increased attendant care at home.

This case illustrates the benefit of the EWS in identifying patients whose chronic condition has progressed, and who would benefit from a palliative consult to clarify goals of care. Practice variation, the complexity of multiple medical problems, and the urgency of the acute presentation may obscure or delay the need for clarifying goals of care. A structured approach provided by the EWS workflow, as it did in this case, helps to ensure that these discussions are occurring with the appropriate patients at the appropriate times.

CASE STUDY 3: RESOLVING MD‐TO‐MD MISCOMMUNICATION

Mr. Joseph was an 89‐year‐old male hospitalized for a hip fracture. He had a history of atrial fibrillation, prostate cancer with bone metastases, radiation‐induced lung fibrosis, stroke, and advanced dementia. His initial admission order was DNR, but this was changed after surgery to full code and remained so. The next few days were relatively uneventful until the alert triggered. By then, the hospitalist attending him had changed 3 times. The social worker reviewed Mr. Joseph's records and determined that a palliative consult had taken place previously at another Kaiser Permanente facility, and that the prior code status was DNR. Although Mr. Joseph's admission care directive was DNR, this was switched to full code for surgery. However, the care directive was not changed back, nor was a discussion held to discuss his preference in case of a complication related to surgery. Meanwhile, he was having increasing respiratory problems due to aspiration and required noninvasive ventilation.

Consequently, the SCT reviewed the alerts from the previous 24 hours and determined that further investigation and discussion were required. When the hospitalist was called, the SCT discovered that the hospitalist had assumed the change to full code had been made by 1 of the previous attending physicians; he also informed the SCT that Mr. Joseph would likely need intubation. The SCT decided to go see the patient and, on approaching the room, saw Mr. Joseph's son waiting outside. The son was asked how things were going, and replied, We all knew that 1 day he would deteriorate, we just want to make sure he is comfortable. Clearly, the full code status did not reflect the Mr. Joseph's wishes, so this was clarified and the hospitalist was called immediately to change the care directive. The SCT met with the man's son and wife, educating them about aspiration and what to expect. They definitely wished a gentle approach for Mr. Joseph, and it was decided to continue current care, without escalation, until the morning. This was to allow the other son to be informed of his father's condition and to see if his status would improve. The next morning the SCT met with the family at the room, and the patient was placed on comfort measures.

This case illustrates 3 points. One, Mr. Joseph's status was changed to full code during surgery without addressing his preferences should he develop a complication during the postoperative period. Two, when the hospitalist saw the full code order in the electronic record, it was assumed someone else had had a discussion with the patient and his family. Lastly, although a social worker performed a chart review, the full picture only emerged after the entire SCT became involved. Therefore, even in the presence of an EWS with associated protocols, important details can be missed, highlighting the need to build redundancy into workflows.

CASE STUDY 4: RELUCTANCE TO INVOLVE PALLIATIVE CARE TEAM

Mrs. Wood, a bed‐bound 63‐year‐old with end‐stage heart failure, was admitted to the hospital with respiratory failure. She had met with a life care planning facilitator as well as a palliative physician previously but refused to discuss end‐of‐life options. She felt she would always do well and her husband felt the same way. During this admission a routine palliative referral was made, but she and her husband refused. The chaplain visited often and then the patient took a turn for the worse, triggering an alert and was transferred to the ICU.

The hospitalist did not feel a SCT consult was indicated based on prior discussions. However, the SCT reviewed the records and felt an intervention was needed. The patient, now obtunded, had worsening renal failure and required continuous pressor infusions. The chaplain spoke with Mr. Wood, who felt a consult was appropriate. Mrs. Wood was no longer able to make decisions, and her husband needed more information about what to expect. At the end of the discussion, he decided on comfort care, and his wife expired peacefully in the hospital.

This case illustrates that, although the primary attending may initially feel that a palliative care consult is unhelpful, and possibly even detrimental to the patient's care under usual circumstances, decisions may change as the patient's condition changes. The EWS alert helped the SCT recognize the drastic change in the patient's condition and the need to support the patient's family. The family had been resistant, but the SCT helped them transition to a palliative approach through gentle contact and by making clear that its role was to provide support regardless of their decision.

CASE STUDY 5: ALERT FACILITATES TRANSITION TO OUTPATIENT PALLIATIVE CARE

Mr. Jones was an 82-year-old gentleman who had a recent episode of gastrointestinal bleeding while on vacation. He was transferred by air ambulance to the hospital, where he developed delirium and agitation. His evaluation revealed polycythemia vera and a recently made diagnosis of mild dementia.

In this case, the SCT reviewed the chart not because of an alert, but because the hospitalist noted that Mr. Jones had a very high severity of illness score on admission. When the SCT arrived at Mr. Jones's room, 3 family members were present. His wife appeared very frail and was too emotional to make decisions. The children at the bedside were new to the problems at hand but wanted to help. The SCT educated the family about his current disease state, the general disease trajectory, and what to expect. They explored the patient's values and any indicators of what his care preference would be if he could communicate it, and established a life care plan at that visit. Based upon Mr. Jones's own wishes and values, he was made DNR with limited interventions. He survived the hospitalization and was followed by the outpatient palliative care clinic as well as by hematology.

This case illustrates 2 points. First, a high severity of illness score led to consultation even without an alert; the SCT could then take on a task (arriving at a life care plan by exploring the patient's values) that is difficult and time consuming for a busy hospitalist. Second, patients may elect other options, in this case, outpatient palliative care.

FUTURE DIRECTIONS

Our team has also started a quantitative evaluation process. The major limitation we face in this effort is that, unlike physiologic or health services measures (eg, tachycardia, hospital length of stay, mortality), the key measures for assessing the quality of palliative and end‐of‐life care need to be extracted by manual chart review. Our approach is based on the palliative and end‐of‐life care measures endorsed by the National Quality Forum,[23] which are described in greater detail in the appendix. As is the case with other outcomes, and as described in the article by Escobar et al.,[21] we will be employing a difference‐in‐differences approach as well as multivariate matching[24, 25, 26] to evaluate effectiveness of the intervention. Because of the high costs of manual chart review, we will be reviewing randomly selected charts of patients who triggered an alert at the 2 pilot sites as well as matched comparison patient charts at the remaining 19 KPNC hospitals. Table 1 provides preliminary data we gathered to pilot the brief chart review instrument that will be used for evaluating changes in supportive care in the regional rollout. Data are from a randomly selected cohort of 150 patients who reached the alert threshold at the 2 pilot sites between November 13, 2013 and June 30, 2014. After removing 3 records with substantial missing data, we were able to find 146 matched patients at the remaining 19 KPNC hospitals during the same time period. Matched patients were selected from those patients who had a virtual alert based on retrospective data. Table 1 shows that, compared to the other KPNC hospitals, the quality of these 6 aspects of supportive care was better at the pilot sites.
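As a concrete illustration of the planned analysis, the following is a minimal sketch, not the authors' actual analysis code, of a difference-in-differences estimate fit by ordinary least squares; the file and column names (chart_review.csv, pilot, post, outcome) are hypothetical placeholders.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical patient-level data: pilot = 1 for the 2 pilot sites,
# post = 1 for the period after the intervention went live, and
# outcome = a 0/1 supportive care quality measure from chart review.
df = pd.read_csv("chart_review.csv")

# The coefficient on pilot:post is the difference-in-differences
# estimate: the change at the pilot sites over and above the secular
# trend observed at the comparison hospitals.
model = smf.ols("outcome ~ pilot + post + pilot:post", data=df).fit()
print(model.params["pilot:post"], model.pvalues["pilot:post"])
```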

Matched Analyses of Six Supportive Care Quality Measures*

| Measure | Hospital 1 | Hospital 2 | 1+2 combined | Remaining 19 | P(1) | P(2) | P(1+2) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| N | 73 | 74 | 147 | 146 | | | |
| Age (y) | 69.3 ± 14.4 | 66.4 ± 15.3 | 67.8 ± 14.8 | 67.4 ± 14.7 | 0.37 | 0.62 | 0.82 |
| Male (%) | 39 (53.4%) | 43 (58.1%) | 82 (55.8%) | 82 (56.2%) | 0.70 | 0.78 | 0.95 |
| Deterioration risk (%)† | 20.0 ± 14.3 | 17.4 ± 11.6 | 18.7 ± 13.0 | 18.8 ± 13.6 | 0.54 | 0.44 | 0.94 |
| LAPS2‡ | 113 ± 38 | 102 ± 39 | 107 ± 39 | 107 ± 38 | 0.28 | 0.38 | 0.90 |
| COPS2§ | 69 ± 52 | 66 ± 52 | 67 ± 52 | 66 ± 51 | 0.75 | 1.00 | 0.85 |
| Died (%)‖ | 17 (23.3%) | 15 (20.3%) | 32 (21.8%) | 24 (16.4%) | 0.22 | 0.48 | 0.25 |
| Agent identified prior¶ | 28 (38.4%) | 18 (24.3%) | 46 (31.3%) | 21 (14.4%) | <0.001 | 0.07 | 0.001 |
| Agent identified after# | 46 (63.0%) | 39 (52.7%) | 85 (57.8%) | 28 (19.4%) | <0.001 | <0.001 | <0.001 |
| Updating within 24 hours** | 32 (43.8%) | 45 (60.8%) | 77 (52.4%) | 59 (40.4%) | 0.63 | 0.00 | 0.04 |
| Goals of care discussion†† | 20 (27.4%) | 37 (50.0%) | 57 (38.8%) | 32 (21.9%) | 0.37 | 0.001 | 0.002 |
| Palliative care consult‡‡ | 19 (26.0%) | 49 (66.2%) | 68 (46.3%) | 35 (24.0%) | 0.74 | <0.001 | <0.001 |
| Spiritual support offered | 27 (37.0%) | 30 (40.5%) | 57 (38.8%) | 43 (29.4%) | 0.26 | 0.10 | 0.09 |

  • NOTE: *See text for additional details. The patients at the remaining 19 hospitals were identified based on their retrospective (virtual) deterioration probabilities and then matched to the patients at the pilot sites. The matching algorithm specified exact matches for these variables: alert threshold reached or not; sex; Kaiser Permanente membership status; whether the patient had been in the intensive care unit prior to the first alert; and care directive prior to the alert (full code vs not full code). Once potential matches were found using the above, the algorithm found the closest match for the following variables: deterioration probability, age, comorbidity burden, and admission illness severity. Statistical comparisons are as follows: P(1), P value for comparison of pilot hospital 1 versus the remaining 19 Kaiser Permanente Northern California hospitals; P(2), as per P(1), but for pilot hospital 2; P(1+2), both pilot hospitals' data combined. For continuous variables, numbers shown are mean ± standard deviation. Numbers in bold italics are those that were significantly different. †Deterioration risk is generated by the early warning system; it is the probability that a patient will require transfer to the intensive care unit within the next 12 hours. Interventions are initiated when this risk is ≥8%. ‡LAPS2 = admission Laboratory-based Acute Physiology Score, version 2; a measure of acute instability where the higher the score, the greater the degree of physiologic derangement. Patients with LAPS2 ≥110 are very unstable. See citation 20 for additional details. §COPS2 = Comorbidity Point Score, version 2; a measure of chronic disease burden over the preceding 12 months that is assigned to all Kaiser Permanente Northern California members on a monthly basis. The higher the score, the greater the chronic illness burden. Patients with COPS2 ≥65 have a significant comorbid illness burden. See citation 20 for additional details. ‖Refers to 30-day mortality. ¶Indicates whether documentation preceding an alert clearly specified who the patient's agent (decision-maker or surrogate) was. #Indicates whether documentation immediately following an alert clearly specified who the patient's agent (decision-maker or surrogate) was. **Refers to whether chart documentation indicated that the patient's family or agent was updated about the patient's condition within 24 hours after an alert. ††Refers to whether chart documentation indicated that a discussion regarding the patient's goals of care occurred within 24 hours after an alert. ‡‡Indicates whether a palliative care consultation occurred within 24 hours after an alert.
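To make the matching procedure in the table note concrete, here is a minimal sketch, under stated assumptions, of the two-stage approach: exact agreement on the categorical variables, then the nearest neighbor on standardized continuous variables. All column names are hypothetical placeholders; the formal methods are those cited above.[24, 25, 26]

```python
import pandas as pd

# Variables requiring exact agreement (categorical).
EXACT = ["alert_reached", "sex", "kp_member", "icu_before_alert", "full_code"]
# Variables matched by closeness (continuous).
CLOSEST = ["deterioration_prob", "age", "cops2", "laps2"]

def match_one(case, pool):
    """Return the pool index of the closest eligible comparison patient."""
    # Stage 1: keep only candidates that agree exactly on EXACT variables.
    candidates = pool
    for col in EXACT:
        candidates = candidates[candidates[col] == case[col]]
    if candidates.empty:
        return None
    # Stage 2: minimize standardized Euclidean distance on CLOSEST variables.
    mu, sd = pool[CLOSEST].mean(), pool[CLOSEST].std()
    z_pool = (candidates[CLOSEST] - mu) / sd
    z_case = (case[CLOSEST] - mu) / sd
    return ((z_pool - z_case) ** 2).sum(axis=1).idxmin()
```

In use, each matched comparison patient would be dropped from the pool (pool = pool.drop(match_index)) so that matching is without replacement.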

CONCLUSION

Although we continue to review our care processes, we feel that our overall effort has been successful. Nonetheless, it is important to consider a number of limitations to the generalizability of our approach. First, our work has taken place in the context of a highly integrated care delivery system in which both information transfer and referral from the inpatient to the outpatient setting can occur easily. Second, because the pilot sites were among the first KPNC hospitals to begin implementing the Respecting Choices model, they undoubtedly had less ground to cover than hospitals beginning with less infrastructure. Third, resource constraints limited our ability to capture process data. Lastly, both sites were able to obtain resources to expand necessary coverage, which might not be possible in many settings.

In conclusion, we made a conscious decision to incorporate palliative care into the planning for the deployment of the alert system. Further, we made this decision explicit, informing all caregivers that providing palliative care that adheres to the Respecting Choices model would be essential. We have found that integration of the SCT, the EWS, and routine hospital operations can be achieved, and clinician and patient acceptance of the Respecting Choices component has been excellent. We consider 3 elements critical to this process, and these elements form an integral component of the expansion of the early warning system to the remaining 19 KPNC hospitals. The first is careful planning, which includes instructing RRT first responders on their role in ensuring that patient preferences are respected. The second is having social workers available 24 hours a day, 7 days a week as backup for busy hospitalists. Finally, as described by Dummett et al.,[27] including reminders regarding patient preferences in the documentation process (by embedding them in an automated note template) is also very important.

Acknowledgements

The authors thank Dr. Michelle Caughey, Dr. Philip Madvig, Dr. Patricia Conolly, Ms. Barbara Crawford, and Ms. Melissa Stern for their administrative support, Dr. Tracy Lieu for reviewing the manuscript, and Ms. Rachel Lesser for formatting the manuscript.

Disclosures: This work was supported by a grant from the Gordon and Betty Moore Foundation (Early Detection, Prevention, and Mitigation of Impending Physiologic Deterioration in Hospitalized Patients Outside Intensive Care: Phase 3, pilot), The Permanente Medical Group, Inc., and Kaiser Foundation Hospitals, Inc. As part of our agreement with the Gordon and Betty Moore Foundation, we made a commitment to disseminate our findings in articles such as this one. However, the Foundation played no role in how we actually structured our articles, nor did they review or preapprove any of the manuscripts submitted as part of the dissemination component. None of the other sponsors had any involvement in our decision to submit this manuscript or in the determination of its contents. None of the authors have any conflicts of interest to declare of relevance to this work.

APPENDIX 1

Key measures to assess the quality of supportive care extracted by manual chart review

For each measure, the chart review question, desired outcome, measured outcome, and rationale for selecting the outcome are listed below.

  1. Was the patient's decision-maker documented following the alert? If yes: time/date of documentation.
     Outcome desired: Timely identification and documentation of the patient's decision-maker immediately following the alert.
     Outcome measured: Whether the patient's decision-maker was clearly identified and documented by a member of the treatment team (nurse, physician, and/or rapid response team) following the alert. This outcome is measured independently of whether the patient's decision-maker was already documented prior to the alert.
     Rationale: Clear documentation facilitates the notification of a patient's family/decision-maker in a timely manner to enhance communication and clinical decision-making and to make sure that the patient's wishes and preferences are honored.

  2. Was the patient's decision-maker/family notified, or was there an attempt to notify the patient's decision-maker, regarding the changes in the patient's condition following the alert? If yes: time/date of notification/attempted contact.
     Outcome desired: Providing the patient's family members/decision-maker with an update on the patient's clinical condition following the alert.
     Outcome measured: Whether the medical team notified or attempted to contact the patient's family/decision-maker to provide an update on the patient's clinical condition following the alert.
     Rationale: Providing timely updates when a patient's clinical status changes enhances communication and helps to proactively involve patients and families in the decision-making process.

  3. Was there a goals of care discussion following the alert? If yes: time/date of discussion.
     Outcome desired: To clarify and to honor individual patients' goals of care.
     Outcome measured: Whether a goals of care discussion was initiated after the alert was issued. Criteria for a goals of care discussion included any/all of the following: specific language in the documentation that stated verbatim "Goals of Care Discussion"; or providing prognosis and treatment options, eliciting preferences, and documenting decisions made and preferences as a result of the discussion.
     Rationale: Goals of care discussions actively involve patients and families in the decision-making process to ensure that their wishes and preferences are clearly documented and followed.

  4. Was there a palliative care consultation during the patient's hospitalization?
     Outcome desired: To provide comprehensive supportive care to patients and their families/loved ones.
     Outcome measured: Whether palliative care was consulted during the patient's hospitalization.
     Rationale: The palliative care team plays an important role in helping patients/families make decisions, providing support, and ensuring that patients' symptoms are addressed and properly managed.

  5. Was spiritual support offered to the patient and/or their family/loved ones during the patient's hospitalization?
     Outcome desired: To offer and to provide spiritual support to patients and their families/loved ones.
     Outcome measured: Whether the patient/family was offered spiritual support during the patient's hospitalization.
     Rationale: Spiritual support has been recognized as an important aspect of quality end-of-life (EOL) care.


APPENDIX 2

Respecting Choices: A Staged Approach to Advance Care Planning

Respecting Choices is a staged approach to advance care planning, where conversations begin when people are healthy and continue to occur throughout life.

Our Life Care Planning service consists of three distinct steps.

  1. My Values: First Steps is appropriate for all adults, but should definitely be initiated as a component of routine healthcare for those over the age of 55. The goals of First Steps are to motivate individuals to learn more about the importance of Life Care Planning, select a healthcare decision maker, and complete a basic written advance directive.
  2. My Choices: Next Steps is for patients with chronic, progressive illness who have begun to experience a decline in functional status or frequent hospitalizations. The goals of this stage of planning are to assist patients in understanding a) the progression of their illness, b) potential complications, and c) specific life‐sustaining treatments that may be required if their illness progresses. Understanding life‐sustaining treatments includes each treatment's benefits, burdens, and alternatives; with this understanding, members will be better able to express what situations (e.g., complications or bad outcomes) would cause them to want to change their plan of care. Additionally, the individual's healthcare agent(s) and other loved ones are involved in the planning process so that they can be prepared to make decisions, if necessary, and to support the plan of care developed.
  3. My Care: Advanced Steps is intended for frail elders or others whose death in the next 12 months would not be surprising. It helps patients and their agent make specific and timely life‐sustaining treatment decisions that can be converted to medical orders to guide the actions of healthcare providers and be consistent with the goals of the individual.


(Reference: http://www.gundersenhealth.org/respecting-choices).

APPENDIX 3

Pilot site Palliative Care Referral Criteria

Automatic palliative care consult criteria for adults at the Sacramento pilot site are as follows:

  1. 30-day readmission, or >3 ED visits or acute readmissions in the past year, for CHF or COPD in patients who have no advance directive and are not followed by Chronic Care Management
  2. Aspiration
  3. CVA with poor prognosis for regaining independence
  4. Hip fracture patients not weight bearing on post‐operative day 2
  5. Code blue survivor
  6. Skilled Nursing Facility resident with sepsis and/or dementia
  7. Active hospice patients
  8. Sepsis patients with 10 or more ICD codes in the problem list
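Criteria such as these lend themselves to automation. The sketch below encodes the automatic consult list, approximately, as a single rule check; all field names are hypothetical, and a real implementation would map them to EHR data.

```python
def automatic_palliative_consult(pt: dict) -> bool:
    """Return True if any automatic consult criterion is met."""
    return any([
        # Readmission criterion for CHF/COPD without an advance directive
        # and not followed by Chronic Care Management.
        (pt["readmit_30d"] or pt["acute_readmits_past_year"] > 3)
            and pt["chf_or_copd"]
            and not pt["advance_directive"]
            and not pt["chronic_care_mgmt"],
        pt["aspiration"],
        pt["cva_poor_prognosis"],
        pt["hip_fracture"] and not pt["weight_bearing_pod2"],
        pt["code_blue_survivor"],
        pt["snf_resident"] and (pt["sepsis"] or pt["dementia"]),
        pt["active_hospice"],
        pt["sepsis"] and pt["problem_list_icd_count"] >= 10,
    ])
```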


Potential palliative care consults for adults at Sacramento pilot site are as follows:

  1. Morbid obesity complicated by organ damage (e.g., congestive heart failure, refractory liver disease, chronic renal disease)
  2. Severe chronic kidney disease and/or congestive heart failure with poor functional status (chair or bed bound)
  3. Patient with pre-operative arteriovenous fistulas and poor functional status, congestive heart failure, or age >80
  4. End stage liver disease with declining functional status, poor odds of transplant


References
  1. Institute of Medicine of the National Academies. Dying in America: Improving Quality and Honoring Individual Preferences Near the End of Life. Washington, DC: Institute of Medicine of the National Academies; 2014.
  2. Partners LR. Final chapter: Californians' attitudes and experiences with death and dying. California HealthCare Foundation website. Available at: http://www.chcf.org/publications/2012/02/final-chapter-death-dying. Published February 2012. Accessed July 14, 2015.
  3. Rozenbaum EA, Shenkman L. Predicting outcome of inhospital cardiopulmonary resuscitation. Crit Care Med. 1988;16(6):583-586.
  4. Hournihan F, Bishop G, Hillman KM, Dauffurn K, Lee A. The medical emergency team: a new strategy to identify and intervene in high-risk surgical patients. Clin Intensive Care. 1995;6:269-272.
  5. Winters BD, Pham J, Pronovost PJ. Rapid response teams—walk, don't run. JAMA. 2006;296(13):1645-1647.
  6. Winters BD, Pham JC, Hunt EA, Guallar E, Berenholtz S, Pronovost PJ. Rapid response systems: a systematic review. Crit Care Med. 2007;35(5):1238-1243.
  7. Litvak E, Pronovost PJ. Rethinking rapid response teams. JAMA. 2010;304(12):1375-1376.
  8. Jones DA, McIntyre T, Baldwin I, Mercer I, Kattula A, Bellomo R. The medical emergency team and end-of-life care: a pilot study. Crit Care Resusc. 2007;9(2):151-156.
  9. Chen J, Flabouris A, Bellomo R, Hillman K, Finfer S. The Medical Emergency Team System and not-for-resuscitation orders: results from the MERIT study. Resuscitation. 2008;79(3):391-397.
  10. Vazquez R, Gheorghe C, Grigoriyan A, Palvinskaya T, Amoateng-Adjepong Y, Manthous CA. Enhanced end-of-life care associated with deploying a rapid response team: a pilot study. J Hosp Med. 2009;4(7):449-452.
  11. Knott CI, Psirides AJ, Young PJ, Sim D. A retrospective cohort study of the effect of medical emergency teams on documentation of advance care directives. Crit Care Resusc. 2011;13(3):167-174.
  12. Coventry C, Flabouris A, Sundararajan K, Cramey T. Rapid response team calls to patients with a pre-existing not for resuscitation order. Resuscitation. 2013;84(8):1035-1039.
  13. Downar J, Barua R, Rodin D, et al. Changes in end of life care 5 years after the introduction of a rapid response team: a multicentre retrospective study. Resuscitation. 2013;84(10):1339-1344.
  14. Smith RL, Hayashi VN, Lee YI, Navarro-Mariazeta L, Felner K. The medical emergency team call: a sentinel event that triggers goals of care discussion. Crit Care Med. 2014;42(2):322-327.
  15. Sundararajan K, Flabouris A, Keeshan A, Cramey T. Documentation of limitation of medical therapy at the time of a rapid response team call. Aust Health Rev. 2014;38(2):218-222.
  16. Visser P, Dwyer A, Moran J, et al. Medical emergency response in a sub-acute hospital: improving the model of care for deteriorating patients. Aust Health Rev. 2014;38(2):169-176.
  17. Respecting Choices advance care planning. Gundersen Health System website. Available at: http://www.gundersenhealth.org/respecting-choices. Accessed March 28, 2015.
  18. Escobar G, Dellinger RP. Early detection, prevention, and mitigation of critical illness outside intensive care settings. J Hosp Med. 2016;11:000-000.
  19. Langley GL, Moen R, Nolan KM, Nolan TW, Norman CL, Provost LP. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. 2nd ed. San Francisco, CA: Jossey-Bass; 2009.
  20. Nadeem E, Olin SS, Hill LC, Hoagwood KE, Horwitz SM. Understanding the components of quality improvement collaboratives: a systematic literature review. Milbank Q. 2013;91(2):354-394.
  21. Escobar G, Turk B, Ragins A, et al. Piloting electronic medical record-based early detection of inpatient deterioration in community hospitals. J Hosp Med. 2016;11:000-000.
  22. Escobar GJ, Gardner M, Greene JG, Draper D, Kipnis P. Risk-adjusting hospital mortality using a comprehensive electronic record in an integrated healthcare delivery system. Med Care. 2013;51(5):446-453.
  23. Department of Health and Human Services. Palliative care and end-of-life care—a consensus report. National Quality Forum website. Available at: http://www.qualityforum.org/projects/palliative_care_and_end-of-life_care.aspx. Accessed April 1, 2015.
  24. Gu XS, Rosenbaum PR. Comparison of multivariate matching methods: structures, distances, and algorithms. J Comput Graph Stat. 1993;2(4):405-420.
  25. Feng WW, Jun Y, Xu R. A method/macro based on propensity score and Mahalanobis distance to reduce bias in treatment comparison in observational study: Eli Lilly working paper. Available at: http://www.lexjansen.com/pharmasug/2006/publichealthresearch/pr05.pdf. Accessed January 24, 2013.
  26. Stuart EA. Matching methods for causal inference: a review and a look forward. Stat Sci. 2010;25(1):1-21.
  27. Dummett BA, Adams C, Scruth E, Liu V, Guo M, Escobar G. Incorporating an early detection system into routine clinical practice in two community hospitals. J Hosp Med. 2016;11:000-000.
Issue
Journal of Hospital Medicine - 11(1)
Page Number
S40-S47
Display Headline
Early detection of critical illness outside the intensive care unit: Clarifying treatment plans and honoring goals of care using a supportive care team
Article Source
© 2016 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Russ Granich, MD, South San Francisco Kaiser Permanente, 1200 El Camino Real, South San Francisco, CA 94080; Telephone: 650-827-6361; Fax: 650-827-6356; E-mail: [email protected]

Program for Early Detection of Sepsis

Article Type
Changed
Mon, 01/30/2017 - 11:15
Display Headline
Implementation of a multicenter performance improvement program for early detection and treatment of severe sepsis in general medical–surgical wards

Sepsis, the body's systemic response to infection leading to organ failure, can occur in patients throughout the hospital. However, patients initially diagnosed with sepsis on the wards experience the highest mortality for several reasons, including delayed recognition and treatment, particularly when localized infections progress to shock and organ failure. Consequently, hospitals have responded by having nurses screen patients for signs and symptoms of sepsis to identify cases earlier and improve outcomes. The intent of this article, which is based on our experience with a multihospital implementation effort, is to describe potential reasons for ward patients' poor prognosis and to provide a toolkit for how hospitals can implement a severe sepsis quality improvement (QI) program in general medical-surgical wards.

In a previous study, we reported on our international effort, the Surviving Sepsis Campaign's (SSC) Phase III performance improvement (PI) program, targeting selected guideline recommendations (6‐ and 24‐hour bundles) in the emergency department (ED), the Intensive Care Unit (ICU), and wards in 165 volunteer hospitals in the United States, Europe, and South America.[1] The program was associated with increased bundle compliance and decreased mortality over time.[1, 2] The SSC's Phase III program, which focused on improvement efforts primarily in the ED and ICU, also exposed a need to address the high mortality in ward patients.[3] Patients admitted to the ICU directly from the ED with severe sepsis had a mortality rate of 26%, whereas those transferred to the ICU from the ward had significantly higher mortality (40.3%).[3]

Although the reasons for the higher mortality rate among ward patients have not been studied, several factors may play a role. First, the diagnosis of severe sepsis may be delayed in ward patients because physicians and nurses may not recognize the progression to sepsis and/or because hospitalized patients may not present with obvious systemic manifestations of sepsis as they do in the ED (Table 1).[4] Second, ward patients may have differences in the timing of their presentation and concurrent conditions confounding the diagnosis.[5] Third, treatment may be delayed once the diagnosis is made on the ward. The ICU and ED are designed to provide rapid high‐acuity care, whereas the wards have fewer systems and resources for rapid delivery of care needed for severe sepsis. Finally, some patients on the ward may develop sepsis from nosocomial infection, which can portend a worse prognosis.[6]

Presentation of Severe Sepsis in the Emergency Department and the Ward

| | Emergency Department Presentation | Ward Presentation |
| --- | --- | --- |
| Patient/family-reported symptoms | "I just feel sick"; family reports disorientation, not eating. | Currently hospitalized; family often not present; diagnosis may not be clear; baseline mental status unknown; lack of appetite may be linked to dislike of hospital food. |
| Systemic manifestations | Triage observes 2 or more signs of infection, or patient reports fever at home plus an additional finding on assessment. | Signs of infection may appear 1 at a time, hours apart, and may appear to be mild changes to staff or be missed entirely due to staff discontinuity. |
| Organ dysfunction | Present on admission; triage nurse assesses for organ dysfunction. | Develops over hours or days; may be subtle or acute. |
| Laboratory study process | Ordered and evaluated within 1 hour. | Not routinely completed daily; may be ordered after physician evaluation or during rounds. Results within 3-4 hours. |

The SSC Phase III results led to the launch of a QI program, known as the SSC Phase IV Sepsis on the Wards Collaborative, funded by the Gordon and Betty Moore Foundation. This program, a partnership between the Society of Critical Care Medicine and the Society of Hospital Medicine (SHM), targeted ward patients and focused on early recognition through protocol‐driven regular nurse screening. The program applied the SSC 2012 guidelines with a primary focus on the 3‐hour bundle (Table 2).[7] The framework used for this program was the Institute for Healthcare Improvement's Plan‐Do‐Study‐Act (PDSA) model of improvement.[8, 9] The collaborative design included learning sessions designed to motivate and support improvement.[10] The program began with 60 academic and community hospitals in 4 US regions. Participating sites were required to have prior hospital experience in sepsis performance improvement as well as a formal commitment of support from their EDs and ICUs.

Surviving Sepsis Campaign 3‐Hour Severe Sepsis Bundle
To be completed within 3 hours of time of presentation
1. Measure lactate level
2. Obtain blood cultures prior to administration of antibiotics
3. Administer broad‐spectrum antibiotics
4. Administer 30 mL/kg crystalloid for hypotension or lactate ≥4 mmol/L (36 mg/dL)
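To illustrate how bundle timeliness can be audited, the following is a minimal sketch, not SSC tooling, of a compliance check for the first 3 elements against time of presentation; the conditional fluid element (element 4) is omitted for brevity, and all timestamp names are hypothetical.

```python
from datetime import datetime, timedelta
from typing import Optional

WINDOW = timedelta(hours=3)

def bundle_compliant(presentation: datetime,
                     lactate_drawn: Optional[datetime],
                     cultures_drawn: Optional[datetime],
                     antibiotics_given: Optional[datetime]) -> bool:
    """True if lactate, cultures, and antibiotics were all completed
    within 3 hours of presentation, with cultures before antibiotics."""
    steps = (lactate_drawn, cultures_drawn, antibiotics_given)
    if any(t is None for t in steps):
        return False          # a missing element is noncompliant
    if cultures_drawn > antibiotics_given:
        return False          # cultures must precede antibiotics
    return all(t - presentation <= WINDOW for t in steps)
```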

We provided sites with a basic screening tool and guidance for routine severe sepsis screening, monitoring, and feedback (Figure 1). Because of the anticipated challenges of implementing routine nurse screening on every shift in all inpatient wards, participants identified 1 ward to pilot the every-shift screening program. Each pilot ward refined the nurse screening process and developed site-specific tools based on electronic health record (EHR) capability, informatics support, and available resources. After this initial phase, the program could be implemented in a hospital's remaining wards. The slogan adopted for the program was "Screen every patient, every shift, every day."

Figure 1
Evaluation for severe sepsis screening tool. This checklist is designed to prompt the nurse to screen every patient during every shift for new signs of sepsis and organ dysfunction (Checklist is available at: http://www.survivingsepsis.org/SiteCollectionDocuments/ScreeningTool.pdf).

Although knowledge gained from the SSC Phase III program led to improvements in treating severe sepsis, ward patients continued to have poor outcomes. To address the potential contributions of delayed case identification, we developed an early recognition and treatment program. We outline the steps we took to develop this multisite PI program.

PREPARATORY WORK

During the planning phase, several procedural steps were taken before initiating the ward sepsis program (Table 3). These required 3 levels of involvement: senior administration, midlevel management, and patient‐level support.

Critical Steps Prior to Initiating a Ward Sepsis‐Detection Program
  • NOTE: Abbreviations: ED, emergency department; ICU, intensive care unit.

1. Obtain administrative support (ie, funding for data collection, project lead, informatics)
2. Align with ED and ICU
3. Identify 1 ward to pilot the program
4. Establish unit-based champions on each shift (nurse, physician)
5. Review ward workflow
6. Develop nurse screening tool
7. Provide education

Administrative Support

In the course of our implementation effort, we found that sites that had high‐level administrative support were more likely to implement and sustain the intervention. For this reason, we consider such support to be critical. Examples of such support include chief medical officers, chief nursing officers, and chief quality officers. As an example, securing commitment from hospital leadership may be necessary to improve/change the EHR and provide funding for project management to achieve sustainable improvement in outcomes. Aligning leadership with frontline physicians, nurses, and support staff toward a common goal provides the platform for a successful program.[11]

ED and ICU Leadership Support

Maintaining lines of communication among the ED, ICU, and ward staff is critical to improving outcomes. Establishing a cohesive system (ED, ICU, and wards) aimed at early recognition and treatment of sepsis throughout the hospital stay can lead to improvement in continuity of care and outcomes. For example, when an ED severe sepsis patient is transferred to the ward and subsequently requires admission to the ICU due to declining clinical status, providing timely feedback to the ED can help improve care for subsequent patients. Collaboration between the ED and the ward can also contribute to improved transitions of care for patients with severe sepsis.

Hospitalist/Internal Medicine Leadership

Our experience with implementing sepsis bundles in the ED and ICU highlights the need for effective interdisciplinary collaboration with designated physician and nurse leaders/champions. We found that engaging local clinical leaders in the early recognition and management of a severe sepsis QI program is imperative for the program's success. Hospitalists are often the physician leaders for the inpatient wards, so it is essential to secure their early engagement, support, and leadership. Moreover, though collaboration with ED and ICU physicians may be useful, as described above, a hospitalist champion is likely to be more effective at educating other hospitalists about the program, overcoming physician resistance, and facilitating change.

Depending on a hospital's size and workflows, designated ward- or shift-based hospitalist and nurse champions can serve as key resources to support implementation. These individuals help establish mutual respect and a common mental model of how sepsis can evolve in ward patients. Even more important, by providing assistance with both the screening tool and with recognition itself, these individuals not only speed implementation, but also protect against "rough patches" (ie, those instances where workflow changes run into resistance).

EDUCATION

Diagnosing sepsis is not always easy, making education on sepsis recognition, evaluation, and treatment necessary prior to implementation. We used review and refresher courses to promote retention of knowledge over time. Several sites developed background material explaining why the education was necessary, along with materials to help physicians and nurses recall the information over time. Resources included sepsis posters, badge-size cards listing the sepsis bundle elements, and bulletin boards on the wards with information to reinforce sepsis recognition, evaluation, and treatment. Education for the ward-centric program included an overview of the SSC guidelines, supportive literature, sepsis definitions, a description of the infection's systemic manifestations, criteria for identification of new-onset organ dysfunction, and details on the current severe sepsis 3- and 6-hour bundle requirements. We made clinicians aware of resources available on the SSC website.[12] Data emphasizing the incidence of sepsis, as well as outcomes and motives for the QI wards program, were incorporated during the collaborative meetings. Data can serve as strong motivators for action (eg, highlighting current incidence rates). Many hospitals combined presentation of these aggregate data with local review of selected cases of severe sepsis that occurred in their own wards.

Because the training and experience of ED, ICU, and ward nurses vary, nurse education included critical assessment skills for determining when to suspect a new or worsening infection. Training nurses to complete a comprehensive daily infection assessment may help them overcome uncertainty in judgment. Assessment skills include examination of invasive lines, surgical sites, and wounds, and noting the presence of a productive cough. Equally important, patients already being treated for an infection benefit from a daily assessment for improvement or worsening of the infection. Information uncovered may identify early signs of organ failure, as well as infections that need further evaluation and treatment. Education provides knowledge, but achieving program success relies heavily on staff accepting that they can make a difference in sepsis patient identification, management, and outcomes.

SCREENING METHODS, COMMUNICATION, AND PROTOCOLS

The SSC tool for severe sepsis facilitates screening for (1) confirmed or suspected infection, (2) presence of 2 or more systemic manifestations of infection, and (3) acute organ dysfunction. This tool was the basis for the "do" (screening) portion of the PDSA model.

Continuous Screening

Technology can facilitate early recognition of severe sepsis with EHR-based surveillance screening tools. Surveillance may include continuous review of vital signs and laboratory values with an automated alerting system. A valuable feature of the screening tool alert is the incorporation of the nurse's assessment. Decision support can improve the process by providing advice through systems that require a reason to override the advice.[13] For example, an alert may ask the nurse to determine whether the abnormal data are related to an infectious process or to another cause. If a suspected or confirmed infection is identified, further surveillance screening can include review of blood pressure readings and laboratory data to determine if organ dysfunction is present. If organ dysfunction criteria are identified, the alert can prompt the nurse to notify the physician to discuss whether the organ dysfunction is new and related to the infection, and whether implementation of the severe sepsis bundles is indicated (Figure 2). Other continuous screening models may vary this example, for instance by alerting additional clinicians or a response team.

Figure 2
Severe sepsis alert with situation, background, assessment, recommendation (SBAR) embedded. Abbreviations: BMP, basic metabolic panel; BP, blood pressure; CBC, complete blood count; INR, International Normalized Ratio; IV, intravenous; PTT, partial thromboplastin time; SIRS, systemic inflammatory response syndrome; SpO2, saturation of peripheral oxygen; WBC, white blood cells.

An automated screening tool within the EHR can be useful because the system continuously scans to identify signs and symptoms of sepsis, providing screening consistency, and offers back-end data that can be used for feedback to monitor effectiveness. Challenges with EHR severe sepsis alert development include resource allocation, testing, education, and ongoing evaluation and feedback. Other challenges include the potential for alert fatigue (false positives) and inappropriate responses (false negatives) to the infection prompt, which halt the next step in automated screening for organ dysfunction. The time to complete an automated screening tool varies based on its design and user understanding.
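As an illustration of the two-stage logic described above, here is a minimal sketch, under assumed field names and commonly used systemic manifestation thresholds, of a rule-based screen; production alerting would be built into the EHR rather than written this way.

```python
def systemic_manifestations(v: dict) -> int:
    """Count systemic manifestations of infection in the latest data."""
    return sum([
        v["temp_c"] > 38.3 or v["temp_c"] < 36.0,
        v["heart_rate"] > 90,
        v["resp_rate"] > 20,
        v["wbc"] > 12.0 or v["wbc"] < 4.0,   # x10^9 cells/L
    ])

def screen(v: dict, nurse_suspects_infection: bool,
           new_organ_dysfunction: bool) -> str:
    """Two-stage screen: systemic manifestations plus nurse assessment,
    then organ dysfunction review before prompting physician contact."""
    if systemic_manifestations(v) < 2:
        return "no alert"
    if not nurse_suspects_infection:
        return "document noninfectious cause; continue routine monitoring"
    if new_organ_dysfunction:
        return "notify physician: evaluate for severe sepsis bundle"
    return "suspected infection: continue workup and rescreen next shift"
```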

Screening Checklist

Whereas EHR tools may be effective in early recognition of sepsis, not all sites will have the capability to use these tools because of lack of informatics support, cost of development, and absence of an EHR in some hospitals.[14] An alternative to continuous screening is a sepsis checklist such as the severe sepsis screening tool (Figure 1). The checklist is designed to prompt nurses to screen every patient during every shift for new signs of sepsis and organ dysfunction.

The checklist ensures that 3 key issues are considered: presence of a suspected or confirmed infection, systemic manifestations of inflammation, and physiological manifestations of organ dysfunction. The paper tool is simple to use and can be completed in 10 to 20 minutes. It requires the nurse to review the progress notes, vital signs, and laboratory test results. Although the time investment seems onerous, the gain in consistency of screening and treatment compensates for the extra effort. Review of the checklist also provides a locus for feedback and new improvement cycles.

Scripted Communication

Once a patient with severe sepsis is identified, communicating this finding to the rest of the clinical team is essential. Because communication skills are not always emphasized in QI projects, we decided to emphasize a structured approach. We provided clinicians with scripts based on the SBAR (situation, background, assessment, recommendation) technique, aimed at improving communication (Figure 3).[15, 16] Using the SBAR technique also supports our efforts to build nurses' confidence and willingness to employ protocols that give them greater autonomy.

Figure 3
Script for communicating severe sepsis. Abbreviations: CBC, complete blood count; WBC, white blood cells.

Nurse‐Directed Protocols

Skillful identification and management of severe sepsis patients constitute the foundation for implementation of nurse-directed protocols in this patient population. Such protocols promote autonomy and staff ownership. Severe sepsis protocols may include increasing the frequency of vital sign measurement, placing laboratory orders and, in sites with an established culture of greater nurse autonomy, initiating intravenous access and a fluid bolus when specific criteria are met. Because nursing scope of practice varies from state to state and among hospitals, nurse-directed severe sepsis protocols generally require review of current site practice guidelines, physician agreement, and approval by the medical executive committee prior to implementation. Despite these differences, maximizing nurse leadership involvement and nurse autonomy can help propel the program forward. Protocols may be implemented based on the knowledge level and resources of a particular ward. A workflow evaluation may be included in this process to define which staff perform each step, what is reported, and where and when data are recorded.

DATA COLLECTION AND FEEDBACK

Nurse screening drives the ward program, and ensuring its consistency is the key to early patient identification. We made ongoing evaluation of the appropriate use of the screening tool, time to physician notification, and time to follow-up intervention a critical part of the "study" phase of the PDSA cycle. Once the nursing staff is consistently accurate and compliant (>90%) with screening, random (eg, once per week) screening tool review may be more suitable, thus requiring fewer resources (see Supporting Information, Appendix 1, in the online version of this article).

Data Collection

A key to improvement is to study the process, which requires data collection to assess compliance. In our experience, timely clinician feedback, along with data, led to effective process change. Real‐time data collection and discussion with the clinical team may lead to early recognition or intervention.

In our collaborative experience, we observed varied resources and timing for data collection across hospitals. For example, several participating sites had sepsis coordinators to collect data, whereas others relied on the quality department or nursing staff to collect data. Data may be collected concurrently (within 24 hours of severe sepsis presentation) or retrospectively. Retrospective data collection may allow for staff flexibility in data collection, but limits feedback to the clinicians. For example, with retrospective review, early recognition and treatment failure may go unrecognized until the data are analyzed and reported, which can be months after the patient has been discharged or expired.

Feedback to Caregivers

A consistent feedback process, which can occur at the individual or group level, may lead to prompt improvement in severe sepsis management. An example of individual feedback would be providing the nurse with the elapsed time from antibiotic order to time of administration. Early in the implementation phase, frequent (daily or weekly) feedback is helpful to build team cohesiveness. An example of feedback to build the team may include a unit‐based report on the last 5 severe sepsis patients managed by the group. Providing overall bundle compliance and outcome reports on a weekly and monthly basis will allow the clinical team to track progress. Examples of report cards and a dashboard are provided in the supplemental material, which highlight compliance with the bundle elements as well as time to achieve the bundle elements. (see Supporting Information, Appendix 2 and Appendix 3, in the online version of this article). Resources to evaluate and provide consistent data may require up to 10 to 15 hours per week for 1 unit. Automated reports may decrease the resources needed in collating and reporting data.
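For example, the individual and group feedback described above can be generated from abstracted case data. The following is a minimal sketch with hypothetical file and column names, not a production report.

```python
import pandas as pd

cases = pd.read_csv("sepsis_cases.csv",
                    parse_dates=["presented", "abx_ordered", "abx_given"])

# Individual feedback: elapsed time from antibiotic order to
# administration, in minutes, for each case.
cases["abx_delay_min"] = (
    (cases["abx_given"] - cases["abx_ordered"]).dt.total_seconds() / 60
)

# Group feedback: weekly 3-hour bundle compliance rate for the unit
# ("bundle_met" is assumed to be an abstracted 0/1 field).
weekly = (cases.set_index("presented")["bundle_met"]
               .resample("W").mean().mul(100).round(1))
print(weekly.tail())
```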

OUTCOME MEASURES

Although certainly important, mortality is not the only outcome worth measuring. Other relevant outcomes include transfers to a higher level of care and the need for major supportive therapies (eg, dialysis, mechanical ventilation, vasopressor infusion). Whereas it is valuable to review transfers to a higher level of care, we emphasized that these are not necessarily adverse outcomes; in fact, in many cases such transfers are highly desirable. It is also important to track the overall impact of sepsis on hospital length of stay.

SUMMARY/CONCLUSIONS

Grounded in the Institute for Healthcare Improvement's PDSA QI model, we developed a program aimed at improving outcomes for severe sepsis ward patients. Our program's cornerstone is nurse-led, checklist-based screening. Our faculty led learning sessions that used a collaborative approach whose key components were education in early sepsis identification, use of a sepsis screening tool, and the SBAR method for effective communication. Pitfalls identified during the program included gaps in both nurses' and physicians' knowledge of early severe sepsis identification, resistance to routine screening, and lack of data collection and leadership support. The most successful participating sites were those with senior leadership backing, staff engagement, informatics support, and data collection resources. Ultimately, replicating a program such as ours will depend on team cohesiveness and on nurse empowerment through the use of nurse-driven protocols. Programs like this may lead to progress toward standardizing practice (eg, antibiotic administration, fluid resuscitation), matching patient needs to resources, and building stronger partnerships between hospitalists and nurses.

Disclosures

This work was supported by a grant provided to the Society of Critical Care Medicine by the Gordon and Betty Moore Foundation (Early Identification and Management of Sepsis on the Wards). The work was supported by a grant from the Adventist Hospital System. As part of our agreement with the Gordon and Betty Moore Foundation, we made a commitment to disseminate our findings in articles such as this one. However, the Moore Foundation played no role in how we actually structured our articles, nor did they review or preapprove any of the manuscripts submitted as part of the dissemination component; the same was the case with the other sponsors. The authors report no conflicts of interest.

References
  1. Levy MM, Dellinger RP, Townsend SR, et al. The Surviving Sepsis Campaign: results of an international guideline-based performance improvement program targeting severe sepsis. Intensive Care Med. 2010;36(2):222-231.
  2. Levy MM, Dellinger RP, Townsend SR, et al. The Surviving Sepsis Campaign: results of an international guideline-based performance improvement program targeting severe sepsis. Crit Care Med. 2010;38(2):367-374.
  3. Levy MM, Rhodes A, Phillips GS, et al. Surviving Sepsis Campaign: association between performance metrics and outcomes in a 7.5-year study. Intensive Care Med. 2014;40(11):1623-1633.
  4. Rohde JM, Odden AJ, Bonham C, et al. The epidemiology of acute organ system dysfunction from severe sepsis outside of the intensive care unit. J Hosp Med. 2013;8(5):243-247.
  5. Yealy DM, Huang DT, Delaney A, et al. Recognizing and managing sepsis: what needs to be done? BMC Med. 2015;13:98.
  6. Sopena N, Heras E, Casas I, et al. Risk factors for hospital-acquired pneumonia outside the intensive care unit: a case-control study. Am J Infect Control. 2014;42(1):38-42.
  7. Dellinger RP, Levy MM, Rhodes A, et al. Surviving Sepsis Campaign: international guidelines for management of severe sepsis and septic shock, 2012. Crit Care Med. 2013;41(2):580-637.
  8. Nadeem E, Olin SS, Hill LC, Hoagwood KE, Horwitz SM. Understanding the components of quality improvement collaboratives: a systematic literature review. Milbank Q. 2013;91(2):354-394.
  9. Langley GL, Moen R, Nolan KM, Nolan TW, Norman CL, Provost LP. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. 2nd ed. San Francisco, CA: Jossey-Bass; 2009.
  10. Nembhard IM. Learning and improving in quality improvement collaboratives: which collaborative features do participants value most? Health Serv Res. 2009;44(2 pt 1):359-378.
  11. Pronovost PJ, Weast B, Bishop K, et al. Senior executive adopt-a-work unit: a model for safety improvement. Jt Comm J Qual Saf. 2004;30(2):59-68.
  12. Surviving Sepsis Campaign. Available at: http://survivingsepsis.org/Resources/Pages/default.aspx. Accessed September 24, 2015.
  13. Roshanov PS, Fernandes N, Wilczynski JM, et al. Features of effective computerised clinical decision support systems: meta-regression of 162 randomised trials. BMJ. 2013;346:f657.
  14. Bhounsule P, Peterson AM. Characteristics of hospitals associated with complete and partial implementation of electronic health records. Perspect Health Inf Manag. 2016;13:1c.
  15. Institute for Healthcare Improvement. SBAR technique for communication: a situational briefing model. Available at: http://www.ihi.org/resources/pages/tools/sbartechniqueforcommunicationasituationalbriefingmodel.aspx. Accessed September 12, 2015.
  16. Compton J, Copeland K, Flanders S, et al. Implementing SBAR across a large multihospital health system. Jt Comm J Qual Patient Saf. 2012;38(6):261-268.
Article PDF
Issue
Journal of Hospital Medicine - 11(1)
Page Number
S32-S39
Sections
Files
Files
Article PDF
Article PDF

Sepsis, the body's systemic response to infection leading to organ failure, can occur in patients throughout the hospital. However, patients initially diagnosed with sepsis on the wards experience the highest mortality for several reasons, including delayed recognition and treatment, particularly when localized infections progress to shock and organ failure. Consequently, hospitals have responded by having nurses screen patients for signs and symptoms of sepsis to identify cases earlier and improve outcomes. The intent of this article, which is based on our experience with a multihospital implementation effort, was to describe potential reasons for ward patients' poor prognosis. We provide a toolkit for how hospitals can implement a severe sepsis quality improvement (QI) program in general medicalsurgical wards.

In a previous study, we reported on our international effort, the Surviving Sepsis Campaign's (SSC) Phase III performance improvement (PI) program, targeting selected guideline recommendations (6‐ and 24‐hour bundles) in the emergency department (ED), the Intensive Care Unit (ICU), and wards in 165 volunteer hospitals in the United States, Europe, and South America.[1] The program was associated with increased bundle compliance and decreased mortality over time.[1, 2] The SSC's Phase III program, which focused on improvement efforts primarily in the ED and ICU, also exposed a need to address the high mortality in ward patients.[3] Patients admitted to the ICU directly from the ED with severe sepsis had a mortality rate of 26%, whereas those transferred to the ICU from the ward had significantly higher mortality (40.3%).[3]

Although the reasons for the higher mortality rate among ward patients have not been studied, several factors may play a role. First, the diagnosis of severe sepsis may be delayed in ward patients because physicians and nurses may not recognize the progression to sepsis and/or because hospitalized patients may not present with obvious systemic manifestations of sepsis as they do in the ED (Table 1).[4] Second, ward patients may have differences in the timing of their presentation and concurrent conditions confounding the diagnosis.[5] Third, treatment may be delayed once the diagnosis is made on the ward. The ICU and ED are designed to provide rapid high‐acuity care, whereas the wards have fewer systems and resources for rapid delivery of care needed for severe sepsis. Finally, some patients on the ward may develop sepsis from nosocomial infection, which can portend a worse prognosis.[6]

Table 1. Presentation of Severe Sepsis in the Emergency Department and the Ward

Patient/family-reported symptoms
  ED: "I just feel sick"; family reports disorientation, not eating.
  Ward: Currently hospitalized; family often not present; diagnosis may not be clear; baseline mental status unknown; lack of appetite may be linked to dislike of hospital food.

Systemic manifestations
  ED: Triage observed 2 or more signs of infection, or patient reports temperature while at home plus an additional finding on assessment.
  Ward: Signs of infection may appear 1 at a time, hours apart, and may appear to be mild changes to staff or be missed entirely due to staff discontinuity.

Organ dysfunction
  ED: Present on admission; triage nurse assesses for organ dysfunction.
  Ward: Develops over hours or days; may be subtle or acute.

Laboratory study process
  ED: Ordered and evaluated within 1 hour.
  Ward: Not routinely completed daily; may be ordered after physician evaluation or during rounds. Results within 3–4 hours.

The SSC Phase III results led to the launch of a QI program, known as the SSC Phase IV Sepsis on the Wards Collaborative, funded by the Gordon and Betty Moore Foundation. This program, a partnership between the Society of Critical Care Medicine and the Society of Hospital Medicine (SHM), targeted ward patients and focused on early recognition through protocol‐driven regular nurse screening. The program applied the SSC 2012 guidelines with a primary focus on the 3‐hour bundle (Table 2).[7] The framework used for this program was the Institute for Healthcare Improvement's Plan‐Do‐Study‐Act (PDSA) model of improvement.[8, 9] The collaborative design included learning sessions designed to motivate and support improvement.[10] The program began with 60 academic and community hospitals in 4 US regions. Participating sites were required to have prior hospital experience in sepsis performance improvement as well as a formal commitment of support from their EDs and ICUs.

Table 2. Surviving Sepsis Campaign 3-Hour Severe Sepsis Bundle
To be completed within 3 hours of time of presentation:
1. Measure lactate level
2. Obtain blood cultures prior to administration of antibiotics
3. Administer broad-spectrum antibiotics
4. Administer 30 mL/kg crystalloid for hypotension or lactate ≥4 mmol/L (36 mg/dL)
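Bundle element 4 is weight based. As a minimal illustration only (this calculation is ours, not part of the SSC materials), the Python sketch below computes the crystalloid volume for a patient of a given weight; the function name and error handling are assumptions.

```python
def crystalloid_bolus_ml(weight_kg: float, dose_ml_per_kg: float = 30.0) -> float:
    """Weight-based crystalloid bolus per bundle element 4 (30 mL/kg)."""
    if weight_kg <= 0:
        raise ValueError("weight_kg must be positive")
    return weight_kg * dose_ml_per_kg

# Example: a 70-kg patient with hypotension or lactate >= 4 mmol/L
# would receive 70 x 30 = 2100 mL of crystalloid.
print(crystalloid_bolus_ml(70))  # 2100.0
```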

We provided sites with a basic screening tool and guidance for routine severe sepsis screening, monitoring, and feedback (Figure 1). Because of the anticipated challenges of implementing routine nurse screening on every shift in all inpatient wards, participants identified 1 ward to pilot the every-shift screening program. Each pilot ward refined the nurse screening process and developed site-specific tools based on electronic health record (EHR) capability, informatics support, and available resources. After this initial phase, the program could be implemented in a hospital's remaining wards. The slogan adopted for the program was "Screen every patient, every shift, every day."

Figure 1
Evaluation for severe sepsis screening tool. This checklist is designed to prompt the nurse to screen every patient during every shift for new signs of sepsis and organ dysfunction (Checklist is available at: http://www.survivingsepsis.org/SiteCollectionDocuments/ScreeningTool.pdf).

Although knowledge gained from the SSC Phase III program led to improvements in treating severe sepsis, ward patients continued to have poor outcomes. To address the potential contributions of delayed case identification, we developed an early recognition and treatment program. We outline the steps we took to develop this multisite PI program.

PREPARATORY WORK

During the planning phase, several procedural steps were taken before initiating the ward sepsis program (Table 3). These required 3 levels of involvement: senior administration, midlevel management, and patient‐level support.

Table 3. Critical Steps Prior to Initiating a Ward Sepsis-Detection Program

1. Obtain administrative support (ie, funding for data collection, project lead, informatics)
2. Align with ED and ICU
3. Identify 1 ward to pilot the program
4. Establish unit-based champions on each shift (nurse, physician)
5. Review ward workflow
6. Develop nurse screening tool
7. Provide education

NOTE: Abbreviations: ED, emergency department; ICU, intensive care unit.

Administrative Support

In the course of our implementation effort, we found that sites that had high‐level administrative support were more likely to implement and sustain the intervention. For this reason, we consider such support to be critical. Examples of such support include chief medical officers, chief nursing officers, and chief quality officers. As an example, securing commitment from hospital leadership may be necessary to improve/change the EHR and provide funding for project management to achieve sustainable improvement in outcomes. Aligning leadership with frontline physicians, nurses, and support staff toward a common goal provides the platform for a successful program.[11]

ED and ICU Leadership Support

Maintaining lines of communication among the ED, ICU, and ward staff is critical to improving outcomes. Establishing a cohesive system (ED, ICU, and wards) aimed at early recognition and treatment of sepsis throughout the hospital stay can lead to improvement in continuity of care and outcomes. For example, when an ED severe sepsis patient is transferred to the ward and subsequently requires admission to the ICU due to declining clinical status, providing timely feedback to the ED can help improve care for subsequent patients. Collaboration between the ED and the ward can also contribute to improved transitions of care for patients with severe sepsis.

Hospitalist/Internal Medicine Leadership

Our experience with implementing sepsis bundles in the ED and ICU highlights the need for effective interdisciplinary collaboration with designated physician and nurse leaders/champions. We found that engaging local clinical leaders in the early recognition and management of a severe sepsis QI program is imperative for the program's success. Hospitalists are often the physician leaders for the inpatient wards, so it is essential to secure their early engagement, support, and leadership. Moreover, though collaboration with ED and ICU physicians may be useful, as described above, a hospitalist champion is likely to be more effective at educating other hospitalists about the program, overcoming physician resistance, and facilitating change.

Depending on a hospital's size and workflows, designated ward- or shift-based hospitalists and nurses as champions can serve as key resources to support implementation. These individuals help establish mutual respect and a common mental model of how sepsis can evolve in ward patients. Even more important, by providing assistance with both the screening tool as well as with recognition itself, these individuals not only speed implementation, but also protect against "rough patches" (ie, those instances where workflow changes run into resistance).

EDUCATION

Diagnosing sepsis is not always easy, making education on sepsis recognition, evaluation, and treatment necessary prior to implementation. We promoted retention of knowledge over time through reviews and refresher courses. Several sites developed background material explaining why the education was necessary, along with materials to help physicians and nurses recall the information over time. Resources included sepsis posters, identification-size badge cards with the sepsis bundle elements, and bulletin boards on the wards with information to reinforce sepsis recognition, evaluation, and treatment. Education for the ward-centric program included an overview of the SSC guidelines, supportive literature, sepsis definitions, a description of the infection's systemic manifestations, criteria for identification of new-onset organ dysfunction, and details on current severe sepsis 3- and 6-hour bundle requirements. We made clinicians aware of resources available on the SSC website.[12] Data emphasizing the incidence of sepsis, as well as outcomes and motives for the QI wards program, were presented during the collaborative meetings. Data can serve as strong motivators for action (eg, highlighting current incidence rates). Many hospitals combined presentation of these aggregate data with local review of selected cases of severe sepsis that occurred in their own wards.

Understanding that the training and experiences of ED, ICU, and ward nurses vary, nurse education emphasized the critical assessment skills needed to determine when to suspect a new or worsening infection. Training nurses to complete a comprehensive daily infection assessment may help them overcome uncertainty in judgment. Assessment skills include examination of invasive lines, surgical sites, wounds, and presence of a productive cough. Equally important, patients being treated for an infection would benefit from a daily assessment for improvement or worsening of the infection. Information uncovered may identify early signs of organ failure in addition to infections that may need further evaluation and treatment. Education provides knowledge, but achieving program success relies heavily on staff accepting that they can make a difference in sepsis patient identification, management, and outcomes.

SCREENING METHODS, COMMUNICATION, AND PROTOCOLS

The SSC tool for severe sepsis facilitates screening for (1) confirmed or suspected infection, (2) presence of 2 or more systemic manifestations of infection, and (3) acute organ dysfunction. This tool was the basis for the "do" (screening) portion of the PDSA model.

Continuous Screening

Technology can facilitate early recognition of severe sepsis with EHR-based surveillance screening tools. Surveillance may include continuous review of vital signs and laboratory values with an automated alerting system. A valuable feature of the screening tool alert is the incorporation of the nurse's assessment. Decision support can improve the process by providing advice, with systems requiring a reason to override the advice.[13] For example, an alert may include input from the nurse to determine if the abnormal data are thought to be related to an infectious process or due to another cause. If a suspected or confirmed infection is identified, further surveillance screening can include review of blood pressure readings and laboratory data to determine if organ dysfunction is present. If organ dysfunction criteria are identified, the alert can prompt the nurse to notify the physician to discuss whether the organ dysfunction is new and related to the infection and if implementation of the severe sepsis bundles is indicated (Figure 2). Additional continuous screening models may include variations of the example provided to include alerts to other clinicians or a response team.
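The two-stage logic described above (systemic manifestations trigger an infection prompt from the nurse, which in turn gates the organ-dysfunction check and physician notification) can be sketched as follows. This is a simplified illustration, not any vendor's alert specification; the thresholds, field names, and the particular systemic-manifestation and organ-dysfunction criteria are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Vitals:
    temp_c: float       # temperature, degrees Celsius
    heart_rate: int     # beats per minute
    resp_rate: int      # breaths per minute
    wbc: float          # white blood cells, x10^3/uL
    sbp: int            # systolic blood pressure, mm Hg
    lactate: float      # mmol/L

def systemic_manifestations(v: Vitals) -> int:
    """Count systemic manifestations of infection (illustrative SIRS-style criteria)."""
    return sum([
        v.temp_c > 38.3 or v.temp_c < 36.0,
        v.heart_rate > 90,
        v.resp_rate > 20,
        v.wbc > 12.0 or v.wbc < 4.0,
    ])

def organ_dysfunction(v: Vitals) -> bool:
    """Illustrative organ-dysfunction screen (hypotension or elevated lactate)."""
    return v.sbp < 90 or v.lactate >= 4.0

def severe_sepsis_alert(v: Vitals, nurse_suspects_infection: bool) -> str:
    """Two-stage alert: systemic manifestations -> nurse prompt -> organ dysfunction."""
    if systemic_manifestations(v) < 2:
        return "no alert"
    if not nurse_suspects_infection:
        return "no alert: nurse attributes abnormal data to another cause"
    if organ_dysfunction(v):
        return "notify physician: possible severe sepsis; discuss bundle initiation"
    return "possible sepsis: continue surveillance for organ dysfunction"

print(severe_sepsis_alert(Vitals(38.9, 112, 24, 14.2, 84, 4.1), True))
```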

Figure 2
Severe sepsis alert with situation, background, assessment, recommendation (SBAR) embedded. Abbreviations: BMP, basic metabolic panel; BP, blood pressure; CBC, complete blood count; INR, International Normalized Ratio; IV, intravenous; PTT, partial thromboplastin time; SIRS, systemic inflammatory response syndrome; SpO2, saturation of peripheral oxygen; WBC, white blood cells.

An automated screening tool within the EHR can be useful because the system continuously scans to identify signs and symptoms of sepsis, thus providing screening consistency, and offers data on the back end to be used as a mechanism for feedback to monitor effectiveness. Challenges with EHR severe sepsis alert development are resource allocation, testing, education, and ongoing evaluation and feedback. Other challenges include the potential for alert fatigue (false positive) and inappropriate response (false negative) to the infection prompt, thereby halting the next step in automated screening for organ dysfunction. Time to complete an automated screening tool varies based on strategic design and user understanding.

Screening Checklist

Whereas EHR tools may be effective in early recognition of sepsis, not all sites will have the capability to use these tools because of lack of informatics support, cost of development, and absence of an EHR in some hospitals.[14] An alternative to continuous screening is a sepsis checklist such as the severe sepsis screening tool (Figure 1). The checklist is designed to prompt nurses to screen every patient during every shift for new signs of sepsis and organ dysfunction.

The checklist ensures that 3 key issues are considered: presence of a suspected or confirmed infection, systemic manifestations of inflammation, and physiological manifestations of organ dysfunction. The paper tool is simple to use and can be completed in 10 to 20 minutes. It requires the nurse to review the progress notes, vital signs, and laboratory test results. Although the time investment seems onerous, the gain in consistency of screening and treatment compensates for the extra effort. Review of the checklist also provides a locus for feedback and new improvement cycles.

Scripted Communication

Once a patient with severe sepsis is identified, communicating this finding to the rest of the clinical team is essential. Because communication skills are not always emphasized in QI projects, we decided to emphasize a structured approach. We provided clinicians with scripts based on the SBAR (situation, background, assessment, and recommendation) technique aimed at improving communication (Figure 3).[15, 16] Using the SBAR technique also supports our efforts to build nurses' confidence and willingness to employ protocols that give them greater autonomy.
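To make the scripted approach concrete, a fill-in template is sketched below. The wording is a hypothetical paraphrase organized by the 4 SBAR elements, not the exact script shown in Figure 3.

```python
# Hypothetical SBAR script template; field names and wording are illustrative.
SBAR_TEMPLATE = """\
Situation: I am calling about {patient}, who screened positive for severe sepsis this shift.
Background: Admitted for {admit_reason}; suspected source of infection: {source}.
Assessment: {systemic_findings}; new organ dysfunction: {organ_dysfunction}.
Recommendation: Please evaluate the patient now; I suggest we start the 3-hour bundle
(lactate, blood cultures before antibiotics, broad-spectrum antibiotics, and 30 mL/kg
crystalloid if the patient is hypotensive or lactate is >= 4 mmol/L).
"""

print(SBAR_TEMPLATE.format(
    patient="the patient in Room 412",
    admit_reason="community-acquired pneumonia",
    source="respiratory",
    systemic_findings="temperature 38.9 C, heart rate 112, respiratory rate 24",
    organ_dysfunction="systolic blood pressure 84 mm Hg",
))
```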

Figure 3
Script for communicating severe sepsis. Abbreviations: CBC, complete blood count; WBC, white blood cells.

Nurse‐Directed Protocols

Skillful identification and management of severe sepsis patients constitute the foundation for implementation of nurse‐directed protocols in this patient population. Such protocols promote autonomy and staff ownership. Severe sepsis protocols may include increasing the frequency of vital signs, placement of laboratory orders and, in sites with an established culture of increased nurse autonomy, initiation of intravenous access and a fluid bolus when specific criteria are met. Because nursing scope of practice varies from state to state and among hospitals, nurse‐directed severe sepsis protocols generally require review of current site practice guidelines, physician agreement, and approval by the medical executive committee prior to implementation. Despite these differences, maximizing nurse leadership involvement and nurse autonomy can help propel the program forward. Protocols may be implemented based on knowledge level and resources on a particular ward. A workflow evaluation may be included in this process to define staff performing each step, what is being reported, and where and when data are recorded.

DATA COLLECTION AND FEEDBACK

Nurse screening drives the ward program, and ensuring its consistency is the key to early patient identification. We made ongoing evaluation of appropriate use of the screening tool, time to physician notification, and time to follow-up intervention a critical part of the "study" phase of the PDSA cycle. Once the nursing staff is consistently accurate and compliant (>90%) with screening, random (eg, once per week) screening tool review may be more suitable, thus requiring fewer resources (see Supporting Information, Appendix 1, in the online version of this article).
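A minimal sketch of this audit logic follows, assuming screening completions are simply tallied against the every-patient, every-shift expectation; the 90% cutoff comes from the text, while the data layout and function names are ours.

```python
def screening_compliance(screens_completed: int, screens_expected: int) -> float:
    """Fraction of expected every-shift screens that were actually completed."""
    return screens_completed / screens_expected if screens_expected else 1.0

def review_mode(compliance: float, threshold: float = 0.90) -> str:
    """Step down to random weekly review once compliance is consistently above threshold."""
    return "random weekly review" if compliance > threshold else "ongoing every-screen review"

rate = screening_compliance(188, 200)        # 94% of expected screens completed
print(f"{rate:.0%} -> {review_mode(rate)}")  # 94% -> random weekly review
```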

Data Collection

A key to improvement is to study the process, which requires data collection to assess compliance. In our experience, timely clinician feedback, along with data, led to effective process change. Real‐time data collection and discussion with the clinical team may lead to early recognition or intervention.

In our collaborative experience, we observed varied resources and timing for data collection across hospitals. For example, several participating sites had sepsis coordinators to collect data, whereas others relied on the quality department or nursing staff to collect data. Data may be collected concurrently (within 24 hours of severe sepsis presentation) or retrospectively. Retrospective data collection may allow for staff flexibility in data collection, but limits feedback to the clinicians. For example, with retrospective review, early recognition and treatment failure may go unrecognized until the data are analyzed and reported, which can be months after the patient has been discharged or expired.

Feedback to Caregivers

A consistent feedback process, which can occur at the individual or group level, may lead to prompt improvement in severe sepsis management. An example of individual feedback would be providing the nurse with the elapsed time from antibiotic order to time of administration. Early in the implementation phase, frequent (daily or weekly) feedback is helpful to build team cohesiveness. An example of feedback to build the team may include a unit-based report on the last 5 severe sepsis patients managed by the group. Providing overall bundle compliance and outcome reports on a weekly and monthly basis will allow the clinical team to track progress. Examples of report cards and a dashboard, which highlight compliance with the bundle elements as well as time to achieve them, are provided in the supplemental material (see Supporting Information, Appendix 2 and Appendix 3, in the online version of this article). Resources to evaluate and provide consistent data may require up to 10 to 15 hours per week for 1 unit. Automated reports may decrease the resources needed to collate and report data.
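For instance, the individual feedback metric mentioned above (elapsed time from antibiotic order to administration) reduces to simple timestamp arithmetic; the timestamps and their format below are hypothetical.

```python
from datetime import datetime

def order_to_administration_minutes(ordered: str, administered: str) -> float:
    """Elapsed minutes from antibiotic order to administration (individual feedback metric)."""
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(administered, fmt) - datetime.strptime(ordered, fmt)
    return delta.total_seconds() / 60

# Hypothetical example: ordered at 10:05, administered at 11:20 -> 75 minutes.
print(order_to_administration_minutes("2016-01-05 10:05", "2016-01-05 11:20"))
```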

OUTCOME MEASURES

Although certainly important, mortality is not the only outcome worth measuring. Other relevant outcomes include transfers to a higher level of care and need for major supportive therapies (eg, dialysis, mechanical ventilation, vasopressor infusion). Whereas it is valuable to review transfers to a higher level of care, we emphasized that these are not necessarily adverse outcomes; in fact, in many cases such transfers are highly desirable. It is also important to track the overall impact of sepsis on hospital length of stay.

SUMMARY/CONCLUSIONS

Grounded in the Institute for Healthcare Improvement's PDSA QI model, we developed a program aimed at improving outcomes for severe sepsis ward patients. Our program's cornerstone is nurse-led, checklist-based screening. Our faculty led learning sessions that concentrated on a collaborative approach whose key components were education in early sepsis identification, use of a sepsis screening tool, and the SBAR method for effective communication. Pitfalls identified during the program included lack of knowledge among both nurses and physicians in early severe sepsis identification, resistance to routine screening, and lack of data collection and leadership support. The most successful participating sites were those with senior leadership backing, staff engagement, informatics support, and data collection resources. Ultimately, replicating a program such as ours will depend on team cohesiveness and nurse empowerment through the use of nurse-driven protocols. Programs like this may lead to progression toward standardizing practice (eg, antibiotic administration, fluid resuscitation), matching patient needs to resources, and building stronger partnerships between hospitalists and nurses.

Disclosures

This work was supported by a grant provided to the Society of Critical Care Medicine by the Gordon and Betty Moore Foundation (Early Identification and Management of Sepsis on the Wards). The work was supported by a grant from the Adventist Hospital System. As part of our agreement with the Gordon and Betty Moore Foundation, we made a commitment to disseminate our findings in articles such as this one. However, the Moore Foundation played no role in how we actually structured our articles, nor did they review or preapprove any of the manuscripts submitted as part of the dissemination component; the same was the case with the other sponsors. The authors report no conflicts of interest.

References
  1. Levy MM, Dellinger RP, Townsend SR, et al. The Surviving Sepsis Campaign: results of an international guideline-based performance improvement program targeting severe sepsis. Intensive Care Med. 2010;36(2):222-231.
  2. Levy MM, Dellinger RP, Townsend SR, et al. The Surviving Sepsis Campaign: results of an international guideline-based performance improvement program targeting severe sepsis. Crit Care Med. 2010;38(2):367-374.
  3. Levy MM, Rhodes A, Phillips GS, et al. Surviving Sepsis Campaign: association between performance metrics and outcomes in a 7.5-year study. Intensive Care Med. 2014;40(11):1623-1633.
  4. Rohde JM, Odden AJ, Bonham C, et al. The epidemiology of acute organ system dysfunction from severe sepsis outside of the intensive care unit. J Hosp Med. 2013;8(5):243-247.
  5. Yealy DM, Huang DT, Delaney A, et al. Recognizing and managing sepsis: what needs to be done? BMC Med. 2015;13:98.
  6. Sopena N, Heras E, Casas I, et al. Risk factors for hospital-acquired pneumonia outside the intensive care unit: a case-control study. Am J Infect Control. 2014;42(1):38-42.
  7. Dellinger RP, Levy MM, Rhodes A, et al. Surviving Sepsis Campaign: international guidelines for management of severe sepsis and septic shock, 2012. Crit Care Med. 2013;41(2):580-637.
  8. Nadeem E, Olin SS, Hill LC, Hoagwood KE, Horwitz SM. Understanding the components of quality improvement collaboratives: a systematic literature review. Milbank Q. 2013;91(2):354-394.
  9. Langley GL, Moen R, Nolan KM, Nolan TW, Norman CL, Provost LP. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. 2nd ed. San Francisco, CA: Jossey-Bass; 2009.
  10. Nembhard IM. Learning and improving in quality improvement collaboratives: which collaborative features do participants value most? Health Serv Res. 2009;44(2 pt 1):359-378.
  11. Pronovost PJ, Weast B, Bishop K, et al. Senior executive adopt-a-work unit: a model for safety improvement. Jt Comm J Qual Saf. 2004;30(2):59-68.
  12. Surviving Sepsis Campaign. Available at: http://survivingsepsis.org/Resources/Pages/default.aspx. Accessed September 24, 2015.
  13. Roshanov PS, Fernandes N, Wilczynski JM, et al. Features of effective computerised clinical decision support systems: meta-regression of 162 randomised trials. BMJ. 2013;346:f657.
  14. Bhounsule P, Peterson AM. Characteristics of hospitals associated with complete and partial implementation of electronic health records. Perspect Health Inf Manag. 2016;13:1c.
  15. Institute for Healthcare Improvement. SBAR technique for communication: a situational briefing model. Available at: http://www.ihi.org/resources/pages/tools/sbartechniqueforcommunicationasituationalbriefingmodel.aspx. Accessed September 12, 2015.
  16. Compton J, Copeland K, Flanders S, et al. Implementing SBAR across a large multihospital health system. Jt Comm J Qual Patient Saf. 2012;38(6):261-268.
Issue
Journal of Hospital Medicine - 11(1)
Page Number
S32-S39
Display Headline
Implementation of a multicenter performance improvement program for early detection and treatment of severe sepsis in general medical–surgical wards
Article Source

© 2016 Society of Hospital Medicine

Correspondence Location
Address for correspondence and reprint requests: Christa Schorr, Cooper Research Institute–Critical Care, Cooper University Hospital, One Cooper Plaza, Dorrance Building, Suite 411, Camden, NJ 08103; Telephone: 856-968-7493; Fax: 856-968-8378; E-mail: [email protected]

Critical Illness Outside the ICU

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
Early detection, prevention, and mitigation of critical illness outside intensive care settings

This issue of the Journal of Hospital Medicine describes 2 research and quality improvement demonstration projects funded by the Gordon and Betty Moore Foundation. Early detection is central to both projects. This introductory article does not provide a global review of the now voluminous literature on rapid response teams (RRTs), sepsis detection systems, or treatment protocols. Rather, it takes a step back and reassesses just what early detection and quantification of critical illness are. It then examines the implications of early detection and its quantification.

CONCEPTUAL FRAMEWORK

We define severe illness as the presence of acute disease such that a person can no longer expect to improve without dedicated hospital treatment but which is not inevitably associated with mortality, postdischarge morbidity, or major loss of autonomy. In contrast, we define critical illness as acute disease with high a priori risk of mortality, postdischarge morbidity, and major (possibly total) loss of autonomy. We accept that the boundaries between ordinary illness, severe illness, and critical illness are blurred. The basic assumption behind all efforts at early detection is that these edges can be made sharp, and that the knowledge base required to do so can also lead to improvements in treatment protocols and patient outcomes. Further, it is assumed that at least some forms of critical illness can be prevented or mitigated by earlier detection, identification, and treatment.

Research over the last 2 decades has provided important support for this intuitive view as well as making it more nuanced. With respect to epidemiology, the big news is that sepsis is the biggest culprit, and that it accounts for a substantial proportion of all hospital deaths, including many previously considered "unexpected" hospital deaths due to in-hospital deterioration.[1] With respect to treatment, a number of studies have demonstrated that crucial therapies previously considered to be intensive care unit (ICU) therapies can be initiated in the emergency department or general medical–surgical ward.[2]

Figure 1 shows an idealized framework for illness presenting in the emergency department or general medical–surgical wards. It illustrates the notion that a transition period exists when patients may be rescued with less intense therapy than will be required when condition progression occurs. Once a certain threshold is crossed, the risk of death or major postdischarge morbidity rises exponentially. Unaided human cognition's ability to determine where a given patient is in this continuum is dangerously variable and is highly dependent on the individual's training and experience. Consequently, as described in several of the articles in this issue as well as multiple other publications, health systems are employing comprehensive electronic medical records (EMRs) and are migrating to algorithmic approaches that combine multiple types of patient data.[3, 4] Although we are still some distance from being able to define exact boundaries between illness, severe illness, and critical illness, current EMRs permit much better definition of patient states, care processes, and short-term outcomes.

Figure 1
Relationship between time, course of illness (solid line), risk of death or major disability (dashed line), and possible detection periods among patients who present in the emergency department or general medical–surgical ward. All axes employ hypothetical units, because empiric data are not currently available for all domains listed. Point C represents when unaided human cognition (ordinary clinical judgment) can first detect incipient deterioration. In theory, algorithmic approaches (point A) based on real‐time data from the electronic medical record (EMR) can provide earlier detection, and novel biomarkers (point B) could lead to even earlier detection.

Whereas our ability to quantify many processes and short‐term outcomes is expanding rapidly, quantification of the possible benefit of early detection is complicated by the fact that, even in the best of circumstances, not all patients can be rescued. For some patients, rescue may be temporary, raising the prospect of repeated episodes of critical illness and prolonged intensive care without any hope of leaving the hospital. Figure 2 shows that, for these patients, the problem is no longer simply one of preventing death and preserving function but, rather, preserving autonomy and dignity. In this context, early detection means earlier specification of patient preferences.[5, 6]

Figure 2
Progression to critical illness among patients near the end of life. Given that it may not be possible to prevent death, what matters most to patients and families is preservation of autonomy and ability to make choices concordant with their values and preferences. In theory, early detection combined with appropriate palliative care could maximize preservation of autonomy (upper arrow), whereas, in their absence, the health system enters the current default mode (lower arrow) in which intensive care is initiated despite low likelihood of preventing death or disability.

JUST WHAT CONSTITUTES EARLY DETECTION (AND HOW DO WE QUANTIFY IT)?

RRTs arose as the result of a number of studies showing that, in retrospect, in-hospital deteriorations should not have been unexpected. Given comprehensive inpatient EMRs, it is now possible to develop more rigorous definitions. A minimum set of parameters that one would need to specify for proper quantification of early detection is shown in Figure 3. The first is specifying a T0, that is, the moment when a prediction regarding event X (which needs to be defined) is issued. This is different from the (currently unmeasurable) biologic onset of illness as well as the first documented indication that critical illness was present. Further, it is important to be explicit about the event time frame (the time period during which a predicted event is expected to occur): we are predicting that X will occur within E hours of the T0. The time frame between the T0 and X, which we are referring to as "lead time," is clinically very important, as it represents the time period during which the response arm (eg, RRT intervention) is to be instituted. Statistical approaches can be used to estimate it, but once an early detection system is in place, it can be quantified. Figure 3 is not restricted to electronic systems; all components shown can be and are used by unaided human cognition. (A small code sketch of these timing relationships follows the figure captions below.)

Figure 3
Characterizing early warning systems. At a T0, a detection system issues a probability estimate that an undesirable event, X (which must be defined explicitly) will occur within some elapsed time (point E) (EVENT TIME FRAME). Time required for a response arm to prepare an intervention is LEAD TIME. Development of detection systems is complicated by the fact that the time point when biological critical illness actually begins is currently unmeasurable, whereas system development is limited by how accurately X is documented. Probability estimates are based on data sources with different accumulation times. Some definitional data elements (eg, age, gender, diagnosis for this admission) are not recurrent (♦). Others, which could include streaming data, are recurrent, and the look‐back time frame must be clearly specified. For example, physiologic or biochemical data generally accumulate over a short time period (usually measured in hours); health services data (eg, elapsed length of stay in the hospital at T0; was this patient recently in the intensive care unit?) are typically measured in days, whereas chronic conditions can be measured in months to years.
Figure 4
Impact of patients with restricted resuscitation status (not full code, which includes partial code, do not resuscitate, and comfort care only) on unplanned transfers to the intensive care unit (ICU) and total 30‐day mortality. Data are from 21 Kaiser Permanente Northern California hospitals between May 1, 2012 and October 31, 2013. The left panels show patients with restricted resuscitation status (12.1% of patients; range across hospitals, 6.5% to 18.0%), who accounted for 53% of all deaths. Full code patients directly admitted to the ICU and all other hospital units are shown in the middle and right panels, respectively. Circles are drawn to scale (proportion of admissions in top panels, proportion of deaths in lower panels). Within each circle, the shaded area represents the proportion of patients who experienced unplanned transfer to intensive care (for direct ICU admits, this refers to return transfers to the ICU after discharge from the ICU).

It is essential to specify what data are used to generate probability estimates as well as the time frames used, which we refer to as the look‐back time frames. Several types of data could be employed, with some data elements (eg, age or gender) being discrete data with a 1:1 fixed correspondence between the patient and the data. Other data have a many‐to‐1 relationship, and an exact look‐back time frame must be specified for each data type. For example, it seems reasonable to specify a short (12–24 hours) look‐back period for some types of data (eg, vital signs, lactate, admission diagnosis or chief complaint), an intermediate time period (1–3 days) for information on the current encounter, and a longer (months to years) time period for preexisting illness or comorbidity burden.
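
As an illustration, look‐back time frames could be made explicit in a simple configuration. The data types and windows below are assumptions based on the examples in the text, not a validated specification:

```python
from datetime import timedelta

# Assumed look-back windows by data type (illustrative only).
LOOK_BACK = {
    "vital_signs": timedelta(hours=24),            # short: 12-24 hours
    "lactate": timedelta(hours=24),
    "chief_complaint": timedelta(hours=24),
    "current_encounter": timedelta(days=3),        # intermediate: 1-3 days
    "comorbidity_burden": timedelta(days=2 * 365), # long: months to years
}

def in_window(data_type: str, hours_ago: float) -> bool:
    """True if an observation is recent enough to feed the probability estimate."""
    return timedelta(hours=hours_ago) <= LOOK_BACK[data_type]
```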

Because many events are rare, traditional measures used to assess model performance, such as the area under the receiver operator characteristic curve (C statistic), are not as helpful.[7] Consequently, much more emphasis needs to be given to 2 key metrics: the number needed to evaluate (or workup‐to‐detection ratio) and threshold‐specific sensitivity (the ability of the alert to detect X at a given threshold). With these, one can answer 3 questions that will be asked by the physicians and nurses who are not likely to be researchers, and who will have little interest in the statistics: How many patients do I need to work up each day? How many patients will I need to work up for each possible outcome identified? For this amount of work, how many of the possible outcomes will we catch?
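
A minimal sketch of how these 2 metrics answer the bedside questions, assuming per‐patient alert probabilities and observed outcomes are available (all names and numbers are illustrative):

```python
def alert_workload_metrics(probs, outcomes, threshold):
    """Threshold-specific workload metrics for an early warning system.

    probs:    predicted probabilities, one per patient (or patient-day).
    outcomes: 1 if event X actually occurred within the event time frame, else 0.
    Returns the number of alerts (workups), the workup-to-detection ratio
    (number needed to evaluate), and threshold-specific sensitivity.
    """
    alerted = [o for p, o in zip(probs, outcomes) if p >= threshold]
    n_alerts = len(alerted)                  # "How many patients do I need to work up?"
    true_pos = sum(alerted)
    total_events = sum(outcomes)
    nne = n_alerts / true_pos if true_pos else float("inf")        # workups per catch
    sensitivity = true_pos / total_events if total_events else 0.0 # share of X caught
    return n_alerts, nne, sensitivity

# Example: at a 5% threshold on 4 hypothetical patients, 2 workups yield 1 event.
n, nne, sens = alert_workload_metrics([0.02, 0.07, 0.40, 0.01], [0, 0, 1, 0], 0.05)
# n == 2, nne == 2.0 workups per detection, sens == 1.0 (all events caught)
```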

Data availability for the study of severe and critical illness continues to expand. Practically, this means that future research will require more nuanced ontologies for the classification of physiologic derangement. Current approaches to severity scoring (collapsing data into composite scores) need to be replaced by dynamic approaches that consider differential effects on organ systems as well as what can be measured. Severity scoring will also need to incorporate the rate of change of a score (or probability derived from a score) in predicting the occurrence of an event of interest as well as judging response to treatment. Thus, instead of "at time of ICU admission, the patient had a severity score of 76," we may have "although this patient's severity score at the time of admission was decreasing by 4 points per hour per 10 mL/kg fluid given, the probability for respiratory instability was increasing by 2.3% per hour given 3 L/min supplemental oxygen." This approach is concordant with work done in other clinical settings (eg, in addition to an absolute value of maximal negative inspiratory pressure or vital capacity, the rate of deterioration of neuromuscular weakness in Guillain‐Barré syndrome is also important in predicting respiratory failure[8]).
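
As a sketch of this dynamic view, the trajectory of a risk estimate can be summarized by its slope over time. The function and the sample values below are hypothetical, chosen only to illustrate pairing a current value with its rate of change:

```python
def trend_per_hour(timestamps_h, values):
    """Least-squares slope of a score or probability over time (units per hour)."""
    n = len(values)
    mean_t = sum(timestamps_h) / n
    mean_v = sum(values) / n
    num = sum((t - mean_t) * (v - mean_v) for t, v in zip(timestamps_h, values))
    den = sum((t - mean_t) ** 2 for t in timestamps_h)
    return num / den

# Hypothetical hourly risk estimates for one patient:
slope = trend_per_hour([0, 1, 2, 3], [0.10, 0.12, 0.15, 0.17])
# slope == 0.024, ie, risk rising by roughly 2.4 percentage points per hour
```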

Electronic data also could permit better definition of patient preferences regarding escalation of care. At present, available electronic data are limited (primarily, orders such as "do not resuscitate").[9] However, this EMR domain is gradually expanding.[10, 11] Entities such as the National Institutes of Health could develop sophisticated and rapid questionnaires around patient preferences that are similar to those developed for the Patient Reported Outcomes Measurement Information System.[12] Such tools could have a significant effect on our ability to quantify the benefits of early detection as they relate to a patient's preferences (including better delineation of what treatments they would and would not want).

ACTIVATING A RESPONSE ARM

Early identification, antibiotic administration, fluid resuscitation, and source control are now widely felt to constitute the "low‐hanging fruit" for decreasing morbidity and mortality in severe sepsis. All these measures are included in quality improvement programs and sepsis bundles.[13, 14, 15] However, before early interventions can be instituted, sepsis must at least be suspected, hence the need for early detection. The situation with respect to patient deterioration (for reasons other than sepsis) in general medical–surgical wards is less clear‐cut. Reasons for deterioration are much more heterogeneous and, consequently, early detection is likely necessary but not sufficient for outcomes improvement.

The 2 projects described in this issue employ nonspecific (indicating elevated risk but not specifying what led to the elevation of risk) and sepsis‐specific alerting systems. In the case of the nonspecific system, detection may not lead to immediate deployment of a response arm. Instead, a secondary evaluation process must be triggered first. Following this evaluation component, a response arm may or may not be required. In contrast, the sepsis‐specific project essentially transforms the general medical–surgical ward into a screening system. This screening system then also triggers specific bundle components.

Neither of these systems relies on unaided human cognition. In the case of the nonspecific system, a complex equation generates a probability that is displayed in the EMR, with protocols specifying what actions are to be taken when that probability exceeds a prespecified threshold. With respect to the sepsis screening system, clinicians are supported by EMR alerts as well as protocols that increase nursing autonomy when sepsis is suspected.
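
A minimal sketch of the 2 response patterns just described, with the threshold, the routing strings, and the logic all assumed for illustration rather than drawn from either project:

```python
EWS_THRESHOLD = 0.08  # hypothetical probability cutoff displayed in the EMR

def route_alert(probability: float, sepsis_screen_positive: bool) -> str:
    """Nonspecific alerts trigger a secondary evaluation first; a positive
    sepsis screen moves directly to bundle initiation under protocols that
    increase nursing autonomy."""
    if sepsis_screen_positive:
        return "initiate sepsis bundle components"
    if probability >= EWS_THRESHOLD:
        return "secondary evaluation; deploy response arm only if warranted"
    return "continue routine monitoring"
```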

The distinction between nonspecific (eg, acute respiratory failure or hemodynamic deterioration) and specific (eg, severe sepsis) alerting systems is likely to disappear as the field advances. For example, incorporation of natural language processing would permit inclusion of semantic data, which could be processed so as to "prebucket" an alert into one that gives not just a probability but also a likely cause for the elevated probability.

In addition, both types of systems suffer from the limitation of working off a limited database because, in general, the primary focus of current textbooks and training programs remains the treatment of full‐blown clinical syndromes. For example, little is known about how one should manage patients with intermediate lactate values, despite evidence showing that a significant percentage of patients who die from sepsis initially have such values, with 1 study showing 63% as many deaths with an initial lactate of 2.5 to 4.0 mmol/L as with an initial lactate of >4.0 mmol/L.[16] Lastly, as discussed below, both systems will encounter similar problems when it comes to quantifying benefit.

QUANTIFYING BENEFIT

Whereas the notion of deploying RRTs has clearly been successful, demonstrating their unequivocal benefit remains elusive.[17, 18, 19] Outcome measures vary dramatically across studies and have included the number of RRT calls, decreases in code blue events on the ward, and decreases in inpatient mortality.[20] We suspect that 2 other factors underlie this problem. The first is the lack of adequate risk adjustment, in particular ignoring the impact of patients near the end of life on the denominator. Figure 4, which shows recent data from 21 Kaiser Permanente Northern California (KPNC) hospitals that can now capture care directive orders electronically,[21] illustrates this problem. The majority (53%) of hospital deaths occur among a highly variable proportion (range across hospitals, 6.5%–18.0%) of patients who arrive at the hospital with a restricted resuscitation preference (do not resuscitate, partial code, or comfort care only). These patients do not want to die or "crash and burn" but, were they to trigger an alert, they would not necessarily want to be rescued by transfer to the ICU either; moreover, internal KPNC analyses show that large numbers of these patients have sepsis and refuse aggressive treatment. The second major confounder is that ICUs save lives. Consequently, although early detection could lead to fewer transfers to the ICU, using ICU admission as an end point is very problematic: in many cases the goal of alerting systems should be to get patients to the ICU sooner, which would not push the transfer rate downward and might, in fact, increase it.
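
To see how the denominator problem plays out, consider a hypothetical worked example using the proportions from Figure 4; the admission and death totals are assumed for illustration only:

```python
# Proportions from Figure 4; cohort size and death count are assumptions.
admissions = 10_000
deaths = 300                     # assumed total 30-day deaths
restricted_share = 0.121         # 12.1% arrive with restricted resuscitation status
restricted_death_share = 0.53    # these patients account for 53% of deaths

crude_mortality = deaths / admissions                       # 3.0%
full_code_mortality = (deaths * (1 - restricted_death_share)) / (
    admissions * (1 - restricted_share)
)                                                           # about 1.6%
# A system judged on crude mortality is dominated by deaths among patients
# it was never intended to "rescue" into the ICU.
```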

The complexities summarized in Figure 4 mean that formal quantification of benefit will likely require examination of multiple measures, including balancing measures as described below. It is also evident that, in this respect (the lack of agreement as to what constitutes a good outcome), the issues being faced here reflect a broader disagreement within our profession and society at large, one that extends to medical conditions other than critical illness.

POTENTIAL HARMS OF EARLY DETECTION

Implementation of early detection and rapid response systems is not inherently free of harm. If these systems are not shown to have benefit, then the cost of operating them moves resources away from other, possibly evidence‐based, interventions.[22] At the individual level, alerts could frighten patients and their families (for example, some people are very uncomfortable with the idea that one can predict events). Physicians and nurses who work in the hospital are already quite busy, so every alert adds to the demand on their limited time; hence the critical importance of strategies to minimize false alarms and alert fatigue. Moreover, altering existing workflows can be disruptive and unpopular.

A potentially more quantifiable problem is the impact of early detection systems on ICU operations. For example, if an RRT decides to transfer a patient from the ward to the ICU as a preventive measure (a "soft landing") and this in turn ties up an ICU bed, that bed is then unavailable for a new patient in the emergency department. Similarly, early detection systems coupled with structured protocols for promoting soft landings could change ICU case mix, with greater patient flow due to increased numbers of patients with lower severity and shorter ICU lengths of stay. These considerations suggest the need to couple early detection with other supportive data systems and workflows (eg, systems that monitor bed capacity proactively).

Lastly, if documentation protocols are not established and followed, early detection systems could expose both individual clinicians and healthcare institutions to medical–legal risk. This consideration could be particularly important in instances where an alert is issued and, for whatever reason, clinicians neither take action nor document that decision. At present, early detection systems are relatively uncommon, but they may gradually become the standard of care. This means that in‐hospital, out‐of‐ICU deteriorations, which are generally considered to be bad luck or due to a specific error or oversight, may then be considered preventable. Another possible scenario is that of plaintiffs invoking enterprise liability, in which a hospital's not having an early detection system comes to be considered negligent.

ARTICLES IN THIS ISSUE

In this issue of the Journal of Hospital Medicine, we examine early detection from various perspectives but around a common theme that usually gets less attention in the academic literature: implementation. The article by Schorr et al.[23] describes a disease‐specific approach that can be instantiated using either electronic or paper tools. Escobar et al.[24] describe the quantitative as well as the electronic architecture of an early warning system (EWS) pilot at 2 hospitals that are part of an integrated healthcare delivery system. Dummett et al.[25] then show how a clinical rescue component was developed to take advantage of the EWS, whereas Granich et al.[26] describe the complementary component (integration of supportive care and ensuring that patient preferences are respected). The paper by Liu et al.[27] concludes by placing all of this work in a much broader context, that of the learning healthcare system.

FUTURE DIRECTIONS: KEY GAPS IN THE FIELD

Important gaps remain with respect to early detection and response systems. Future research will need to focus on a number of areas. First and foremost, better approaches to quantifying the cost–benefit relationships of these systems are needed; somehow, we need to move beyond a purely intuitive sense that they are good things. Related to this is the need to establish metrics that would permit rigorous comparisons between different approaches; this work needs to go beyond simple comparisons of the statistical characteristics of different predictive models. Ideally, it should include comparisons of different approaches for the response arms as well. We also need to characterize clinician understanding of detection systems, of what constitutes impending or incipient critical illness, and of the optimum way to provide early detection. Finally, better approaches to integrating health services research with basic science must be developed; for example, how should one test new biomarkers in settings with early detection and response systems?

The most important frontier, however, is how to make early detection and response systems more patient centered and how to enhance their ability to respect patient preferences. Developing systems to improve clinical management is laudable, but somehow we also need to find ways to connect these systems more closely to what patients want most and what matters most to them, something that may require new approaches, including ways to sometimes suspend use of these systems. At the end of the day, after early detection, patients must have a care experience that they see as an unequivocal improvement.

Acknowledgements

The authors thank our 2 foundation program officers, Dr. Marybeth Sharpe and Ms. Kate Weiland, for their administrative support and encouragement. The authors also thank Dr. Tracy Lieu, Dr. Michelle Caughey, Dr. Philip Madvig, and Ms. Barbara Crawford for their administrative assistance, Dr. Vincent Liu for comments on the manuscript, and Ms. Rachel Lesser for her help with formatting the manuscript and figures.

Disclosures

This work was supported by the Gordon and Betty Moore Foundation, The Permanente Medical Group, Inc., and Kaiser Foundation Hospitals, Inc. As part of our agreement with the Moore Foundation, we made a commitment to disseminate our findings in articles such as this one. However, the Gordon and Betty Moore Foundation and its staff played no role in how we actually structured our articles, nor did they review or preapprove any of the manuscripts submitted as part of the dissemination component. None of the authors has any conflicts of interest to declare of relevance to this work.

References
  1. Hall MJ, Williams SN, DeFrances CJ, Golosinskiy A. Inpatient care for septicemia or sepsis: a challenge for patients and hospitals. NCHS Data Brief. 2011;(62):1–8.
  2. Levy MM, Rhodes A, Phillips GS, et al. Surviving Sepsis Campaign: association between performance metrics and outcomes in a 7.5‐year study. Crit Care Med. 2015;43(1):3–12.
  3. Escobar GJ, LaGuardia J, Turk BJ, Ragins A, Kipnis P, Draper D. Early detection of impending physiologic deterioration among patients who are not in intensive care: development of predictive models using data from an automated electronic medical record. J Hosp Med. 2012;7(5):388–395.
  4. Kollef MH, Chen Y, Heard K, et al. A randomized trial of real‐time automated clinical deterioration alerts sent to a rapid response team. J Hosp Med. 2014;9(7):424–429.
  5. Vazquez R, Gheorghe C, Grigoriyan A, Palvinskaya T, Amoateng‐Adjepong Y, Manthous CA. Enhanced end‐of‐life care associated with deploying a rapid response team: a pilot study. J Hosp Med. 2009;4(7):449–452.
  6. Smith RL, Hayashi VN, Lee YI, Navarro‐Mariazeta L, Felner K. The medical emergency team call: a sentinel event that triggers goals of care discussion. Crit Care Med. 2014;42(2):322–327.
  7. Romero‐Brufau S, Huddleston JM, Escobar GJ, Liebow M. Why the C‐statistic is not informative to evaluate early warning scores and what metrics to use. Crit Care. 2015;19:285.
  8. Lawn ND, Fletcher DD, Henderson RD, Wolter TD, Wijdicks EF. Anticipating mechanical ventilation in Guillain‐Barré syndrome. Arch Neurol. 2001;58(6):893–898.
  9. Kim YS, Escobar GJ, Halpern SD, Greene JD, Kipnis P, Liu V. The natural history of changes in preferences for life‐sustaining treatments and implications for inpatient mortality in younger and older hospitalized adults. J Am Geriatr Soc. 2016;64(5):981–989.
  10. Sargious A, Lee SJ. Remote collection of questionnaires. Clin Exp Rheumatol. 2014;32(5 suppl 85):S168–S172.
  11. Be prepared to make your health care wishes known. Health care directives. Allina Health website. Available at: http://www.allinahealth.org/Customer-Service/Be-prepared/Be-prepared-to-make-your-health-care-wishes-known. Accessed January 1, 2015.
  12. Patient Reported Outcomes Measurement Information System. Dynamic tools to measure health outcomes from the patient perspective. Available at: http://www.nihpromis.org. Accessed January 15, 2015.
  13. Schorr C, Cinel I, Townsend S, Ramsay G, Levy M, Dellinger RP. Methodology of the Surviving Sepsis Campaign global initiative for improving care of the patient with severe sepsis. Minerva Anestesiol. 2009;75(suppl 1):23–27.
  14. Marshall JC, Dellinger RP, Levy M. The Surviving Sepsis Campaign: a history and a perspective. Surg Infect (Larchmt). 2010;11(3):275–281.
  15. Schorr CA, Dellinger RP. The Surviving Sepsis Campaign: past, present and future. Trends Mol Med. 2014;20(4):192–194.
  16. Shapiro NI, Howell MD, Talmor D, et al. Serum lactate as a predictor of mortality in emergency department patients with infection. Ann Emerg Med. 2005;45(5):524–528.
  17. Sharek PJ, Parast LM, Leong K, et al. Effect of a rapid response team on hospital‐wide mortality and code rates outside the ICU in a children's hospital. JAMA. 2007;298(19):2267–2274.
  18. Buist MD, Moore GE, Bernard SA, Waxman BP, Anderson JN, Nguyen TV. Effects of a medical emergency team on reduction of incidence of and mortality from unexpected cardiac arrests in hospital: preliminary study. BMJ. 2002;324(7334):387–390.
  19. Leach LS, Mayo AM. Rapid response teams: qualitative analysis of their effectiveness. Am J Crit Care. 2013;22(3):198–210.
  20. Chan PS, Khalid A, Longmore LS, Berg RA, Kosiborod M, Spertus JA. Hospital‐wide code rates and mortality before and after implementation of a rapid response team. JAMA. 2008;300(21):2506–2513.
  21. Escobar GJ, Gardner M, Greene JG, Draper D, Kipnis P. Risk‐adjusting hospital mortality using a comprehensive electronic record in an integrated healthcare delivery system. Med Care. 2013;51(5):446–453.
  22. Winters BD, Pham J, Pronovost PJ. Rapid response teams—walk, don't run. JAMA. 2006;296(13):1645–1647.
  23. Schorr et al. J Hosp Med. 2016;11:000–000.
  24. Escobar GJ, Turk BJ, Ragins A, et al. Piloting electronic medical record–based early detection of inpatient deterioration in community hospitals. J Hosp Med. 2016;11:000–000.
  25. Dummett et al. J Hosp Med. 2016;11:000–000.
  26. Granich et al. J Hosp Med. 2016;11:000–000.
  27. Liu VX, Morehouse JW, Baker JM, Greene JD, Kipnis P, Gabriel J, Escobar GJ. Data that drive: closing the loop in the learning hospital system. J Hosp Med. 2016;11:000–000.

This issue of the Journal of Hospital Medicine describes 2 research and quality improvement demonstration projects funded by the Gordon and Betty Moore Foundation. Early detection is central to both projects. This introductory article does not provide a global review of the now voluminous literature on rapid response teams (RRTs), sepsis detection systems, or treatment protocols. Rather, it takes a step back and reassesses just what early detection and quantification of critical illness are. It then examines the implications of early detection and its quantification.

CONCEPTUAL FRAMEWORK

We define severe illness as the presence of acute disease such that a person can no longer expect to improve without dedicated hospital treatment but which is not inevitably associated with mortality, postdischarge morbidity, or major loss of autonomy. In contrast, we define critical illness as acute disease with high a priori risk of mortality, postdischarge morbidity, and major (possibly total) loss of autonomy. We accept that the boundaries between ordinary illness, severe illness, and critical illness are blurred. The basic assumption behind all efforts at early detection is that these edges can be made sharp, and that the knowledge base required to do so can also lead to improvements in treatment protocols and patient outcomes. Further, it is assumed that at least some forms of critical illness can be prevented or mitigated by earlier detection, identification, and treatment.

Research over the last 2 decades has provided important support for this intuitive view as well as making it more nuanced. With respect to epidemiology, the big news is that sepsis is the biggest culprit, and that it accounts for a substantial proportion of all hospital deaths, including many previously considered unexpected hospital deaths due to in‐hospital deterioration.[1] With respect to treatment, a number of studies have demonstrated that crucial therapies previously considered to be intensive care unit (ICU) therapies can be initiated in the emergency department or general medicalsurgical ward.[2]

Figure 1 shows an idealized framework for illness presenting in the emergency department or general medicalsurgical wards. It illustrates the notion that a transition period exists when patients may be rescued with less intense therapy than will be required when condition progression occurs. Once a certain threshold is crossed, the risk of death or major postdischarge morbidity rises exponentially. Unaided human cognition's ability to determine where a given patient is in this continuum is dangerously variable and is highly dependent on the individuals training and experience. Consequently, as described in several of the articles in this issue as well as multiple other publications, health systems are employing comprehensive electronic medical records (EMRs) and are migrating to algorithmic approaches that combine multiple types of patient data.[3, 4] Although we are still some distance from being able to define exact boundaries between illness, severe illness, and critical illness, current EMRs permit much better definition of patient states, care processes, and short‐term outcomes.

Figure 1
Relationship between time, course of illness (solid line), risk of death or major disability (dashed line), and possible detection periods among patients who present in the emergency department or general medical–surgical ward. All axes employ hypothetical units, because empiric data are not currently available for all domains listed. Point C represents when unaided human cognition (ordinary clinical judgment) can first detect incipient deterioration. In theory, algorithmic approaches (point A) based on real‐time data from the electronic medical record (EMR) can provide earlier detection, and novel biomarkers (point B) could lead to even earlier detection.

Whereas our ability to quantify many processes and short‐term outcomes is expanding rapidly, quantification of the possible benefit of early detection is complicated by the fact that, even in the best of circumstances, not all patients can be rescued. For some patients, rescue may be temporary, raising the prospect of repeated episodes of critical illness and prolonged intensive care without any hope of leaving the hospital. Figure 2 shows that, for these patients, the problem is no longer simply one of preventing death and preserving function but, rather, preserving autonomy and dignity. In this context, early detection means earlier specification of patient preferences.[5, 6]

Figure 2
Progression to critical illness among patients near the end of life. Given that it may not be possible to prevent death, what matters most to patients and families is preservation of autonomy and ability to make choices concordant with their values and preferences. In theory, early detection combined with appropriate palliative care could maximize preservation of autonomy (upper arrow), whereas, in their absence, the health system enters the current default mode (lower arrow) in which intensive care is initiated despite low likelihood of preventing death or disability.

JUST WHAT CONSTITUTES EARLY DETECTION (AND HOW DO WE QUANTIFY IT)?

RRTs arose as the result of a number of studies showing thatin retrospectin‐hospital deteriorations should not have been unexpected. Given comprehensive inpatient EMRs, it is now possible to develop more rigorous definitions. A minimum set of parameters that one would need to specify for proper quantification of early detection is shown on Figure 3. The first is specifying a T0, that is, the moment when a prediction regarding event X (which needs to be defined) is issued. This is different from the (currently unmeasurable) biologic onset of illness as well as the first documented indication that critical illness was present. Further, it is important to be explicit about the event time frame (the time period during which a predicted event is expected to occur): we are predicting that X will occur within E hours of the T0. The time frame between the T0 and X, which we are referring to as lead time, is clinically very important, as it represents the time period during which the response arm (eg, RRT intervention) is to be instituted. Statistical approaches can be used to estimate it, but once an early detection system is in place, it can be quantified. Figure 3 is not restricted to electronic systems; all components shown can be and are used by unaided human cognition.

Figure 3
Characterizing early warning systems. At a T0, a detection system issues a probability estimate that an undesirable event, X (which must be defined explicitly) will occur within some elapsed time (point E) (EVENT TIME FRAME). Time required for a response arm to prepare an intervention is LEAD TIME. Development of detection systems is complicated by the fact that the time point when biological critical illness actually begins is currently unmeasurable, whereas system development is limited by how accurately X is documented. Probability estimates are based on data sources with different accumulation times. Some definitional data elements (eg, age, gender, diagnosis for this admission) are not recurrent (♦). Others, which could include streaming data, are recurrent, and the look‐back time frame must be clearly specified. For example, physiologic or biochemical data generally accumulate over a short time period (usually measured in hours); health services data (eg, elapsed length of stay in the hospital at T0; was this patient recently in the intensive care unit?) are typically measured in days, whereas chronic conditions can be measured in months to years.
Figure 4
Impact of patients with restricted resuscitation status (not full code, which includes partial code, do not resuscitate, and comfort care only) on unplanned transfers to the intensive care unit (ICU) and total 30‐day mortality. Data are from 21 Kaiser Permanente Northern California hospitals between May 1, 2012 and October 31, 2013. The left panels show patients with restricted resuscitation status (12.1% of patients; range across hospitals, 6.5% to 18.0%), who accounted for 53% of all deaths. Full code patients directly admitted to the ICU and all other hospital units are shown in the middle and right panels, respectively. Circles are drawn to scale (proportion of admissions in top panels, proportion of deaths in lower panels). Within each circle, the shaded area represents the proportion of patients who experienced unplanned transfer to intensive care (for direct ICU admits, this refers to return transfers to the ICU after discharge from the ICU).

It is essential to specify what data are used to generate probability estimates as well as the time frames used, which we refer to as the look‐back time frames. Several types of data could be employed, with some data elements (eg, age or gender) being discrete data with a 1:1 fixed correspondence between the patient and the data. Other data have a many‐to‐1 relationship, and an exact look‐back time frame must be specified for each data type. For example, it seems reasonable to specify a short (1224 hours) look‐back period for some types of data (eg, vital signs, lactate, admission diagnosis or chief complaint), an intermediate time period (13 days) for information on the current encounter, and a longer (months to years) time period for preexisting illness or comorbidity burden.

Because many events are rare, traditional measures used to assess model performance, such as the area under the receiver operator characteristic curve (C statistic), are not as helpful.[7] Consequently, much more emphasis needs to be given to 2 key metrics: number needed to evaluate (or workup to detection ratio) and threshold‐specific sensitivity (ability of the alert to detect X at a given threshold). With these, one can answer 3 questions that will be asked by the physicians and nurses who are not likely to be researchers, and who will have little interest in the statistics: How many patients do I need to work up each day? How many patients will I need to work up for each possible outcome identified? For this amount of work, how many of the possible outcomes will we catch?

Data availability for the study of severe and critical illness continues to expand. Practically, this means that future research will require more nuanced ontologies for the classification of physiologic derangement. Current approaches to severity scoring (collapsing data into composite scores) need to be replaced by dynamic approaches that consider differential effects on organ systems as well as what can be measured. Severity scoring will also need to incorporate the rate of change of a score (or probability derived from a score) in predicting the occurrence of an event of interest as well as judging response to treatment. Thus, instead of at time of ICU admission, the patient had a severity score of 76, we may have although this patient's severity score at the time of admission was decreasing by 4 points per hour per 10 mL/kg fluid given, the probability for respiratory instability was increasing by 2.3% per hour given 3 L/min supplemental oxygen. This approach is concordant with work done in other clinical settings (eg, in addition to an absolute value of maximal negative inspiratory pressure or vital capacity, the rate of deterioration of neuromuscular weakness in Guillain‐Barr syndrome is also important in predicting respiratory failure[8]).

Electronic data also could permit better definition of patient preferences regarding escalation of care. At present, available electronic data are limited (primarily, orders such as do not resuscitate).[9] However, this EMR domain is gradually expanding.[10, 11] Entities such as the National Institutes of Health could develop sophisticated and rapid questionnaires around patient preferences that are similar to those developed for the Patient Reported Outcomes Measurement Information System.[12] Such tools could have a significant effect on our ability to quantify the benefits of early detection as it relates to a patient's preferences (including better delineation of what treatments they would and would not want).

ACTIVATING A RESPONSE ARM

Early identification, antibiotic administration, fluid resuscitation, and source control are now widely felt to constitute low‐hanging fruit for decreasing morbidity and mortality in severe sepsis. All these measures are included in quality improvement programs and sepsis bundles.[13, 14, 15] However, before early interventions can be instituted, sepsis must at least be suspected, hence the need for early detection. The situation with respect to patient deterioration (for reasons other than sepsis) in general medical surgical wards is less clear‐cut. Reasons for deterioration are much more heterogenous and, consequently, early detection is likely necessary but not sufficient for outcomes improvement.

The 2 projects in this issue describe nonspecific (indicating elevated risk but not specifying what led to the elevation of risk) and sepsis-specific alerting systems. In the case of the nonspecific system, detection may not lead to an immediate deployment of a response arm. Instead, a secondary evaluation process must be triggered first. Following this evaluation component, a response arm may or may not be required. In contrast, the sepsis-specific project essentially transforms the general medical–surgical ward into a screening system. This screening system then also triggers specific bundle components.

Neither of these systems relies on unaided human cognition. In the case of the nonspecific system, a complex equation generates a probability that is displayed in the EMR, with protocols specifying what actions are to be taken when that probability exceeds a prespecified threshold. With respect to the sepsis screening system, clinicians are supported by EMR alerts as well as protocols that increase nursing autonomy when sepsis is suspected.
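
A stylized sketch of that threshold logic appears below; the threshold value and protocol actions are invented for illustration and do not come from either project.

    ALERT_THRESHOLD = 0.08  # hypothetical; in practice set from
                            # workup-to-detection targets

    def protocol_action(probability):
        """Map a displayed deterioration probability to a protocol step.
        Tiers and actions are illustrative, not those of the EWS pilot."""
        if probability >= ALERT_THRESHOLD:
            # Above threshold: trigger the secondary evaluation process
            return "structured bedside evaluation; escalate per protocol"
        return "routine monitoring; probability remains visible in the EMR"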

The distinction between nonspecific (eg, acute respiratory failure or hemodynamic deterioration) and specific (eg, severe sepsis) alerting systems is likely to disappear as advances in the field occur. For example, incorporation of natural language processing would permit inclusion of semantic data, which could be processed so as to "prebucket" an alert into one that gave not just a probability, but also a likely cause for the elevated probability.

In addition, both types of systems suffer from the limitation of working off a limited database because, in general, the primary focus of current textbooks and training programs remains the treatment of full-blown clinical syndromes. For example, little is known about how one should manage patients with intermediate lactate values, despite evidence showing that a significant percentage of patients who die from sepsis will initially have such values, with 1 study showing 63% as many deaths with an initial lactate of 2.5 to 4.0 mmol/L as occurred with an initial lactate of >4.0 mmol/L.[16] Lastly, as is discussed below, both systems will encounter similar problems when it comes to quantifying benefit.

QUANTIFYING BENEFIT

Whereas the notion of deploying RRTs has clearly taken hold, success in demonstrating unequivocal benefit remains elusive.[17, 18, 19] Outcome measures vary dramatically across studies and have included the number of RRT calls, decreases in code blue events on the ward, and decreases in inpatient mortality.[20] We suspect that 2 other factors are behind this problem. The first is the lack of adequate risk adjustment, in particular, failure to account for the impact of patients near the end of life on the denominator. Figure 4, which shows recent data from 21 Kaiser Permanente Northern California (KPNC) hospitals that can now capture care directive orders electronically,[21] illustrates this problem. The majority (53%) of hospital deaths occur among a highly variable proportion (range across hospitals, 6.5%–18.0%) of patients who arrive at the hospital with a restricted resuscitation preference (do not resuscitate, partial code, and comfort care only). These patients do not want to die or "crash and burn" but, were they to trigger an alert, they would not necessarily want to be rescued by being transferred to the ICU either; moreover, internal KPNC analyses show that large numbers of these patients have sepsis and refuse aggressive treatment. The second major confounder is that ICUs save lives. Consequently, although early detection could lead to fewer transfers to the ICU, using ICU admission as an end point is very problematic: in many cases the goal of alerting systems should be to get patients to the ICU sooner, which would not drive transfers downward; indeed, such systems might increase transfers to the ICU.

The complexities summarized in Figure 4 mean that formal quantification of benefit will likely require examination of multiple measures, including balancing measures as described below. It is also evident that, in this respect (the lack of agreement as to what constitutes a good outcome), the issues being faced here reflect a broader area of disagreement within our profession and society at large, one that extends to medical conditions other than critical illness.

POTENTIAL HARMS OF EARLY DETECTION

Implementation of early detection and rapid response systems is not inherently free of harm. If these systems are not shown to have benefit, then the cost of operating them diverts resources away from other, possibly evidence-based, interventions.[22] At the individual level, alerts could frighten patients and their families (for example, some people are very uncomfortable with the idea that one can predict events). Physicians and nurses who work in the hospital are already quite busy, so every alert adds to the demand on their limited time, hence the critical importance of strategies to minimize false alarms and alert fatigue. Moreover, altering existing workflows can be disruptive and unpopular.

A potentially more quantifiable problem is the impact of early detection systems on ICU operations. For example, if an RRT decides to transfer a patient from the ward to the ICU as a preventive measure (a "soft landing") and this in turn ties up an ICU bed, that bed is then unavailable for a new patient in the emergency department. Similarly, early detection systems coupled with structured protocols for promoting soft landings could change the ICU case mix, with greater patient flow resulting from increased numbers of lower-severity patients with shorter ICU lengths of stay. These considerations suggest the need to couple early detection with other supportive data systems and workflows (eg, systems that monitor bed capacity proactively).

Lastly, if documentation protocols are not established and followed, early detection systems could expose both individual clinicians and healthcare institutions to medical–legal risk. This consideration could be particularly important in those instances where an alert is issued and, for whatever reasons, clinicians do not take action and do not document that decision. At present, early detection systems are relatively uncommon, but they may gradually become the standard of care. This means that in-hospital deteriorations outside the ICU, which are generally considered to be "bad luck" or due to a specific error or oversight, may then be considered preventable. Another possible scenario is that of plaintiffs invoking enterprise liability, in which a hospital's failure to have an early detection system is deemed negligent.

ARTICLES IN THIS ISSUE

In this issue of the Journal of Hospital Medicine, we examine early detection from various perspectives but around a common theme that usually gets less attention in the academic literature: implementation. The article by Schorr et al.[23] describes a disease‐specific approach that can be instantiated using either electronic or paper tools. Escobar et al.[24] describe the quantitative as well as the electronic architecture of an early warning system (EWS) pilot at 2 hospitals that are part of an integrated healthcare delivery system. Dummett et al.[25] then show how a clinical rescue component was developed to take advantage of the EWS, whereas Granich et al.[26] describe the complementary component (integration of supportive care and ensuring that patient preferences are respected). The paper by Liu et al.[27] concludes by placing all of this work in a much broader context, that of the learning healthcare system.

FUTURE DIRECTIONS: KEY GAPS IN THE FIELD

Important gaps remain with respect to early detection and response systems. Future research will need to focus on a number of areas. First and foremost, better approaches to quantifying the cost–benefit relationships of these systems are needed; somehow, we need to move beyond a purely intuitive sense that they are good things. Related to this is the need to establish metrics that would permit rigorous comparisons between different approaches; this work needs to go beyond simple comparisons of the statistical characteristics of different predictive models. Ideally, it should include comparisons of different approaches for the response arms as well. We also need to characterize clinician understanding of detection systems, of what constitutes impending or incipient critical illness, and of the optimum way to provide early detection. Finally, better approaches to integrating health services research with basic science work must be developed; for example, how should one test new biomarkers in settings with early detection and response systems?

The most important frontier, however, is how one can make early detection and response systems more patient centered and enhance their ability to respect patient preferences. Developing systems to improve clinical management is laudable, but we also need to find ways to connect these systems to what patients want most and what matters most to them, which may require new approaches that, in some circumstances, suspend use of these systems. At the end of the day, after early detection, patients must have a care experience that they see as an unequivocal improvement.

Acknowledgements

The authors thank our 2 foundation program officers, Dr. Marybeth Sharpe and Ms. Kate Weiland, for their administrative support and encouragement. The authors also thank Dr. Tracy Lieu, Dr. Michelle Caughey, Dr. Philip Madvig, and Ms. Barbara Crawford for their administrative assistance, Dr. Vincent Liu for comments on the manuscript, and Ms. Rachel Lesser for her help with formatting the manuscript and figures.

Disclosures

This work was supported by the Gordon and Betty Moore Foundation, The Permanente Medical Group, Inc., and Kaiser Foundation Hospitals, Inc. As part of our agreement with the Moore Foundation, we made a commitment to disseminate our findings in articles such as this one. However, the Gordon and Betty Moore Foundation and its staff played no role in how we actually structured our articles, nor did they review or preapprove any of the manuscripts submitted as part of the dissemination component. None of the authors has any conflicts of interest to declare of relevance to this work.

References
  1. Hall MJ, Williams SN, DeFrances CJ, Golosinskiy A. Inpatient care for septicemia or sepsis: a challenge for patients and hospitals. NCHS Data Brief. 2011;(62):1–8.
  2. Levy MM, Rhodes A, Phillips GS, et al. Surviving Sepsis Campaign: association between performance metrics and outcomes in a 7.5-year study. Crit Care Med. 2015;43(1):3–12.
  3. Escobar GJ, LaGuardia J, Turk BJ, Ragins A, Kipnis P, Draper D. Early detection of impending physiologic deterioration among patients who are not in intensive care: development of predictive models using data from an automated electronic medical record. J Hosp Med. 2012;7(5):388–395.
  4. Kollef MH, Chen Y, Heard K, et al. A randomized trial of real-time automated clinical deterioration alerts sent to a rapid response team. J Hosp Med. 2014;9(7):424–429.
  5. Vazquez R, Gheorghe C, Grigoriyan A, Palvinskaya T, Amoateng-Adjepong Y, Manthous CA. Enhanced end-of-life care associated with deploying a rapid response team: a pilot study. J Hosp Med. 2009;4(7):449–452.
  6. Smith RL, Hayashi VN, Lee YI, Navarro-Mariazeta L, Felner K. The medical emergency team call: a sentinel event that triggers goals of care discussion. Crit Care Med. 2014;42(2):322–327.
  7. Romero-Brufau S, Huddleston JM, Escobar GJ, Liebow M. Why the C-statistic is not informative to evaluate early warning scores and what metrics to use. Crit Care. 2015;19:285.
  8. Lawn ND, Fletcher DD, Henderson RD, Wolter TD, Wijdicks EF. Anticipating mechanical ventilation in Guillain-Barré syndrome. Arch Neurol. 2001;58(6):893–898.
  9. Kim YS, Escobar GJ, Halpern SD, Greene JD, Kipnis P, Liu V. The natural history of changes in preferences for life-sustaining treatments and implications for inpatient mortality in younger and older hospitalized adults. J Am Geriatr Soc. 2016;64(5):981–989.
  10. Sargious A, Lee SJ. Remote collection of questionnaires. Clin Exp Rheumatol. 2014;32(5 suppl 85):S168–S172.
  11. Be prepared to make your health care wishes known. Health care directives. Allina Health website. Available at: http://www.allinahealth.org/Customer-Service/Be-prepared/Be-prepared-to-make-your-health-care-wishes-known. Accessed January 1, 2015.
  12. Patient Reported Outcomes Measurement Information System. Dynamic tools to measure health outcomes from the patient perspective. Available at: http://www.nihpromis.org. Accessed January 15, 2015.
  13. Schorr C, Cinel I, Townsend S, Ramsay G, Levy M, Dellinger RP. Methodology of the Surviving Sepsis Campaign global initiative for improving care of the patient with severe sepsis. Minerva Anestesiol. 2009;75(suppl 1):23–27.
  14. Marshall JC, Dellinger RP, Levy M. The Surviving Sepsis Campaign: a history and a perspective. Surg Infect (Larchmt). 2010;11(3):275–281.
  15. Schorr CA, Dellinger RP. The Surviving Sepsis Campaign: past, present and future. Trends Mol Med. 2014;20(4):192–194.
  16. Shapiro NI, Howell MD, Talmor D, et al. Serum lactate as a predictor of mortality in emergency department patients with infection. Ann Emerg Med. 2005;45(5):524–528.
  17. Sharek PJ, Parast LM, Leong K, et al. Effect of a rapid response team on hospital-wide mortality and code rates outside the ICU in a children's hospital. JAMA. 2007;298(19):2267–2274.
  18. Buist MD, Moore GE, Bernard SA, Waxman BP, Anderson JN, Nguyen TV. Effects of a medical emergency team on reduction of incidence of and mortality from unexpected cardiac arrests in hospital: preliminary study. BMJ. 2002;324(7334):387–390.
  19. Leach LS, Mayo AM. Rapid response teams: qualitative analysis of their effectiveness. Am J Crit Care. 2013;22(3):198–210.
  20. Chan PS, Khalid A, Longmore LS, Berg RA, Kosiborod M, Spertus JA. Hospital-wide code rates and mortality before and after implementation of a rapid response team. JAMA. 2008;300(21):2506–2513.
  21. Escobar GJ, Gardner M, Greene JG, Draper D, Kipnis P. Risk-adjusting hospital mortality using a comprehensive electronic record in an integrated healthcare delivery system. Med Care. 2013;51(5):446–453.
  22. Winters BD, Pham J, Pronovost PJ. Rapid response teams—walk, don't run. JAMA. 2006;296(13):1645–1647.
  23. Schorr et al. J Hosp Med. 2016;11:000–000.
  24. Escobar GJ, Turk BJ, Ragins A, et al. Piloting electronic medical record–based early detection of inpatient deterioration in community hospitals. J Hosp Med. 2016;11:000–000.
  25. Dummett et al. J Hosp Med. 2016;11:000–000.
  26. Granich et al. J Hosp Med. 2016;11:000–000.
  27. Liu VX, Morehouse JW, Baker JM, Greene JD, Kipnis P, Escobar GJ. Data that drive: closing the loop in the learning hospital system. J Hosp Med. 2016;11:000–000.
Issue
Journal of Hospital Medicine - 11(1)
Page Number
S5-S10
Display Headline
Early detection, prevention, and mitigation of critical illness outside intensive care settings
Article Source
© 2016 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Gabriel J. Escobar, MD, Systems Research Initiative, Kaiser Permanente Division of Research, Kaiser Permanente, 2000 Broadway Avenue, 032 R01, Oakland, CA 94612-2304; Telephone: 510-891-5929; Fax: 510-891-3606; E-mail: [email protected]