Why Do Lateral Unicompartmental Knee Arthroplasties Fail Today?
In 1975, Skolnick and colleagues1 introduced unicompartmental knee arthroplasty (UKA) for patients with isolated unicompartmental osteoarthritis (OA). They reported a series of 14 UKA procedures, of which 12 were performed in the medial and 2 in the lateral compartment. In the 40 years since its introduction, UKA has come to account for 8% to 12% of all knee arthroplasties.2-6 A minority of these procedures (5%-10%) are performed on the lateral side.6-8
The considerable anatomic and kinematic differences between the compartments9-14 make it impossible to compare outcomes of medial and lateral UKA directly. For example, the greater degree of femoral rollback and posterior translation on the lateral side in flexion9,10,13 can contribute to differences in the pattern and volume of cartilage wear.15 Because of these differences, as well as implant design factors and lower surgical volume, lateral UKA is considered technically more challenging than medial UKA.12,16,17
Since isolated lateral compartment OA is relatively scarce, the current literature on lateral UKA is limited, and most studies combine medial and lateral outcomes when reporting UKA outcomes and failure modes.3,4,18-20 However, as UKA has grown in popularity over the last decade,2,21-25 the number of reports on lateral UKA has also increased. Recent studies reported excellent short-term survivorship of lateral UKA (96%-99%),26,27 and smaller lateral UKA studies reported 10-year survivorship ranging from good (84%)14,28-30 to excellent (94%-100%).8,31,32 Indeed, a recent systematic review showed survivorship of lateral UKA at 5, 10, and 15 years of 93%, 91%, and 89%, respectively.33
Because of the differences between the medial and lateral compartment, it is important to know the failure modes of lateral UKA in order to improve clinical outcomes and revision rates. We performed a systematic review of cohort studies and registry-based studies that reported lateral UKA failures to assess the causes of failure. In addition, we compared the failure modes found in cohort studies with those found in registry-based studies.
Patients and Methods
Search Strategy and Criteria
The PubMed, Embase, and Cochrane (Cochrane Central Register of Clinical Trials) databases were searched with the terms “knee, arthroplasty, replacement,” “unicompartmental,” “unicondylar,” “partial,” “UKA,” “UKR,” “UCA,” “UCR,” “PKA,” “PKR,” “PCA,” “prosthesis failure,” “reoperation,” “survivorship,” and “treatment failure.” After removal of duplicates, 2 authors (JPvdL and HAZ) screened the titles and abstracts of the articles to assess eligibility for the study.
Inclusion criteria were: (I) English language articles describing studies in humans published in the last 25 years, (II) retrospective and prospective studies, (III) featured lateral UKA, (IV) OA was the indication for surgery, and (V) included failure modes data. The exclusion criteria were studies that featured: (I) only a specific failure mode (eg, bearing dislocations only), (II) previous surgery in the ipsilateral knee (high tibial osteotomy, medial UKA), (III) acute concurrent knee diagnoses (acute anterior cruciate ligament rupture, acute meniscal tear), (IV) combined reporting of medial and lateral UKA, or (V) multiple studies with the same patient database.
Data Collection
All studies that reported modes of failure were included, and these failure modes were recorded in a Microsoft Excel 2011 datasheet (Microsoft).
Statistical Analysis
For this systematic review, statistical analysis was performed with IBM SPSS Statistics 22 (SPSS Inc.). We performed chi-square tests and Fisher’s exact tests to assess differences between cohort studies and registry-based studies, with the null hypothesis of no difference between the groups. A difference was considered significant at P < .05.
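For illustration, the 2 × 2 chi-square test used for these comparisons can be sketched in a few lines of Python. This is not the SPSS analysis itself but a minimal standard-library re-implementation; the example counts are back-calculated from the reported percentages rather than taken from the original datasheet.

```python
import math

def chi2_2x2(table, yates=True):
    """Chi-square test for a 2x2 contingency table [[a, b], [c, d]]
    with 1 degree of freedom; returns (statistic, P value).
    For 1 df, P = erfc(sqrt(chi2 / 2))."""
    (a, b), (c, d) = table
    n = a + b + c + d
    expected = [[(a + b) * (a + c) / n, (a + b) * (b + d) / n],
                [(c + d) * (a + c) / n, (c + d) * (b + d) / n]]
    observed = [[a, b], [c, d]]
    stat = 0.0
    for i in range(2):
        for j in range(2):
            diff = abs(observed[i][j] - expected[i][j])
            if yates:  # continuity correction, standard for 2x2 tables
                diff = max(diff - 0.5, 0.0)
            stat += diff * diff / expected[i][j]
    return stat, math.erfc(math.sqrt(stat / 2))

# Example: bearing dislocation in cohort studies (~26 of 155 failures)
# vs registry-based studies (~11 of 211 failures); counts approximate.
stat, p = chi2_2x2([[26, 155 - 26], [11, 211 - 11]])
print(f"chi2 = {stat:.2f}, P = {p:.4f}")  # P < .01
```

When any expected cell count is small (conventionally below 5), Fisher’s exact test, as named in the Methods, would be used instead of this chi-square approximation.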
Results
The database search identified 1294 studies, and 26 hand-searched studies were added. Based on title and abstract, 184 of these studies were initially found eligible.
A total of 366 lateral UKA failures were included. The most common failure modes were progression of OA (29%), aseptic loosening (23%), and bearing dislocation (10%). Infection (6%), instability (6%), unexplained pain (6%), and fractures (4%) were less common causes of failure of lateral UKA (Table 2).
One hundred fifty-five of these failures were reported in the cohort studies. The most common modes of failure were OA progression (36%), bearing dislocation (17%), and aseptic loosening (16%). Less common were infection (10%), fractures (5%), pain (5%), and other causes (6%). In registry-based studies, with 211 lateral UKA failures, the most common modes of failure were aseptic loosening (28%), OA progression (24%), other causes (12%), instability (10%), pain (7%), bearing dislocation (5%), and polyethylene wear (4%) (Table 2).
When pooling cohort and registry-based studies, progression of OA was significantly more common than aseptic loosening (29% vs 23%, respectively; P < .01). Progression of OA was also significantly more common than aseptic loosening in the cohort studies (36% vs 16%, respectively; P < .01), but no significant difference between the two was found in registry-based studies (24% vs 28%, respectively; P = .16) (Table 2).
When comparing cohort with registry-based studies, progression of OA was more common in cohort studies (36% vs 24%, respectively; P < .01). Other failure modes that were more common in cohort studies than in registry-based studies were bearing dislocation (17% vs 5%, respectively; P < .01) and infection (10% vs 3%, respectively; P < .01). Failure modes that were more common in registry-based studies were aseptic loosening (28% vs 16%, respectively; P < .01), other causes (12% vs 6%, respectively; P = .02), and instability (10% vs 1%, respectively; P < .01) (Table 2).
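As a rough check, these cohort-versus-registry comparisons can be reproduced from the reported percentages. The sketch below assumes SciPy is available and reconstructs approximate counts from the percentages and group sizes (155 and 211 failures); the function name and the count reconstruction are ours, not the authors', and the exact per-mode counts are those in Table 2.

```python
from scipy.stats import chi2_contingency

N_COHORT, N_REGISTRY = 155, 211  # lateral UKA failures per study type

def mode_p(pct_cohort, pct_registry):
    """P value (Yates-corrected chi-square) for one failure mode,
    with counts back-calculated from the reported percentages."""
    a = round(N_COHORT * pct_cohort / 100)
    b = round(N_REGISTRY * pct_registry / 100)
    _, p, _, _ = chi2_contingency([[a, N_COHORT - a],
                                   [b, N_REGISTRY - b]])
    return p

# (cohort %, registry %) pairs as reported in the Results
for mode, pcts in {"OA progression": (36, 24),
                   "aseptic loosening": (16, 28),
                   "bearing dislocation": (17, 5),
                   "infection": (10, 3),
                   "instability": (1, 10)}.items():
    print(f"{mode}: P = {mode_p(*pcts):.3f}")
```

Each of these comparisons comes out significant at P < .05 even with the continuity correction, consistent with the pattern reported above.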
Discussion
In this systematic review, the most common failure modes in lateral UKA were OA progression (29%), aseptic loosening (23%), and bearing dislocation (10%). Progression of OA and bearing dislocation were the most common modes of failure in cohort studies (36% and 17%, respectively), while aseptic loosening and OA progression were the most common failure modes in registry-based studies (28% and 24%, respectively).
As mentioned above, there are differences in anatomy and kinematics between the medial and lateral compartment. When the lateral UKA failure modes are compared with those in studies reporting medial UKA failure modes, differences are seen.34 Siddiqui and Ahmad35 performed a systematic review of outcomes after UKA revision and presented a table with the failure modes of the included studies. Unfortunately, they did not report the ratio of medial to lateral UKA. However, assuming an average of 90% to 95% medial UKA,6,7,36 the main failure mode in their review was aseptic loosening, in 17 of 21 studies. Indeed, a recent systematic review of medial UKA failure modes showed that aseptic loosening is the most common cause of failure following this procedure.34 Similarly, a search through registry-based studies6,7 and large cohort studies37-40 that reported only medial UKA failures showed that the majority of these studies7,37-39 also reported aseptic loosening as the main cause of failure in medial UKA. Comparing the results of our systematic review of lateral UKA failures with the results of these studies of medial UKA failures, OA progression seems to play a more dominant role in failures of lateral UKA, while aseptic loosening seems to be more common in medial UKA.
Differences in anatomy and kinematics of the medial and lateral compartment can explain this. Malalignment of the joint is an important factor in the etiology of OA,41,42 and biomechanical studies showed that this malalignment can cause decreased viability and further degenerative changes of knee cartilage.43 Hernigou and Deschamps44 showed that the alignment of the knee after medial UKA is an important factor in postoperative joint changes. They found that overcorrection of varus deformity during medial UKA surgery, measured by the hip-knee-ankle (HKA) angle, was associated with increased OA at the lateral condyle and less tibial wear of the medial UKA. Undercorrection of the varus deformity increased polyethylene tibial wear. Chatellard and colleagues45 found similar results for varus correction, measured by the HKA angle. In addition, they found that a prosthetic (medial) joint space smaller than the healthy (lateral) joint space was correlated with lower prosthesis survival. A smaller joint space at the healthy side was correlated with OA progression at the lateral compartment and tibial component wear.
These studies explain the mechanism of progression of OA and aseptic loosening. Harrington46 assessed the load in patients with valgus and varus deformity. Patients with a valgus deformity have high mechanical load on the lateral condyle during the static phase, but during the dynamic phase, a major part of this load shifts to the medial condyle. In the patients with varus deformity, the mechanical load was noted on the medial condyle during both the static and dynamic phase. Ohdera and colleagues47 advised, based on this biomechanical study and their own experiences, to correct the knee during lateral UKA to a slight valgus angle (5°-7°) to prevent OA progression at the medial side. van der List and colleagues48 similarly showed that undercorrection of 3° to 7° was correlated with better functional outcomes when compared to more neutral alignment. Moreover, Khamaisy and colleagues49 recently showed that overcorrection during UKA surgery is more common in lateral than medial UKA.
These studies help explain why OA progression is more common as a failure mode in lateral UKA. The shift of mechanical load from the lateral to the medial condyle during the dynamic phase could also explain why aseptic loosening is less common in lateral UKA. As Hernigou and Deschamps44 and Chatellard and colleagues45 stated, undercorrection of varus deformity in medial UKA is associated with higher mechanical load on the medial (prosthesis) side and smaller joint space width, factors that are correlated with mechanical failure of medial UKA. We think this process also applies to lateral UKA, with the addition that during the dynamic phase the mechanical load is higher on the healthy medial compartment. Thus, in lateral UKA more force is placed on the healthy (medial) side, whereas in medial UKA more force is placed on the prosthesis (medial) side, resulting in more OA progression in lateral UKA and more aseptic loosening in medial UKA. This is consistent with our finding of more OA progression and less aseptic loosening in lateral UKA. It also suggests that medial and lateral UKA should not be reported together in studies presenting survivorship, failure modes, or clinical outcomes.
A large discrepancy in bearing dislocation was seen between cohort studies (17%) and registry-based studies (5%). On closer inspection, most of the bearing dislocation failures in the cohort studies were reported in only 2 studies.50,51 In the study by Pandit and colleagues,50 3 different prosthesis designs were used in 3 different time periods. In the first series of lateral UKA (1983-1991), 6 of 51 bearings (12%) dislocated. In the second series (1998-2004), a modified technique was used and 3 of 65 bearings (5%) dislocated. In the third series (2004-2008), a modified technique and a domed tibial component were used and only 1 of 68 bearings (1%) dislocated. In a study published in 1996, Gunther and colleagues51 also used surgical techniques and implants that were modified over the course of the study period. Because of these modified techniques, different implant designs, and the age of these reports, bearing dislocation most likely plays a smaller role than the 17% reported in the cohort studies suggests. This discrepancy is a good example of the important role of registries and registry-based studies in reporting failure modes and survivorship, especially in lateral UKA given its low surgical frequency. Pabinger and colleagues52 recently performed a systematic review of cohort studies and registry-based studies in which they stated that the reliability of non-registry-based studies should be questioned, and they considered registry-based studies superior in reporting UKA outcomes and revision rates. Furthermore, given the anatomic and kinematic differences between the medial and lateral compartments and the different failure modes of medial and lateral UKA, it would be better if future studies presented medial and lateral failures separately. As stated above, most large cohort studies, and especially annual registries, currently do not report modes of failure of medial and lateral UKA separately.3,4,18-20
There are limitations to this study. First, this systematic review is not a full meta-analysis but a pooled analysis of collected case series and retrospective studies. Therefore, we cannot exclude sampling bias, confounders, or selection bias in the literature. We included all studies reporting failure modes of lateral UKA and excluded all case reports. We deliberately included all lateral UKA failures because this is the first systematic review of lateral UKA failure modes. Another limitation is that the follow-up period of the studies differed (Table 1) and we did not correct for follow-up. As noted in the example of bearing dislocations, some studies reported older or different techniques, while other, more recently published studies used modified techniques.11,29,53-56 Unfortunately, most studies did not report the time of arthroplasty survival, and therefore we could not correct for the follow-up period.
In conclusion, progression of OA is the most common failure mode in lateral UKA, followed by aseptic loosening. Anatomic and kinematic factors, such as alignment, mechanical forces during the dynamic phase, and correction of the valgus deformity, seem to play important roles in the failure modes of lateral UKA. In the future, failure modes of medial and lateral UKA should be reported separately.
Am J Orthop. 2016;45(7):432-438, 462. Copyright Frontline Medical Communications Inc. 2016. All rights reserved.
1. Skolnick MD, Bryan RS, Peterson LFA. Unicompartmental polycentric knee arthroplasty. Description and preliminary results. Clin Orthop Relat Res. 1975;(112):208-214.
2. Riddle DL, Jiranek WA, McGlynn FJ. Yearly Incidence of Unicompartmental Knee Arthroplasty in the United States. J Arthroplasty. 2008;23(3):408-412.
3. Australian Orthopaedic Association. Hip and Knee Arthroplasty 2014 Annual Report. https://aoanjrr.sahmri.com/documents/10180/172286/Annual%20Report%202014. Accessed June 3, 2015.
4. Swedish Knee Arthroplasty Register. 2013 Annual Report. http://myknee.se/pdf/SKAR2013_Eng.pdf. Accessed June 3, 2015.
5. The New Zealand Joint Registry. Fourteen Year Report. January 1999 to December 2012. 2013. http://nzoa.org.nz/system/files/NJR 14 Year Report.pdf. Accessed June 3, 2015.
6. Baker PN, Jameson SS, Deehan DJ, Gregg PJ, Porter M, Tucker K. Mid-term equivalent survival of medial and lateral unicondylar knee replacement: an analysis of data from a National Joint Registry. J Bone Joint Surg Br. 2012;94(12):1641-1648.
7. Lewold S, Robertsson O, Knutson K, Lidgren L. Revision of unicompartmental knee arthroplasty: outcome in 1,135 cases from the Swedish Knee Arthroplasty study. Acta Orthop Scand. 1998;69(5):469-474.
8. Pennington DW, Swienckowski JJ, Lutes WB, Drake GN. Lateral unicompartmental knee arthroplasty: survivorship and technical considerations at an average follow-up of 12.4 years. J Arthroplasty. 2006;21(1):13-17.
9. Hill PF, Vedi V, Williams A, Iwaki H, Pinskerova V, Freeman MA. Tibiofemoral movement 2: the loaded and unloaded living knee studied by MRI. J Bone Joint Surg Br. 2000;82(8):1196-1198.
10. Nakagawa S, Kadoya Y, Todo S, et al. Tibiofemoral movement 3: full flexion in the living knee studied by MRI. J Bone Joint Surg Br. 2000;82(8):1199-1200.
11. Ashraf T, Newman JH, Evans RL, Ackroyd CE. Lateral unicompartmental knee replacement survivorship and clinical experience over 21 years. J Bone Joint Surg Br. 2002;84(8):1126-1130.
12. Scott RD. Lateral unicompartmental replacement: a road less traveled. Orthopedics. 2005;28(9):983-984.
13. Sah AP, Scott RD. Lateral unicompartmental knee arthroplasty through a medial approach. Study with an average five-year follow-up. J Bone Joint Surg Am. 2007;89(9):1948-1954.
14. Argenson JN, Parratte S, Bertani A, Flecher X, Aubaniac JM. Long-term results with a lateral unicondylar replacement. Clin Orthop Relat Res. 2008;466(11):2686-2693.
15. Weidow J, Pak J, Karrholm J. Different patterns of cartilage wear in medial and lateral gonarthrosis. Acta Orthop Scand. 2002;73(3):326-329.
16. Ollivier M, Abdel MP, Parratte S, Argenson JN. Lateral unicondylar knee arthroplasty (UKA): contemporary indications, surgical technique, and results. Int Orthop. 2014;38(2):449-455.
17. Demange MK, Von Keudell A, Probst C, Yoshioka H, Gomoll AH. Patient-specific implants for lateral unicompartmental knee arthroplasty. Int Orthop. 2015;39(8):1519-1526.
18. Khan Z, Nawaz SZ, Kahane S, Esler C, Chatterji U. Conversion of unicompartmental knee arthroplasty to total knee arthroplasty: the challenges and need for augments. Acta Orthop Belg. 2013;79(6):699-705.
19. Epinette JA, Brunschweiler B, Mertl P, et al. Unicompartmental knee arthroplasty modes of failure: wear is not the main reason for failure: a multicentre study of 418 failed knees. Orthop Traumatol Surg Res. 2012;98(6 Suppl):S124-S130.
20. Bordini B, Stea S, Falcioni S, Ancarani C, Toni A. Unicompartmental knee arthroplasty: 11-year experience from 3929 implants in RIPO register. Knee. 2014;21(6):1275-1279.
21. Bolognesi MP, Greiner MA, Attarian DE, et al. Unicompartmental knee arthroplasty and total knee arthroplasty among Medicare beneficiaries, 2000 to 2009. J Bone Joint Surg Am. 2013;95(22):e174.
22. Nwachukwu BU, McCormick FM, Schairer WW, Frank RM, Provencher MT, Roche MW. Unicompartmental knee arthroplasty versus high tibial osteotomy: United States practice patterns for the surgical treatment of unicompartmental arthritis. J Arthroplasty. 2014;29(8):1586-1589.
23. van der List JP, Chawla H, Pearle AD. Robotic-assisted knee arthroplasty: an overview. Am J Orthop. 2016;45(4):202-211.
24. van der List JP, Chawla H, Joskowicz L, Pearle AD. Current state of computer navigation and robotics in unicompartmental and total knee arthroplasty: a systematic review with meta-analysis. Knee Surg Sports Traumatol Arthrosc. 2016 Sep 6. [Epub ahead of print]
25. Zuiderbaan HA, van der List JP, Kleeblad LJ, et al. Modern indications, results and global trends in the use of unicompartmental knee arthroplasty and high tibial osteotomy for the treatment of medial unicondylar knee osteoarthritis. Am J Orthop. 2016;45(6):E355-E361.
26. Smith JR, Robinson JR, Porteous AJ, et al. Fixed bearing lateral unicompartmental knee arthroplasty--short to midterm survivorship and knee scores for 101 prostheses. Knee. 2014;21(4):843-847.
27. Berend KR, Kolczun MC 2nd, George JW Jr, Lombardi AV Jr. Lateral unicompartmental knee arthroplasty through a lateral parapatellar approach has high early survivorship. Clin Orthop Relat Res. 2012;470(1):77-83.
28. Keblish PA, Briard JL. Mobile-bearing unicompartmental knee arthroplasty: a 2-center study with an 11-year (mean) follow-up. J Arthroplasty. 2004;19(7 Suppl 2):87-94.
29. Bertani A, Flecher X, Parratte S, Aubaniac JM, Argenson JN. Unicompartmental-knee arthroplasty for treatment of lateral gonarthrosis: about 30 cases. Midterm results. Rev Chir Orthop Reparatrice Appar Mot. 2008;94(8):763-770.
30. Sebilo A, Casin C, Lebel B, et al. Clinical and technical factors influencing outcomes of unicompartmental knee arthroplasty: Retrospective multicentre study of 944 knees. Orthop Traumatol Surg Res. 2013;99(4 Suppl):S227-S234.
31. Cartier P, Khefacha A, Sanouiller JL, Frederick K. Unicondylar knee arthroplasty in middle-aged patients: A minimum 5-year follow-up. Orthopedics. 2007;30(8 Suppl):62-65.
32. Lustig S, Paillot JL, Servien E, Henry J, Ait Si Selmi T, Neyret P. Cemented all polyethylene tibial insert unicompartimental knee arthroplasty: a long term follow-up study. Orthop Traumatol Surg Res. 2009;95(1):12-21.
33. van der List JP, McDonald LS, Pearle AD. Systematic review of medial versus lateral survivorship in unicompartmental knee arthroplasty. Knee. 2015;22(6):454-460.
34. van der List JP, Zuiderbaan HA, Pearle AD. Why do medial unicompartmental knee arthroplasties fail today? J Arthroplasty. 2016;31(5):1016-1021.
35. Siddiqui NA, Ahmad ZM. Revision of unicondylar to total knee arthroplasty: a systematic review. Open Orthop J. 2012;6:268-275.
36. Pennington DW, Swienckowski JJ, Lutes WB, Drake GN. Lateral unicompartmental knee arthroplasty: survivorship and technical considerations at an average follow-up of 12.4 years. J Arthroplasty. 2006;21(1):13-17.
37. Kalra S, Smith TO, Berko B, Walton NP. Assessment of radiolucent lines around the Oxford unicompartmental knee replacement: sensitivity and specificity for loosening. J Bone Joint Surg Br. 2011;93(6):777-781.
38. Wynn Jones H, Chan W, Harrison T, Smith TO, Masonda P, Walton NP. Revision of medial Oxford unicompartmental knee replacement to a total knee replacement: similar to a primary? Knee. 2012;19(4):339-343.
39. Sierra RJ, Kassel CA, Wetters NG, Berend KR, Della Valle CJ, Lombardi AV. Revision of unicompartmental arthroplasty to total knee arthroplasty: not always a slam dunk! J Arthroplasty. 2013;28(8 Suppl):128-132.
40. Citak M, Dersch K, Kamath AF, Haasper C, Gehrke T, Kendoff D. Common causes of failed unicompartmental knee arthroplasty: a single-centre analysis of four hundred and seventy one cases. Int Orthop. 2014;38(5):961-965.
41. Hunter DJ, Wilson DR. Role of alignment and biomechanics in osteoarthritis and implications for imaging. Radiol Clin North Am. 2009;47(4):553-566.
42. Hunter DJ, Sharma L, Skaife T. Alignment and osteoarthritis of the knee. J Bone Joint Surg Am. 2009;91 Suppl 1:85-89.
43. Roemhildt ML, Beynnon BD, Gauthier AE, Gardner-Morse M, Ertem F, Badger GJ. Chronic in vivo load alteration induces degenerative changes in the rat tibiofemoral joint. Osteoarthritis Cartilage. 2013;21(2):346-357.
44. Hernigou P, Deschamps G. Alignment influences wear in the knee after medial unicompartmental arthroplasty. Clin Orthop Relat Res. 2004;(423):161-165.
45. Chatellard R, Sauleau V, Colmar M, et al. Medial unicompartmental knee arthroplasty: does tibial component position influence clinical outcomes and arthroplasty survival? Orthop Traumatol Surg Res. 2013;99(4 Suppl):S219-S225.
46. Harrington IJ. Static and dynamic loading patterns in knee joints with deformities. J Bone Joint Surg Am. 1983;65(2):247-259.
47. Ohdera T, Tokunaga J, Kobayashi A. Unicompartmental knee arthroplasty for lateral gonarthrosis: midterm results. J Arthroplasty. 2001;16(2):196-200.
48. van der List JP, Chawla H, Villa JC, Zuiderbaan HA, Pearle AD. Early functional outcome after lateral UKA is sensitive to postoperative lower limb alignment. Knee Surg Sports Traumatol Arthrosc. 2015 Nov 26. [Epub ahead of print]
49. Khamaisy S, Gladnick BP, Nam D, Reinhardt KR, Heyse TJ, Pearle AD. Lower limb alignment control: Is it more challenging in lateral compared to medial unicondylar knee arthroplasty? Knee. 2015;22(4):347-350.
50. Pandit H, Jenkins C, Beard DJ, et al. Mobile bearing dislocation in lateral unicompartmental knee replacement. Knee. 2010;17(6):392-397.
51. Gunther TV, Murray DW, Miller R, et al. Lateral unicompartmental arthroplasty with the Oxford meniscal knee. Knee. 1996;3(1):33-39.
52. Pabinger C, Lumenta DB, Cupak D, Berghold A, Boehler N, Labek G. Quality of outcome data in knee arthroplasty: Comparison of registry data and worldwide non-registry studies from 4 decades. Acta Orthopaedica. 2015;86(1):58-62.
53. Lustig S, Elguindy A, Servien E, et al. 5- to 16-year follow-up of 54 consecutive lateral unicondylar knee arthroplasties with a fixed-all polyethylene bearing. J Arthroplasty. 2011;26(8):1318-1325.
54. Walton MJ, Weale AE, Newman JH. The progression of arthritis following lateral unicompartmental knee replacement. Knee. 2006;13(5):374-377.
55. Lustig S, Lording T, Frank F, Debette C, Servien E, Neyret P. Progression of medial osteoarthritis and long term results of lateral unicompartmental arthroplasty: 10 to 18 year follow-up of 54 consecutive implants. Knee. 2014;21(S1):S26-S32.
56. O’Rourke MR, Gardner JJ, Callaghan JJ, et al. Unicompartmental knee replacement: a minimum twenty-one-year followup, end-result study. Clin Orthop Relat Res. 2005;440:27-37.
57. Citak M, Cross MB, Gehrke T, Dersch K, Kendoff D. Modes of failure and revision of failed lateral unicompartmental knee arthroplasties. Knee. 2015;22(4):338-340.
58. Liebs TR, Herzberg W. Better quality of life after medial versus lateral unicondylar knee arthroplasty. Clin Orthop Relat Res. 2013;471(8):2629-2640.
59. Weston-Simons JS, Pandit H, Kendrick BJ, et al. The mid-term outcomes of the Oxford Domed Lateral unicompartmental knee replacement. Bone Joint J. 2014;96-B(1):59-64.
60. Thompson SA, Liabaud B, Nellans KW, Geller JA. Factors associated with poor outcomes following unicompartmental knee arthroplasty: redefining the “classic” indications for surgery. J Arthroplasty. 2013;28(9):1561-1564.
61. Saxler G, Temmen D, Bontemps G. Medium-term results of the AMC-unicompartmental knee arthroplasty. Knee. 2004;11(5):349-355.
62. Forster MC, Bauze AJ, Keene GCR. Lateral unicompartmental knee replacement: Fixed or mobile bearing? Knee Surg Sports Traumatol Arthrosc. 2007;15(9):1107-1111.
63. Streit MR, Walker T, Bruckner T, et al. Mobile-bearing lateral unicompartmental knee replacement with the Oxford domed tibial component: an independent series. J Bone Joint Surg Br. 2012;94(10):1356-1361.
64. Altuntas AO, Alsop H, Cobb JP. Early results of a domed tibia, mobile bearing lateral unicompartmental knee arthroplasty from an independent centre. Knee. 2013;20(6):466-470.
65. Ashraf T, Newman JH, Desai VV, Beard D, Nevelos JE. Polyethylene wear in a non-congruous unicompartmental knee replacement: a retrieval analysis. Knee. 2004;11(3):177-181.
66. Schelfaut S, Beckers L, Verdonk P, Bellemans J, Victor J. The risk of bearing dislocation in lateral unicompartmental knee arthroplasty using a mobile biconcave design. Knee Surg Sports Traumatol Arthrosc. 2013;21(11):2487-2494.
67. Marson B, Prasad N, Jenkins R, Lewis M. Lateral unicompartmental knee replacements: Early results from a District General Hospital. Eur J Orthop Surg Traumatol. 2014;24(6):987-991.
68. Walker T, Gotterbarm T, Bruckner T, Merle C, Streit MR. Total versus unicompartmental knee replacement for isolated lateral osteoarthritis: a matched-pairs study. Int Orthop. 2014;38(11):2259-2264.
Differences in anatomy and kinematics of the medial and lateral compartment can explain this. Malalignment of the joint is an important factor in the etiology of OA41,42 and biomechanical studies showed that this malalignment can cause decreased viability and further degenerative changes of cartilage of the knee.43 Hernigou and Deschamps44 showed that the alignment of the knee after medial UKA is an important factor in postoperative joint changes. They found that overcorrection of varus deformity during medial UKA surgery, measured by the hip-knee-ankle (HKA) angle, was associated with increased OA at the lateral condyle and less tibial wear of the medial UKA. Undercorrection of the varus caused an increase in tibial wear of polyethylene. Chatellard and colleagues45 found the same results in the correction of varus, measured by HKA. In addition, they found that when the prosthetic (medial) joint space was smaller than healthy (lateral) joint space, this was correlated with lower prosthesis survival. A smaller joint space at the healthy side was correlated with OA progression at the lateral compartment and tibial component wear.
These studies explain the mechanism of progression of OA and aseptic loosening. Harrington46 assessed the load in patients with valgus and varus deformity. Patients with a valgus deformity have high mechanical load on the lateral condyle during the static phase, but during the dynamic phase, a major part of this load shifts to the medial condyle. In the patients with varus deformity, the mechanical load was noted on the medial condyle during both the static and dynamic phase. Ohdera and colleagues47 advised, based on this biomechanical study and their own experiences, to correct the knee during lateral UKA to a slight valgus angle (5°-7°) to prevent OA progression at the medial side. van der List and colleagues48 similarly showed that undercorrection of 3° to 7° was correlated with better functional outcomes when compared to more neutral alignment. Moreover, Khamaisy and colleagues49 recently showed that overcorrection during UKA surgery is more common in lateral than medial UKA.
These studies are important to understanding why OA progression is more common as a failure mode in lateral UKA. The shift of mechanical load from the lateral to medial epicondyle during the dynamic phase also could explain why aseptic loosening is less common in lateral UKA. As Hernigou and Deschamps44 and Chatellard and colleagues45 stated, undercorrection of varus deformity in medial UKA is associated with higher mechanical load on the medial prosthesis side and smaller joint space width. These factors are correlated with mechanical failure of medial UKA. We think this process can be applied to lateral UKA, with the addition that the mechanical load is higher on the healthy medial compartment during the dynamic phase. This causes more forces on the healthy (medial) side in lateral UKA, and in medial UKA more forces on the prosthesis (medial) side, which results in more OA progression in lateral UKA and more aseptic loosening in medial UKA. This finding is consistent with the results of our review of more OA progression and less aseptic loosening in lateral UKA. This study also suggests that medial and lateral UKA should not be reported together in studies that present survivorship, failure modes, or clinical outcomes.
A large discrepancy was seen in bearing dislocation between cohort studies (17%) and registry-based studies (5%). When we take a closer look to the bearing dislocation failures in the cohort studies, most of the failures were reported in only 2 cohort studies.50,51 In a study by Pandit and colleagues,50 3 different prosthesis designs were used in 3 different time periods. In the first series of lateral UKA (1983-1991), 6 out of 51 (12%) bearings dislocated. In the second series (1998-2004), a modified technique was used and 3 out of 65 (5%) bearings dislocated. In the third series (2004-2008), a modified technique and a domed tibial component was used and only 1 out of 68 bearings dislocated (1%). In a study published in 1996, Gunther and colleagues51 also used surgical techniques and implants that were modified over the course of the study period. Because of these modified techniques, different implant designs, and year of publication, bearing dislocation most likely plays a smaller role than the 17% reported in the cohort studies. This discrepancy is a good example of the important role for the registries and registry-based studies in reporting failure modes and survivorship, especially in lateral UKA due to the low surgical frequency. Pabinger and colleagues52 recently performed a systematic review of cohort studies and registry-based studies in which they stated that the reliability in non-registry-based studies should be questioned and they considered registry-based studies superior in reporting UKA outcomes and revision rates. Furthermore, given the differences in anatomic and kinematic differences between the medial and lateral compartment and different failure modes between medial and lateral UKA, it would be better if future studies presented the medial and lateral failures separately. As stated above, most large cohort studies and especially annual registries currently do not report modes of failure of medial and lateral UKA separately.3,4,18-20
There are limitations in this study. First, this systematic review is not a full meta-analysis but a pooled analysis of collected study series and retrospective studies. Therefore, we cannot exclude sampling bias, confounders, and selection bias from the literature. We included all studies reporting failure modes of lateral UKA and excluded all case reports. We made a conscious choice about including all lateral UKA failures because this is the first systematic review of lateral UKA failure modes. Another limitation is that the follow-up period of the studies differed (Table 1) and we did not correct for the follow-up period. As stated in the example of bearing dislocations, some of these studies reported old or different techniques, while other, more recently published studies used more modified techniques11,29,53-56 Unfortunately, most studies did not report the time of arthroplasty survival and therefore we could not correct for the follow-up period.
In conclusion, progression of OA is the most common failure mode in lateral UKA, followed by aseptic loosening. Anatomic and kinematic factors such as alignment, mechanical forces during dynamic phase, and correction of valgus seem to play important roles in failure modes of lateral UKA. In the future, failure modes of medial and lateral UKA should be reported separately.
Am J Orthop. 2016;45(7):432-438, 462. Copyright Frontline Medical Communications Inc. 2016. All rights reserved.
Patients and Methods
Search Strategy and Criteria
Databases of PubMed, Embase, and Cochrane (Cochrane Central Register of Clinical Trials) were searched with the terms “knee, arthroplasty, replacement,” “unicompartmental,” “unicondylar,” “partial,” “UKA,” “UKR,” “UCA,” “UCR,” “PKA,” “PKR,” “PCA,” “prosthesis failure,” “reoperation,” “survivorship,” and “treatment failure.” After removal of duplicates, 2 authors (JPvdL and HAZ) scanned the articles for their title and abstract to assess eligibility for the study.
Inclusion criteria were: (I) English-language articles describing studies in humans published in the last 25 years, (II) retrospective and prospective studies, (III) featured lateral UKA, (IV) OA was the indication for surgery, and (V) included failure-mode data. The exclusion criteria were studies that featured: (I) only a specific failure mode (eg, bearing dislocations only), (II) previous surgery in the ipsilateral knee (high tibial osteotomy, medial UKA), (III) acute concurrent knee diagnoses (acute anterior cruciate ligament rupture, acute meniscal tear), (IV) combined reporting of medial and lateral UKA, or (V) multiple studies with the same patient database.
Data Collection
Failure modes from all included studies were recorded in a datasheet in Microsoft Excel 2011 (Microsoft).
Statistical Analysis
Statistical analysis was performed with IBM SPSS Statistics 22 (SPSS Inc.). We performed chi-square tests and Fisher's exact tests to assess differences between cohort studies and registry-based studies, with the null hypothesis of no difference between the groups. A difference was considered significant when P < .05.
Results
The database search identified 1294 studies, and 26 hand-searched studies were added. Based on title and abstract, 184 of these studies were initially found eligible.
A total of 366 lateral UKA failures were included. The most common failure modes were progression of OA (29%), aseptic loosening (23%), and bearing dislocation (10%). Infection (6%), instability (6%), unexplained pain (6%), and fractures (4%) were less common causes of failure of lateral UKA (Table 2).
One hundred fifty-five of these failures were reported in cohort studies. The most common modes of failure were OA progression (36%), bearing dislocation (17%), and aseptic loosening (16%). Less common were infection (10%), fractures (5%), pain (5%), and other causes (6%). In registry-based studies, which reported 211 lateral UKA failures, the most common modes of failure were aseptic loosening (28%), OA progression (24%), other causes (12%), instability (10%), pain (7%), bearing dislocation (5%), and polyethylene wear (4%) (Table 2).
When cohort and registry-based studies were pooled, progression of OA was significantly more common than aseptic loosening (29% vs 23%; P < .01). The same was true in the cohort studies alone (36% vs 16%; P < .01), but no significant difference between progression of OA and aseptic loosening was found in registry-based studies (24% vs 28%; P = .16) (Table 2).
When comparing cohort with registry-based studies, progression of OA was more common in cohort studies (36% vs 24%; P < .01). Other failure modes that were more common in cohort studies than in registry-based studies were bearing dislocation (17% vs 5%; P < .01) and infection (10% vs 3%; P < .01). Failure modes that were more common in registry-based studies than in cohort studies were aseptic loosening (28% vs 16%; P < .01), other causes (12% vs 6%; P = .02), and instability (10% vs 1%; P < .01) (Table 2).
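The cohort-versus-registry comparisons above can be reproduced in miniature. The sketch below runs a two-sided Fisher exact test on the bearing-dislocation comparison; the 2x2 counts are reconstructed from the reported percentages and group totals (17% of 155 cohort failures, 5% of 211 registry failures), so they are illustrative approximations, not the authors' raw data, and the pure-Python test stands in for the SPSS procedures the review used.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test on a 2x2 table [[a, b], [c, d]]:
    sum all hypergeometric probabilities no larger than the observed one."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def prob(x):
        # P(X = x) under the hypergeometric null with fixed margins
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs * (1 + 1e-9))

# Counts reconstructed from the reported percentages (approximate):
# bearing dislocations vs all other failures, cohort (155) vs registry (211)
p = fisher_exact_two_sided(26, 129, 11, 200)
print(f"Fisher exact two-sided p = {p:.5f}")  # well below the .05 threshold
```

With counts of this size the test agrees with the review's conclusion that bearing dislocation is significantly more frequent in cohort studies than in registry-based studies.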
Discussion
In this systematic review, the most common failure modes in lateral UKA were OA progression (29%), aseptic loosening (23%), and bearing dislocation (10%). Progression of OA and bearing dislocation were the most common modes of failure in cohort studies (36% and 17%, respectively), while aseptic loosening and OA progression were the most common failure modes in registry-based studies (28% and 24%, respectively).
As noted above, there are differences in anatomy and kinematics between the medial and lateral compartments. When lateral UKA failure modes are compared with those reported for medial UKA, differences emerge.34 Siddiqui and Ahmad35 performed a systematic review of outcomes after UKA revision and presented a table with the failure modes of the included studies. Unfortunately, they did not report the ratio of medial to lateral UKA. However, assuming an average of 90% to 95% medial UKA,6,7,36 the main failure mode in their review was aseptic loosening in 17 of 21 studies. Indeed, a recent systematic review of medial UKA failure modes showed that aseptic loosening is the most common cause of failure following this procedure.34 Similarly, a search through registry-based studies6,7 and large cohort studies37-40 that reported only medial UKA failures showed that the majority of these studies7,37-39 also identified aseptic loosening as the main cause of failure in medial UKA. Comparing the results of our systematic review of lateral UKA failures with these studies of medial UKA failures, OA progression appears to play a more dominant role in failures of lateral UKA, while aseptic loosening seems to be more common in medial UKA.
Differences in the anatomy and kinematics of the medial and lateral compartments can explain this. Malalignment of the joint is an important factor in the etiology of OA,41,42 and biomechanical studies showed that malalignment can cause decreased viability and further degenerative changes of knee cartilage.43 Hernigou and Deschamps44 showed that the alignment of the knee after medial UKA is an important factor in postoperative joint changes. They found that overcorrection of varus deformity during medial UKA surgery, measured by the hip-knee-ankle (HKA) angle, was associated with increased OA at the lateral condyle and less tibial wear of the medial UKA. Undercorrection of the varus caused an increase in tibial polyethylene wear. Chatellard and colleagues45 found the same results for varus correction, measured by the HKA angle. In addition, they found that a prosthetic (medial) joint space smaller than the healthy (lateral) joint space was correlated with lower prosthesis survival. A smaller joint space at the healthy side was correlated with OA progression in the lateral compartment and tibial component wear.
These studies explain the mechanisms of OA progression and aseptic loosening. Harrington46 assessed the load in patients with valgus and varus deformities. Patients with a valgus deformity have high mechanical load on the lateral condyle during the static phase, but during the dynamic phase a major part of this load shifts to the medial condyle. In patients with varus deformity, the mechanical load was noted on the medial condyle during both the static and dynamic phases. Based on this biomechanical study and their own experience, Ohdera and colleagues47 advised correcting the knee during lateral UKA to a slight valgus angle (5°-7°) to prevent OA progression at the medial side. van der List and colleagues48 similarly showed that undercorrection of 3° to 7° was correlated with better functional outcomes compared with more neutral alignment. Moreover, Khamaisy and colleagues49 recently showed that overcorrection during UKA surgery is more common in lateral than in medial UKA.
These studies are important for understanding why OA progression is a more common failure mode in lateral UKA. The shift of mechanical load from the lateral to the medial condyle during the dynamic phase could also explain why aseptic loosening is less common in lateral UKA. As Hernigou and Deschamps44 and Chatellard and colleagues45 stated, undercorrection of varus deformity in medial UKA is associated with higher mechanical load on the medial (prosthesis) side and smaller joint space width, factors that are correlated with mechanical failure of medial UKA. We think this process can be applied to lateral UKA, with the addition that during the dynamic phase the mechanical load is higher on the healthy medial compartment. Thus, in lateral UKA more force is placed on the healthy (medial) side, whereas in medial UKA more force is placed on the prosthesis (medial) side, resulting in more OA progression in lateral UKA and more aseptic loosening in medial UKA. This is consistent with the results of our review: more OA progression and less aseptic loosening in lateral UKA. It also suggests that medial and lateral UKA should not be reported together in studies presenting survivorship, failure modes, or clinical outcomes.
A large discrepancy in bearing dislocation was seen between cohort studies (17%) and registry-based studies (5%). On closer inspection, most of the bearing dislocation failures in the cohort studies were reported in only 2 studies.50,51 In the study by Pandit and colleagues,50 3 different prosthesis designs were used in 3 different time periods. In the first series of lateral UKA (1983-1991), 6 of 51 (12%) bearings dislocated. In the second series (1998-2004), a modified technique was used and 3 of 65 (5%) bearings dislocated. In the third series (2004-2008), a modified technique and a domed tibial component were used and only 1 of 68 (1%) bearings dislocated. In a study published in 1996, Gunther and colleagues51 also used surgical techniques and implants that were modified over the course of the study period. Because of these modified techniques, different implant designs, and years of publication, bearing dislocation most likely plays a smaller role than the 17% reported in the cohort studies. This discrepancy is a good example of the important role of registries and registry-based studies in reporting failure modes and survivorship, especially in lateral UKA, given its low surgical volume. Pabinger and colleagues52 recently performed a systematic review of cohort studies and registry-based studies in which they stated that the reliability of non-registry-based studies should be questioned, and they considered registry-based studies superior for reporting UKA outcomes and revision rates. Furthermore, given the anatomic and kinematic differences between the medial and lateral compartments and the different failure modes of medial and lateral UKA, future studies should present medial and lateral failures separately. As stated above, most large cohort studies and especially annual registries currently do not report modes of failure of medial and lateral UKA separately.3,4,18-20
This study has limitations. First, this systematic review is not a full meta-analysis but a pooled analysis of case series and retrospective studies. Therefore, we cannot exclude sampling bias, confounders, and selection bias in the literature. We included all studies reporting failure modes of lateral UKA and excluded all case reports. We made a conscious choice to include all lateral UKA failures because this is the first systematic review of lateral UKA failure modes. Another limitation is that the follow-up periods of the studies differed (Table 1) and we did not correct for follow-up. As illustrated by the bearing dislocation example, some studies reported older or different techniques, while other, more recently published studies used modified techniques.11,29,53-56 Unfortunately, most studies did not report the time of arthroplasty survival, and therefore we could not correct for the follow-up period.
In conclusion, progression of OA is the most common failure mode in lateral UKA, followed by aseptic loosening. Anatomic and kinematic factors such as alignment, mechanical forces during the dynamic phase, and valgus correction seem to play important roles in the failure modes of lateral UKA. In the future, failure modes of medial and lateral UKA should be reported separately.
Am J Orthop. 2016;45(7):432-438, 462. Copyright Frontline Medical Communications Inc. 2016. All rights reserved.
1. Skolnick MD, Bryan RS, Peterson LFA. Unicompartmental polycentric knee arthroplasty. Description and preliminary results. Clin Orthop Relat Res. 1975;(112):208-214.
2. Riddle DL, Jiranek WA, McGlynn FJ. Yearly incidence of unicompartmental knee arthroplasty in the United States. J Arthroplasty. 2008;23(3):408-412.
3. Australian Orthopaedic Association. Hip and Knee Arthroplasty 2014 Annual Report. https://aoanjrr.sahmri.com/documents/10180/172286/Annual%20Report%202014. Accessed June 3, 2015.
4. Swedish Knee Arthroplasty Register. 2013 Annual Report. http://myknee.se/pdf/SKAR2013_Eng.pdf. Accessed June 3, 2015.
5. The New Zealand Joint Registry. Fourteen Year Report. January 1999 to December 2012. 2013. http://nzoa.org.nz/system/files/NJR 14 Year Report.pdf. Accessed June 3, 2015.
6. Baker PN, Jameson SS, Deehan DJ, Gregg PJ, Porter M, Tucker K. Mid-term equivalent survival of medial and lateral unicondylar knee replacement: an analysis of data from a National Joint Registry. J Bone Joint Surg Br. 2012;94(12):1641-1648.
7. Lewold S, Robertsson O, Knutson K, Lidgren L. Revision of unicompartmental knee arthroplasty: outcome in 1,135 cases from the Swedish Knee Arthroplasty study. Acta Orthop Scand. 1998;69(5):469-474.
8. Pennington DW, Swienckowski JJ, Lutes WB, Drake GN. Lateral unicompartmental knee arthroplasty: survivorship and technical considerations at an average follow-up of 12.4 years. J Arthroplasty. 2006;21(1):13-17.
9. Hill PF, Vedi V, Williams A, Iwaki H, Pinskerova V, Freeman MA. Tibiofemoral movement 2: the loaded and unloaded living knee studied by MRI. J Bone Joint Surg Br. 2000;82(8):1196-1198.
10. Nakagawa S, Kadoya Y, Todo S, et al. Tibiofemoral movement 3: full flexion in the living knee studied by MRI. J Bone Joint Surg Br. 2000;82(8):1199-1200.
11. Ashraf T, Newman JH, Evans RL, Ackroyd CE. Lateral unicompartmental knee replacement survivorship and clinical experience over 21 years. J Bone Joint Surg Br. 2002;84(8):1126-1130.
12. Scott RD. Lateral unicompartmental replacement: a road less traveled. Orthopedics. 2005;28(9):983-984.
13. Sah AP, Scott RD. Lateral unicompartmental knee arthroplasty through a medial approach. Study with an average five-year follow-up. J Bone Joint Surg Am. 2007;89(9):1948-1954.
14. Argenson JN, Parratte S, Bertani A, Flecher X, Aubaniac JM. Long-term results with a lateral unicondylar replacement. Clin Orthop Relat Res. 2008;466(11):2686-2693.
15. Weidow J, Pak J, Karrholm J. Different patterns of cartilage wear in medial and lateral gonarthrosis. Acta Orthop Scand. 2002;73(3):326-329.
16. Ollivier M, Abdel MP, Parratte S, Argenson JN. Lateral unicondylar knee arthroplasty (UKA): contemporary indications, surgical technique, and results. Int Orthop. 2014;38(2):449-455.
17. Demange MK, Von Keudell A, Probst C, Yoshioka H, Gomoll AH. Patient-specific implants for lateral unicompartmental knee arthroplasty. Int Orthop. 2015;39(8):1519-1526.
18. Khan Z, Nawaz SZ, Kahane S, Esler C, Chatterji U. Conversion of unicompartmental knee arthroplasty to total knee arthroplasty: the challenges and need for augments. Acta Orthop Belg. 2013;79(6):699-705.
19. Epinette JA, Brunschweiler B, Mertl P, et al. Unicompartmental knee arthroplasty modes of failure: wear is not the main reason for failure: a multicentre study of 418 failed knees. Orthop Traumatol Surg Res. 2012;98(6 Suppl):S124-S130.
20. Bordini B, Stea S, Falcioni S, Ancarani C, Toni A. Unicompartmental knee arthroplasty: 11-year experience from 3929 implants in RIPO register. Knee. 2014;21(6):1275-1279.
21. Bolognesi MP, Greiner MA, Attarian DE, et al. Unicompartmental knee arthroplasty and total knee arthroplasty among Medicare beneficiaries, 2000 to 2009. J Bone Joint Surg Am. 2013;95(22):e174.
22. Nwachukwu BU, McCormick FM, Schairer WW, Frank RM, Provencher MT, Roche MW. Unicompartmental knee arthroplasty versus high tibial osteotomy: United States practice patterns for the surgical treatment of unicompartmental arthritis. J Arthroplasty. 2014;29(8):1586-1589.
23. van der List JP, Chawla H, Pearle AD. Robotic-assisted knee arthroplasty: an overview. Am J Orthop. 2016;45(4):202-211.
24. van der List JP, Chawla H, Joskowicz L, Pearle AD. Current state of computer navigation and robotics in unicompartmental and total knee arthroplasty: a systematic review with meta-analysis. Knee Surg Sports Traumatol Arthrosc. 2016 Sep 6. [Epub ahead of print]
25. Zuiderbaan HA, van der List JP, Kleeblad LJ, et al. Modern indications, results and global trends in the use of unicompartmental knee arthroplasty and high tibial osteotomy for the treatment of medial unicondylar knee osteoarthritis. Am J Orthop. 2016;45(6):E355-E361.
26. Smith JR, Robinson JR, Porteous AJ, et al. Fixed bearing lateral unicompartmental knee arthroplasty--short to midterm survivorship and knee scores for 101 prostheses. Knee. 2014;21(4):843-847.
27. Berend KR, Kolczun MC 2nd, George JW Jr, Lombardi AV Jr. Lateral unicompartmental knee arthroplasty through a lateral parapatellar approach has high early survivorship. Clin Orthop Relat Res. 2012;470(1):77-83.
28. Keblish PA, Briard JL. Mobile-bearing unicompartmental knee arthroplasty: a 2-center study with an 11-year (mean) follow-up. J Arthroplasty. 2004;19(7 Suppl 2):87-94.
29. Bertani A, Flecher X, Parratte S, Aubaniac JM, Argenson JN. Unicompartmental-knee arthroplasty for treatment of lateral gonarthrosis: about 30 cases. Midterm results. Rev Chir Orthop Reparatrice Appar Mot. 2008;94(8):763-770.
30. Sebilo A, Casin C, Lebel B, et al. Clinical and technical factors influencing outcomes of unicompartmental knee arthroplasty: Retrospective multicentre study of 944 knees. Orthop Traumatol Surg Res. 2013;99(4 Suppl):S227-S234.
31. Cartier P, Khefacha A, Sanouiller JL, Frederick K. Unicondylar knee arthroplasty in middle-aged patients: A minimum 5-year follow-up. Orthopedics. 2007;30(8 Suppl):62-65.
32. Lustig S, Paillot JL, Servien E, Henry J, Ait Si Selmi T, Neyret P. Cemented all polyethylene tibial insert unicompartimental knee arthroplasty: a long term follow-up study. Orthop Traumatol Surg Res. 2009;95(1):12-21.
1. Skolnick MD, Bryan RS, Peterson LFA. Unicompartmental polycentric knee arthroplasty. Description and preliminary results. Clin Orthop Relat Res. 1975;(112):208-214.
2. Riddle DL, Jiranek WA, McGlynn FJ. Yearly Incidence of Unicompartmental Knee Arthroplasty in the United States. J Arthroplasty. 2008;23(3):408-412.
3. Australian Orthopaedic Association. Hip and Knee Arthroplasty 2014 Annual Report. https://aoanjrr.sahmri.com/documents/10180/172286/Annual%20Report%202014. Accessed June 3, 2015.
4. Swedish Knee Arthroplasty Register. 2013 Annual Report. http://myknee.se/pdf/SKAR2013_Eng.pdf. Accessed June 3, 2015.
5. The New Zealand Joint Registry. Fourteen Year Report. January 1999 to December 2012. 2013. http://nzoa.org.nz/system/files/NJR 14 Year Report.pdf. Accessed June 3, 2015.
6. Baker PN, Jameson SS, Deehan DJ, Gregg PJ, Porter M, Tucker K. Mid-term equivalent survival of medial and lateral unicondylar knee replacement: an analysis of data from a National Joint Registry. J Bone Joint Surg Br. 2012;94(12):1641-1648.
7. Lewold S, Robertsson O, Knutson K, Lidgren L. Revision of unicompartmental knee arthroplasty: outcome in 1,135 cases from the Swedish Knee Arthroplasty study. Acta Orthop Scand. 1998;69(5):469-474.
8. Pennington DW, Swienckowski JJ, Lutes WB, Drake GN. Lateral unicompartmental knee arthroplasty: survivorship and technical considerations at an average follow-up of 12.4 years. J Arthroplasty. 2006;21(1):13-17.
9. Hill PF, Vedi V, Williams A, Iwaki H, Pinskerova V, Freeman MA. Tibiofemoral movement 2: the loaded and unloaded living knee studied by MRI. J Bone Joint Surg Br. 2000;82(8):1196-1198.
10. Nakagawa S, Kadoya Y, Todo S, et al. Tibiofemoral movement 3: full flexion in the living knee studied by MRI. J Bone Joint Surg Br. 2000;82(8):1199-1200.
11. Ashraf T, Newman JH, Evans RL, Ackroyd CE. Lateral unicompartmental knee replacement survivorship and clinical experience over 21 years. J Bone Joint Surg Br. 2002;84(8):1126-1130.
12. Scott RD. Lateral unicompartmental replacement: a road less traveled. Orthopedics. 2005;28(9):983-984.
13. Sah AP, Scott RD. Lateral unicompartmental knee arthroplasty through a medial approach. Study with an average five-year follow-up. J Bone Joint Surg Am. 2007;89(9):1948-1954.
14. Argenson JN, Parratte S, Bertani A, Flecher X, Aubaniac JM. Long-term results with a lateral unicondylar replacement. Clin Orthop Relat Res. 2008;466(11):2686-2693.
15. Weidow J, Pak J, Karrholm J. Different patterns of cartilage wear in medial and lateral gonarthrosis. Acta Orthop Scand. 2002;73(3):326-329.
16. Ollivier M, Abdel MP, Parratte S, Argenson JN. Lateral unicondylar knee arthroplasty (UKA): contemporary indications, surgical technique, and results. Int Orthop. 2014;38(2):449-455.
17. Demange MK, Von Keudell A, Probst C, Yoshioka H, Gomoll AH. Patient-specific implants for lateral unicompartmental knee arthroplasty. Int Orthop. 2015;39(8):1519-1526.
18. Khan Z, Nawaz SZ, Kahane S, Esler C, Chatterji U. Conversion of unicompartmental knee arthroplasty to total knee arthroplasty: the challenges and need for augments. Acta Orthop Belg. 2013;79(6):699-705.
19. Epinette JA, Brunschweiler B, Mertl P, et al. Unicompartmental knee arthroplasty modes of failure: wear is not the main reason for failure: a multicentre study of 418 failed knees. Orthop Traumatol Surg Res. 2012;98(6 Suppl):S124-S130.
20. Bordini B, Stea S, Falcioni S, Ancarani C, Toni A. Unicompartmental knee arthroplasty: 11-year experience from 3929 implants in RIPO register. Knee. 2014;21(6):1275-1279.
21. Bolognesi MP, Greiner MA, Attarian DE, et al. Unicompartmental knee arthroplasty and total knee arthroplasty among medicare beneficiaries, 2000 to 2009. J Bone Joint Surg Am. 2013;95(22):e174.
22. Nwachukwu BU, McCormick FM, Schairer WW, Frank RM, Provencher MT, Roche MW. Unicompartmental knee arthroplasty versus high tibial osteotomy: United States practice patterns for the surgical treatment of unicompartmental arthritis. J Arthroplasty. 2014;29(8):1586-1589.
23. van der List JP, Chawla H, Pearle AD. Robotic-assisted knee arthroplasty: an overview. Am J Orthop. 2016;45(4):202-211.
24. van der List JP, Chawla H, Joskowicz L, Pearle AD. Current state of computer navigation and robotics in unicompartmental and total knee arthroplasty: a systematic review with meta-analysis. Knee Surg Sports Traumatol Arthrosc. 2016 Sep 6. [Epub ahead of print]
25. Zuiderbaan HA, van der List JP, Kleeblad LJ, et al. Modern indications, results and global trends in the use of unicompartmental knee arthroplasty and high tibial osteotomy for the treatment of medial unicondylar knee osteoarthritis. Am J Orthop. 2016;45(6):E355-E361.
26. Smith JR, Robinson JR, Porteous AJ, et al. Fixed bearing lateral unicompartmental knee arthroplasty--short to midterm survivorship and knee scores for 101 prostheses. Knee. 2014;21(4):843-847.
27. Berend KR, Kolczun MC 2nd, George JW Jr, Lombardi AV Jr. Lateral unicompartmental knee arthroplasty through a lateral parapatellar approach has high early survivorship. Clin Orthop Relat Res. 2012;470(1):77-83.
28. Keblish PA, Briard JL. Mobile-bearing unicompartmental knee arthroplasty: a 2-center study with an 11-year (mean) follow-up. J Arthroplasty. 2004;19(7 Suppl 2):87-94.
29. Bertani A, Flecher X, Parratte S, Aubaniac JM, Argenson JN. Unicompartmental-knee arthroplasty for treatment of lateral gonarthrosis: about 30 cases. Midterm results. Rev Chir Orthop Reparatrice Appar Mot. 2008;94(8):763-770.
30. Sebilo A, Casin C, Lebel B, et al. Clinical and technical factors influencing outcomes of unicompartmental knee arthroplasty: Retrospective multicentre study of 944 knees. Orthop Traumatol Surg Res. 2013;99(4 Suppl):S227-S234.
31. Cartier P, Khefacha A, Sanouiller JL, Frederick K. Unicondylar knee arthroplasty in middle-aged patients: A minimum 5-year follow-up. Orthopedics. 2007;30(8 Suppl):62-65.
32. Lustig S, Paillot JL, Servien E, Henry J, Ait Si Selmi T, Neyret P. Cemented all polyethylene tibial insert unicompartimental knee arthroplasty: a long term follow-up study. Orthop Traumatol Surg Res. 2009;95(1):12-21.
33. van der List JP, McDonald LS, Pearle AD. Systematic review of medial versus lateral survivorship in unicompartmental knee arthroplasty. Knee. 2015;22(6):454-460.
34. van der List JP, Zuiderbaan HA, Pearle AD. Why do medial unicompartmental knee arthroplasties fail today? J Arthroplasty. 2016;31(5):1016-1021.
35. Siddiqui NA, Ahmad ZM. Revision of unicondylar to total knee arthroplasty: a systematic review. Open Orthop J. 2012;6:268-275.
36. Pennington DW, Swienckowski JJ, Lutes WB, Drake GN. Lateral unicompartmental knee arthroplasty: survivorship and technical considerations at an average follow-up of 12.4 years. J Arthroplasty. 2006;21(1):13-17.
37. Kalra S, Smith TO, Berko B, Walton NP. Assessment of radiolucent lines around the Oxford unicompartmental knee replacement: sensitivity and specificity for loosening. J Bone Joint Surg Br. 2011;93(6):777-781.
38. Wynn Jones H, Chan W, Harrison T, Smith TO, Masonda P, Walton NP. Revision of medial Oxford unicompartmental knee replacement to a total knee replacement: similar to a primary? Knee. 2012;19(4):339-343.
39. Sierra RJ, Kassel CA, Wetters NG, Berend KR, Della Valle CJ, Lombardi AV. Revision of unicompartmental arthroplasty to total knee arthroplasty: not always a slam dunk! J Arthroplasty. 2013;28(8 Suppl):128-132.
40. Citak M, Dersch K, Kamath AF, Haasper C, Gehrke T, Kendoff D. Common causes of failed unicompartmental knee arthroplasty: a single-centre analysis of four hundred and seventy one cases. Int Orthop. 2014;38(5):961-965.
41. Hunter DJ, Wilson DR. Role of alignment and biomechanics in osteoarthritis and implications for imaging. Radiol Clin North Am. 2009;47(4):553-566.
42. Hunter DJ, Sharma L, Skaife T. Alignment and osteoarthritis of the knee. J Bone Joint Surg Am. 2009;91 Suppl 1:85-89.
43. Roemhildt ML, Beynnon BD, Gauthier AE, Gardner-Morse M, Ertem F, Badger GJ. Chronic in vivo load alteration induces degenerative changes in the rat tibiofemoral joint. Osteoarthritis Cartilage. 2013;21(2):346-357.
44. Hernigou P, Deschamps G. Alignment influences wear in the knee after medial unicompartmental arthroplasty. Clin Orthop Relat Res. 2004;(423):161-165.
45. Chatellard R, Sauleau V, Colmar M, et al. Medial unicompartmental knee arthroplasty: does tibial component position influence clinical outcomes and arthroplasty survival? Orthop Traumatol Surg Res. 2013;99(4 Suppl):S219-S225.
46. Harrington IJ. Static and dynamic loading patterns in knee joints with deformities. J Bone Joint Surg Am. 1983;65(2):247-259.
47. Ohdera T, Tokunaga J, Kobayashi A. Unicompartmental knee arthroplasty for lateral gonarthrosis: midterm results. J Arthroplasty. 2001;16(2):196-200.
48. van der List JP, Chawla H, Villa JC, Zuiderbaan HA, Pearle AD. Early functional outcome after lateral UKA is sensitive to postoperative lower limb alignment. Knee Surg Sports Traumatol Arthrosc. 2015 Nov 26. [Epub ahead of print]
49. Khamaisy S, Gladnick BP, Nam D, Reinhardt KR, Heyse TJ, Pearle AD. Lower limb alignment control: Is it more challenging in lateral compared to medial unicondylar knee arthroplasty? Knee. 2015;22(4):347-350.
50. Pandit H, Jenkins C, Beard DJ, et al. Mobile bearing dislocation in lateral unicompartmental knee replacement. Knee. 2010;17(6):392-397.
51. Gunther TV, Murray DW, Miller R, et al. Lateral unicompartmental arthroplasty with the Oxford meniscal knee. Knee. 1996;3(1):33-39.
52. Pabinger C, Lumenta DB, Cupak D, Berghold A, Boehler N, Labek G. Quality of outcome data in knee arthroplasty: comparison of registry data and worldwide non-registry studies from 4 decades. Acta Orthop. 2015;86(1):58-62.
53. Lustig S, Elguindy A, Servien E, et al. 5- to 16-year follow-up of 54 consecutive lateral unicondylar knee arthroplasties with a fixed-all polyethylene bearing. J Arthroplasty. 2011;26(8):1318-1325.
54. Walton MJ, Weale AE, Newman JH. The progression of arthritis following lateral unicompartmental knee replacement. Knee. 2006;13(5):374-377.
55. Lustig S, Lording T, Frank F, Debette C, Servien E, Neyret P. Progression of medial osteoarthritis and long term results of lateral unicompartmental arthroplasty: 10 to 18 year follow-up of 54 consecutive implants. Knee. 2014;21(S1):S26-S32.
56. O’Rourke MR, Gardner JJ, Callaghan JJ, et al. Unicompartmental knee replacement: a minimum twenty-one-year followup, end-result study. Clin Orthop Relat Res. 2005;440:27-37.
57. Citak M, Cross MB, Gehrke T, Dersch K, Kendoff D. Modes of failure and revision of failed lateral unicompartmental knee arthroplasties. Knee. 2015;22(4):338-340.
58. Liebs TR, Herzberg W. Better quality of life after medial versus lateral unicondylar knee arthroplasty knee. Clin Orthop Relat Res. 2013;471(8):2629-2640.
59. Weston-Simons JS, Pandit H, Kendrick BJ, et al. The mid-term outcomes of the Oxford Domed Lateral unicompartmental knee replacement. Bone Joint J. 2014;96-B(1):59-64.
60. Thompson SA, Liabaud B, Nellans KW, Geller JA. Factors associated with poor outcomes following unicompartmental knee arthroplasty: redefining the “classic” indications for surgery. J Arthroplasty. 2013;28(9):1561-1564.
61. Saxler G, Temmen D, Bontemps G. Medium-term results of the AMC-unicompartmental knee arthroplasty. Knee. 2004;11(5):349-355.
62. Forster MC, Bauze AJ, Keene GCR. Lateral unicompartmental knee replacement: Fixed or mobile bearing? Knee Surg Sports Traumatol Arthrosc. 2007;15(9):1107-1111.
63. Streit MR, Walker T, Bruckner T, et al. Mobile-bearing lateral unicompartmental knee replacement with the Oxford domed tibial component: an independent series. J Bone Joint Surg Br. 2012;94(10):1356-1361.
64. Altuntas AO, Alsop H, Cobb JP. Early results of a domed tibia, mobile bearing lateral unicompartmental knee arthroplasty from an independent centre. Knee. 2013;20(6):466-470.
65. Ashraf T, Newman JH, Desai VV, Beard D, Nevelos JE. Polyethylene wear in a non-congruous unicompartmental knee replacement: a retrieval analysis. Knee. 2004;11(3):177-181.
66. Schelfaut S, Beckers L, Verdonk P, Bellemans J, Victor J. The risk of bearing dislocation in lateral unicompartmental knee arthroplasty using a mobile biconcave design. Knee Surg Sports Traumatol Arthrosc. 2013;21(11):2487-2494.
67. Marson B, Prasad N, Jenkins R, Lewis M. Lateral unicompartmental knee replacements: Early results from a District General Hospital. Eur J Orthop Surg Traumatol. 2014;24(6):987-991.
68. Walker T, Gotterbarm T, Bruckner T, Merle C, Streit MR. Total versus unicompartmental knee replacement for isolated lateral osteoarthritis: a matched-pairs study. Int Orthop. 2014;38(11):2259-2264.
Diagnosis at a Glance: Debilitating Thigh Mass in an Obese Patient
Case
A 60-year-old morbidly obese man presented to the ED with a painless mass on his left thigh (Figure 1), which he stated had formed over a period of several days 2 months earlier.
On examination, the patient appeared well, with normal vital signs and a body mass index of 56 kg/m2. A computed tomography (CT) scan was obtained to further evaluate the mass (Figure 2), and dermatology services were consulted.
Discussion
Massive localized lymphedema (MLL) is a complication associated with morbid obesity. First described in 1998 by Farshid and Weiss,1 MLL is characterized by a benign pedunculated mass, primarily of the lower extremity, that slowly enlarges over years.2 The pathogenesis of MLL is currently unknown. Histologically, MLL contains lobules of mature fat with expanded connective tissue septa, without the degree of cellular atypia seen in well-differentiated liposarcoma (WDL). Though similar to WDL, MLL can be differentiated by the clinical history of a slowly growing mass in a morbidly obese patient and examination findings of overlying reactive skin and soft-tissue changes associated with chronic lymphedema (eg, thickened peau d’orange skin).1,2
The diagnosis of MLL may be made clinically, and if there is no evidence of infection, the patient may be referred to a surgeon. If diagnostic uncertainty remains, biopsy and further CT imaging studies should be considered. The treatment for MLL is a direct excision if the mass is interfering with the patient’s gait. If left untreated, MLL can progress to angiosarcoma. Recurrence is possible, even after surgical excision.3
1. Farshid G, Weiss SW. Massive localized lymphedema in the morbidly obese: a histologically distinct reactive lesion simulating liposarcoma. Am J Surg Pathol. 1998;22(10):1277-1283.
2. Evans RJ, Scilley C. Massive localized lymphedema: A case series and literature review. Can J Plast Surg. 2011;19(3):e30-e31.
3. Moon Y, Pyon JK. A rare case of massive localized lymphedema in a morbidly obese patient. Arch Plast Surg. 2016;43(1):125-127. doi:10.5999/aps.2016.43.1.125.
The Burden of COPD
Case Scenario
A 62-year-old man who regularly visited the ED for exacerbations of chronic obstructive pulmonary disease (COPD) after running out of his medications presented again for evaluation and treatment. His outpatient care had been poorly coordinated, and he relied on the ED to provide him with the support he needed. This presentation was his fifth ED visit over the past 3 months.
The patient’s medical history was positive for asthma since childhood, tobacco use, hypertension, and a recent diagnosis of congestive heart failure (CHF). Over the past year, he had four hospital admissions, and was currently unable to walk from his bedroom to another room without becoming short of breath. He also had recently experienced a 20-lb weight loss.
At this visit, the patient complained of chest pain and lightheadedness, which he described as suffocating. Prior to these recent symptoms, he had enjoyed walking in his neighborhood and talking with friends. He was an avid reader and sports fan, but admitted that he now had trouble focusing on reading and following games on television. He lived alone, and his family lived across the country. The patient further admitted that, although he had attempted to quit cigarette smoking, he had been unable to do so and had a 50-pack-year history. He had not completed an advance health care directive and had significant challenges tending to his basic needs.
The Trajectory of COPD
Chronic obstructive pulmonary disease is a common chronic illness that causes significant morbidity and mortality. A 2016 National Center for Health Statistics report cited respiratory illness, primarily from COPD, as the third leading cause of death in the United States in 2014.1 The trajectory of this disease is marked by frequent exacerbations with only partial recovery to baseline function. The burden on those living with COPD is significant and marked by poor overall health-related quality of life (QOL). The ED has become a staging area for patients seeking care for exacerbations of COPD.2
The World Health Organization (WHO) and the Global Initiative for Chronic Obstructive Lung Disease (GOLD) have defined COPD as a spectrum of diseases including emphysema, chronic bronchitis, and chronic obstructive asthma characterized by persistent airflow limitation that is usually progressive and associated with an enhanced chronic inflammatory response to noxious particles or gases in the airways and lungs.3 Exacerbations and comorbidities contribute to the overall severity of COPD in individual patients.4
The case presented in this article illustrates the common scenario of a patient whose COPD has become severe and highly symptomatic with declining function to the point where he requires home support. His physical decline had been rapid and resulted in many unmet needs. When a patient such as this presents for emergent care, he must first be stabilized; then a care plan will need to be developed prior to discharge.
Management Goals
The overall goals of treating COPD are based on preserving function and are not curative in nature. Chronic obstructive pulmonary disease is a progressive illness that will intensify over time.5 As such, palliative care services are warranted. However, many patients with COPD do not receive palliative care services, in contrast to patients with other serious and life-limiting diseases such as cancer and heart disease.
Acute Exacerbations of COPD
Incidence
The frequency of acute exacerbations of COPD (AECOPD) increases with age, productive cough, long-standing COPD, previous hospitalizations related to COPD, eosinophilia, and comorbidities (eg, CHF). Patients with moderate to severe COPD and a history of prior exacerbations were found to have a higher likelihood of future exacerbations. From a quality and cost perspective, it may be useful to identify high-risk patients and strengthen their outpatient program to lessen the need for ED care and more intensive support.6,7
In our case scenario, the patient could have been stabilized at home with a well-controlled plan and home support, which would have resulted in an improved QOL and more time free from his high symptom burden.
Causes
Bacterial and viral respiratory infections are the most common causes of AECOPD. Environmental pollution and pulmonary embolism are also triggers. Patients with AECOPD typically present to the ED up to several times a year,2 and AECOPD is the third most common cause of 30-day hospital readmissions.8 Prior exacerbations, dyspnea, and other medical comorbidities are also risk factors for more frequent hospital visits.
Presenting Signs and Symptoms
Each occurrence of AECOPD represents a worsening of a patient’s respiratory symptoms beyond normal day-to-day variation. This may include increases in cough, sputum production, and dyspnea. The goal in caring for a person with an AECOPD is to stabilize the acute event and provide a treatment plan. The range of acuity in moderate to severe disease makes devising an appropriate treatment plan challenging, and even after the best plan is implemented, the patient’s course may be characterized by a prolonged cycle of admissions and readmissions without substantial return to baseline.
Management
In practice, ED management of AECOPD in older adults typically differs significantly from published guideline recommendations,9 which may result in poor outcomes related to shortcomings in quality of care. Better adherence to guideline recommendations when caring for elderly patients with COPD may lead to improved clinical outcomes and better resource use.
Risk Stratification
Complicating ED management is the challenge of determining the severity of illness and the degree of the exacerbation. Airflow obstruction alone is not sufficient to predict outcomes, as any given degree of obstruction is associated with a spectrum of values for forced expiratory volume in the first second (FEV1) and varying functional performance. Moreover, peak-flow measurements are not useful in the setting of AECOPD, in contrast to their use in acute asthma exacerbations, and are not predictive of changes in clinical status.
GOLD and NICE Criteria
Guidelines have been developed and widely promoted to assist ED, hospital, and community clinicians in providing evidence-based management for patients with COPD. The GOLD criteria and the guidelines of the National Institute for Clinical Excellence (NICE) both address the management of COPD.10
Although well recognized and commonly used, the original GOLD criteria did not take into account the frequency and importance of the extrapulmonary manifestations of COPD in predicting outcome. Typically, patients with severe or very severe COPD have an average of 12 co-occurring symptoms, a greater burden of signs and symptoms than that reported in patients with cancer, heart disease, or renal disease.11
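To make the spirometric component of the GOLD criteria concrete, the widely published grading of post-bronchodilator FEV1 (% of predicted) can be sketched as below. This is an illustrative sketch using the commonly cited cut points; it omits the symptom and exacerbation-history axes of the full GOLD assessment, and the current GOLD report should be consulted for clinical use.

```python
def gold_spirometric_grade(fev1_pct_predicted: float) -> str:
    """Classify post-bronchodilator FEV1 (% of predicted) into GOLD grades 1-4.

    Assumes airflow limitation has already been confirmed
    (post-bronchodilator FEV1/FVC < 0.70).
    """
    if fev1_pct_predicted >= 80:
        return "GOLD 1 (mild)"
    elif fev1_pct_predicted >= 50:
        return "GOLD 2 (moderate)"
    elif fev1_pct_predicted >= 30:
        return "GOLD 3 (severe)"
    else:
        return "GOLD 4 (very severe)"
```

A patient like the one in the case scenario, breathless walking between rooms, would typically fall into GOLD 3 or 4 on spirometry, but as the text notes, spirometric grade alone understates the extrapulmonary burden of disease.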
BODE Criteria
The body mass index, airflow obstruction, dyspnea, and exercise capacity (BODE) criteria assess and predict health-related QOL and mortality risk for patients with COPD. Risk is adjusted based on four factors: weight, airway obstruction, dyspnea, and exercise capacity (ie, 6-minute walk distance).13
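The four-factor scoring described above can be sketched as a simple calculation. The thresholds below are the commonly cited point assignments from the original BODE publication (Celli and colleagues); this is an illustrative sketch, and the primary source should be verified before any clinical application.

```python
def bode_index(bmi: float, fev1_pct: float, mmrc: int, walk_m: float) -> int:
    """Return the BODE score (0-10); higher scores predict higher mortality."""
    score = 0
    # B: body mass index -- 1 point if <= 21 kg/m2
    if bmi <= 21:
        score += 1
    # O: airflow obstruction -- FEV1 as % of predicted (0-3 points)
    if fev1_pct >= 65:
        score += 0
    elif fev1_pct >= 50:
        score += 1
    elif fev1_pct >= 36:
        score += 2
    else:
        score += 3
    # D: dyspnea -- modified MRC scale, 0-4 (0-3 points)
    if mmrc <= 1:
        score += 0
    elif mmrc == 2:
        score += 1
    elif mmrc == 3:
        score += 2
    else:
        score += 3
    # E: exercise capacity -- 6-minute walk distance in meters (0-3 points)
    if walk_m >= 350:
        score += 0
    elif walk_m >= 250:
        score += 1
    elif walk_m >= 150:
        score += 2
    else:
        score += 3
    return score
```

For example, a patient with a preserved BMI, mild obstruction, no dyspnea, and a good walk distance scores 0, while a cachectic patient with very severe obstruction, grade 4 dyspnea, and a walk distance under 150 m scores the maximum of 10.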
Initial Evaluation and Work-Up
As previously noted, when a patient with AECOPD arrives in the ED, the first priority is to stabilize the patient and initiate treatment. In this respect, initial measurement of the patient’s pulse oxygen saturation (SpO2) is important.
Laboratory Evaluation
In cases of respiratory failure, obtaining arterial blood gas (ABG) values is critical. The ABG test will assist in identifying acute worsening of chronic hypercapnia and the need for ventilatory support. When CHF is a consideration, a plasma B-type natriuretic peptide level is a useful assessment.
Imaging Studies
A chest radiograph may be useful in the initial evaluation to identify abnormalities, including barotrauma (ie, pneumothorax) and infiltrates. Additionally, in patients with comorbidities, it is important to assess cardiac status, and a chest X-ray may assist in identifying pulmonary edema, pleural effusions, and cardiomegaly. If the radiograph shows a pulmonary infiltrate (ie, pneumonia), it will help identify the probable trigger; even in these instances, however, a sputum Gram stain will not assist in the diagnosis.
Treatment
Relieving airflow obstruction is achieved with inhaled short-acting bronchodilators and systemic glucocorticoids, by treating infection, and by providing supplemental oxygen and ventilatory support.
Bronchodilators
The short-acting beta-adrenergic agonists (eg, albuterol) act rapidly and are effective in producing bronchodilation. Nebulized therapy may be most comfortable for the acutely ill patient. Typical dosing is 2.5 mg albuterol diluted to 3 cc by nebulizer every hour. Higher doses are not more effective, and there is no evidence of a higher response rate from continuous nebulized therapy, which can cause anxiety and tachycardia.14 Anticholinergic agents (eg, ipratropium) are often added despite unclear data regarding clinical advantage. In one study evaluating the effectiveness of adding ipratropium to albuterol, patients receiving the combination had the same improvement in FEV1 at 90 minutes as those receiving either agent alone.15 Patients receiving ipratropium alone had the lowest rate of reported side effects.15
Systemic Glucocorticoids
Short-course systemic glucocorticoids are an important addition to treatment and have been found to improve spirometry and decrease relapse rate. The oral and intravenous (IV) routes provide the same benefit; for the acutely ill patient who has difficulty swallowing, the IV route is preferred. The optimal dose is not clear, but hydrocortisone doses of 100 mg to 125 mg every 6 hours for 3 days are effective, as is oral prednisone 30 mg per day for 14 days, or 60 mg per day for 3 days with a taper.
Antibiotic Therapy
Antibiotics are indicated for patients with moderate to severe AECOPD who are ill enough to be admitted to the hospital. Empiric broad-spectrum treatment is recommended. The initial antibiotic regimen should target likely bacterial pathogens (Haemophilus influenzae, Moraxella catarrhalis, and Streptococcus pneumoniae in most patients) and take into account local patterns of antibiotic resistance. Fluoroquinolones or third-generation cephalosporins generally provide sufficient coverage. For patients experiencing only a mild exacerbation, antibiotics are not warranted.
Magnesium Sulfate
Other supplemental medications that have been evaluated include magnesium sulfate for bronchial smooth muscle relaxation. Studies have found that while magnesium is helpful in asthma, results are mixed with COPD.16
Supplemental Oxygen
Oxygen therapy is important during an AECOPD episode. Often, concerns arise about decreasing respiratory drive, which is typically driven by hypoxia in patients who have chronic hypercapnia. Arterial blood gas determinations are important in managing a patient’s respiratory status and will assist in determining actual oxygenation and any coexistent metabolic disturbances.
Oxygen can be administered efficiently by a Venturi mask, which delivers precise fractions of inspired oxygen, or by nasal cannula. A facemask is less comfortable but is available for higher oxygen requirements, providing up to 55% oxygen, while a nonrebreather mask delivers up to 90% oxygen.
Noninvasive Ventilation. When necessary, noninvasive positive pressure ventilation (NPPV) improves outcomes for patients with severe dyspnea and signs of respiratory fatigue manifested as increased work of breathing, and it is the ventilatory mode of choice for patients with COPD. Indications include severe dyspnea with signs of increased work of breathing, respiratory acidosis (arterial pH <7.35), and partial pressure of arterial carbon dioxide (PaCO2) >45 mm Hg.
Whenever possible, NPPV should be initiated with a triggered mode to allow spontaneous breaths. Inspiratory pressure of 8 cm to 12 cm H2O and expiratory pressure of 3 cm to 5 cm H2O are recommended.
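The indication thresholds and starting pressures above can be summarized in a small sketch; the function and constant names are illustrative, not part of any published protocol.

```python
def nppv_indicated(ph: float, paco2_mmhg: float, severe_dyspnea: bool) -> bool:
    """NPPV indication as described in the text: severe dyspnea with
    increased work of breathing, plus respiratory acidosis and hypercapnia."""
    respiratory_acidosis = ph < 7.35
    hypercapnia = paco2_mmhg > 45
    return severe_dyspnea and respiratory_acidosis and hypercapnia

# Recommended starting pressures from the text, in cm H2O
INSPIRATORY_PRESSURE_RANGE = (8, 12)
EXPIRATORY_PRESSURE_RANGE = (3, 5)
```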
Mechanical Ventilation. Mechanical ventilation is often undesirable because it may be extraordinarily difficult to wean a patient off the device and permit safe extubation. However, if a patient cannot be stabilized with NPPV, intubation and mechanical ventilation must be considered. Typically, this occurs when there is severe respiratory distress, tachypnea >30 breaths/min, accessory muscle use, and altered mentation.
Goals of intubation and mechanical ventilation include correcting oxygenation and severe respiratory acidosis as well as reducing the work of breathing. Barotrauma is a significant risk when patients with COPD require mechanical ventilation. Volume-limited modes of ventilation are commonly used, while pressure support or pressure-limited modes are less suitable for patients with airflow limitation. Again, invasive ventilation should only be administered if a patient cannot tolerate NPPV.
Palliative Care in the ED
Palliative care is an approach that improves the QOL of patients and their families facing the issues associated with life-threatening illness, through the prevention and relief of suffering by means of early identification and accurate assessment and treatment of pain and physical, psychosocial, and spiritual problems.3 This approach to care is warranted for COPD patients given the myriad of burdensome symptoms and functional decline that occurs.17
Palliative care expands traditional treatment goals to include enhancing QOL, helping with medical decision making, and identifying the goals of care. Specialty palliative care is provided by board-certified physicians for the most complex cases. However, the primary practice of palliative care must be delivered at the bedside by the treating provider. Managing pain, dyspnea, nausea, vomiting, and changes in bowel habits, as well as discussing goals of care, are among the basic palliative care skills all providers need to have and apply when indicated.
Palliative Care for Dyspnea
Opioids. Primary palliative care in the ED includes the appropriate use of low-dose oral and parenteral opioids to treat dyspnea in AECOPD. The use of a low-dose opioid, such as morphine 2 mg IV, titrated up to a desired response, is a safe and effective practice.18 Note the 2-mg starting dose is considered low-dose.19
With respect to managing dyspnea in AECOPD patients, nebulized opioids have not been found to be better than nebulized saline. More specific data regarding the use of oral opioids for managing refractory dyspnea in patients with predominantly COPD have been recently published: Long-acting morphine 20 mg once daily provides symptomatic relief in refractory dyspnea in the community setting. For the opioid-naïve patient, a lower dose is recommended.20
Oxygenation. There is no hard evidence of the effectiveness of oxygen in the palliation of breathlessness. Humidified air is effective initially, as is providing a fan at the bedside. Short-burst oxygen therapy should only be considered for episodes of severe breathlessness not relieved by other treatments, and oxygen should continue to be prescribed only if an improvement in breathlessness following therapy has been documented. The American Thoracic Society recommends continuous oxygen therapy in patients with COPD who have severe resting hypoxemia (PaO2 ≤55 mm Hg or SpO2 ≤88%).21
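The ATS threshold for continuous oxygen therapy reduces to a simple eligibility check. This sketch assumes the severe resting hypoxemia criterion refers to arterial oxygen tension (PaO2) together with pulse oximetry (SpO2); the function name is illustrative.

```python
def continuous_oxygen_indicated(pao2_mmhg: float, spo2_pct: float) -> bool:
    """Severe resting hypoxemia threshold for long-term oxygen therapy,
    per the ATS recommendation cited in the text."""
    return pao2_mmhg <= 55 or spo2_pct <= 88
```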
POLST Form
The Physician Orders for Life-Sustaining Treatment (POLST) form is a set of medical orders, similar to the “do not resuscitate” (allow natural death) order. A POLST form is not an advance directive and does not serve as a substitute for a patient’s designation of a health care agent or durable power of attorney for health care.22
The POLST form enables physicians to order treatments patients would want, identify those treatments that patients would not want, and not provide those the patient considers “extraordinary” and excessively burdensome. A POLST form does not allow for active euthanasia or physician-assisted suicide.
Identifying treatment preferences is an important part of the initial evaluation of all patients. When dealing with an airway issue in a COPD patient, management can become complex. Ideally, the POLST form should arrive with the patient in the ED and list preferences regarding possible intensive interventions such as intubation and chest compressions. Discussing these issues with a patient in extreme distress is difficult or impossible, and in these cases, access to pertinent medical records, discussing preferences with family caregivers, and availability of a POLST form are much better ways to determine therapy.
Palliative Home Care
Patient Safety Considerations
Weight loss and associated muscle wasting are common features in patients with severe COPD, creating a high-risk situation for falls and a need for assistance with activities of daily living. The patient who is frail when discharged home from the ED requires a home-care plan before leaving the ED, and strict follow-up with the patient’s primary care provider will typically be needed within 2 to 4 weeks.
Psychological Considerations
Being mindful of the anxiety and depression that accompany the physical limitations of those with COPD is important. Mood disturbances serve as risk factors for re-hospitalization and mortality.13 Multiple palliative care interventions provide patients assistance with these issues, including the use of antidepressants that may aid sleep, stabilize mood, and stimulate appetite.
Early referral to the palliative care team will provide improved care for the patient and family. Palliative care referral will provide continued management of the physical symptoms and evaluation and treatment of the psychosocial issues that accompany COPD. Additionally, the palliative care team can assist with safe discharge planning and follow-up, including the provision of the patient’s home needs as well as the family’s ability to cope with the home setting.
Prognosis
Predicting prognosis is difficult for the COPD patient due to the highly variable illness trajectory; some patients have a low FEV1 and yet are very functional. However, assessment of the severity of lung function impairment, the frequency of exacerbations, and the need for long-term oxygen therapy helps identify those patients who are entering the final 12 months of life. The symptom burden and impact on activities of daily living in patients with COPD are comparable to those of cancer patients, and in both cases, palliative care approaches are necessary.
Predicting Morbidity and Mortality
A profile developed from observational studies can help predict 6- to 12-month morbidity and mortality in patients with advanced COPD. This profile includes the following criteria:
- Significant dyspnea;
- FEV1 <30% of predicted;
- Number of exacerbations;
- Left heart failure or other comorbidities;
- Weight loss or cachexia;
- Decreasing performance status;
- Age older than 70 years; and
- Depression.
Although additional research is required to refine and verify this profile, reviewing these data points can prompt providers to initiate discussions with patients about treatment preferences and end-of-life care.23,24
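The profile above can be expressed as a simple screening sketch. The dictionary keys are invented for illustration, and the two-per-year exacerbation cutoff is an assumption, since the profile lists only “number of exacerbations” without a threshold.

```python
def advanced_copd_risk_flags(patient: dict) -> list:
    """Return which of the observational 6- to 12-month risk criteria
    a patient meets. Keys and the exacerbation cutoff are illustrative."""
    checks = {
        "significant dyspnea": patient.get("significant_dyspnea", False),
        "FEV1 < 30% predicted": patient.get("fev1_pct", 100) < 30,
        "frequent exacerbations": patient.get("exacerbations_past_year", 0) >= 2,
        "left heart failure or other comorbidity": patient.get("comorbidity", False),
        "weight loss or cachexia": patient.get("weight_loss", False),
        "declining performance status": patient.get("declining_performance", False),
        "age > 70": patient.get("age", 0) > 70,
        "depression": patient.get("depression", False),
    }
    return [criterion for criterion, met in checks.items() if met]
```

The patient in the case scenario, for example, would flag on frequent exacerbations, comorbid heart failure, weight loss, and declining performance status.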
Palliative Performance Scale
The Palliative Performance Scale (PPS) is another scale used to predict prognosis and eligibility for hospice care.25 This score provides a patient’s estimated survival.25 For a patient with a PPS score of 50%, hospice education may be appropriate.
Case Scenario Continued
Both the BODE and GOLD criteria assisted in determining the prognosis and risk profile of the patient in our case scenario. By applying the BODE criteria, our patient’s estimated 4-year survival was under 18%. The GOLD criteria results for this patient were consistent with the BODE criteria and reflected end-stage COPD. Since this patient also had a PPS score of 50%, hospice education and care were discussed and initiated.
Conclusion
Patients with AECOPD commonly present to the ED. Such patients suffer a high burden of illness and need immediate symptom management. However, after these measures have been instituted, strong evidence suggests that these patients typically do not receive palliative care as frequently as patients with cancer or heart disease.
Management of AECOPD in the ED must include rapid treatment of dyspnea and pain, but also a determination of treatment preferences and an understanding of the prognosis. Several criteria are available to guide prognostic awareness and may help further the goals of care and disposition. Primary palliative care should be started by the ED provider for appropriate patients, with early referral to the palliative care team.
1. National Center for Health Statistics. Health, United States, 2015: With Special Feature on Racial and Ethnic Health Disparities. Hyattsville, MD: US Department of Health and Human Services; 2016. http://www.cdc.gov/nchs/hus/. Accessed October 17, 2016.
2. Khialani B, Sivakumaran P, Keijzers G, Sriram KB. Emergency department management of acute exacerbations of chronic obstructive pulmonary disease and factors associated with hospitalization. J Res Med Sci. 2014;19(4):297-303.
3. World Health Organization Web site. Chronic respiratory diseases. COPD: Definition. http://www.who.int/respiratory/copd/definition/en/. Accessed October 17, 2016.
4. Rabe KF, Hurd S, Anzueto A, et al; Global Initiative for Chronic Obstructive Lung Disease. Global strategy for the diagnosis, management, and prevention of chronic obstructive pulmonary disease: GOLD executive summary. Am J Respir Crit Care Med. 2007;176(6):532-555.
5. Fan VS, Ramsey SD, Make BJ, Martinez FJ. Physiologic variables and functional status independently predict COPD hospitalizations and emergency department visits in patients with severe COPD. COPD. 2007;4(1):29-39.
6. Cydulka RK, Rowe BH, Clark S, Emerman CL, Camargo CA Jr; MARC Investigators. Emergency department management of acute exacerbations of chronic obstructive pulmonary disease in the elderly: the Multicenter Airway Research Collaboration. J Am Geriatr Soc. 2003;51(7):908-916.
7. Strassels SA, Smith DH, Sullivan SD, et al. The costs of treating COPD in the United States. Chest. 2001;119:3.
8. Jencks SF, Williams MV, Coleman EA. Rehospitalizations among patients in the Medicare fee-for-service program. N Engl J Med. 2009;360(14):1418-1428. doi:10.1056/NEJMsa0803563.
9. Rowe BH, Bhutani M, Stickland MK, Cydulka R. Assessment and management of chronic obstructive pulmonary disease in the emergency department and beyond. Expert Rev Respir Med. 2011;5(4):549-559. doi:10.1586/ers.11.43.
10. National Institute for Clinical Excellence Web site. Chronic obstructive pulmonary disease in over 16s: diagnosis and management. Clinical Guideline CG101. https://www.nice.org.uk/Guidance/cg101. Published June 2010. Accessed October 17, 2016.
11. Christensen VL, Holm AM, Cooper B, Paul SM, Miaskowski C, Rustøen T. Differences in symptom burden among patients with moderate, severe, or very severe chronic obstructive pulmonary disease. J Pain Symptom Manage. 2016;51(5):849-859. doi:10.1016/j.jpainsymman.2015.12.324.
12. GOLD Reports. Global Initiative for Chronic Obstructive Lung Disease Web site. http://goldcopd.org/gold-reports/. Accessed October 17, 2016.
13. Funk GC, Kirchheiner K, Burghuber OC, Hartl S. BODE index versus GOLD classification for explaining anxious and depressive symptoms in patients with COPD—a cross-sectional study. Respir Res. 2009;10:1. doi:10.1186/1465-9921-10-1.
14. Bach PB, Brown C, Gelfand SE, McCrory DC; American College of Physicians-American Society of Internal Medicine; American College of Chest Physicians. Management of acute exacerbations of chronic obstructive pulmonary disease: a summary and appraisal of published evidence. Ann Intern Med. 2001;134(7):600-620.
15. McCrory DC, Brown CD. Inhaled short-acting beta2-agonists versus ipratropium for acute exacerbations of chronic obstructive pulmonary disease. Cochrane Database Syst Rev. 2001;(2):CD002984.
16. Shivanthan MC, Rajapakse S. Magnesium for acute exacerbation of chronic obstructive pulmonary disease: A systematic review of randomised trials. Ann Thorac Med. 2014;9(2):77-80. doi:10.4103/1817-1737.128844.
17. Curtis JR. Palliative and end of life care for patients with severe COPD. Eur Respir J. 2008;32(3):796-803.
18. Rocker GM, Simpson AC, Young J, et al. Opioid therapy for refractory dyspnea in patients with advanced chronic obstructive pulmonary disease: patients’ experiences and outcomes. CMAJ Open. 2013;1(1):E27-E36.
19. Jennings AL, Davies AN, Higgins JP, Gibbs JS, Broadley KE. A systematic review of the use of opioids in the management of dyspnea. Thorax. 2002;57(11):939-944.
20. Abernethy AP, Currow DC, Frith P, Fazekas BS, McHugh A, Bui C. Randomised, double blind, placebo controlled crossover trial of sustained release morphine for the management of refractory dyspnoea. BMJ. 2003;327(7414):523-528.
21. Qaseem A, Wilt TJ, Weinberger SE, et al; American College of Physicians; American College of Chest Physicians; American Thoracic Society; European Respiratory Society. Diagnosis and management of stable chronic obstructive pulmonary disease: a clinical practice guideline update from the American College of Physicians, American College of Chest Physicians, American Thoracic Society, and European Respiratory Society. Ann Intern Med. 2011;155(3):179-191. doi:10.7326/0003-4819-155-3-201108020-00008.
22. National POLST Paradigm. http://polst.org/professionals-page/?pro=1. Accessed October 17, 2016.
23. Hansen-Flaschen J. Chronic obstructive pulmonary disease: the last year of life. Respir Care. 2004;49(1):90-97; discussion 97-98.
24. Spathis A, Booth S. End of life care in chronic obstructive pulmonary disease: in search of a good death. Int J Chron Obstruct Pulmon Dis. 2008;3(1):11-29.
25. Anderson F, Downing GM, Hill J, Casorso L, Lerch N. Palliative performance scale (PPS): a new tool. J Palliat Care. 1996;12(1):5-11.
Case Scenario
A 62-year-old man who regularly presented to the ED for exacerbations of chronic obstructive pulmonary disease (COPD) after running out of his medications presented again for evaluation and treatment. His outpatient care had been poorly coordinated, and he relied on the ED to provide him with the support he needed. This presentation represented his fifth visit to the ED over the past 3 months.
The patient’s medical history was positive for asthma since childhood, tobacco use, hypertension, and a recent diagnosis of congestive heart failure (CHF). Over the past year, he had four hospital admissions, and was currently unable to walk from his bedroom to another room without becoming short of breath. He also had recently experienced a 20-lb weight loss.
At this visit, the patient complained of chest pain and lightheadedness, which he described as suffocating. Prior to these recent symptoms, he enjoyed walking in his neighborhood and talking with friends. He was an avid reader and sports fan, but admitted that he now had trouble focusing on reading and following games on television. He lived alone, and his family lived across the country. The patient further admitted that although he had attempted to quit cigarette smoking, he was unable to do so despite a 50-pack-year history. He had no completed advance health care directive and had significant challenges tending to his basic needs.
The Trajectory of COPD
Chronic obstructive pulmonary disease is a common chronic illness that causes significant morbidity and mortality. A 2016 National Center for Health Statistics report cited respiratory illness, primarily from COPD, as the third leading cause of death in the United States in 2014.1 The trajectory of this disease is marked by frequent exacerbations with partial recovery to baseline function. The burden on those living with COPD is significant and marked by a poor overall health-related quality of life (QOL). The ED has become a staging area for patients seeking care for exacerbations of COPD.2
The World Health Organization (WHO) and the Global Initiative for Chronic Obstructive Lung Disease (GOLD) have defined COPD as a spectrum of diseases including emphysema, chronic bronchitis, and chronic obstructive asthma characterized by persistent airflow limitation that is usually progressive and associated with an enhanced chronic inflammatory response to noxious particles or gases in the airways and lungs.3 Exacerbations and comorbidities contribute to the overall severity of COPD in individual patients.4
The case presented in this article illustrates the common scenario of a patient whose COPD has become severe and highly symptomatic with declining function to the point where he requires home support. His physical decline had been rapid and resulted in many unmet needs. When a patient such as this presents for emergent care, he must first be stabilized; then a care plan will need to be developed prior to discharge.
Management Goals
The overall goals of treating COPD are based on preserving function and are not curative in nature. Chronic obstructive pulmonary disease is a progressive illness that will intensify over time.5 As such, palliative care services are warranted. However, many patients with COPD do not receive palliative care services compared to patients with other serious and life-limiting diseases such as cancer and heart disease.
Acute Exacerbations of COPD
Incidence
The frequency of acute exacerbations of COPD (AECOPD) increases with age, productive cough, long-standing COPD, previous hospitalizations related to COPD, eosinophilia, and comorbidities (eg, CHF). Patients with moderate to severe COPD and a history of prior exacerbations were found to have a higher likelihood of future exacerbations. From a quality and cost perspective, it may be useful to identify high-risk patients and strengthen their outpatient program to lessen the need for ED care and more intensive support.6,7
In our case scenario, the patient could have been stabilized at home with a well-controlled plan and home support, which would have resulted in an improved QOL and more time free from his high symptom burden.
Causes
Bacterial and viral respiratory infections are the most common causes of AECOPD; environmental pollution and pulmonary embolism are also triggers. Typically, patients with AECOPD present to the ED up to several times a year2 and represent the third most common cause of 30-day readmissions to the hospital.8 Prior exacerbations, dyspnea, and other medical comorbidities are also risk factors for more frequent hospital visits.
Presenting Signs and Symptoms
Each occurrence of AECOPD represents a worsening of a patient’s respiratory symptoms beyond normal variations. This might include increases in cough, sputum production, and dyspnea. The goal in caring for a person with an AECOPD is to stabilize the acute event and provide a treatment plan. The range of acuity for moderate to severe disease makes devising an appropriate treatment plan challenging, and after implementing the best plans, the patient’s course may be characterized by a prolonged cycle of admissions and readmissions without substantial return to baseline.
Management
In practice, ED management of AECOPD in older adults typically differs significantly from published guideline recommendations,9 which may result in poor outcomes related to shortcomings in quality of care. Better adherence to guideline recommendations when caring for elderly patients with COPD may lead to improved clinical outcomes and better resource usage.
Risk Stratification
Complicating ED management is the challenge of determining the severity of illness and degree of the exacerbation. Airflow obstruction alone is not sufficient to predict outcomes, as any given degree of obstruction, measured as the forced expiratory volume in the first second (FEV1), is associated with a wide range of functional performance. Moreover, peak-flow measurements are not useful in the setting of AECOPD, as opposed to their use in acute asthma exacerbations, and are not predictive of changes in clinical status.
GOLD and NICE Criteria
Guidelines have been developed and widely promoted to assist ED, hospital, and community clinicians in providing evidence-based management for COPD patients. The GOLD criteria and the National Institute for Clinical Excellence (NICE) guideline are both clinical guidelines on the management of COPD.10
Psychological Considerations
Being mindful of the anxiety and depression that accompany the physical limitations of those with COPD is important. Mood disturbances serve as risk factors for re-hospitalization and mortality.13Multiple palliative care interventions provide patients assistance with these issues, including the use of antidepressants that may aid sleep, stabilize mood, and stimulate appetite.
Early referral to the palliative care team will provide improved care for the patient and family. Palliative care referral will provide continued management of the physical symptoms and evaluation and treatment of the psychosocial issues that accompany COPD. Additionally, the palliative care team can assist with safe discharge planning and follow-up, including the provision of the patient’s home needs as well as the family’s ability to cope with the home setting.
Prognosis
Predicting prognosis is difficult for the COPD patient due to the highly variable illness trajectory. Some patients have a low FEV1 and yet are very functional. However, assessment of severity of lung function impairment, frequency of exacerbations, and need for long-term oxygen therapy helps identify those patients who are entering the final 12 months of life. Evaluating symptom burden and impact on activities of daily living for patients with COPD is comparable to those of cancer patients, and in both cases, palliative care approaches are necessary.
Predicting Morbidity and Mortality
A profile developed from observational studies can help predict 6- to 12-month morbidity and mortality in patients with advanced COPD. This profile includes the following criteria:
- Significant dyspnea;
- FEV1 <30%;
- Number of exacerbations;
- Left heart failure or other comorbidities;
- Weight loss or cachexia;
- Decreasing performance status;
- Age older than 70 years; and
- Depression.
Although additional research is required to refine and verify this profile, reviewing these data points can prompt providers to initiate discussions with patients about treatment preferences and end-of-life care.23,24
Palliative Performance Scale
The Palliative Performance Scale (PPS) is another scale used to predict prognosis and eligibility for hospice care.25 This score provides a patient’s estimated survival.25 For a patient with a PPS score of 50%, hospice education may be appropriate.
Case Scenario Continued
Both the BODE and GOLD criteria scores assisted in determining prognosis and risk profiles of the patient in our case scenario. By applying the BODE criteria, our patient had a 4-year survival benefit of under 18%. The GOLD criteria results for this patient also were consistent with the BODE criteria and reflected end-stage COPD. Since this patient also had a PPS score of 50%, hospice education and care were discussed and initiated.
Conclusion
Patients with AECOPD commonly present to the ED. Such patients suffer with a high burden of illness and a need for immediate symptom management. However, after these measures have been instituted, strong evidence suggests that these patients typically do not receive palliative care with the same frequency compared to cancer or heart disease patients.
Management of AECOPD in the ED must include rapid treatment of dyspnea and pain, but also a determination of treatment preferences and an understanding of the prognosis. Several criteria are available to guide prognostic awareness and may help further the goals of care and disposition. Primary palliative care should be started by the ED provider for appropriate patients, with early referral to the palliative care team.
Case Scenario
A 62-year-old man who regularly presented to the ED with exacerbations of chronic obstructive pulmonary disease (COPD) after running out of his medications returned for evaluation and treatment. His outpatient care had been poorly coordinated, and he relied on the ED to provide him with the support he needed. This presentation represented his fifth visit to the ED over the past 3 months.
The patient’s medical history was positive for asthma since childhood, tobacco use, hypertension, and a recent diagnosis of congestive heart failure (CHF). Over the past year, he had four hospital admissions, and was currently unable to walk from his bedroom to another room without becoming short of breath. He also had recently experienced a 20-lb weight loss.
At this visit, the patient complained of chest pain and lightheadedness, which he described as suffocating. Prior to these recent symptoms, he enjoyed walking in his neighborhood and talking with friends. He was an avid reader and sports fan, but admitted that he now had trouble focusing on reading and following games on television. He lived alone, and his family lived across the country. The patient further admitted that although he had attempted to quit cigarette smoking, he was unable to give up his 50-pack-year habit. He had no completed advance health care directive and had significant challenges tending to his basic needs.
The Trajectory of COPD
Chronic obstructive pulmonary disease is a common chronic illness that causes significant morbidity and mortality. A 2016 National Center for Health Statistics report cited respiratory illness, primarily from COPD, as the third leading cause of death in the United States in 2014.1 The trajectory of this disease is marked by frequent exacerbations with partial recovery to baseline function. The burden of those living with COPD is significant and marked by a poor overall health-related quality of life (QOL). The ED has become a staging area for patients seeking care for exacerbations of COPD.2
The World Health Organization (WHO) and the Global Initiative for Chronic Obstructive Lung Disease (GOLD) have defined COPD as a spectrum of diseases including emphysema, chronic bronchitis, and chronic obstructive asthma characterized by persistent airflow limitation that is usually progressive and associated with an enhanced chronic inflammatory response to noxious particles or gases in the airways and lungs.3 Exacerbations and comorbidities contribute to the overall severity of COPD in individual patients.4
The case presented in this article illustrates the common scenario of a patient whose COPD has become severe and highly symptomatic with declining function to the point where he requires home support. His physical decline had been rapid and resulted in many unmet needs. When a patient such as this presents for emergent care, he must first be stabilized; then a care plan will need to be developed prior to discharge.
Management Goals
The overall goals of treating COPD are based on preserving function and are not curative in nature. Chronic obstructive pulmonary disease is a progressive illness that will intensify over time.5 As such, palliative care services are warranted. However, many patients with COPD do not receive palliative care services compared to patients with such other serious and life-limiting disease as cancer and heart disease.
Acute Exacerbations of COPD
Incidence
The frequency of acute exacerbations of COPD (AECOPD) increases with age, productive cough, long-standing COPD, previous hospitalizations related to COPD, eosinophilia, and comorbidities (eg, CHF). Patients with moderate to severe COPD and a history of prior exacerbations were found to have a higher likelihood of future exacerbations. From a quality and cost perspective, it may be useful to identify high-risk patients and strengthen their outpatient program to lessen the need for ED care and more intensive support.6,7
In our case scenario, the patient could have been stabilized at home with a well-controlled plan and home support, which would have resulted in an improved QOL and more time free from his high symptom burden.
Causes
Bacterial and viral respiratory infections are the most likely cause of AECOPD. Environmental pollution and pulmonary embolism are also triggers. Typically, patients with AECOPD present to the ED up to several times a year2 and represent the third most common cause of 30-day readmissions to the hospital.8 Prior exacerbations, dyspnea, and other medical comorbidities are also risk factors for more frequent hospital visits.
Presenting Signs and Symptoms
Each occurrence of AECOPD represents a worsening of a patient’s respiratory symptoms beyond normal variations. This might include increases in cough, sputum production, and dyspnea. The goal in caring for a person with an AECOPD is to stabilize the acute event and provide a treatment plan. The range of acuity for moderate to severe disease makes devising an appropriate treatment plan challenging, and after implementing the best plans, the patient’s course may be characterized by a prolonged cycle of admissions and readmissions without substantial return to baseline.
Management
In practice, ED management of AECOPD in older adults typically differs significantly from published guideline recommendations,9 which may result in poor outcomes related to shortcomings in quality of care. Better adherence to guideline recommendations when caring for elderly patients with COPD may lead to improved clinical outcomes and better resource usage.
Risk Stratification
Complicating ED management is the challenge of determining the severity of illness and degree of the exacerbation. Airflow obstruction alone is not sufficient to predict outcomes, as any particular measure of obstruction is associated with a spectrum of forced expiratory volume in the first second (FEV1) and varying performance. Moreover, peak-flow measurements are not useful in the setting of AECOPD, as opposed to their use in acute asthma exacerbations, and are not predictive of changes in clinical status.
GOLD and NICE Criteria
Guidelines have been developed and widely promoted to assist ED, hospital, and community clinicians in providing evidence-based management for COPD patients. The GOLD criteria and the National Institute for Clinical Excellence (NICE) guideline are two widely used resources for the management of COPD.10
Although well recognized and commonly used, the original GOLD criteria did not take into account the frequency and importance of the extrapulmonary manifestations of COPD in predicting outcome. Typically, those with severe or very severe COPD have an average of 12 co-occurring symptoms, an even greater number of signs and symptoms than those occurring in patients with cancer or heart or renal disease.11
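For orientation, the spirometric component of the original GOLD classification can be expressed as a short classifier. This is an illustrative sketch using the widely published 2007 thresholds (FEV1 as a percentage of predicted, with a post-bronchodilator FEV1/FVC ratio below 0.70 confirming airflow limitation); it is not a clinical tool, and the function name is an assumption for this example:

```python
def gold_stage(fev1_pct_predicted: float, fev1_fvc_ratio: float):
    """Return the spirometric GOLD stage (2007 scheme), or None.

    A post-bronchodilator FEV1/FVC ratio below 0.70 is required to
    confirm persistent airflow limitation before staging by FEV1.
    """
    if fev1_fvc_ratio >= 0.70:
        return None  # no fixed airflow limitation; not staged
    if fev1_pct_predicted >= 80:
        return "I (mild)"
    if fev1_pct_predicted >= 50:
        return "II (moderate)"
    if fev1_pct_predicted >= 30:
        return "III (severe)"
    return "IV (very severe)"
```

For example, `gold_stage(25, 0.55)` returns `"IV (very severe)"`, while a preserved FEV1/FVC ratio returns `None` regardless of FEV1.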
BODE Criteria
The body mass index, airflow obstruction, dyspnea, and exercise capacity (BODE) criteria assess and predict the health-related QOL and mortality risk for patients with COPD. Risk is adjusted based on four factors: weight, airway obstruction, dyspnea, and exercise capacity (ie, 6-minute walk distance).13
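The BODE calculation itself is simple point-counting over the four components. The sketch below uses the component thresholds and quartile survival figures from the original BODE publication (Celli et al, NEJM 2004); these values are an assumption here, since the article does not reproduce them, and the sketch is illustrative only:

```python
def bode_points(bmi: float, fev1_pct: float, mmrc_dyspnea: int,
                six_min_walk_m: float) -> int:
    """Compute the BODE index (0-10) from its four components."""
    # BMI is the only component scored 0-1 (>21 scores 0, <=21 scores 1)
    points = 0 if bmi > 21 else 1

    # FEV1 (% of predicted): 0-3 points
    if fev1_pct >= 65:
        points += 0
    elif fev1_pct >= 50:
        points += 1
    elif fev1_pct >= 36:
        points += 2
    else:
        points += 3

    # mMRC dyspnea grade (0-4): grades 0-1 score 0, then one point per grade
    points += max(0, mmrc_dyspnea - 1)

    # 6-minute walk distance (meters): 0-3 points
    if six_min_walk_m >= 350:
        points += 0
    elif six_min_walk_m >= 250:
        points += 1
    elif six_min_walk_m >= 150:
        points += 2
    else:
        points += 3
    return points

def bode_four_year_survival(points: int) -> float:
    """Approximate 4-year survival by BODE quartile (Celli et al)."""
    if points <= 2:
        return 0.80  # quartile 1
    if points <= 4:
        return 0.67  # quartile 2
    if points <= 6:
        return 0.57  # quartile 3
    return 0.18      # quartile 4 (7-10 points)
```

A patient like the one in the case scenario, with cachexia, severe obstruction, grade 4 dyspnea, and minimal walking tolerance, would score in the highest quartile, corresponding to a 4-year survival of roughly 18%.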
Initial Evaluation and Work-Up
As previously noted, when an AECOPD patient arrives to the ED, the first priority is to stabilize the patient and initiate treatment. In this respect, initial identification of the patient’s pulse oxygen saturation (SpO2) is important.
Laboratory Evaluation
In cases of respiratory failure, obtaining arterial blood gas (ABG) values is critical. The ABG test will assist in determining acute exacerbations of chronic hypercapnia and the need for ventilatory support. When CHF is a consideration, a plasma B-type natriuretic peptide level is useful.
Imaging Studies
A chest radiograph may be useful in the initial evaluation to identify abnormalities, including barotrauma (ie, pneumothorax) and infiltrates. Additionally, in patients with comorbidities, it is important to assess cardiac status, and a chest X-ray may assist in identification of pulmonary edema, pleural effusions, and cardiomegaly. If the radiograph does show a pulmonary infiltrate (ie, pneumonia), it will help identify the probable triggers, but even in these instances, a sputum Gram stain will not assist in the diagnosis.
Treatment
Relieving airflow obstruction is achieved with inhaled short-acting bronchodilators and systemic glucocorticoids, by treating infection, and by providing supplemental oxygen and ventilatory support.
Bronchodilators
The short-acting beta-adrenergic agonists (eg, albuterol) act rapidly and are effective in producing bronchodilation. Nebulized therapy may be most comfortable for the acutely ill patient. Typical dosing is 2.5 mg albuterol diluted to 3 cc by nebulizer every hour. Higher doses are not more effective, and there is no evidence of a higher response rate from constant nebulized therapy, which can cause anxiety and tachycardia in patients.14 Anticholinergic agents (eg, ipratropium) are often added despite unclear data regarding clinical advantage. In one study evaluating the effectiveness of adding ipratropium to albuterol, patients receiving the combination had the same improvement in FEV1 at 90 minutes as those receiving either agent alone.15 Patients receiving ipratropium alone had the lowest rate of reported side effects.15
Systemic Glucocorticoids
Short-course systemic glucocorticoids are an important addition to treatment and have been found to improve spirometry and decrease relapse rate. The oral and intravenous (IV) routes provide the same benefit. For the acutely ill patient with challenges swallowing, the IV route is preferred. The optimal dose is not clear, but hydrocortisone doses of 100 mg to 125 mg every 6 hours for 3 days are effective, as is oral prednisone 30 mg per day for 14 days, or 60 mg per day for 3 days with a taper.
Antibiotic Therapy
Antibiotics are indicated for patients with moderate to severe AECOPD who are ill enough to be admitted to the hospital. Empiric broad spectrum treatment is recommended. The initial antibiotic regimen should target likely bacterial pathogens (Haemophilus influenzae, Moraxella catarrhalis, and Streptococcus pneumoniae in most patients) and take into account local patterns of antibiotic resistance. Fluoroquinolones or third-generation cephalosporins generally provide sufficient coverage. For patients experiencing only a mild exacerbation, antibiotics are not warranted.
Magnesium Sulfate
Other supplemental medications that have been evaluated include magnesium sulfate for bronchial smooth muscle relaxation. Studies have found that while magnesium is helpful in asthma, results are mixed with COPD.16
Supplemental Oxygen
Oxygen therapy is important during an AECOPD episode. Often, concerns arise about decreasing respiratory drive, which is typically driven by hypoxia in patients who have chronic hypercapnia. Arterial blood gas determinations are important in managing a patient’s respiratory status and will assist in determining actual oxygenation and any coexistent metabolic disturbances.
Noninvasive Ventilation. Oxygen can be administered efficiently by a venturi mask, which delivers precise fractions of oxygen, or by nasal cannula. A facemask is less comfortable, but is available for higher oxygen requirements, providing up to 55% oxygen, while a nonrebreather mask delivers up to 90% oxygen.
When necessary, noninvasive positive pressure ventilation (NPPV) improves outcomes for those with severe dyspnea and signs of respiratory fatigue manifested as increased work of breathing. Noninvasive positive pressure ventilation can improve clinical outcomes and is the ventilator mode of choice for those patients with COPD. Indications include severe dyspnea with signs of increased work of breathing and respiratory acidosis (arterial pH <7.35) and partial pressure of arterial carbon dioxide (PaCO2) >45 mm Hg.
Whenever possible, NPPV should be initiated with a triggered mode to allow spontaneous breaths. Inspiratory pressure of 8 cm to 12 cm H2O and expiratory pressure of 3 cm to 5 cm H2O are recommended.
Mechanical Ventilation. Mechanical ventilation is often undesirable because it may be extraordinarily difficult to wean a patient off the device and permit safe extubation. However, if a patient cannot be stabilized with NPPV, intubation and mechanical ventilation must be considered. Typically, this occurs when there is severe respiratory distress, tachypnea >30 breaths/min, accessory muscle use, and altered mentation.
Goals of intubation/mechanical ventilation include correcting oxygenation and severe respiratory acidosis as well as reducing the work of breathing. Barotrauma is a significant risk when patients with COPD require mechanical ventilation. Volume-limited modes of ventilation are commonly used, while pressure support or pressure-limited modes are less suitable for patients with airflow limitation. Again, invasive ventilation should only be administered if a patient cannot tolerate NPPV.
Palliative Care in the ED
Palliative care is an approach that improves the QOL of patients and their families facing the issues associated with life-threatening illness, through the prevention and relief of suffering by means of early identification and accurate assessment and treatment of pain and physical, psychosocial, and spiritual problems.3 This approach to care is warranted for COPD patients given the myriad burdensome symptoms and the functional decline that occur.17
Palliative care expands traditional treatment goals to include enhancing QOL; helping with medical decision making; and identifying the goals of care. Palliative care is provided by board-certified physicians for the most complex of cases. However, the primary practice of palliative care must be delivered at the bedside by the treating provider. Managing pain, dyspnea, nausea, vomiting, and changes in bowel habits, as well as discussing goals of care, are among the basic palliative care skills all providers need to have and apply when indicated.
Palliative Care for Dyspnea
Opioids. Primary palliative care in the ED includes the appropriate use of low-dose oral and parenteral opioids to treat dyspnea in AECOPD. The use of a low-dose opioid, such as morphine 2 mg IV, titrated up to a desired response, is a safe and effective practice.18 Note the 2-mg starting dose is considered low-dose.19
With respect to managing dyspnea in AECOPD patients, nebulized opioids have not been found to be better than nebulized saline. More specific data regarding the use of oral opioids for managing refractory dyspnea in patients with predominantly COPD have been recently published: Long-acting morphine 20 mg once daily provides symptomatic relief in refractory dyspnea in the community setting. For the opioid-naïve patient, a lower dose is recommended.20
Oxygenation. There is no hard evidence of the effectiveness of oxygen in the palliation of breathlessness. Humidified air is effective initially, as is providing a fan at the bedside. Short-burst oxygen therapy should only be considered for episodes of severe breathlessness in patients whose COPD is not relieved by other treatments. Oxygen should continue to be prescribed only if an improvement in breathlessness following therapy has been documented. The American Thoracic Society recommends continuous oxygen therapy in patients with COPD who have severe resting hypoxemia (PaO2 ≤55 mm Hg or SpO2 ≤88%).21
POLST Form
The Physician Orders for Life-Sustaining Treatment (POLST) form is a set of medical orders, similar to the “do not resuscitate” (allow natural death) order. A POLST form is not an advance directive and does not serve as a substitute for a patient’s designation of a health care agent or durable power of attorney for health care.22
The POLST form enables physicians to order treatments patients would want, identify those treatments that patients would not want, and not provide those the patient considers “extraordinary” and excessively burdensome. A POLST form does not allow for active euthanasia or physician-assisted suicide.
Identifying treatment preferences is an important part of the initial evaluation of all patients. When dealing with an airway issue in a COPD patient, management can become complex. Ideally, the POLST form should arrive with the patient in the ED and list preferences regarding possible intensive interventions such as intubation and chest compressions. Discussing these issues with a patient in extreme distress is difficult or impossible, and in these cases, access to pertinent medical records, discussing preferences with family caregivers, and availability of a POLST form are much better ways to determine therapy.
Palliative Home Care
Patient Safety Considerations
Weight loss and associated muscle wasting are common features in patients with severe COPD, creating a high-risk situation for falls and a need for assistance with activities of daily living. The patient who is frail when discharged home from the ED requires a home-care plan before leaving the ED, and strict follow-up with the patient’s primary care provider will typically be needed within 2 to 4 weeks.
Psychological Considerations
Being mindful of the anxiety and depression that accompany the physical limitations of those with COPD is important. Mood disturbances serve as risk factors for re-hospitalization and mortality.13 Multiple palliative care interventions provide patients assistance with these issues, including the use of antidepressants that may aid sleep, stabilize mood, and stimulate appetite.
Early referral to the palliative care team will provide improved care for the patient and family. Palliative care referral will provide continued management of the physical symptoms and evaluation and treatment of the psychosocial issues that accompany COPD. Additionally, the palliative care team can assist with safe discharge planning and follow-up, including the provision of the patient’s home needs as well as the family’s ability to cope with the home setting.
Prognosis
Predicting prognosis is difficult for the COPD patient due to the highly variable illness trajectory. Some patients have a low FEV1 and yet are very functional. However, assessment of severity of lung function impairment, frequency of exacerbations, and need for long-term oxygen therapy helps identify those patients who are entering the final 12 months of life. The symptom burden and impact on activities of daily living for patients with COPD are comparable to those of cancer patients, and in both cases, palliative care approaches are necessary.
Predicting Morbidity and Mortality
A profile developed from observational studies can help predict 6- to 12-month morbidity and mortality in patients with advanced COPD. This profile includes the following criteria:
- Significant dyspnea;
- FEV1 <30%;
- Number of exacerbations;
- Left heart failure or other comorbidities;
- Weight loss or cachexia;
- Decreasing performance status;
- Age older than 70 years; and
- Depression.
Although additional research is required to refine and verify this profile, reviewing these data points can prompt providers to initiate discussions with patients about treatment preferences and end-of-life care.23,24
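As a practical illustration, this profile can be treated as a simple checklist tally. The criterion labels below are ad hoc paraphrases of the list above, and the function is a hypothetical sketch rather than a validated instrument:

```python
# Hypothetical checklist tally of the observational prognostic profile;
# labels are ad hoc and this is not a validated score.
PROFILE_CRITERIA = (
    "significant dyspnea",
    "FEV1 <30% predicted",
    "frequent exacerbations",
    "left heart failure or other comorbidities",
    "weight loss or cachexia",
    "decreasing performance status",
    "age >70 years",
    "depression",
)

def criteria_met(findings: set) -> int:
    """Count how many profile criteria a patient meets.

    A higher count suggests higher 6- to 12-month morbidity and
    mortality risk and should prompt a goals-of-care discussion.
    """
    return sum(1 for criterion in PROFILE_CRITERIA if criterion in findings)
```

The patient in the case scenario, for example, meets several criteria (dyspnea, frequent exacerbations, CHF, weight loss, and declining performance status), which by itself would justify opening an end-of-life conversation.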
Palliative Performance Scale
The Palliative Performance Scale (PPS) is another scale used to predict prognosis and eligibility for hospice care.25 This score provides a patient’s estimated survival.25 For a patient with a PPS score of 50%, hospice education may be appropriate.
Case Scenario Continued
Both the BODE and GOLD criteria scores assisted in determining prognosis and risk profiles of the patient in our case scenario. By applying the BODE criteria, our patient had an estimated 4-year survival of under 18%. The GOLD criteria results for this patient also were consistent with the BODE criteria and reflected end-stage COPD. Since this patient also had a PPS score of 50%, hospice education and care were discussed and initiated.
Conclusion
Patients with AECOPD commonly present to the ED. Such patients suffer from a high burden of illness and a need for immediate symptom management. However, after these measures have been instituted, strong evidence suggests that these patients typically do not receive palliative care as frequently as patients with cancer or heart disease.
Management of AECOPD in the ED must include rapid treatment of dyspnea and pain, but also a determination of treatment preferences and an understanding of the prognosis. Several criteria are available to guide prognostic awareness and may help further the goals of care and disposition. Primary palliative care should be started by the ED provider for appropriate patients, with early referral to the palliative care team.
1. National Center for Health Statistics. Health, United States 2015 With Special Feature on Racial and Ethnic Health Disparities. Hyattsville, MD: US Dept. Health and Human Services; 2016. http://www.cdc.gov/nchs/hus/. Accessed October 17, 2016.
2. Khialani B, Sivakumaran P, Keijzers G, Sriram KB. Emergency department management of acute exacerbations of chronic obstructive pulmonary disease and factors associated with hospitalization. J Res Med Sci. 2014;19(4):297-303.
3. World Health Organization Web site. Chronic respiratory diseases. COPD: Definition. http://www.who.int/respiratory/copd/definition/en/. Accessed October 17, 2016.
4. Rabe KF, Hurd S, Anzueto A, et al; Global Initiative for Chronic Obstructive Lung Disease. Global strategy for the diagnosis, management, and prevention of chronic obstructive pulmonary disease: GOLD executive summary. Am J Respir Crit Care Med. 2007;176(6):532-555.
5. Fan VS, Ramsey SD, Make BJ, Martinez FJ. Physiologic variables and functional status independently predict COPD hospitalizations and emergency department visits in patients with severe COPD. COPD. 2007;4(1):29-39.
6. Cydulka RK, Rowe BH, Clark S, Emerman CL, Camargo CA Jr; MARC Investigators. Emergency department management of acute exacerbations of chronic obstructive pulmonary disease in the elderly: the Multicenter Airway Research Collaboration. J Am Geriatr Soc. 2003;51(7):908-916.
7. Strassels SA, Smith DH, Sullivan SD, et al. The costs of treating COPD in the United States. Chest. 2001;119:3.
8. Jencks SF, Williams MV, Coleman EA. Rehospitalizations among patients in the Medicare fee-for-service program. N Engl J Med. 2009;360(14):1418-1428. doi:10.1056/NEJMsa0803563.
9. Rowe BH, Bhutani M, Stickland MK, Cydulka R. Assessment and management of chronic obstructive pulmonary disease in the emergency department and beyond. Expert Rev Respir Med. 2011;5(4):549-559. doi:10.1586/ers.11.43.
10. National Institute for Clinical Excellence Web site. Chronic obstructive pulmonary disease in over 16s: diagnosis and management. Clinical Guideline CG101. https://www.nice.org.uk/Guidance/cg101. Published June 2010. Accessed October 17, 2016.
11. Christensen VL, Holm AM, Cooper B, Paul SM, Miaskowski C, Rustøen T. Differences in symptom burden among patients with moderate, severe, or very severe chronic obstructive pulmonary disease. J Pain Symptom Manage. 2016;51(5):849-859. doi:10.1016/j.jpainsymman.2015.12.324.
12. GOLD Reports. Global Initiative for Chronic Obstructive Lung Disease Web site. http://goldcopd.org/gold-reports/. Accessed October 17, 2016.
13. Funk GC, Kirchheiner K, Burghuber OC, Hartl S. BODE index versus GOLD classification for explaining anxious and depressive symptoms in patients with COPD—a cross-sectional study. Respir Res. 2009;10:1. doi:10.1186/1465-9921-10-1.
14. Bach PB, Brown C, Gelfand SE, McCrory DC; American College of Physicians-American Society of Internal Medicine; American College of Chest Physicians. Management of acute exacerbations of chronic obstructive pulmonary disease: a summary and appraisal of published evidence. Ann Intern Med. 2001;134(7):600-620.
15. McCrory DC, Brown CD. Inhaled short-acting beta 2-agonists versus ipratropium for acute exacerbations of chronic obstructive pulmonary disease. Cochrane Database Syst Rev. 2001;(2):CD002984.
16. Shivanthan MC, Rajapakse S. Magnesium for acute exacerbation of chronic obstructive pulmonary disease: A systematic review of randomised trials. Ann Thorac Med. 2014;9(2):77-80. doi:10.4103/1817-1737.128844.
17. Curtis JR. Palliative and end of life care for patients with severe COPD. Eur Respir J. 2008;32(3):796-803.
18. Rocker GM, Simpson AC, Young J, et al. Opioid therapy for refractory dyspnea in patients with advanced chronic obstructive pulmonary disease: patients’ experiences and outcomes. CMAJ Open. 2013;1(1):E27-E36.
19. Jennings AL, Davies AN, Higgins JP, Gibbs JS, Broadley KE. A systematic review of the use of opioids in the management of dyspnea. Thorax. 2002;57(11):939-944.
20. Abernethy AP, Currow DC, Frith P, Fazekas BS, McHugh A, Bui C. Randomised, double blind, placebo controlled crossover trial of sustained release morphine for the management of refractory dyspnoea. BMJ. 2003;327(7414):523-528.
21. Qaseem A, Wilt TJ, Weinberger SE, et al; American College of Physicians; American College of Chest Physicians; American Thoracic Society; European Respiratory Society. Diagnosis and management of stable chronic obstructive pulmonary disease: a clinical practice guideline update from the American College of Physicians, American College of Chest Physicians, American Thoracic Society, and European Respiratory Society. Ann Intern Med. 2011;155(3):179-191. doi:10.7326/0003-4819-155-3-201108020-00008.
22. National POLST Paradigm. http://polst.org/professionals-page/?pro=1. Accessed October 17, 2016.
23. Hansen-Flaschen J. Chronic obstructive pulmonary disease: the last year of life. Respir Care. 2004;49(1):90-97; discussion 97-98.
24. Spathis A, Booth S. End of life care in chronic obstructive pulmonary disease: in search of a good death. Int J Chron Obstruct Pulmon Dis. 2008;3(1):11-29.
25. Anderson F, Downing GM, Hill J, Casorso L, Lerch N. Palliative performance scale (PPS): a new tool. J Palliat Care. 1996;12(1):5-11.
1. National Center for Health Statistics. Health, United States 2015 With Special Feature on Racial and Ethnic Health Disparities. Hyattsville, MD: US Dept. Health and Human Services; 2016. http://www.cdc.gov/nchs/hus/. Accessed October 17, 2016.
2. Khialani B, Sivakumaran P, Keijzers G, Sriram KB. Emergency department management of acute exacerbations of chronic obstructive pulmonary disease and factors associated with hospitalization. J Res Med Sci . 2014;19(4):297-303.
3. World Health Organization Web site. Chronic respiratory diseases. COPD: Definition. http://www.who.int/respiratory/copd/definition/en/. Accessed October 17, 2016.
4. Rabe KF, Hurd S, Anzueto A, et al; Global Initiative for Chronic Obstructive Lung Disease. Global strategy for the diagnosis, management, and prevention of chronic obstructive pulmonary disease: GOLD executive summary. Am J Respir Crit Care Med . 2007;176(6):532-555.
5. Fan VS, Ramsey SD, Make BJ, Martinez FJ. Physiologic variables and functional status independently predict COPD hospitalizations and emergency department visits in patients with severe COPD. COPD . 2007;4(1):29-39.
6. Cydulka RK, Rowe BH, Clark S, Emerman CL, Camargo CA Jr; MARC Investigators. Emergency department management of acute exacerbations of chronic obstructive pulmonary disease in the elderly: the Multicenter Airway Research Collaboration. J Am Geriatr Soc . 2003;51(7):908-916.
7. Strassels SA, Smith DH, Sullivan SD, et al. The costs of treating COPD in the United States. Chest . 2001;119:3.
8. Jencks SF, Williams MV, Coleman EA. Rehospitalizations among patients in the Medicare fee-for-service program. N Engl J Med . 2009;360(14):1418-1428. doi:10.1056/NEJMsa0803563.
9. Rowe BH, Bhutani M, Stickland MK, Cydulka R. Assessment and management of chronic obstructive pulmonary disease in the emergency department and beyond. Expert Rev Respir Med . 2011;5(4):549-559. doi:10.1586/ers.11.43.
10. National Institute for Clinical Excellence Web site. Chronic obstructive pulmonary disease in over 16s: diagnosis and management. Clinical Guideline CG101. https://www.nice.org.uk/Guidance/cg101. Published June 2010. Accessed October 17, 2016.
11. Christensen VL, Holm AM, Cooper B, Paul SM, Miaskowski C, Rustøen T. Differences in symptom burden among patients with moderate, severe, or very severe chronic obstructive pulmonary disease. J Pain Symptom Manage . 2016;51(5):849-859. doi:10.1016/j.jpainsymman.2015.12.324.
12. GOLD Reports. Global Initiative for Chronic Obstructive Lung Disease Web site. http://goldcopd.org/gold-reports/. Accessed October 17, 2016.
13. Funk GC, Kirchheiner K, Burghuber OC, Hartl S. BODE index versus GOLD classification for explaining anxious and depressive symptoms in patients with COPD—a cross-sectional study. Respir Res . 2009;10:1. doi:10.1186/1465-9921-10-1.
14. Bach PB, Brown C, Gelfand SE, McCrory DC; American College of Physicians-American Society of Internal Medicine; American College of Chest Physicians. Management of acute exacerbations of chronic obstructive pulmonary disease: a summary and appraisal of published evidence. Ann Intern Med . 2001;134(7):600-620.
15. McCrory DC, Brown CD. Inhaled short-acting beta 2-agonists versus ipratropium for acute exacerbations of chronic obstructive pulmonary disease. Cochrane Database Syst Rev . 2001;(2):CD002984.
16. Shivanthan MC, Rajapakse S. Magnesium for acute exacerbation of chronic obstructive pulmonary disease: A systematic review of randomised trials. Ann Thorac Med . 2014;9(2):77-80. doi:10.4103/1817-1737.128844.
17. Curtis JR. Palliative and end of life care for patients with severe COPD. Eur Respir J . 2008;32(3):796-803.
18. Rocker GM, Simpson AC, Young J, et al. Opioid therapy for refractory dyspnea in patients with advanced chronic obstructive pulmonary disease: patients’ experiences and outcomes. CMAJ Open . 2013;1(1):E27-E36.
19. Jennings AL, Davies AN, Higgins JP, Gibbs JS, Broadley KE. A systematic review of the use of opioids in the management of dyspnea. Thorax . 2002;57(11):939-944.
20. Abernethy AP, Currow DC, Frith P, Fazekas BS, McHugh A, Bui C. Randomised, double blind, placebo controlled crossover trial of sustained release morphine for the management of refractory dyspnoea. BMJ . 2003;327(7414):523-528.
21. Qaseem A, Wilt TJ, Weinberger SE, et al; American College of Physicians; American College of Chest Physicians; American Thoracic Society; European Respiratory Society. Diagnosis and management of stable chronic obstructive pulmonary disease: a clinical practice guideline update from the American College of Physicians, American College of Chest Physicians, American Thoracic Society, and European Respiratory Society. Ann Intern Med . 2011;155(3):179-191. doi:10.7326/0003-4819-155-3-201108020-00008.
22. National POLST Paradigm. http://polst.org/professionals-page/?pro=1. Accessed October 17, 2016.
23. Hansen-Flaschen J. Chronic obstructive pulmonary disease: the last year of life. Respir Care. 2004;49(1):90-97; discussion 97-98.
24. Spathis A, Booth S. End of life care in chronic obstructive pulmonary disease: in search of a good death. Int J Chron Obstruct Pulmon Dis . 2008;3(1):11-29.
25. Anderson F, Downing GM, Hill J, Casorso L, Lerch N. Palliative performance scale (PPS): a new tool. J Palliat Care . 1996;12(1):5-11.
First EDition: Regulation of Freestanding EDs, more
Regulation of Freestanding EDs Varies Widely by State
BY JEFF BAUER
There is great variation in state regulations concerning freestanding EDs, with no standard requirements for location, staffing patterns, or clinical capabilities, according to a recent study published in Health Affairs.
Researchers used information from state departments of health and other state agencies to compile a list of freestanding EDs in the United States. They identified state policies and regulations regarding freestanding EDs by contacting state departments of health, by searching the departments’ Web sites for regulations, and by searching an online legal research database.
Overall, the study identified 400 freestanding EDs in 32 states; Texas and Ohio had the highest number of such facilities. Twenty-three states had hospitals that operated affiliated freestanding EDs. Twenty-one states had policies concerning freestanding EDs. These policies were either incorporated into hospital regulations or listed independently. Among states with such regulations, there was great variation in the requirements for freestanding EDs to provide specific medical services, products, and technology. For example, 12 states with freestanding EDs required pediatric equipment to be on site, 13 required a cardiac defibrillator, and 9 required blood products for transfusion. Only two of the 32 states (6%) had policies that were in concordance with all seven of the American College of Emergency Physicians (ACEP) recommendations for freestanding EDs.
Twenty-nine states had no regulations. New York and Washington regulate freestanding EDs on a case-by-case basis, and California indirectly bars them in its hospital regulations.
The study’s authors concluded that variations in state regulations may lead to more freestanding EDs opening in states with fewer regulations, and fewer facilities in states with stricter regulations. They added that consistent regulation of freestanding EDs is needed so patients can better understand these facilities’ capabilities and costs.
1. Gutierrez C, Lindor RA, Baker O, Cutler D, Schuur JD. State regulation of freestanding emergency departments varies widely, affecting location, growth, and services provided. Health Aff (Millwood). 2016;35(10):1857-1866.
Psychiatric Patients Face Inordinately Long Wait Times in EDs
Deepak Chitnis
FRONTLINE MEDICAL NEWS
Individuals with psychiatric conditions, including children, are facing increasingly long wait times in EDs across the country, according to a pair of studies presented at the American College of Emergency Physicians (ACEP) 2016 annual meeting.
Suzanne Catherine Lippert, MD, of Stanford University, the lead author of both studies, said that seeing psychiatric patients sit in the ED for days prompted her to look into the issue.
Both studies were retrospective analyses of data from the National Hospital Ambulatory Medical Care Survey (NHAMCS) collected between 2001 and 2011, focusing on patients who presented to EDs with International Classification of Diseases, Ninth Revision codes indicating substance abuse or a primary psychiatric diagnosis. The first study, which examined ED length of stay for psychiatric patients, defined length of stay as the time from the patient’s arrival at the ED to the time of disposition, divided into categories of >6 hours, >12 hours, and >24 hours. Overall, 65 million ED visits were included in the study.
Patients with bipolar disorder had the highest likelihood of waiting more than 24 hours in the ED, with an odds ratio of 3.7 (95% confidence interval, 1.5-9.4). This was followed by patients with a diagnosis of psychosis, a dual diagnosis of psychiatric disorders, multiple psychiatric diagnoses, or depression. The most common diagnoses were substance abuse, anxiety, and depression, which constituted 41%, 26%, and 23% of the diagnoses, respectively. Patients with psychosis were admitted 34% of the time and transferred 24% of the time; those who self-harmed were admitted 33% of the time and transferred 29% of the time; and patients with bipolar disorder were admitted 29% of the time and transferred 40% of the time. Patients who had either two or three diagnoses were admitted 9% and 10% of the time, respectively.
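For context, an odds ratio such as the 3.7 (95% confidence interval, 1.5-9.4) reported above is conventionally derived from a 2×2 contingency table, with a Wald confidence interval computed on the log-odds scale. The sketch below uses hypothetical counts, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with outcome, b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical: 10 of 100 bipolar patients vs 3 of 100 other patients waited >24 h
print(odds_ratio_ci(10, 90, 3, 97))
```

A wide interval like the study's 1.5-9.4 typically reflects small cell counts in the rarest category, here visits lasting more than 24 hours.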
“Further investigation of the systems affecting these patients, including placement of involuntary holds, availability of ED psychiatric consultants, or outpatient resources would delineate potential intervention points for the care of these vulnerable patients,” Dr Lippert and her coauthors wrote.
The second study looked at the differences in waiting for care at EDs between psychiatric patients and medical patients. Length of stay was defined the same way it was in the previous study, with disposition meaning either “discharge, admission to medical or psychiatric bed, [or] transfer to any acute facility.” Length of stay was divided into the same three categories as the previous study.
Psychiatric patients were more likely than medical patients to wait more than 6 hours for disposition, regardless of what that disposition ended up being (23% vs 10%). Similarly, 7% of psychiatric patients vs just 2.3% of medical patients waited longer than 12 hours in the ED, and 1.3% of psychiatric patients waited longer than 24 hours, compared with only 0.5% of medical patients. The average length of stay was significantly longer for psychiatric patients: 194 minutes vs 138 minutes for medical patients (P < .01).
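Differences in proportions like the 23% vs 10% above are commonly assessed with a pooled two-proportion z-test. As a minimal illustration, the sketch below assumes hypothetical groups of 100 patients each, not the survey's actual weighted counts:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z statistic for H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                        # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    return (p1 - p2) / se

# Hypothetical: 23 of 100 psychiatric vs 10 of 100 medical patients waited >6 h
z = two_proportion_z(23, 100, 10, 100)
print(round(z, 2))  # → 2.48; |z| > 1.96 corresponds to P < .05
```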
Additionally, psychiatric patients were more likely to be uninsured (22% vs 15% of medical patients). Furthermore, 4.6% of psychiatric patients had made a previous ED visit within the prior 72 hours, compared with 3.6% of medical patients. A total of 21% of psychiatric patients required admission, compared with 13% of medical patients, while 11% of psychiatric patients were transferred, compared with just 1.4% of medical patients.
“These results compel us to further investigate the potential causes of prolonged length of stay in psychiatric patients and to further characterize the population of psychiatric patients most at risk of prolonged stays,” Dr Lippert and her coinvestigators concluded.
American College of Emergency Physicians President Rebecca B. Parker, MD, explained that a survey of more than 1,700 emergency physicians revealed some “troubling” findings about the state of EDs over the last year.
The nation’s dwindling mental health resources are having a direct impact on patients having psychiatric emergencies, including children, Dr Parker said. “These patients are waiting longer for care, especially those patients who require hospitalization.”
Findings of the survey indicate that 48% of ED physicians witness psychiatric patients being “boarded” in their EDs at least once a day while they wait for a bed. Additionally, fewer than 17% of respondents said their ED has a psychiatrist on call to respond to psychiatric emergencies, and 11.7% responded that they have no psychiatrist on call to deal with such emergencies. And 52% of respondents said the mental health system in their community has become noticeably worse in just the last year.
Dr Parker voiced outrage about the situation. “Psychiatric patients wait in the emergency department for hours and even days for a bed, which delays the psychiatric care they so desperately need,” she said. “It also leads to delays in care and diminished resources for other emergency patients. The emergency department has become the dumping ground for these vulnerable patients who have been abandoned by every other part of the health care system.”
For more on the extended boarding of psychiatric patients in the ED, see “A Wintry Mix of Patients, Redux” by Editor in Chief Neal Flomenbaum, MD (Emerg Med. 2015;47[3]:101).
High Resting Heart Rate May Signal Exacerbation Risk in COPD Patients
Doug Brunk
Frontline Medical News
Higher resting heart rate (HR) may predict future risk of exacerbation in patients with a recent chronic obstructive pulmonary disease (COPD) exacerbation, results from a multicenter study suggest.
“Resting heart [rate] is often...readily available clinical data,” lead study author Ahmad Ismail, MD, said in an interview in advance of the annual meeting of the American College of Chest Physicians. “Its significance is often overlooked in daily clinical practice until tachycardia or bradycardia happens. In COPD patients, it has been shown that the resting HR can predict mortality. However, there is a lack of data showing its association with the rates of exacerbations, the major player in determining overall outcome in patients with COPD.”
In an effort to identify the association between resting HR and risk of exacerbations, Dr Ismail of Universiti Teknologi MARA, Malaysia, and his associates at nine other centers evaluated 147 COPD patients who were recruited during acute exacerbation of COPD that required hospitalization between April 2012 and September 2015. The researchers recorded each patient’s sociodemographic data, anthropometric indices, and medication history during their acute exacerbation at the hospital. Next, they followed up with the patients in clinic at 3 months after the recruitment (month 0), and collected resting HR, spirometry, and COPD Assessment Test (CAT) scores. Subsequently, patients were followed up in clinic at 6 and 12 months, and followed up in between via telephone interviews to collect data on exacerbation history.
The mean age of the study population was 67 years, and 77% had a higher resting HR, defined as exceeding 80 beats/min (bpm). The mean resting HR in the higher resting HR group was 92 bpm, compared with a mean of 70 bpm in the lower resting HR group. Dr Ismail reported that at month 3, patients with a higher resting HR had a significantly higher proportion of exacerbations than those with a lower resting HR (54% vs 27%; P = .013), a trend that persisted through month 9. There was also a statistically significant, moderate-strength linear correlation between resting HR and exacerbation frequency at 3, 6, and 9 months (r = 0.400, P < .001; r = 0.440, P < .001; and r = 0.416, P = .004, respectively). The mean exacerbation frequency was also significantly higher in the higher resting HR group at months 3 and 6 (2.00 vs 0.48, P < .001; and 3.42 vs 1.14, P = .004).
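The moderate-strength correlations reported here (r = 0.400-0.440) are Pearson coefficients: covariance of the two variables divided by the product of their standard deviations. A minimal sketch of the computation, using made-up patient data rather than the study's:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical resting HRs (bpm) and exacerbation counts for 6 patients
hrs = [68, 72, 80, 85, 92, 99]
exacerbations = [0, 1, 0, 2, 2, 3]
print(round(pearson_r(hrs, exacerbations), 2))
```

Values near 0.4, as in the study, indicate a real but far from deterministic linear relationship; the P values reflect how unlikely such an r would be under no association given the sample size.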
“Higher resting heart rate may predict future risk of exacerbation in patients with recent COPD exacerbation,” Dr Ismail concluded. “Further study, however, is required to determine the effect of lowering resting heart rate on the future risk of exacerbation.” He acknowledged certain limitations of the study: it excluded patients on beta-blockers or other rate-modifying drugs as well as those with a history of cardiac failure or ischemic heart disease, and no baseline echocardiogram was performed to rule out ischemic heart disease and other possible causes of the higher resting HR. “We also had slightly higher than expected dropouts giving a nonsignificant result at 12 months follow-up, though the trend follows the overall results of the study,” he said.
American College of Emergency Physicians President Rebecca B. Parker, MD, explained that a survey of more than 1,700 emergency physicians revealed some “troubling” findings about the state of EDs over the last year.
The nation’s dwindling mental health resources are having a direct impact on patients having psychiatric emergencies, including children, Dr Parker said. “These patients are waiting longer for care, especially those patients who require hospitalization.”
Findings of the survey indicate that 48% of ED physicians witness psychiatric patients being “boarded” in their EDs at least once a day while they wait for a bed. Additionally, fewer than 17% of respondents said their ED has a psychiatrist on call to respond to psychiatric emergencies, with 11.7% responding that they have no psychiatrist on call to deal with such emergencies. And 52% of respondents said the mental health system in their community has become noticeably worse in just the last year.
Dr Parker voiced outrage about the situation. “Psychiatric patients wait in the emergency department for hours and even days for a bed, which delays the psychiatric care they so desperately need,” she said. “It also leads to delays in care and diminished resources for other emergency patients. The emergency department has become the dumping ground for these vulnerable patients who have been abandoned by every other part of the health care system.”
For more on the extended boarding of psychiatric patients in the ED, see “A Wintry Mix of Patients, Redux” by Editor in Chief Neal Flomenbaum, MD (Emerg Med. 2015;47[3]:101).
High Resting Heart Rate May Signal Exacerbation Risk in COPD Patients
Doug Brunk
Frontline Medical News
Higher resting heart rate (HR) may predict future risk of exacerbation in patients with recent chronic obstructive pulmonary disease (COPD) exacerbation, results from a multicenter study suggest.
“Resting heart [rate] is often...readily available clinical data,” lead study author Ahmad Ismail, MD, said in an interview in advance of the annual meeting of the American College of Chest Physicians. “Its significance is often overlooked in daily clinical practice until tachycardia or bradycardia happens. In COPD patients, it has been shown that the resting HR can predict mortality. However, there is a lack of data showing its association with the rates of exacerbations, the major player in determining overall outcome in patients with COPD.”
In an effort to identify the association between resting HR and risk of exacerbations, Dr Ismail of Universiti Teknologi MARA, Malaysia, and his associates at nine other centers evaluated 147 COPD patients who were recruited during acute exacerbation of COPD that required hospitalization between April 2012 and September 2015. The researchers recorded each patient’s sociodemographic data, anthropometric indices, and medication history during their acute exacerbation at the hospital. Next, they followed up with the patients in clinic at 3 months after the recruitment (month 0), and collected resting HR, spirometry, and COPD Assessment Test (CAT) scores. Subsequently, patients were followed up in clinic at 6 and 12 months, and followed up in between via telephone interviews to collect data on exacerbation history.
The mean age of the study population was 67 years, and 77% had higher resting HR, defined as exceeding 80 beats/min (bpm). The mean resting HR in the higher resting HR group was 92 bpm, compared with a mean of 70 bpm in the lower resting HR group. Dr Ismail reported that at month 3, patients with higher resting HR had a significantly higher proportion of exacerbations, compared with those who had a lower resting HR (54% vs 27%; P = .013). This trend persisted through month 9. There was also a statistically significant moderate strength linear correlation between resting HR and exacerbation frequency at 3, 6, and 9 months (r = 0.400, P < .001; r = 0.440, P < .001; and r = 0.416, P = .004, respectively). The mean exacerbation frequency was also significantly higher in the higher resting HR group at month 3 and month 6 (2.00 vs 0.48, P < .001; and 3.42 vs 1.14, P = .004).
“Higher resting heart rate may predict future risk of exacerbation in patients with recent COPD exacerbation,” Dr Ismail concluded. “Further study however is required to determine the effect of lowering resting heart rate on the future risk of exacerbation.” He acknowledged certain limitations of the study: it excluded patients who were on beta-blockers or other rate-modifying drugs, as well as those with a history of cardiac failure or ischemic heart disease, and no baseline echocardiogram was performed to rule out ischemic heart disease and other possible causes of the higher resting HR. “We also had slightly higher than expected dropouts giving a nonsignificant result at 12 months follow-up, though the trend follows the overall results of the study,” he said.
The Role of Self-Compassion in Chronic Illness Care
From the Department of Psychology, University of Sheffield, Sheffield, UK.
Abstract
- Objective: To present current research and theory on the potential of self-compassion for improving health-related outcomes in chronic illness, and make recommendations for the application of self-compassion interventions in clinical care to improve well-being and facilitate self-management of health in patients with chronic illness.
- Methods: Narrative review of the literature.
- Results: Current theory indicates that the self-kindness, common humanity, and mindfulness components of self-compassion can foster adaptive responses to the perceived setbacks and shortcomings that people experience in the context of living with a chronic illness. Research on self-compassion in relation to health has been examined primarily within non-medical populations. Cross-sectional and experimental studies have demonstrated clear links between self-compassion and lower levels of both perceived stress and physiological indicators of stress. A growing evidence base also indicates that self-compassion is associated with more frequent practice of health-promoting behaviors in healthy populations. Research on self-compassion with chronic illness populations is limited but has demonstrated cross-sectional links to adaptive coping, lower stress and distress, and the practice of important health behaviors. There are several interventions for increasing self-compassion in clinical settings, with limited data suggesting beneficial effects for clinical populations.
- Conclusion: Self-compassion holds promise as an important quality to cultivate to enhance health-related outcomes in those with chronic health conditions. Further systematic and rigorous research evaluating the effectiveness of self-compassion interventions in chronic illness populations is warranted to fully understand the role of this quality for chronic illness care.
Living with a chronic illness presents a number of challenges that can take a toll on both physical and psychological well-being. Pain, fatigue, and decreased daily functioning are symptoms common to many chronic illnesses that can negatively impact psychological well-being by creating uncertainty about attaining personal goals [1], and contributing to doubts and concerns about being able to fulfil one’s personal and work-related responsibilities [2]. The stress associated with negotiating the challenges of chronic illness can further complicate adjustment by exacerbating existing symptoms via stress-mediated and inflammation regulation pathways [3–5] and compromising the practice of important disease management and health maintenance behaviors [6,7]. These experiences can in turn fuel self-blame and other negative self-evaluations about not being able to meet personal and others’ expectations about managing one’s illness and create a downward spiral of poor adjustment and well-being [8,9].
A growing evidence base suggests that self-compassion is an important quality to help manage the stress and behavior-related issues that can compromise chronic illness care. Defined by Neff [10] as taking a kind, accepting, and non-judgmental stance towards oneself in times of failure or difficulty, self-compassion is associated with several indicators of adjustment in non-medical populations including resilience [11,12] and adaptive coping [13]. In support of the notion that self-compassion can play a role in promoting health behaviors, a recent meta-analysis found that self-compassion is linked to better practice of a range of health-promoting behaviors due in part to its links to adaptive emotions [14]. Research on the role of self-compassion for health-related outcomes with chronic illness populations is limited but nonetheless promising [15–17], and suggests that self-compassion may be a worthwhile quality to cultivate to improve well-being and facilitate disease self-management.
In this article we present current research and theory on the potential of self-compassion as a clinical concept for improving health-related outcomes in chronic illness. After presenting a brief overview of the theoretical underpinnings of self-compassion and its measurement, we present the current state of research on the role of self-compassion in reducing stress and facilitating health behaviors in general medical populations. We then outline the emerging evidence illustrating a potential role for extending this research to chronic illness populations and make recommendations for the application of self-compassion interventions in clinical care, as a means to improving well-being and facilitating self-management of health for this group.
Self-Compassion: A Healthier Way of Responding to Challenges
Research into the correlates and effects of self-compassion has been primarily guided by the model of self-compassion proposed by Kristin Neff [10]. This view of self-compassion is derived from Buddhist psychology and reconceptualised in a secular manner to refer to the compassion expressed towards the self when experiencing suffering, whether it be due to circumstances beyond one’s control or within one’s control [18]. The 3 key components of self-compassion are proposed to work synergistically to promote kind rather than critical responses to failures and difficult circumstances. Self-kindness (versus self-judgment) involves taking a kind, caring and non-evaluative stance towards perceived inadequacies, shortcomings, and mistakes, and may be particularly valuable for countering the negative self-evaluations that can accompany not being able to meet one’s expectations due to the restrictions of living with a chronic condition [9]. Common humanity (versus isolation) refers to the sense of connection to others that arises from acknowledging the common human experience of imperfection and making mistakes, and being more aware that others may face similar challenging circumstances [18]. Framing hardship from this perspective can help people let go of the “why me?” view of their illness which can compromise adjustment [19], and instead foster a greater connection with others who live with similar conditions. Mindfulness (versus over-identification) is the final component of self-compassion as conceptualised by Neff [10], and refers to taking a balanced and non-judgmental view of emotional experiences, grounding them in the present moment and neither ignoring nor becoming overly embroiled in the negative feelings that accompany painful experiences. Neff [10,18] proposes that mindfulness helps counteract the over-identification with one’s suffering that can reduce objectivity and prevent taking a larger perspective on the situation.
This mindful stance may be particularly beneficial for dealing with the ongoing pain and suffering of living with a chronic health condition, and encourage healthier ways of viewing the limitations associated with chronic illness. Correlational evidence from a study of healthy students further suggests that certain individual components of self-compassion may be particularly beneficial in the context of health, as the self-kindness and common humanity components were each found to be linked to better physical health and managing life stressors [20].
Although there are other conceptualizations of self-compassion [21], this 3-faceted model is the most widely used in research, in part because of the availability of a measure, the Self-compassion Scale [22], which explicitly assesses each of the facets of self-compassion. The 26-item scale is designed to assess positive and negative dimensions of each facet of self-compassion, but the total score is used more often than the separate subscales [23]. The measure assesses dispositional or trait self-compassion, with an underlying assumption that some individuals can be more or less self-compassionate in the way they regularly respond to challenges or failures. Importantly, self-compassion can also be prompted or fostered as a way of responding to failures and challenges, presenting the possibility that self-compassion can be increased among those who may benefit the most from responding with greater self-kindness and less self-judgement [24–26].
Whether conceived of as a momentary state or as an enduring quality, self-compassion has demonstrated consistent links with an array of indicators of psychological well-being. For example, one meta-analysis found that self-compassion is robustly and negatively linked with psychopathology (average r = –0.54), including depression and anxiety [27], 2 mental health issues that are prevalent in chronic illness populations [28,29]. Several studies have also noted associations of self-compassion with emotional resilience [18,30], and better coping and lower stress [12,13].
Self-Compassion Is Associated with Lower Perceived Stress
Relevant for our focus on chronic illness care, there is some evidence that self-compassion can be effective for improving well-being, and reducing stress in particular, in people with chronic illness. Across two illness samples, cancer and mixed chronic illnesses, those who scored low on a measure of self-compassion had higher levels of depression and stress compared to a healthy control sample [15], suggesting self-compassion may be protective against poor adjustment. Similar results have been found for breast cancer patients, with self-compassion explaining lower distress related to body image [16], and HIV patients, with self-compassion linked to lower stress, anxiety, and shame [31].
The protective role of self-compassion for stress appears to be explained primarily by the set of coping strategies that self-compassionate people use to deal with challenging circumstances. In their review, Allen and Leary [13] noted that self-compassionate people use coping styles that are adaptive and problem-focused (e.g., planning, social-support-seeking, and positive reframing), and tend to not use maladaptive coping styles (e.g., cognitively or behaviorally disengaging from the stressor and other escape-avoidance coping). Consistent with appraisal-based models of coping [32], adaptive coping strategies focus on removing the stressful event, garnering resources to better deal with the stressor, or recasting the stressor as less threatening, and therefore are instrumental in reducing the levels of stress that might normally be perceived in the absence of such coping approaches. Having access to a repertoire of adaptive coping strategies is particularly important in the context of chronic illness, which can present a variety of daily challenges related to pain and to functional and psychosocial limitations that require a flexible approach to changing demands.
Self-compassion with its links to adaptive coping may be particularly relevant for coping with such demands. One study put this assertion to the test by examining the role of coping strategies in explaining the link between self-compassion and stress in two chronic illness samples, inflammatory bowel disease (IBD) and arthritis [17]. In both samples, higher trait self-compassion was associated with a set of adaptive coping strategies which in turn explained greater coping efficacy and lower perceived stress, with the overall model explaining 43% of the variance in stress after controlling for health status and disease duration. Key adaptive coping strategies included greater use of active coping (a problem-focused coping strategy aimed at removing or reducing the stressor), positive reframing, and acceptance. The self-compassion–stress link was also explained in part by less use of maladaptive strategies, including denial, behavioral disengagement, and self-blame coping [17]. The latter coping strategy in particular is linked to poor adjustment in chronic illness as it reflects efforts to take control over uncontrollable symptoms by viewing illness-related changes, such as flare-ups, as a personal failure to manage one’s illness [9,33]. Together these findings, which were remarkably consistent across 2 distinct chronic illness groups, provide solid evidence to suggest that self-compassion provides individuals living with a chronic illness with a coping advantage that fosters adjustment through engaging in appropriate cognitive and behavioral coping strategies to minimize perceived stress.
Self-Compassion Can Reduce Physiological Stress
A caveat regarding the research to date on self-compassion and stress in chronic illness is that all studies are cross-sectional, which limits any conclusions about the direction of causality. Setting aside the fact that self-compassion in each of these studies was assessed as a relatively stable, trait-like quality, one could argue that individuals who are less stressed have a greater opportunity to express kindness to themselves as they are not pre-occupied with illness-related demands and challenges. However, emerging research on self-compassion and the physiological correlates of stress provides a compelling case for the directionality assumed in the cross-sectional research. In one study, healthy young adults were subjected to a standard stress-inducing laboratory task (involving mental mathematics and public speaking), with plasma concentrations of the pro-inflammatory cytokine, interleukin-6 (IL-6), assessed before and after the task on 2 days [34]. Those with higher trait self-compassion responded to the stress task with significantly lower IL-6 levels even after controlling for other potential confounds such as demographics, self-esteem, depressive symptoms, and distress. Self-compassion was also linked to lower baseline levels of IL-6 on both days. These findings suggest that self-compassion may be both an enduring and response-specific protective factor against stress-induced inflammation.
There is also evidence supporting the efficacy of self-compassion interventions for reducing stress. In a study of healthy young women, those who underwent a brief training in self-compassion were found to have lower sympathetic nervous system reactivity (salivary alpha-amylase), and more adaptive parasympathetic nervous system reactivity (heart rate variability) in response to a stress-inducing lab task, compared to placebo control and no-training control groups [35]. That this study was conducted with women only is notable, as research indicates that women tend to have lower levels of self-compassion compared to men [18]. Together with the study on trait self-compassion and biomarkers of stress-induced inflammation, this research provides supportive evidence for the role of self-compassion in reducing the harmful physiological effects of stress. Self-compassion may therefore be particularly beneficial for both psychological and physical well-being in chronic illness given the known and negative impact of stress on symptoms for a number of chronic illnesses such as diabetes [36], cardiovascular disease [32], arthritis [4], and IBD [38].
Self-Compassion and the Regulation of Health Behaviors
Another key role for self-compassion in chronic illness care is through the facilitation of health-promoting behaviors. Health maintenance and disease management behaviors, such as getting diagnostic tests, taking medication, and weight management, are central for managing symptoms and minimizing the risk of disease progression or complications. For example, staying physically fit, maintaining a healthy diet, managing stress, and getting adequate sleep are critical for weight management and the behavioral control of symptoms for a number of chronic diseases [39,40]. Nonetheless, weight management behaviors often require initiating significant lifestyle changes which need to be maintained in order to be effective. Such behaviors can be particularly challenging for individuals with chronic illness symptoms such as pain and fatigue, which can present significant barriers [41] and trigger self-critical coping about not being able to adequately self-care or manage one’s disease [8,9]. Rather than being motivating, theory and evidence indicate that negative evaluations tend to increase stress and promote procrastination of important health behaviors [7,42].
In addition to theory noting why self-compassion may facilitate the regulation of important health behaviors [43,44], there is now a burgeoning body of research supporting the beneficial role of self-compassion in health behaviors [12,43,45]. Each of the 3 components of self-compassion (self-kindness, common humanity, and mindfulness) is posited to facilitate adaptive self-regulatory responses to the inevitable and momentary failures that occur when people try to enact their health goals. For example, not following through with dietary recommendations and giving into temptation can result in feelings of shame, negative self-evaluations, and reactive eating [46], which in turn can result in discontinuation of one’s diet. These minor failures would be viewed less negatively by people who are self-compassionate, because they realise that others have made similar mistakes (common humanity) and, therefore, do not become excessively self-critical (self-kindness) or immersed in feelings of guilt, shame or frustration (mindfulness), negative emotions which are known to interfere with self-regulation [43,47]. Indeed, self-compassion is associated with having fewer negative reactions in response to imagining a scenario in which a diet goal is transgressed [48].
There is also evidence that collectively, these components of self-compassion facilitate experiencing a healthy balance of positive and negative emotions in the context of health behavior change. Self-compassion appears to temper the negative responses to minor setbacks and failures that occur whilst trying to reach health goals, and foster the positive emotions required to maintain motivation during the pursuit of health goals. The most compelling support for this proposition comes from a meta-analysis of 15 samples (n = 3252) in which self-compassion was consistently and positively (average r = 0.25) associated with the practice of a range of health-promoting behaviors relevant for chronic illness care, including healthy eating, regular exercise, healthy sleep behaviors, and stress management [12]. The explanatory roles of positive and negative affect were also tested, with the results indicating that higher levels of positive affect and lower levels of negative affect were significant mediators of the link between self-compassion and health behaviors.
With respect to mood regulation, it is important to note that self-compassion is not simply an optimistic bias that predisposes individuals towards responding only in a positive way to perceived failures or setbacks. Rather, self-compassion fosters taking a balanced perspective on one’s failures, recognizing both the positive and negative aspects, and harnessing the negative mood that arises from a state of discrepancy to motivate self-improvement. For example, in experimental studies, both enduring and momentary self-compassionate states are associated with increased self-improvement motivation and behavior after experiencing failure and regret [49,50], in part because self-compassion fosters personal acceptance [50]. This adaptive responding can translate into better adherence and health behaviors in chronic health conditions after lapses in self-care which might otherwise foster self-criticism and poor disease management. Preliminary evidence from the author’s lab supports this proposition, as self-compassion was positively associated with both treatment adherence and the practice of wellness behaviors, due in part to lower levels of perceived stress, in samples of cancer patients and survivors [51], and people with chronic fatigue syndrome [52].
Clinical Applications of Self-Compassion for Chronic Illness Care
Given the growing evidence linking self-compassion to well-being and health behaviors, the next logical step is to consider ways of cultivating self-compassion for those individuals experiencing chronic health conditions.
Training in mindfulness might be one way to foster self-compassion within a health care setting. Mindfulness-Based Cognitive Therapy (MBCT [53]) and Mindfulness-Based Stress Reduction (MBSR [54]) are both programs that use mindfulness skills to notice distressing thoughts and feelings, hold these experiences in awareness, and cultivate acceptance and self-compassion [53]. MBSR, usually delivered as an 8-week group-based program, has been found to have significant effects on depression, anxiety, and psychological distress in people with chronic somatic diseases [55]. However, fostering self-compassion forms only part of MBCT and MBSR. Indeed, there are very few therapeutic interventions that specifically and primarily target self-compassion; where they are used, however, they show promise.
Compassionate Mind Training (CMT [24]), Compassion-Focused Therapy (CFT [21]), and the Mindful Self-Compassion program [26] are examples of such targeted interventions. These therapeutic models, again usually delivered in group settings, aim to foster a kinder and more accepting attitude towards oneself through the use of formal meditations (such as loving-kindness meditation; LKM), home practice and informal practices for daily life (such as self-compassionate letter writing), and have been demonstrated to be effective with, for example, community participants [26], people who hear malevolent voices [56], and those with chronic mood difficulties [24].
Additionally, there are a number of brief self-compassion practices that have been evaluated as interventions in their own right and demonstrate positive effects. LKM, which aims to develop a state of unconditional kindness towards both oneself and others, and compassion meditation (CM [57]) are the most commonly described. CM involves techniques to cultivate compassion, or deep, genuine sympathy for those stricken by misfortune, including oneself, “together with an earnest wish to ease this suffering” [58]. The effects of these kindness-based meditations on health and well-being have been summarized in a recent review [59], which illustrates that, whilst limited data exist currently, promising effects have been shown for a number of different groups. Positive effects have, for example, been demonstrated for patients with chronic back pain [60] and for people with experiences traditionally conceptualized as psychosis [61], suggesting these practices may also be beneficial for other chronic health conditions.
Alongside the potential benefits, it is worth considering how interventions cultivating self-compassion can be delivered in clinical practice. Previous applications have included group work (including MBCT and MBSR), one-to-one therapy (such as CFT), and self-directed practice via bibliotherapy or online materials. The range of options available suggests this kind of intervention is highly accessible, potentially inexpensive, and could be used as a complementary approach alongside other, more traditional medical disease management treatments or as a stand-alone psychotherapeutic intervention when required.
In order to best support the successful introduction and evaluation of such interventions, consideration of compassionate practice by staff within health-care settings is also needed, including cultivating a culture of compassion through compassionate leadership [62]. Services with higher levels of caring practice deliver higher-quality care, greater well-being for staff, and in turn more compassionate care for patients than services that are struggling [63]. It is hoped that taking a broad, systemic compassionate approach (via training, ongoing supervision, and ethos cultivation) would ensure that the language used, information communicated, and disease management approaches are planned and delivered in a way that fosters patients’ sense of self-efficacy and kindness towards themselves, with all the benefits outlined above.
Conclusion
Theory and research indicate that self-compassion fosters adaptive responses to perceived failures and setbacks, and is therefore associated with well-being, reduced stress, and more frequent health behaviors. The emerging evidence base on the benefits of self-compassion for coping with the challenges of chronic health conditions is promising, and suggests that the benefits of self-compassion noted in non-medical populations may extend to chronic illness care. Interventions cultivating self-compassion may be especially beneficial for those with chronic health conditions through the mechanisms identified earlier: reducing stress (and thereby improving an individual’s relationship with their physical health); improving self-management of condition-related and health-promoting behaviors; altering one’s relationship with illness-related shame and self-blame; and boosting resilience. Systematic and rigorous evaluation of such interventions with people with chronic health conditions is now needed, evaluating impacts on well-being, health behaviors, and disease management and outcomes.
Corresponding author: Fuschia M. Sirois, Dept. of Psychology, University of Sheffield, 1 Vicar Lane, Sheffield, S1 1HD, [email protected].
Financial disclosures: None.
1. Hamilton N, Karoly P, Kitzman H. Self-regulation and chronic pain: The role of emotion. Cogn Ther Res 2007;28:559–76.
2. Luyten P, Kempke S, Van Wambeke P, et al. Self-critical perfectionism, stress generation, and stress sensitivity in patients with chronic fatigue syndrome: relationship with severity of depression. Psychiatry 2011;74:21–30.
3. Cohen S, Janicki-Deverts D, Doyle WJ, et al. Chronic stress, glucocorticoid receptor resistance, inflammation, and disease risk. Proc Natl Acad Sci 2012.
4. Evers AWM, Verhoeven EWM, van Middendorp H, et al. Does stress affect the joints? Daily stressors, stress vulnerability, immune and HPA axis activity, and short-term disease and symptom fluctuations in rheumatoid arthritis. Ann Rheum Dis 2014;73:1683–8.
5. Maunder RG, Levenstein S. The role of stress in the development and clinical course of inflammatory bowel disease: epidemiological evidence. Curr Molecular Med 2008;8:247–52.
6. Rod NH, Grønbæk M, Schnohr P, et al. Perceived stress as a risk factor for changes in health behavior and cardiac risk profile: a longitudinal study. J Intern Med 2009;266:467–75.
7. Sirois FM. Is procrastination a vulnerability factor for hypertension and cardiovascular disease? Testing an extension of the procrastination-health model. J Behav Med 2015;38:578–89.
8. Moskovitz DN, Maunder RG, Cohen Z, et al. Coping behavior and social support contribute independently to quality of life after surgery for inflammatory bowel disease. Dis Colon Rectum 2000;43:517–21.
9. Voth J, Sirois FM. The role of self-blame and responsibility in adjustment to inflammatory bowel disease. Rehab Psych 2009;54:99–108.
10. Neff KD. Self-compassion: An alternative conceptualization of a healthy attitude toward oneself. Self Ident 2003;2:85–101.
11. Neff KD, Kirkpatrick KL, Rude SS. Self-compassion and adaptive psychological functioning. J Res Personality 2007;41:139–54.
12. Sirois FM. Procrastination and stress: Exploring the role of self-compassion. Self Ident 2014;13:128–45.
13. Allen AB, Leary MR. Self-compassion, stress, and coping. Soc Person Psych Comp 2010;4:107–18.
14. Sirois FM, Kitner R, Hirsch JK. Self-compassion, affect, and health behaviors. Health Psychol 2014.
15. Pinto-Gouveia J, Duarte C, Matos M, Fráguas S. The protective role of self-compassion in relation to psychopathology symptoms and quality of life in chronic illness and in cancer patients. Clin Psychol Psychother 2014;21:311–23.
16. Przezdziecki A, Sherman KA, Baillie A, et al. My changed body: breast cancer, body image, distress and self-compassion. Psychooncology 2013;22:1872–9.
17. Sirois FM, Molnar DS, Hirsch JK. Self-compassion, stress, and coping in the context of chronic illness. Self Identity 2015:1–14.
18. Neff KD. Self-compassion, self-esteem, and well-being. Social Personality Psych Compass 2011;5:1–12.
19. Davis DG, Morgan MS. Finding meaning, perceiving growth, and acceptance of tinnitus. Rehabil Psych 2008;53:128–38.
20. Hall CW, Row KA, Wuensch KL, Godley KR. The role of self-compassion in physical and psychological well-being. J Psychology 2013;147:311–23.
21. Gilbert P. Introducing compassion-focused therapy. Advance Psych Treat 2009;15:199–208.
22. Neff KD. Development and validation of a scale to measure self-compassion. Self Identity 2003;2:223–50.
23. Neff KD. The self-compassion scale is a valid and theoretically coherent measure of self-compassion. Mindfulness 2016;7:264–74.
24. Gilbert P, Procter S. Compassionate mind training for people with high shame and self-criticism: overview and pilot study of a group therapy approach. Clin Psychol Psychother 2006;13:353–79.
25. Leary MR, Tate EB, Adams CE, et al. Self-compassion and reactions to unpleasant self-relevant events: the implications of treating oneself kindly. J Personality Social Psychol 2007;92:887–904.
26. Neff KD, Germer CK. A pilot study and randomized controlled trial of the mindful self-compassion program. J Clin Psychol 2013;69:28–44.
27. MacBeth A, Gumley A. Exploring compassion: A meta-analysis of the association between self-compassion and psychopathology. Clin Psych Rev 2012;32:545–52.
28. Murphy LB, Sacks JJ, Brady TJ, et al. Anxiety and depression among US adults with arthritis: Prevalence and correlates. Arthritis Care Res 2012;64:968–76.
29. Walker JR, Ediger JP, Graff LA, et al. The Manitoba IBD Cohort Study: a population-based study of the prevalence of lifetime and 12-month anxiety and mood disorders. Am J Gastroenterol 2008;103:1989–97.
30. Neff KD, McGehee P. Self-compassion and psychological resilience among adolescents and young adults. Self Ident 2009;9:225–40.
31. Brion J, Leary M, Drabkin A. Self-compassion and reactions to serious illness: The case of HIV. J Health Psychol 2014;19:218–29.
32. Lazarus RS, Folkman S. Stress, appraisal, and coping. New York: Springer; 1984.
33. Thompson SC, Cheek PR, Graham MA. The other side of perceived control: disadvantages and negative effects. In: Spacapan S, Oskamp S, editors. The social psychology of health. Newbury Park: Sage; 1988:69–93.
34. Breines JG, Thoma MV, Gianferante D, et al. Self-compassion as a predictor of interleukin-6 response to acute psychosocial stress. Brain, Behavior, and Immunity 2014;37:109–14.
35. Arch JJ, Brown KW, Dean DJ, et al. Self-compassion training modulates alpha-amylase, heart rate variability, and subjective responses to social evaluative threat in women. Psychoneuroendocrinology 2013;42:49–58.
36. Lloyd C, Smith J, Weinger K. Stress and diabetes: a review of the links. Diabetes Spectrum 2005;18:121–7.
37. Dimsdale JE. Psychological stress and cardiovascular disease. J Am Coll Cardiol 2008;51:1237–46.
38. Maunder RG. Evidence that stress contributes to inflammatory bowel disease: evaluation, synthesis, and future directions. Inflam Bowel Dis 2005;11:600–8.
39. Daskalopoulou SS, Khan NA, Quinn RR, et al. The 2012 Canadian Hypertension Education program recommendations for the management of hypertension: blood pressure measurement, diagnosis, assessment of risk, and therapy. Can J Cardiol 2012;28:270–87.
40. Gulliksson M, Burell G, Vessby B, et al. Randomized controlled trial of cognitive behavioral therapy vs standard treatment to prevent recurrent cardiovascular events in patients with coronary heart disease: Secondary prevention in uppsala primary health care project (suprim). Arch Intern Med 2011;171:134–40.
41. Jerant AF, Friederichs-Fitzwater MMV, Moore M. Patients’ perceived barriers to active self-management of chronic conditions. Patient Ed Couns 2005;57:300–7.
42. Sirois FM. Procrastination, stress, and chronic health conditions: a temporal perspective. In: Sirois FM, Pychyl T, editors. Procrastination, health, and well-being. Elsevier; 2016.
43. Sirois FM. A self-regulation resource model of self-compassion and health behavior intentions in emerging adults. Prev Med Rep 2015;2:218–22.
44. Terry ML, Leary MR. Self-compassion, self-regulation, and health. Self Identity 2011;10:352–62.
45. Dunne S, Sheffield D, Chilcot J. Brief report: Self-compassion, physical health and the mediating role of health-promoting behaviors. J Health Psychol 2016.
46. Polivy J, Herman CP, Deo R. Getting a bigger slice of the pie. Effects on eating and emotion in restrained and unrestrained eaters. Appetite 2010;55:426–30.
47. Wagner DD, Heatherton TF. Self-regulation and its failure: The seven deadly threats to self-regulation. In: Mikulincer M, Shaver PR, Borgida E, Bargh JA, editors. APA handbook of personality and social psychology, Volume 1: Attitudes and social cognition. Washington: American Psychological Association. Forthcoming.
48. Adams CE, Leary MR. Promoting self-compassionate attitudes toward eating among restrictive and guilty eaters. J Soc Clin Psychol 2007;26:1120–44.
49. Breines JG, Chen S. Self-compassion increases self-improvement motivation. Pers Social Psych Bull 2012;38:1133–43.
50. Zhang JW, Chen S. Self-compassion promotes personal improvement from regret experiences via acceptance. Pers Social Psych Bull 2016;42:244–58.
51. Sirois FM, Hirsch JK. Self-compassion is associated with health behaviors in cancer patients and survivors. Forthcoming.
52. Sirois FM. Self-compassion, adherence, and health behaviors in chronic fatigue syndrome: The role of stress. Forthcoming.
53. Segal ZV, Williams JMG, Teasdale JD. Mindfulness-based cognitive therapy for depression: a new approach to preventing relapse. New York: Guilford Press; 2002.
54. Kabat-Zinn J. Full catastrophe living: Using the wisdom of your body and mind to face stress, pain, and illness. New York: Dell; 1990.
55. Bohlmeijer E, Prenger R, Taal E, Cuijpers P. The effects of mindfulness-based stress reduction therapy on mental health of adults with a chronic medical disease: A meta-analysis. J Psychosom Res 2010;68:539–44.
56. Mayhew SL, Gilbert P. Compassionate mind training with people who hear malevolent voices: a case series report. Clin Psychol Psychother 2008;15:113–38.
57. Hofmann SG, Grossman P, Hinton DE. Loving-kindness and compassion meditation: potential for psychological interventions. Clin Psychol Rev 2011;31:1126–32.
58. Hopkins J. Cultivating compassion. New York: Broadway Books; 2001.
59. Galante J, Galante I, Bekkers M-J, Gallacher J. Effect of kindness-based meditation on health and well-being: a systematic review and meta-analysis. J Consult Clin Psychol 2014;82:1101–14.
60. Carson JW, Keefe FJ, Lynch TR, et al. Loving-kindness meditation for chronic low back pain: results from a pilot trial. J Holist Nurs 2005;23:287–304.
61. Johnson DP, Penn DL, Fredrickson BL, et al. A pilot study of loving-kindness meditation for the negative symptoms of schizophrenia. Schizophr Res 2011;129:137–40.
62. West M, Steward K, Eckert R, Pasmore B. Developing collective leadership for health care. London: The King’s Fund; 2014.
63. Dixon-Woods M, Baker R, Charles K, et al. Culture and behavior in the English National Health Service: overview of lessons from a large multimethod study. BMJ Qual Saf 2014;23:106–15.
From the Department of Psychology, University of Sheffield, Sheffield, UK.
Abstract
- Objective: To present current research and theory on the potential of self-compassion for improving health-related outcomes in chronic illness, and make recommendations for the application of self-compassion interventions in clinical care to improve well-being and facilitate self-management of health in patients with chronic illness.
- Methods: Narrative review of the literature.
- Results: Current theory indicates that the self-kindness, common humanity, and mindfulness components of self-compassion can foster adaptive responses to the perceived setbacks and shortcomings that people experience in the context of living with a chronic illness. Research on self-compassion in relation to health has been examined primarily within non-medical populations. Cross-sectional and experimental studies have demonstrated clear links between self-compassion and lower levels of both perceived stress and physiological indicators of stress. A growing evidence base also indicates that self-compassion is associated with more frequent practice of health-promoting behaviors in healthy populations. Research on self-compassion with chronic illness populations is limited but has demonstrated cross-sectional links to adaptive coping, lower stress and distress, and the practice of important health behaviors. There are several interventions for increasing self-compassion in clinical settings, with limited data suggesting beneficial effects for clinical populations.
- Conclusion: Self-compassion holds promise as an important quality to cultivate to enhance health-related outcomes in those with chronic health conditions. Further systematic and rigorous research evaluating the effectiveness of self-compassion interventions in chronic illness populations is warranted to fully understand the role of this quality for chronic illness care.
Living with a chronic illness presents a number of challenges that can take a toll on both physical and psychological well-being. Pain, fatigue, and decreased daily functioning are symptoms common to many chronic illnesses that can negatively impact psychological well-being by creating uncertainty about attaining personal goals [1], and contributing to doubts and concerns about being able to fulfil one’s personal and work-related responsibilities [2]. The stress associated with negotiating the challenges of chronic illness can further complicate adjustment by exacerbating existing symptoms via stress-mediated and inflammation regulation pathways [3–5] and compromising the practice of important disease management and health maintenance behaviors [6,7]. These experiences can in turn fuel self-blame and other negative self-evaluations about not being able to meet personal and others’ expectations about managing one’s illness and create a downward spiral of poor adjustment and well-being [8,9].
A growing evidence base suggests that self-compassion is an important quality to help manage the stress and behavior-related issues that can compromise chronic illness care. Defined by Neff [10] as taking a kind, accepting, and non-judgmental stance towards oneself in times of failure or difficulty, self-compassion is associated with several indicators of adjustment in non-medical populations including resilience [11,12] and adaptive coping [13]. In support of the notion that self-compassion can play a role in promoting health behaviors, a recent meta-analysis found that self-compassion is linked to better practice of a range of health-promoting behaviors due in part to its links to adaptive emotions [14]. Research on the role of self-compassion for health-related outcomes with chronic illness populations is limited but nonetheless promising [15–17] , and suggests that self-compassion may be a worthwhile quality to cultivate to improve well-being and facilitate disease self-management.
In this article we present current research and theory on the potential of self-compassion as a clinical concept for improving health-related outcomes in chronic illness. After presenting a brief overview of the theoretical underpinnings of self-compassion and its measurement, we present the current state of research on the role of self-compassion in reducing stress and facilitating health behaviors in general medical populations. We then outline the emerging evidence illustrating a potential role for extending this research to chronic illness populations and make recommendations for the application of self-compassion interventions in clinical care, as a means to improving well-being and facilitating self-management of health for this group.
Self-Compassion: A Healthier Way of Responding to Challenges
Research into the correlates and effects of self-compassion has been primarily guided by the model of self-compassion proposed by Kristin Neff [10]. This view of self-compassion is derived from Buddhist psychology and reconceptualised in a secular manner to refer to the compassion expressed towards the self when experiencing suffering, whether it be due to circumstances beyond one’s control or within one’s control [18]. The 3 key components of self-compassion are proposed to work synergistically to promote kind rather than critical responses to failures and difficult circumstances. Self-kindness (versus self-judgment) involves taking a kind, caring, and non-evaluative stance towards perceived inadequacies, shortcomings, and mistakes, and may be particularly valuable for countering the negative self-evaluations that can accompany not being able to meet one’s expectations due to the restrictions of living with a chronic condition [9]. Common humanity (versus isolation) refers to the sense of connection to others that arises from acknowledging the common human experience of imperfection and making mistakes, and being more aware that others may face similar challenging circumstances [18]. Framing hardship from this perspective can help people let go of the “why me?” view of their illness, which can compromise adjustment [19], and instead foster a greater connection with others who live with similar conditions. Mindfulness (versus over-identification) is the final component of self-compassion as conceptualised by Neff [10], and refers to taking a balanced and non-judgmental view of emotional experiences, grounding them in the present moment and neither ignoring nor becoming overly embroiled in the negative feelings that accompany painful experiences. Neff [10,18] proposes that mindfulness helps counteract the over-identification with one’s suffering that can reduce objectivity and prevent taking a larger perspective on the situation.
This mindful stance may be particularly beneficial for dealing with the ongoing pain and suffering of living with a chronic health condition, and encourage healthier ways of viewing the limitations associated with chronic illness. Correlational evidence from a study of healthy students further suggests that certain individual components of self-compassion may be particularly beneficial in the context of health, as the self-kindness and common humanity components were each found to be linked to better physical health and managing life stressors [20].
Although there are other conceptualizations of self-compassion [21], this 3-faceted model is the most widely used in research, in part because of the availability of a measure, the Self-Compassion Scale [22], which explicitly assesses each of the facets of self-compassion. The 26-item scale is designed to assess positive and negative dimensions of each facet of self-compassion, but the total score is used more often than the separate subscales [23]. The measure assesses dispositional or trait self-compassion, with an underlying assumption that some individuals can be more or less self-compassionate in the way they regularly respond to challenges or failures. Importantly, self-compassion can also be prompted or fostered as a way of responding to failures and challenges, presenting the possibility that self-compassion can be increased among those who may benefit the most from responding with greater self-kindness and less self-judgement [24–26].
Whether conceived of as a momentary state or as an enduring quality, self-compassion has demonstrated consistent links with an array of indicators of psychological well-being. For example, one meta-analysis found that self-compassion is robustly and negatively linked with psychopathology (average r = –0.54), including depression and anxiety [27], 2 mental health issues that are prevalent in chronic illness populations [28,29]. Several studies have also noted associations of self-compassion with emotional resilience [18,30], and better coping and lower stress [12,13].
Self-Compassion Is Associated with Lower Perceived Stress
Relevant for our focus on chronic illness care, there is some evidence that self-compassion can be effective for improving well-being, and reducing stress in particular, in people with chronic illness. Across 2 illness samples (cancer and mixed chronic illnesses), those who scored low on a measure of self-compassion had higher levels of depression and stress compared to a healthy control sample [15], suggesting self-compassion may be protective against poor adjustment. Similar results have been found for breast cancer patients, with self-compassion explaining lower distress related to body image [16], and HIV patients, with self-compassion linked to lower stress, anxiety, and shame [31].
The protective role of self-compassion for stress appears to be explained primarily by the set of coping strategies that self-compassionate people use to deal with challenging circumstances. In their review, Allen and Leary [13] noted that self-compassionate people use coping styles that are adaptive and problem-focused (e.g., planning, social-support-seeking, and positive reframing), and tend to not use maladaptive coping styles (e.g., cognitively or behaviorally disengaging from the stressor and other escape-avoidance coping). Consistent with appraisal-based models of coping [32], adaptive coping strategies focus on removing the stressful event, garnering resources to better deal with the stressor, or recasting the stressor as less threatening, and therefore are instrumental in reducing the levels of stress that might normally be perceived in the absence of such coping approaches. Having access to a repertoire of adaptive coping strategies is particularly important in the context of chronic illness which can present a variety of daily challenges related to pain, functional and psychosocial limitations that require a flexible approach to changing demands.
Self-compassion with its links to adaptive coping may be particularly relevant for coping with such demands. One study put this assertion to the test by examining the role of coping strategies in explaining the link between self-compassion and stress in two chronic illness samples, inflammatory bowel disease (IBD) and arthritis [17]. In both samples, higher trait self-compassion was associated with a set of adaptive coping strategies which in turn explained greater coping efficacy and lower perceived stress, with the overall model explaining 43% of the variance in stress after controlling for health status and disease duration. Key adaptive coping strategies included greater use of active coping (a problem-focused coping strategy aimed at removing or reducing the stressor), positive reframing, and acceptance. The self-compassion–stress link was also explained in part by less use of maladaptive strategies, including denial, behavioral disengagement, and self-blame coping [17]. The latter coping strategy in particular is linked to poor adjustment in chronic illness as it reflects efforts to take control over uncontrollable symptoms by viewing illness-related changes, such as flare-ups, as a personal failure to manage one’s illness [9,33]. Together these findings, which were remarkably consistent across 2 distinct chronic illness groups, provide solid evidence to suggest that self-compassion provides individuals living with a chronic illness with a coping advantage that fosters adjustment through engaging in appropriate cognitive and behavioral coping strategies to minimize perceived stress.
Self-Compassion Can Reduce Physiological Stress
A caveat regarding the research to date on self-compassion and stress in chronic illness is that all studies are cross-sectional, which limits any conclusions about the direction of causality. Setting aside the fact that self-compassion in each of these studies was assessed as a relatively stable trait-like quality, one could argue that individuals who are less stressed have a greater opportunity to express kindness to themselves as they are not preoccupied with illness-related demands and challenges. However, emerging research on self-compassion and the physiological correlates of stress provides a compelling case for the directionality assumed in the cross-sectional research. In one study, healthy young adults were subjected to a standard stress-inducing laboratory task (involving mental mathematics and public speaking), with plasma concentrations of the pro-inflammatory cytokine interleukin-6 (IL-6) assessed before and after the task on 2 days [34]. Those with higher trait self-compassion responded to the stress task with significantly lower IL-6 levels, even after controlling for other potential confounds such as demographics, self-esteem, depressive symptoms, and distress. Self-compassion was also linked to lower baseline levels of IL-6 on both days. These findings suggest that self-compassion may be both an enduring and a response-specific protective factor against stress-induced inflammation.
There is also evidence supporting the efficacy of self-compassion interventions for reducing stress. In a study of healthy young women, those who underwent a brief training in self-compassion were found to have lower sympathetic nervous system reactivity (salivary alpha-amylase), and more adaptive parasympathetic nervous system reactivity (heart rate variability) in response to a stress-inducing lab task, compared to placebo control and no-training control groups [35]. That this study was conducted with women only is notable, as research indicates that women tend to have lower levels of self-compassion compared to men [18]. Together with the study on trait self-compassion and biomarkers of stress-induced inflammation, this research provides supportive evidence for the role of self-compassion in reducing the harmful physiological effects of stress. Self-compassion may therefore be particularly beneficial for both psychological and physical well-being in chronic illness given the known and negative impact of stress on symptoms for a number of chronic illnesses such as diabetes [36], cardiovascular disease [37], arthritis [4], and IBD [38].
Self-Compassion and the Regulation of Health Behaviors
Another key role for self-compassion in chronic illness care is through the facilitation of health-promoting behaviors. Health maintenance and disease management behaviors, such as getting diagnostic tests, taking medication, and weight management, are central for managing symptoms and minimizing the risk of disease progression or complications. For example, staying physically fit, maintaining a healthy diet, managing stress, and getting adequate sleep are critical for weight management and the behavioral control of symptoms for a number of chronic diseases [39,40]. Nonetheless, weight management behaviors often require initiating significant lifestyle changes which need to be maintained in order to be effective. Such behaviors can be particularly challenging for individuals with chronic illness symptoms such as pain and fatigue, which can present significant barriers [41] and trigger self-critical thoughts about not being able to adequately care for oneself or manage one’s disease [8,9]. Theory and evidence indicate that, rather than being motivating, such negative evaluations tend to increase stress and promote procrastination of important health behaviors [7,42].
In addition to theory noting why self-compassion may facilitate the regulation of important health behaviors [43,44], there is now a burgeoning body of research supporting the beneficial role of self-compassion in health behaviors [12,43,45]. Each of the 3 components of self-compassion (self-kindness, common humanity, and mindfulness) is posited to facilitate adaptive self-regulatory responses to the inevitable and momentary failures that occur when people try to enact their health goals. For example, not following through with dietary recommendations and giving in to temptation can result in feelings of shame, negative self-evaluations, and reactive eating [46], which in turn can result in discontinuation of one’s diet. These minor failures would be viewed less negatively by people who are self-compassionate, because they realise that others have made similar mistakes (common humanity) and, therefore, do not become excessively self-critical (self-kindness) or immersed in feelings of guilt, shame or frustration (mindfulness), negative emotions which are known to interfere with self-regulation [43,47]. Indeed, self-compassion is associated with having fewer negative reactions in response to imagining a scenario in which a diet goal is transgressed [48].
There is also evidence that collectively, these components of self-compassion facilitate experiencing a healthy balance of positive and negative emotions in the context of health behavior change. Self-compassion appears to temper the negative responses to minor setbacks and failures that occur whilst trying to reach health goals, and foster the positive emotions required to maintain motivation during the pursuit of health goals. The most compelling support for this proposition comes from a meta-analysis of 15 samples (n = 3252) in which self-compassion was consistently and positively (average r = 0.25) associated with the practice of a range of health-promoting behaviors relevant for chronic illness care, including healthy eating, regular exercise, healthy sleep behaviors, and stress management [14]. The explanatory roles of positive and negative affect were also tested, with the results indicating that higher levels of positive affect and lower levels of negative affect were significant mediators of the link between self-compassion and health behaviors.
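The "average r" reported by such meta-analyses is typically obtained by pooling each sample's correlation, commonly via Fisher's z-transformation weighted by sample size. A minimal sketch of that pooling step, using made-up correlations and sample sizes (not the actual 15 samples from the cited meta-analysis):

```python
import math

def meta_average_r(studies):
    """Pool per-sample correlations into one average r.

    Each correlation is Fisher z-transformed (atanh), averaged with
    inverse-variance weights (n - 3), and back-transformed (tanh).
    `studies` is a list of (r, n) tuples.
    """
    weighted_z = sum((n - 3) * math.atanh(r) for r, n in studies)
    total_weight = sum(n - 3 for r, n in studies)
    return math.tanh(weighted_z / total_weight)

# Hypothetical self-compassion/health-behavior correlations from 3 samples
studies = [(0.22, 200), (0.28, 350), (0.25, 150)]
print(f"pooled r = {meta_average_r(studies):.3f}")
```

The weighting means larger samples pull the pooled estimate towards their correlation, which is why a single small outlying sample has little influence on the reported average.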
With respect to mood regulation, it is important to note that self-compassion is not simply an optimistic bias that predisposes individuals towards responding only in a positive way to perceived failures or setbacks. Rather, self-compassion fosters taking a balanced perspective on one’s failures, recognizing both the positive and negative aspects, and harnessing the negative mood that arises from a state of discrepancy to motivate self-improvement. For example, in experimental studies, both enduring and momentary self-compassionate states are associated with increased self-improvement motivation and behavior after experiencing failure and regret [49,50], in part because self-compassion fosters personal acceptance [50]. This adaptive responding can translate into better adherence and health behaviors in chronic health conditions after lapses in self-care which might otherwise foster self-criticism and poor disease management. Preliminary evidence from the author’s lab supports this proposition, as self-compassion was positively associated with both treatment adherence and the practice of wellness behaviors, due in part to lower levels of perceived stress, in samples of cancer patients and survivors [51], and people with chronic fatigue syndrome [52].
Clinical Applications of Self-Compassion for Chronic Illness Care
Given the growing evidence linking self-compassion to well-being and health behaviors, the next logical step is to consider ways of cultivating self-compassion for those individuals experiencing chronic health conditions.
Training in mindfulness might be one way to foster self-compassion within a health care setting. Mindfulness-Based Cognitive Therapy (MBCT [53]) and Mindfulness-Based Stress Reduction (MBSR [54]) are both programs that use mindfulness skills to notice distressing thoughts and feelings, hold these experiences in awareness, and cultivate acceptance and self-compassion [53]. MBSR, usually delivered as an 8-week group-based program, has been found to have significant effects on depression, anxiety, and psychological distress in people with chronic somatic diseases [55]. However, fostering self-compassion forms only part of MBCT and MBSR. Indeed, very few therapeutic interventions specifically and primarily target self-compassion, but those that do show promise.
Compassionate Mind Training (CMT [24]), Compassion-Focused Therapy (CFT [21]), and the Mindful Self-Compassion program [26] are examples of such targeted interventions. These therapeutic models, again usually delivered in group settings, aim to foster a kinder and more accepting attitude towards oneself through the use of formal meditations (such as loving-kindness meditation, LKM), home practice, and informal practices for daily life (such as self-compassionate letter writing), and have been demonstrated to be effective with, for example, community participants [26], people who hear malevolent voices [56], and those with chronic mood difficulties [24].
Additionally, there are a number of brief self-compassion practices that have been evaluated as interventions in their own right and demonstrate positive effects. LKM, which aims to develop a state of unconditional kindness towards both oneself and others, and compassion meditation (CM [57]) are the most commonly described. CM involves techniques to cultivate compassion, or deep, genuine sympathy for those stricken by misfortune, including oneself, “together with an earnest wish to ease this suffering” [58]. The effects of these kindness-based meditations on health and well-being have been summarized in a recent review [59], which illustrates that, whilst limited data exist currently, promising effects have been shown for a number of different groups. Positive effects have, for example, been demonstrated for patients with chronic back pain [60] and for people with experiences traditionally conceptualized as psychosis [61], suggesting these practices may also be beneficial for other chronic health conditions.
Alongside the potential benefits, how interventions cultivating self-compassion can be delivered in clinical practice is worthy of consideration. Previous applications have included group work (including MBCT, MBSR), one-to-one therapy (such as CFT) and self-directed practice via bibliotherapy or online materials. The different options available here suggest this kind of intervention is highly accessible, potentially inexpensive and could be used as a complimentary approach alongside other more traditional medical disease management treatments or as a stand-alone psychotherapeutic intervention when required.
In order to best support the successful introduction and evaluation of such interventions, consideration of compassionate practice by staff within health-care settings is also needed. Cultivating a culture of compassion through compassionate leadership [62] is required. We know services with higher levels of caring practice have higher quality care, greater well-being for staff and in turn more compassionate care for patients [63] than those services that are struggling. It is hoped that taking a broad systemic compassionate approach (via training, ongoing supervision and ethos cultivation) would ensure that the language used, information communicated, and disease management approaches are planned and delivered in a way that fosters patients’ sense of self-efficacy and kindness towards themselves, with all the benefits outlined above.
Conclusion
Theory and research indicate that self-compassion fosters adaptive responses to perceived failures and setbacks, and is therefore associated with well-being, reduced stress and more frequent health behaviors. The emerging evidence base on the benefits of self-compassion for coping with the challenges of chronic health conditions is promising, and suggests that the benefits of self-compassion noted in non-medical populations may extend to chronic illness care. Interventions cultivating self-compassion may be especially beneficial for those with chronic health conditions through the mechanisms identified earlier; reducing stress (and thereby impacting on an individual’s relationship with their physical health); improving self-management skills with condition related behaviors and health-promoting behaviors; altering one’s relationship with illness-related shame and self-blame; and in boosting resilience. Systematic and rigorous evaluation of such interventions with people with chronic health conditions is now needed, evaluating impacts on well-being, health behaviors, and disease management and outcomes.
Corresponding author: Fuschia M. Sirois, Dept. of Psychology, University of Sheffield, 1 Vicar Lane, Sheffeld, S1 1HD, [email protected].
Fianacial disclosures: None.
From the Department of Psychology, University of Sheffield, Sheffield, UK.
Abstract
- Objective: To present current research and theory on the potential of self-compassion for improving health-related outcomes in chronic illness, and make recommendations for the application of self-compassion interventions in clinical care to improve well-being and facilitate self-management of health in patients with chronic illness.
- Methods: Narrative review of the literature.
- Results: Current theory indicates that the self-kindness, common humanity, and mindfulness components of self-compassion can foster adaptive responses to the perceived setbacks and shortcomings that people experience in the context of living with a chronic illness. Research on self-compassion in relation to health has been conducted primarily within non-medical populations. Cross-sectional and experimental studies have demonstrated clear links between self-compassion and lower levels of both perceived stress and physiological indicators of stress. A growing evidence base also indicates that self-compassion is associated with more frequent practice of health-promoting behaviors in healthy populations. Research on self-compassion with chronic illness populations is limited but has demonstrated cross-sectional links to adaptive coping, lower stress and distress, and the practice of important health behaviors. There are several interventions for increasing self-compassion in clinical settings, with limited data suggesting beneficial effects for clinical populations.
- Conclusion: Self-compassion holds promise as an important quality to cultivate to enhance health-related outcomes in those with chronic health conditions. Further systematic and rigorous research evaluating the effectiveness of self-compassion interventions in chronic illness populations is warranted to fully understand the role of this quality for chronic illness care.
Living with a chronic illness presents a number of challenges that can take a toll on both physical and psychological well-being. Pain, fatigue, and decreased daily functioning are symptoms common to many chronic illnesses that can negatively impact psychological well-being by creating uncertainty about attaining personal goals [1], and contributing to doubts and concerns about being able to fulfil one’s personal and work-related responsibilities [2]. The stress associated with negotiating the challenges of chronic illness can further complicate adjustment by exacerbating existing symptoms via stress-mediated and inflammation regulation pathways [3–5] and compromising the practice of important disease management and health maintenance behaviors [6,7]. These experiences can in turn fuel self-blame and other negative self-evaluations about not being able to meet personal and others’ expectations about managing one’s illness and create a downward spiral of poor adjustment and well-being [8,9].
A growing evidence base suggests that self-compassion is an important quality to help manage the stress and behavior-related issues that can compromise chronic illness care. Defined by Neff [10] as taking a kind, accepting, and non-judgmental stance towards oneself in times of failure or difficulty, self-compassion is associated with several indicators of adjustment in non-medical populations including resilience [11,12] and adaptive coping [13]. In support of the notion that self-compassion can play a role in promoting health behaviors, a recent meta-analysis found that self-compassion is linked to better practice of a range of health-promoting behaviors due in part to its links to adaptive emotions [14]. Research on the role of self-compassion for health-related outcomes with chronic illness populations is limited but nonetheless promising [15–17], and suggests that self-compassion may be a worthwhile quality to cultivate to improve well-being and facilitate disease self-management.
In this article we present current research and theory on the potential of self-compassion as a clinical concept for improving health-related outcomes in chronic illness. After presenting a brief overview of the theoretical underpinnings of self-compassion and its measurement, we present the current state of research on the role of self-compassion in reducing stress and facilitating health behaviors in general medical populations. We then outline the emerging evidence illustrating a potential role for extending this research to chronic illness populations and make recommendations for the application of self-compassion interventions in clinical care, as a means to improving well-being and facilitating self-management of health for this group.
Self-Compassion: A Healthier Way of Responding to Challenges
Research into the correlates and effects of self-compassion has been primarily guided by the model of self-compassion proposed by Kristin Neff [10]. This view of self-compassion is derived from Buddhist psychology and reconceptualised in a secular manner to refer to the compassion expressed towards the self when experiencing suffering, whether it be due to circumstances beyond one’s control or within one’s control [18]. The 3 key components of self-compassion are proposed to work synergistically to promote kind rather than critical responses to failures and difficult circumstances. Self-kindness (versus self-judgment) involves taking a kind, caring and non-evaluative stance towards perceived inadequacies, shortcomings, and mistakes, and may be particularly valuable for countering the negative self-evaluations that can accompany not being able to meet one’s expectations due to the restrictions of living with a chronic condition [9]. Common humanity (versus isolation) refers to the sense of connection to others that arises from acknowledging the common human experience of imperfection and making mistakes, and being more aware that others may face similar challenging circumstances [18]. Framing hardship from this perspective can help people let go of the “why me?” view of their illness which can compromise adjustment [19], and instead foster a greater connection with others who live with similar conditions. Mindfulness (versus over-identification) is the final component of self-compassion as conceptualised by Neff [10], and refers to taking a balanced and non-judgmental view of emotional experiences, grounding them in the present moment and neither ignoring nor becoming overly embroiled in the negative feelings that accompany painful experiences. Neff [10,18] proposes that mindfulness helps counteract the over-identification with one’s suffering that can reduce objectivity and prevent taking a larger perspective on the situation.
This mindful stance may be particularly beneficial for dealing with the ongoing pain and suffering of living with a chronic health condition, and encourage healthier ways of viewing the limitations associated with chronic illness. Correlational evidence from a study of healthy students further suggests that certain individual components of self-compassion may be particularly beneficial in the context of health, as the self-kindness and common humanity components were each found to be linked to better physical health and managing life stressors [20].
Although there are other conceptualizations of self-compassion [21], this 3-faceted model is the most widely used in research, in part because of the availability of a measure, the Self-Compassion Scale [22], which explicitly assesses each of the facets of self-compassion. The 26-item scale is designed to assess positive and negative dimensions of each facet of self-compassion, but the total score is used more often than the separate subscales [23]. The measure assesses dispositional or trait self-compassion, with the underlying assumption that individuals can be more or less self-compassionate in the way they regularly respond to challenges or failures. Importantly, self-compassion can also be prompted or fostered as a way of responding to failures and challenges, presenting the possibility that self-compassion can be increased among those who may benefit the most from responding with greater self-kindness and less self-judgement [24–26].
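To make the scoring logic concrete, the sketch below shows how a 26-item Likert-type measure of this kind is typically scored: negatively worded items (self-judgment, isolation, over-identification) are reverse-scored and all items are then averaged into a total score. The set of item numbers used here is a placeholder for illustration, not the published scoring key for the Self-Compassion Scale.

```python
# Illustrative sketch of scoring a 26-item trait measure such as the
# Self-Compassion Scale [22]. The negative-item set below is a placeholder,
# NOT the published scoring key; consult the scale's documentation.

def reverse_score(item, low=1, high=5):
    """Reverse-score a single response on a low..high Likert scale."""
    return low + high - item

def total_score(responses, negative_items):
    """Mean of all items, with negatively worded items reverse-scored."""
    scored = [
        reverse_score(r) if i in negative_items else r
        for i, r in enumerate(responses, start=1)  # items numbered from 1
    ]
    return sum(scored) / len(scored)

negative_items = {1, 2, 4, 6, 8, 11, 13, 16, 18, 20, 21, 24, 25}  # placeholder
responses = [4, 2, 5, 3] * 6 + [4, 2]  # 26 hypothetical ratings on a 1-5 scale
print(round(total_score(responses, negative_items), 2))
```

Because reverse-scoring maps the midpoint of the scale onto itself, a respondent answering 3 to every item receives a total of exactly 3.0, which is a quick sanity check on the scoring function.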
Whether conceived of as a momentary state or as an enduring quality, self-compassion has demonstrated consistent links with an array of indicators of psychological well-being. For example, one meta-analysis found that self-compassion is robustly and negatively linked with psychopathology (average r = –0.54), including depression and anxiety [27], 2 mental health issues that are prevalent in chronic illness populations [28,29]. Several studies have also noted associations of self-compassion with emotional resilience [18,30], and better coping and lower stress [12,13].
Self-Compassion Is Associated with Lower Perceived Stress
Relevant for our focus on chronic illness care, there is some evidence that self-compassion can be effective for improving well-being, and reducing stress in particular, in people with chronic illness. Across 2 illness samples (cancer and mixed chronic illnesses), those who scored low on a measure of self-compassion had higher levels of depression and stress compared to a healthy control sample [15], suggesting self-compassion may be protective against poor adjustment. Similar results have been found for breast cancer patients, with self-compassion explaining lower distress related to body image [16], and HIV patients, with self-compassion linked to lower stress, anxiety, and shame [31].
The protective role of self-compassion for stress appears to be explained primarily by the set of coping strategies that self-compassionate people use to deal with challenging circumstances. In their review, Allen and Leary [13] noted that self-compassionate people use coping styles that are adaptive and problem-focused (e.g., planning, social-support-seeking, and positive reframing), and tend to not use maladaptive coping styles (e.g., cognitively or behaviorally disengaging from the stressor and other escape-avoidance coping). Consistent with appraisal-based models of coping [32], adaptive coping strategies focus on removing the stressful event, garnering resources to better deal with the stressor, or recasting the stressor as less threatening, and therefore are instrumental in reducing the levels of stress that might normally be perceived in the absence of such coping approaches. Having access to a repertoire of adaptive coping strategies is particularly important in the context of chronic illness which can present a variety of daily challenges related to pain, functional and psychosocial limitations that require a flexible approach to changing demands.
Self-compassion, with its links to adaptive coping, may be particularly relevant for coping with such demands. One study put this assertion to the test by examining the role of coping strategies in explaining the link between self-compassion and stress in two chronic illness samples, inflammatory bowel disease (IBD) and arthritis [17]. In both samples, higher trait self-compassion was associated with a set of adaptive coping strategies which in turn explained greater coping efficacy and lower perceived stress, with the overall model explaining 43% of the variance in stress after controlling for health status and disease duration. Key adaptive coping strategies included greater use of active coping (a problem-focused coping strategy aimed at removing or reducing the stressor), positive reframing, and acceptance. The self-compassion–stress link was also explained in part by less use of maladaptive strategies, including denial, behavioral disengagement, and self-blame coping [17]. The latter coping strategy in particular is linked to poor adjustment in chronic illness as it reflects efforts to take control over uncontrollable symptoms by viewing illness-related changes, such as flare-ups, as a personal failure to manage one’s illness [9,33]. Together these findings, which were remarkably consistent across 2 distinct chronic illness groups, suggest that self-compassion gives individuals living with a chronic illness a coping advantage: adjustment is fostered by engaging appropriate cognitive and behavioral coping strategies that minimize perceived stress.
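The mediation logic described above (self-compassion predicts coping, which in turn predicts lower stress) can be sketched with simulated data. The snippet below is a toy product-of-coefficients illustration under assumed effect sizes, not the authors' actual analysis in [17]; it simply shows how an indirect effect is estimated from the a path (predictor to mediator) and b path (mediator to outcome, controlling for the predictor).

```python
# Toy product-of-coefficients mediation sketch (not the analysis in [17]):
# self-compassion -> adaptive coping (a path) -> lower stress (b path).
import numpy as np

rng = np.random.default_rng(0)
n = 2000
self_comp = rng.normal(size=n)
coping = 0.5 * self_comp + rng.normal(scale=0.8, size=n)   # true a = 0.5
stress = -0.6 * coping + rng.normal(scale=0.8, size=n)     # true b = -0.6

def ols(y, *predictors):
    """OLS slopes of y on the predictors (intercept included, then dropped)."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = ols(coping, self_comp)[0]                 # a path
b, c_prime = ols(stress, coping, self_comp)   # b path and direct effect
indirect = a * b                              # mediated effect, ~ -0.30 here
print(f"a={a:.2f}, b={b:.2f}, indirect={indirect:.2f}")
```

With these simulated effect sizes the direct effect `c_prime` is near zero and the self-compassion–stress association runs almost entirely through coping, mirroring the pattern the study describes.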
Self-Compassion Can Reduce Physiological Stress
A caveat regarding the research to date on self-compassion and stress in chronic illness is that all studies are cross-sectional, which limits any conclusions about the direction of causality. Setting aside the fact that self-compassion in each of these studies was assessed as a relatively stable trait-like quality, one could argue that individuals who are less stressed have a greater opportunity to express kindness to themselves as they are not pre-occupied with illness-related demands and challenges. However, emerging research on self-compassion and the physiological correlates of stress provides a compelling case for the directionality assumed in the cross-sectional research. In one study, healthy young adults were subjected to a standard stress-inducing laboratory task (involving mental mathematics and public speaking), with plasma concentrations of the pro-inflammatory cytokine, interleukin-6 (IL-6), assessed before and after the task on 2 days [34]. Those with higher trait self-compassion responded to the stress task with significantly lower IL-6 levels even after controlling for other potential confounds such as demographics, self-esteem, depressive symptoms, and distress. Self-compassion was also linked to lower baseline levels of IL-6 on both days. These findings suggest that self-compassion may be both an enduring and response-specific protective factor against stress-induced inflammation.
There is also evidence supporting the efficacy of self-compassion interventions for reducing stress. In a study of healthy young women, those who underwent a brief training in self-compassion were found to have lower sympathetic nervous system reactivity (salivary alpha-amylase), and more adaptive parasympathetic nervous system reactivity (heart rate variability) in response to a stress-inducing lab task, compared to placebo control and no-training control groups [35]. That this study was conducted with women only is notable, as research indicates that women tend to have lower levels of self-compassion compared to men [18]. Together with the study on trait self-compassion and biomarkers of stress-induced inflammation, this research provides supportive evidence for the role of self-compassion in reducing the harmful physiological effects of stress. Self-compassion may therefore be particularly beneficial for both psychological and physical well-being in chronic illness given the known negative impact of stress on symptoms for a number of chronic illnesses such as diabetes [36], cardiovascular disease [37], arthritis [4], and IBD [38].
Self-Compassion and the Regulation of Health Behaviors
Another key role for self-compassion in chronic illness care is through the facilitation of health-promoting behaviors. Health maintenance and disease management behaviors, such as getting diagnostic tests, taking medication, and weight management, are central for managing symptoms and minimizing the risk of disease progression or complications. For example, staying physically fit, maintaining a healthy diet, managing stress, and getting adequate sleep are critical for weight management and the behavioral control of symptoms for a number of chronic diseases [39,40]. Nonetheless, weight management behaviors often require initiating significant lifestyle changes which need to be maintained in order to be effective. Such behaviors can be particularly challenging for individuals with chronic illness symptoms such as pain and fatigue, which can present significant barriers [41] and trigger self-critical coping about not being able to adequately self-care or manage one’s disease [8,9]. Far from being motivating, however, theory and evidence indicate that such negative self-evaluations tend to increase stress and promote procrastination of important health behaviors [7,42].
In addition to theory noting why self-compassion may facilitate the regulation of important health behaviors [43,44], there is now a burgeoning body of research supporting the beneficial role of self-compassion in health behaviors [12,43,45]. Each of the 3 components of self-compassion (self-kindness, common humanity, and mindfulness) is posited to facilitate adaptive self-regulatory responses to the inevitable and momentary failures that occur when people try to enact their health goals. For example, not following through with dietary recommendations and giving in to temptation can result in feelings of shame, negative self-evaluations, and reactive eating [46], which in turn can result in discontinuation of one’s diet. These minor failures would be viewed less negatively by people who are self-compassionate, because they realise that others have made similar mistakes (common humanity) and, therefore, do not become excessively self-critical (self-kindness) or immersed in feelings of guilt, shame or frustration (mindfulness), negative emotions which are known to interfere with self-regulation [43,47]. Indeed, self-compassion is associated with having fewer negative reactions in response to imagining a scenario in which a diet goal is transgressed [48].
There is also evidence that, collectively, these components of self-compassion facilitate experiencing a healthy balance of positive and negative emotions in the context of health behavior change. Self-compassion appears to temper the negative responses to minor setbacks and failures that occur whilst trying to reach health goals, and foster the positive emotions required to maintain motivation during the pursuit of health goals. The most compelling support for this proposition comes from a meta-analysis of 15 samples (n = 3252) in which self-compassion was consistently and positively (average r = 0.25) associated with the practice of a range of health-promoting behaviors relevant for chronic illness care, including healthy eating, regular exercise, healthy sleep behaviors, and stress management [14]. The explanatory roles of positive and negative affect were also tested, with the results indicating that higher levels of positive affect and lower levels of negative affect were significant mediators of the link between self-compassion and health behaviors.
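Average correlations of the kind quoted from these meta-analyses are typically computed by pooling sample correlations on the Fisher z scale rather than averaging the raw r values. A minimal sketch of that standard procedure, using made-up sample values rather than data from the reviews cited in the text:

```python
import math

def pool_correlations(samples):
    """Pool (r, n) pairs via the Fisher z transform, weighting each
    sample by n - 3 (the inverse variance of z), then back-transform."""
    num = sum((n - 3) * math.atanh(r) for r, n in samples)  # weighted z values
    den = sum(n - 3 for _, n in samples)                    # total weight
    return math.tanh(num / den)                             # back to r scale

# Hypothetical samples, not data from the meta-analyses cited in the text.
samples = [(0.20, 150), (0.30, 200), (0.25, 120)]
print(round(pool_correlations(samples), 3))  # weighted average near 0.26
```

Weighting by n - 3 means larger samples pull the pooled estimate towards their correlation, which is why meta-analytic averages such as r = 0.25 can differ from a simple mean of the reported coefficients.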
With respect to mood regulation, it is important to note that self-compassion is not simply an optimistic bias that predisposes individuals towards responding only in a positive way to perceived failures or setbacks. Rather, self-compassion fosters taking a balanced perspective on one’s failures, recognizing both the positive and negative aspects, and harnessing the negative mood that arises from a state of discrepancy to motivate self-improvement. For example, in experimental studies, both enduring and momentary self-compassionate states are associated with increased self-improvement motivation and behavior after experiencing failure and regret [49,50], in part because self-compassion fosters personal acceptance [50]. This adaptive responding can translate into better adherence and health behaviors in chronic health conditions after lapses in self-care which might otherwise foster self-criticism and poor disease management. Preliminary evidence from the author’s lab supports this proposition, as self-compassion was positively associated with both treatment adherence and the practice of wellness behaviors, due in part to lower levels of perceived stress, in samples of cancer patients and survivors [51], and people with chronic fatigue syndrome [52].
Clinical Applications of Self-Compassion for Chronic Illness Care
Given the growing evidence linking self-compassion to well-being and health behaviors, the next logical step is to consider ways of cultivating self-compassion for those individuals experiencing chronic health conditions.
Training in mindfulness might be one way to foster self-compassion within a health care setting. Mindfulness-Based Cognitive Behavior Therapy (MBCT [53]), and Mindfulness-Based Stress Reduction (MBSR [54]), are both programs that use mindfulness skills to notice distressing thoughts and feelings, hold these experiences in awareness, and cultivate acceptance and self-compassion [53]. MBSR, usually delivered as an 8-week group-based program, has been found to have significant effects on depression, anxiety and psychological distress in people with chronic somatic diseases [55]. However, fostering self-compassion forms only part of MBCT and MBSR. Indeed there are very few therapeutic interventions that specifically and primarily target self-compassion; however, where they are used they show promise.
Compassionate Mind Training (CMT [24]), Compassion-Focused Therapy (CFT [21]), and the Mindful Self-Compassion program [26] are examples of such targeted interventions. These therapeutic models, again usually delivered in group settings, aim to foster a kinder and more accepting attitude towards oneself through the use of formal meditations (such as loving kindness meditation; LKM), home practice and informal practices for daily life (such as self-compassionate letter writing), and have been demonstrated to be effective with, for example, community participants [26], people who hear malevolent voices [56], and those with chronic mood difficulties [24].
Additionally, there are a number of brief self-compassion practices that have been evaluated as interventions in their own right and demonstrate positive effects. Loving kindness meditation (LKM), which aims to develop a state of unconditional kindness towards both oneself and others, and compassion meditation (CM [57]) are the most commonly described. CM involves techniques to cultivate compassion, or deep, genuine sympathy for those stricken by misfortune, including oneself, “together with an earnest wish to ease this suffering” [58]. The effects of these kindness-based meditations on health and well-being have been summarized in a recent review [59], which illustrates that, whilst limited data exist currently, promising effects have been shown for a number of different groups. Positive effects have, for example, been demonstrated for patients with chronic back pain [60] and for people with experiences traditionally conceptualized as psychosis [61], suggesting these practices may also be beneficial for other chronic health conditions.
Alongside the potential benefits, how interventions cultivating self-compassion can be delivered in clinical practice is worthy of consideration. Previous applications have included group work (including MBCT, MBSR), one-to-one therapy (such as CFT) and self-directed practice via bibliotherapy or online materials. The different options available here suggest this kind of intervention is highly accessible, potentially inexpensive and could be used as a complementary approach alongside other more traditional medical disease management treatments or as a stand-alone psychotherapeutic intervention when required.
In order to best support the successful introduction and evaluation of such interventions, consideration of compassionate practice by staff within health-care settings is also needed. Cultivating a culture of compassion through compassionate leadership [62] is required. Services with higher levels of caring practice deliver higher-quality care, report greater staff well-being and, in turn, provide more compassionate care for patients than services that are struggling [63]. It is hoped that taking a broad systemic compassionate approach (via training, ongoing supervision and ethos cultivation) would ensure that the language used, information communicated, and disease management approaches are planned and delivered in a way that fosters patients’ sense of self-efficacy and kindness towards themselves, with all the benefits outlined above.
Conclusion
Theory and research indicate that self-compassion fosters adaptive responses to perceived failures and setbacks, and is therefore associated with well-being, reduced stress and more frequent health behaviors. The emerging evidence base on the benefits of self-compassion for coping with the challenges of chronic health conditions is promising, and suggests that the benefits of self-compassion noted in non-medical populations may extend to chronic illness care. Interventions cultivating self-compassion may be especially beneficial for those with chronic health conditions through the mechanisms identified earlier: reducing stress (and thereby impacting on an individual’s relationship with their physical health); improving self-management skills with condition-related behaviors and health-promoting behaviors; altering one’s relationship with illness-related shame and self-blame; and boosting resilience. Systematic and rigorous evaluation of such interventions with people with chronic health conditions is now needed, evaluating impacts on well-being, health behaviors, and disease management and outcomes.
Corresponding author: Fuschia M. Sirois, Dept. of Psychology, University of Sheffield, 1 Vicar Lane, Sheffield, S1 1HD, [email protected].
Financial disclosures: None.
1. Hamilton N, Karoly P, Kitzman H. Self-regulation and chronic pain: The role of emotion. Cogn Ther Res 2007;28:559–576.
2. Luyten P, Kempke S, Van Wambeke P, et al. Self-critical perfectionism, stress generation, and stress sensitivity in patients with chronic fatigue syndrome: relationship with severity of depression. Psychiatry 2011;74:21–30.
3. Cohen S, Janicki-Deverts D, Doyle WJ, et al. Chronic stress, glucocorticoid receptor resistance, inflammation, and disease risk. Proc Natl Acad Sci 2012.
4. Evers AWM, Verhoeven EWM, van Middendorp H, et al. Does stress affect the joints? Daily stressors, stress vulnerability, immune and HPA axis activity, and short-term disease and symptom fluctuations in rheumatoid arthritis. Ann Rheum Dis 2014;73:1683–8.
5. Maunder RG, Levenstein S. The role of stress in the development and clinical course of inflammatory bowel disease: epidemiological evidence. Curr Molecular Med 2008;8:247–52.
6. Rod NH, Grønbæk M, Schnohr P, et al. Perceived stress as a risk factor for changes in health behavior and cardiac risk profile: a longitudinal study. J Intern Med 2009;266:467–75.
7. Sirois FM. Is procrastination a vulnerability factor for hypertension and cardiovascular disease? Testing an extension of the procrastination-health model. J Behav Med 2015;38:578–89.
8. Moskovitz DN, Maunder RG, Cohen Z, et al. Coping behavior and social support contribute independently to quality of life after surgery for inflammatory bowel disease. Dis Colon Rectum 2000;43:517–21.
9. Voth J, Sirois FM. The role of self-blame and responsibility in adjustment to inflammatory bowel disease. Rehab Psych 2009;54:99–108.
10. Neff KD. Self-compassion: An alternative conceptualization of a healthy attitude toward oneself. Self Ident 2003;2:85–101.
11. Neff KD, Kirkpatrick KL, Rude SS. Self-compassion and adaptive psychological functioning. J Res Personality 2007;41:139–54.
2016 Update on pelvic floor dysfunction
The genitourinary syndrome of menopause (GSM) is a constellation of symptoms and signs of a hypoestrogenic state resulting in some or all of the following: vaginal dryness, burning, irritation, dyspareunia, urinary urgency, dysuria, and recurrent urinary tract infections.1 In 2014, the International Society for the Study of Women’s Sexual Health and the North American Menopause Society endorsed “GSM” as a new term to replace the less comprehensive description, vulvovaginal atrophy (VVA).1
The prevalence of GSM is around 50%, but it may increase each year after menopause, reaching up to 84.2%.2,3 Only about half of women affected seek medical care, with the most commonly reported symptoms being vaginal dryness and dyspareunia.3,4
Nonhormonal vaginal moisturizers and lubricants remain first-line treatment. The benefits are temporary and short lived because these options do not change the physiologic makeup of the vaginal wall; these treatments therefore provide relief only if the GSM symptoms are limited or mild.5
In this Update on pelvic floor dysfunction, we review 2 randomized, placebo-controlled trials of hormonal options (vaginal estrogen and oral ospemifene) and examine the latest information regarding fractional CO2 vaginal laser treatment. Also included are evidence-based guidelines for vaginal estrogen use and recommendations and conclusions for use of vaginal estrogen in women with a history of estrogen-dependent breast cancer. (The terms used in the studies described [ie, VVA versus GSM] have been maintained for accuracy of reporting.)
Low-dose estrogen vaginal cream ameliorates moderate to severe VVA with limited adverse events
Freedman M, Kaunitz AM, Reape KZ, Hait H, Shu H. Twice-weekly synthetic conjugated estrogens vaginal cream for the treatment of vaginal atrophy. Menopause. 2009;16(4):735-741.
In a multicenter, double-blind, randomized, placebo-controlled study, Freedman and colleagues evaluated the efficacy of a 1-g dose of synthetic conjugated estrogens A (SCE-A) cream versus placebo in postmenopausal women with moderate to severe VVA.
Details of the study
The investigators enrolled 305 participants aged 30 to 80 years (mean [SD] age, 60 [6.6] years) who were naturally or surgically postmenopausal. The enrollment criteria included ≤5% superficial cells on vaginal smear, vaginal pH >5.0, and at least 1 moderate or severe symptom of VVA (vaginal dryness, soreness, irritation/itching, pain with intercourse, or bleeding after intercourse).
Participants were randomly assigned in a 1:1:1:1 ratio to twice-weekly therapy with 1 g (0.625 mg/g) SCE-A vaginal cream, 2 g SCE-A vaginal cream, 1 g placebo, or 2 g placebo. Study visits occurred on days 14, 21, 28, 56, and 84 (12-week end point). The 3 co-primary outcomes were cytology, vaginal pH, and most bothersome symptom (MBS). Primary outcomes and safety/adverse events (AEs) were recorded at each study visit, and transvaginal ultrasound and endometrial biopsy were performed for women with a uterus at the beginning and end of the study.
Mean change and percent change in the 3 primary outcomes were assessed between baseline and each study visit. MBS was scored on a scale of 0 to 3 (0 = none, 1 = mild, 2 = moderate, 3 = severe). The principal indicators of efficacy were the changes from baseline to the end of treatment (12 weeks) for each of the 3 end points. Since the 1-g and 2-g SCE-A dose groups showed a similar degree of efficacy on all 3 co-primary end points, approval from the US Food and Drug Administration (FDA) was sought only for the lower dose, in keeping with the use of the lowest effective dose; therefore, results from only the 1-g SCE-A dose group and matching placebo group were presented in the article. A sample size calculation determined that at least 111 participants in each group were needed to provide 90% power for statistical testing.
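The sample size determination described above follows the standard two-sample normal-approximation formula for comparing means. As a rough sketch only (the study's assumed effect size and standard deviation are not reported, so the values below are illustrative, not the investigators' actual inputs):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.90):
    """Normal-approximation sample size per group for a two-sample
    comparison of means: n = 2 * ((z_{1-alpha/2} + z_{power}) * sd / delta)^2.
    """
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_b = z.inv_cdf(power)          # power quantile
    return ceil(2 * ((z_a + z_b) * sd / delta) ** 2)

# Illustrative: a standardized effect of about 0.44 SD at alpha = .05
# and 90% power yields roughly 109 participants per group, of the same
# order as the study's stated requirement of at least 111 per group.
print(n_per_group(delta=0.44, sd=1.0))  # → 109
```

The per-group requirement scales with the inverse square of the effect size, which is why small anticipated treatment differences drive enrollment targets up quickly.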
Estrogen reduced MBS severity, improved vaginal indices
The modified intent-to-treat (MITT) cohort was used for outcome analysis, and data from 275 participants were available at the 12-week end point. At baseline, 132 participants (48%) indicated vaginal dryness and 86 women (31.3%) indicated pain during intercourse as the MBS. In the SCE-A group at baseline, the vaginal maturation index (VMI) was 31.31 compared with 31.84 in the placebo group. At 12 weeks, the SCE-A group had a mean reduction of 1.71 in overall MBS severity compared with the placebo group’s mean reduction of 1.11 (P<.0001). The SCE-A group had a greater increase in the VMI (with a mean change of 31.46 vs 5.16 in the placebo group [P<.0001]) and a greater decrease in the vaginal pH (mean pH at the end of treatment for the SCE-A group was 4.84, a decrease of 1.48, and for the placebo group was 5.96, a decrease of 0.31 [P<.0001]).
Adverse events. The incidence of AEs was similar for the 1-g SCE-A group and the 1-g placebo group, with no AE occurring at a rate higher than 5%. There were 15 (10%) treatment-related AEs in the estrogen group and 16 (10.3%) in the placebo group. The SCE-A group had 3 AEs (2%) leading to discontinuation, while the placebo group had 2 AEs (1.3%) leading to discontinuation. There were no clinically significant endometrial biopsy findings at the conclusion of the study.
Strengths and limitations. This study evaluated clinical and physiologic outcomes as well as uterine response to transvaginal estrogen. The use of the MBS allows symptoms to be scored objectively, in contrast to prior subjective symptom assessments, which varied widely. However, because only a few symptoms were assessed, the conclusions that can be drawn are limited.
For evidence-based recommended and suggested treatments for various genitourinary symptoms, we recommend as a resource the Society of Gynecologic Surgeons clinical practice guidelines on vaginal estrogen for the treatment of GSM (TABLE 1).5
In addition, for women with a history of estrogen-dependent breast cancer experiencing urogenital symptoms, the American College of Obstetricians and Gynecologists recommends nonhormonal agents as first-line therapy, with vaginal estrogen treatment reserved for women whose symptoms are unresponsive to nonhormonal therapies (TABLE 2).6


Ospemifene improves vaginal physiology and dyspareunia
Bachmann GA, Komi JO; Ospemifene Study Group. Ospemifene effectively treats vulvovaginal atrophy in postmenopausal women: results from a pivotal phase 3 study. Menopause. 2010;17(3):480–486.
Bachmann and colleagues evaluated the efficacy and safety of ospemifene for the treatment of VVA. This is one of the efficacy studies on which FDA approval was based. Ospemifene is a selective estrogen receptor modulator (SERM) that acts as an estrogen agonist/antagonist.
Details of the study
The study included 826 postmenopausal women randomly assigned to 30 mg/day of ospemifene, 60 mg/day of ospemifene, or placebo for 12 weeks. Participants were aged 40 to 80 years and met the criteria for VVA (defined as ≤5% superficial cells on vaginal smear [maturation index], vaginal pH >5.0, and at least 1 moderate or severe symptom of VVA). All women were given a nonhormonal lubricant for use as needed.
There were 4 co-primary end points: percentage of superficial cells on the vaginal smear, percentage of parabasal cells on the vaginal smear, vaginal pH, and self-assessed MBS using a Likert scale (0, none; 1, mild; 2, moderate; 3, severe). The symptom score was calculated as the change from baseline to week 12 for each MBS. Safety was assessed by patient report; if a participant had an intact uterus and cervix, Pap test, endometrial thickness, and endometrial histologic analysis were performed at baseline and at 12 weeks. Baseline characteristics were similar among all treatment groups. A total of 46% of participants reported dyspareunia as their MBS, and 39% reported vaginal dryness.
Two dose levels of ospemifene effectively relieve symptoms
After 12 weeks of treatment, both the 30-mg and the 60-mg dose of ospemifene produced a statistically significant improvement in vaginal dryness and in the objective measures of maturation index and vaginal pH compared with placebo. Vaginal dryness scores decreased in the ospemifene 30-mg group (1.22) and in the ospemifene 60-mg group (1.26) compared with placebo (0.84) (P = .04 for the 30-mg group and P = .021 for the 60-mg group). The percentage of superficial cells was increased in both treatment groups compared with placebo (7.8% for the 30-mg group, 10.8% for the 60-mg group, 2.2% for the placebo group; P<.001 for both). The percentage of parabasal cells decreased in both treatment groups compared with participants who received placebo (21.9% in the 30-mg group, 30.1% in the 60-mg group, and 3.98% in the placebo group; P<.001 for both). Both treatment groups had a decrease in vaginal pH versus the placebo group as well (0.67 decrease in the 30-mg group, 1.01 decrease in the 60-mg group, and 0.10 decrease in the placebo group; P<.001 for both). The 60-mg/day ospemifene dose improved dyspareunia compared with placebo and was more effective than the 30-mg dose for all end points.
Adverse effects. Hot flashes were reported in 9.6% of the 30-mg ospemifene group and in 8.3% of the 60-mg group, compared with 3.4% in the placebo group. The increased percentage of participants with hot flashes in the ospemifene groups did not lead to increased discontinuation from the study. Urinary tract infections, defined by symptoms only, were more common in the ospemifene groups (4.6% in the 30-mg group, 7.2% in the 60-mg group, and 2.2% in the placebo group). In each group, 5% of patients discontinued the study because of AEs. There were 5 serious AEs in the 30-mg ospemifene group, 4 serious AEs in the placebo group, and none in the 60-mg group. No venous thromboembolic events were reported.
Strengths and limitations. Vaginal physiology as well as common symptoms of GSM were assessed in this large study. However, AEs were self-reported. While ospemifene was found safe and well tolerated when the study was extended for an additional 52 weeks (in women without a uterus) and 40 weeks (in women with a uterus), longer follow-up is needed to determine endometrial safety.7,8
Some patients may prefer an oral agent over a vaginally applied medication. While ospemifene is not an estrogen, it is a SERM that may increase the risk of endometrial cancer and thromboembolic events as stated in the boxed warning of the ospemifene prescribing information.
Fractional CO2 laser for VVA shows efficacy, patient satisfaction
Sokol ER, Karram MM. An assessment of the safety and efficacy of a fractional CO2 laser system for the treatment of vulvovaginal atrophy. Menopause. 2016;23(10):1102–1107.
In this first US pilot study, postmenopausal women received 3 fractional CO2 laser treatments, 6 weeks apart. The investigators evaluated the safety and efficacy of the treatment for GSM.
Details of the study
Thirty postmenopausal women (mean age, 58.6 years) were enrolled. All were nonsmokers with less than stage 2 prolapse, no vaginal procedures in the previous 6 months, and no use of vaginal creams, moisturizers, lubricants, or homeopathic preparations in the previous 3 months. Participants received 3 laser treatments with the SmartXide2, MonaLisa Touch (DEKA M.E.L.A. SRL, Florence, Italy) device at 6-week intervals, followed by a 3-month follow-up visit.
The primary outcome was visual analog scale (VAS) change in 6 categories (vaginal pain, burning, itching, dryness, dyspareunia, and dysuria) assessed from baseline to after each treatment, including 3 months after the final treatment, using an 11-point scale with 0 the lowest (no symptoms) and 10 the highest (extreme bother). Secondary outcomes were Vaginal Health Index (VHI) score, maximal tolerable dilator size, Female Sexual Function Index (FSFI) questionnaire score, general quality of life, degree of difficulty performing the procedure, participant satisfaction, vaginal pH, adverse effects, and treatment discomfort assessed using the VAS.
Improved VVA symptoms and vaginal caliber
Twenty-seven women completed the study. There was a statistically significant change in all 6 symptom categories measured with the VAS. Comparing baseline scores with scores after 3 treatments, the mean (SD) improvement on the VAS was 1.7 (3.2) for pain, 1.4 (2.9) for burning, 1.4 (1.9) for itching, and 1.0 (2.4) for dysuria (all P<.05). A greater improvement was noted for dryness, 6.1 (2.7), and for dyspareunia, 5.4 (2.9) (both P<.001). There was also a statistically significant overall improvement on the VHI and the FSFI. The mean (SD) VHI score at baseline was 14.4 (2.9; range, 8 to 20) and the mean (SD) after 3 laser treatments was 21.4 (2.9; range, 16 to 25), an overall mean (SD) improvement of 7.0 (3.1; P<.001).
Twenty-six participants completed a follow-up FSFI, with a mean (SD) baseline score of 11.3 (7.3; range, 2 to 25) and a mean (SD) improvement at follow-up of 8.8 (7.3; range, −3.7 to 27.2) (P<.001). There was an 83% increase in dilator size from baseline to follow-up. At baseline, 24 participants (80%) could comfortably accept an XS or S dilator, and at follow-up 23 of 24 women (96%) could comfortably accept an M or L dilator.
Adverse effects. At their follow-up, 96% of participants were satisfied or extremely satisfied with treatment. Two women reported mild-to-moderate pain lasting 2 to 3 days, and 2 had minor bleeding; however, no women withdrew or discontinued treatment because of adverse events.
Strengths and limitations. This study evaluated the majority of GSM symptoms as well as change in vaginal caliber after a nonhormonal therapy. However, the cohort was small and had no placebo group. In addition, with the limited observation period, it is difficult to determine the duration of effect and long-term safety of repeated treatments.
- Portman DJ, Gass ML; Vulvovaginal Atrophy Terminology Consensus Conference Panel. Genitourinary syndrome of menopause: new terminology for vulvovaginal atrophy from the International Society for the Study of Women’s Sexual Health and the North American Menopause Society. Maturitas. 2014;79(3):349–354.
- Parish SJ, Nappi RE, Krychman ML, et al. Impact of vulvovaginal health on postmenopausal women: a review of surveys on symptoms of vulvovaginal atrophy. Int J Womens Health. 2013;5:437–447.
- Palma F, Volpe A, Villa P, Cagnacci A; Writing Group of AGATA Study. Vaginal atrophy of women in postmenopause. Results from a multicentric observational study: the AGATA study. Maturitas. 2016;83:40–44.
- Kingsberg SA, Wysocki S, Magnus L, Krychman ML. Vulvar and vaginal atrophy in postmenopausal women: findings from the REVIVE (REal Women’s VIews of Treatment Options for Menopausal Vaginal ChangEs) survey. J Sex Med. 2013;10(7):1790–1799.
- Rahn DD, Carberry C, Sanses TV, et al; Society of Gynecologic Surgeons Systematic Review Group. Vaginal estrogen for genitourinary syndrome of menopause: a systematic review. Obstet Gynecol. 2014;124(6):1147–1156.
- Farrell R; American College of Obstetricians and Gynecologists Committee on Gynecologic Practice. Committee Opinion No. 659: the use of vaginal estrogen in women with a history of estrogen-dependent breast cancer. Obstet Gynecol. 2016;127(3):e93–e96.
- Simon JA, Lin VH, Radovich C, Bachmann GA; Ospemifene Study Group. One-year long-term safety extension study of ospemifene for the treatment of vulvar and vaginal atrophy in postmenopausal women with a uterus. Menopause. 2013;20(4):418–427.
- Simon J, Portman D, Mabey RG Jr; Ospemifene Study Group. Long-term safety of ospemifene (52-week extension) in the treatment of vulvar and vaginal atrophy in hysterectomized postmenopausal women. Maturitas. 2014;77(3):274–281.
The genitourinary syndrome of menopause (GSM) is a constellation of symptoms and signs of a hypoestrogenic state resulting in some or all of the following: vaginal dryness, burning, irritation, dyspareunia, urinary urgency, dysuria, and recurrent urinary tract infections.1 In 2014, the International Society for the Study of Women’s Sexual Health and the North American Menopause Society endorsed “GSM” as a new term to replace the less comprehensive description, vulvovaginal atrophy (VVA).1
The prevalence of GSM is around 50%, and it may increase each year after menopause, reaching up to 84.2%.2,3 Only about half of affected women seek medical care; the most commonly reported symptoms are vaginal dryness and dyspareunia.3,4
Nonhormonal vaginal moisturizers and lubricants remain first-line treatment. The benefits are short lived because these options do not change the physiologic makeup of the vaginal wall; these treatments therefore provide relief only if the GSM symptoms are limited or mild.5
In this Update on pelvic floor dysfunction, we review 2 randomized, placebo-controlled trials of hormonal options (vaginal estrogen and oral ospemifene) and examine the latest information regarding fractional CO2 vaginal laser treatment. Also included are evidence-based guidelines for vaginal estrogen use and recommendations and conclusions for use of vaginal estrogen in women with a history of estrogen-dependent breast cancer. (The terms used in the studies described [ie, VVA versus GSM] have been maintained for accuracy of reporting.)
Low-dose estrogen vaginal cream ameliorates moderate to severe VVA with limited adverse events
Freedman M, Kaunitz AM, Reape KZ, Hait H, Shu H. Twice-weekly synthetic conjugated estrogens vaginal cream for the treatment of vaginal atrophy. Menopause. 2009;16(4):735-741.
In a multicenter, double-blind, randomized, placebo-controlled study, Freedman and colleagues evaluated the efficacy of a 1-g dose of synthetic conjugated estrogens A (SCE-A) cream versus placebo in postmenopausal women with moderate to severe VVA.
Details of the study
The investigators enrolled 305 participants aged 30 to 80 years (mean [SD] age, 60 [6.6] years) who were naturally or surgically postmenopausal. The enrollment criteria included ≤5% superficial cells on vaginal smear, vaginal pH >5.0, and at least 1 moderate or severe symptom of VVA (vaginal dryness, soreness, irritation/itching, pain with intercourse, or bleeding after intercourse).
Participants were randomly assigned in a 1:1:1:1 ratio to twice-weekly therapy with 1 g (0.625 mg/g) SCE-A vaginal cream, 2 g SCE-A vaginal cream, 1 g placebo, or 2 g placebo. Study visits occurred on days 14, 21, 28, 56, and 84 (12-week end point). The 3 co-primary outcomes were cytology, vaginal pH, and most bothersome symptom (MBS). Primary outcomes and safety/adverse events (AEs) were recorded at each study visit, and transvaginal ultrasound and endometrial biopsy were performed for women with a uterus at the beginning and end of the study.
Mean change and percent change in the 3 primary outcomes were assessed between baseline and each study visit. MBS was scored on a scale of 0 to 3 (0 = none, 1 = mild, 2 = moderate, 3 = severe). The principal indicators of efficacy were the changes from baseline to the end of treatment (12 weeks) for each of the 3 end points. Since the 1-g and 2-g SCE-A dose groups showed a similar degree of efficacy on all 3 co-primary end points, approval from the US Food and Drug Administration (FDA) was sought only for the lower dose, in keeping with the use of the lowest effective dose; therefore, results from only the 1-g SCE-A dose group and matching placebo group were presented in the article. A sample size calculation determined that at least 111 participants in each group were needed to provide 90% power for statistical testing.
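The sample size calculation above can be sketched with the standard normal approximation for a two-sided, two-sample comparison of means. The article reports only the target (at least 111 per group for 90% power), not the assumed effect size, so the standardized effect size of 0.44 below is a hypothetical value chosen for illustration; it roughly, not exactly, reproduces the reported figure.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.90) -> int:
    """Approximate per-group sample size for a two-sided, two-sample
    comparison of means (normal approximation).

    effect_size is the standardized difference delta / sigma.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = .05
    z_beta = NormalDist().inv_cdf(power)           # ~1.28 for 90% power
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return math.ceil(n)

# Hypothetical standardized effect size of 0.44, two-sided alpha .05,
# 90% power -- close to the article's ~111 participants per group:
print(n_per_group(0.44))  # → 109
```

Requiring 90% rather than 80% power noticeably increases the per-group count for the same effect size, which is why power and effect-size assumptions drive trial size.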
Estrogen reduced MBS severity, improved vaginal indices
The modified intent-to-treat (MITT) cohort was used for outcome analysis, and data from 275 participants were available at the 12-week end point. At baseline, 132 participants (48%) indicated vaginal dryness and 86 women (31.3%) indicated pain during intercourse as the MBS. In the SCE-A group at baseline, the vaginal maturation index (VMI) was 31.31 compared with 31.84 in the placebo group. At 12 weeks, the SCE-A group had a mean reduction of 1.71 in overall MBS severity compared with the placebo group’s mean reduction of 1.11 (P<.0001). The SCE-A group had a greater increase in the VMI (with a mean change of 31.46 vs 5.16 in the placebo group [P<.0001]) and a greater decrease in the vaginal pH (mean pH at the end of treatment for the SCE-A group was 4.84, a decrease of 1.48, and for the placebo group was 5.96, a decrease of 0.31 [P<.0001]).
Adverse events. The incidence of AEs was similar for the 1-g SCE-A group and the 1-g placebo group, with no AE occurring at a rate of higher than 5%. There were 15 (10%) treatment-related AEs in the estrogen group and 16 (10.3%) in the placebo group. The SCE-A group had 3 AEs (2%) leading to discontinuation, while the placebo group had 2 AEs (1.3%) leading to discontinuation. There were no clinically significant endometrial biopsy findings at the conclusion of the study.
Strengths and limitations. This study evaluated clinical and physiologic outcomes as well as uterine response to transvaginal estrogen. Use of the MBS allows symptoms to be scored in a standardized way, in contrast with earlier subjective assessments that varied widely. However, because relatively few symptoms were evaluated, only limited conclusions can be drawn.
For evidence-based recommended and suggested treatments for various genitourinary symptoms, we recommend as a resource the Society of Gynecologic Surgeons clinical practice guidelines on vaginal estrogen for the treatment of GSM (TABLE 1).5
In addition, for women with a history of estrogen-dependent breast cancer experiencing urogenital symptoms, the American College of Obstetricians and Gynecologists recommends nonhormonal agents as first-line therapy, with vaginal estrogen treatment reserved for women whose symptoms are unresponsive to nonhormonal therapies (TABLE 2).6


Ospemifene improves vaginal physiology and dyspareunia
Bachmann GA, Komi JO; Ospemifene Study Group. Ospemifene effectively treats vulvovaginal atrophy in postmenopausal women: results from a pivotal phase 3 study. Menopause. 2010;17(3):480–486.
Bachmann and colleagues evaluated the efficacy and safety of ospemifene for the treatment of VVA. This is one of the efficacy studies on which FDA approval was based. Ospemifene is a selective estrogen receptor modulator (SERM) that acts as an estrogen agonist/antagonist.
Details of the study
The study included 826 postmenopausal women randomly assigned to 30 mg/day of ospemifene, 60 mg/day of ospemifene, or placebo for 12 weeks. Participants were aged 40 to 80 years and met the criteria for VVA (defined as ≤5% superficial cells on vaginal smear [maturation index], vaginal pH >5.0, and at least 1 moderate or severe symptom of VVA). All women were given a nonhormonal lubricant for use as needed.
There were 4 co-primary end points: percentage of superficial cells on the vaginal smear, percentage of parabasal cells on the vaginal smear, vaginal pH, and self-assessed MBS using a Likert scale (0, none; 1, mild; 2, moderate; 3, severe). The symptom score was calculated as the change from baseline to week 12 for each MBS. Safety was assessed by patient report; if a participant had an intact uterus and cervix, Pap test, endometrial thickness, and endometrial histologic analysis were performed at baseline and at 12 weeks. Baseline characteristics were similar among all treatment groups. A total of 46% of participants reported dyspareunia as their MBS, and 39% reported vaginal dryness.
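Both trials score the most bothersome symptom on the same 0-to-3 severity scale and analyze the change from baseline to week 12. A minimal sketch of that endpoint calculation, using hypothetical illustrative scores (not data from either trial):

```python
from statistics import mean

# Severity scale: 0 = none, 1 = mild, 2 = moderate, 3 = severe.
# Each tuple is (baseline score, week-12 score) for one participant
# whose MBS was dyspareunia -- hypothetical illustrative values.
dyspareunia_scores = [(3, 1), (2, 1), (3, 2), (2, 0)]

def mean_change(pairs):
    """Mean change from baseline; positive values indicate improvement."""
    return mean(baseline - week12 for baseline, week12 in pairs)

print(mean_change(dyspareunia_scores))  # → 1.5
```

In the trials, this mean change in the active groups is compared against the corresponding mean change in the placebo group, since placebo arms in VVA studies also show measurable improvement.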
Two dose levels of ospemifene effectively relieve symptoms
After 12 weeks of treatment, both the 30-mg and the 60-mg dose of ospemifene produced a statistically significant improvement in vaginal dryness and objective results of maturation index and vaginal pH compared with placebo. Vaginal dryness decreased in the ospemifene 30-mg group (1.22) and in the ospemifene 60-mg group (1.26) compared with placebo (0.84) (P = .04 for the 30-mg group and P = .021 for the 60-mg group). The percentage of superficial cells was increased in both treatment groups compared with placebo (7.8% for the 30-mg group, 10.8% for the 60-mg group, 2.2% for the placebo group; P<.001 for both). The percentage of parabasal cells decreased in both treatment groups compared with participants who received placebo (21.9% in the 30-mg group, 30.1% in the 60-mg group, and 3.98% in the placebo group; P<.001 for both). Both treatment groups had a decrease in vaginal pH versus the placebo group as well (0.67 decrease in the 30-mg group, 1.01 decrease in the 60-mg group, and 0.10 decrease in the placebo group; P<.001 for both). The 60-mg/day ospemifene dose improved dyspareunia compared with placebo and was more effective than the 30-mg dose for all end points.
Adverse effects. Hot flashes were reported in 9.6% of the 30-mg ospemifene group and in 8.3% of the 60-mg group, compared with 3.4% in the placebo group. The increased percentage of participants with hot flashes in the ospemifene groups did not lead to increased discontinuation from the study. Urinary tract infections, defined by symptoms only, were more common in the ospemifene groups (4.6% in the 30-mg group, 7.2% in the 60-mg group, and 2.2% in the placebo group). In each group, 5% of patients discontinued the study because of AEs. There were 5 serious AEs in the 30-mg ospemifene group, 4 serious AEs in the placebo group, and none in the 60-mg group. No venous thromboembolic events were reported.
Strengths and limitations. Vaginal physiology as well as common symptoms of GSM were assessed in this large study. However, AEs were self-reported. While ospemifene was found safe and well tolerated when the study was extended for an additional 52 weeks (in women without a uterus) and 40 weeks (in women with a uterus), longer follow-up is needed to determine endometrial safety.7,8
Some patients may prefer an oral agent over a vaginally applied medication. While ospemifene is not an estrogen, it is a SERM that may increase the risk of endometrial cancer and thromboembolic events as stated in the boxed warning of the ospemifene prescribing information.
Fractional CO2 laser for VVA shows efficacy, patient satisfaction
Sokol ER, Karram MM. An assessment of the safety and efficacy of a fractional CO2 laser system for the treatment of vulvovaginal atrophy. Menopause. 2016;23(10):1102–1107.
In this first US pilot study, postmenopausal women received 3 fractional CO2 laser treatments, 6 weeks apart. The investigators evaluated the safety and efficacy of the treatment for GSM.
Details of the study
Thirty women (mean age, 58.6 years) who were nonsmokers, postmenopausal, had less than stage 2 prolapse, no vaginal procedures for the past 6 months, and did not use vaginal creams, moisturizers, lubricants, or homeopathic preparations for the past 3 months were enrolled. Participants received 3 laser treatments with the SmartXide2, MonaLisa Touch (DEKA M.E.L.A. SRL, Florence, Italy) device at 6-week intervals followed by a 3-month follow-up.
The primary outcome was visual analog scale (VAS) change in 6 categories (vaginal pain, burning, itching, dryness, dyspareunia, and dysuria) assessed from baseline to after each treatment, including 3 months after the final treatment, using an 11-point scale with 0 the lowest (no symptoms) and 10 the highest (extreme bother). Secondary outcomes were Vaginal Health Index (VHI) score, maximal tolerable dilator size, Female Sexual Function Index (FSFI) questionnaire score, general quality of life, degree of difficulty performing the procedure, participant satisfaction, vaginal pH, adverse effects, and treatment discomfort assessed using the VAS.
Improved VVA symptoms and vaginal caliber
Twenty-seven women completed the study. There was a statistically significant change in all 6 symptom categories measured with the VAS. Mean improvement (SD) on the VAS, comparing baseline scores to scores after 3 treatments, was 1.7 (3.2) for pain, 1.4 (2.9) for burning, 1.4 (1.9) for itching, and 1.0 (2.4) for dysuria (all with P<.05). A greater improvement was noted for dryness, 6.1 (2.7), and for dyspareunia, 5.4 (2.9) (both P<.001). There was also a statistically significant overall improvement on the VHI and the FSFI. The mean (SD) VHI score at baseline was 14.4 (2.9; range, 8 to 20) and the mean (SD) after 3 laser treatments was 21.4 (2.9; range, 16 to 25), with an overall mean (SD) improvement of 7.0 (3.1; P<.001).
Twenty-six participants completed a follow-up FSFI, with a mean (SD) baseline score of 11.3 (7.3; range, 2 to 25) and a mean (SD) improvement of 8.8 (7.3; range, −3.7 to 27.2) (P<.001). Maximal comfortable dilator size increased in 83% of participants from baseline to follow-up. At baseline, 24 participants (80%) could comfortably accept an XS or S dilator, and at follow-up 23 of 24 women (96%) could comfortably accept an M or L dilator.
Adverse effects. At their follow-up, 96% of participants were satisfied or extremely satisfied with treatment. Two women reported mild-to-moderate pain lasting 2 to 3 days, and 2 had minor bleeding; however, no women withdrew or discontinued treatment because of adverse events.
Strengths and limitations. This study evaluated the majority of GSM symptoms as well as change in vaginal caliber after a nonhormonal therapy. However, the cohort was small and had no placebo group. In addition, with the limited observation period, it is difficult to determine the duration of effect and the long-term safety of repeated treatments.
Share your thoughts! Send your Letter to the Editor to [email protected]. Please include your name and the city and state in which you practice.
- Portman DJ, Gass ML; Vulvovaginal Atrophy Terminology Consensus Conference Panel. Genitourinary syndrome of menopause: new terminology for vulvovaginal atrophy from the International Society for the Study of Women’s Sexual Health and the North American Menopause Society. Maturitas. 2014;79(3):349–354.
- Parish SJ, Nappi RE, Krychman ML, et al. Impact of vulvovaginal health on postmenopausal women: a review of surveys on symptoms of vulvovaginal atrophy. Int J Womens Health. 2013;5:437–447.
- Palma F, Volpe A, Villa P, Cagnacci A; Writing Group of AGATA Study. Vaginal atrophy of women in postmenopause. Results from a multicentric observational study: the AGATA study. Maturitas. 2016;83:40–44.
- Kingsberg SA, Wysocki S, Magnus L, Krychman ML. Vulvar and vaginal atrophy in postmenopausal women: findings from the REVIVE (REal Women’s VIews of Treatment Options for Menopausal Vaginal ChangEs) survey. J Sex Med. 2013;10(7):1790–1799.
- Rahn DD, Carberry C, Sanses TV, et al; Society of Gynecologic Surgeons Systematic Review Group. Vaginal estrogen for genitourinary syndrome of menopause: a systematic review. Obstet Gynecol. 2014;124(6):1147–1156.
- Farrell R; American College of Obstetricians and Gynecologists Committee on Gynecologic Practice. Committee Opinion No. 659: the use of vaginal estrogen in women with a history of estrogen-dependent breast cancer. Obstet Gynecol. 2016;127(3):e93–e96.
- Simon JA, Lin VH, Radovich C, Bachmann GA; Ospemifene Study Group. One-year long-term safety extension study of ospemifene for the treatment of vulvar and vaginal atrophy in postmenopausal women with a uterus. Menopause. 2013;20(4):418–427.
- Simon J, Portman D, Mabey RG Jr; Ospemifene Study Group. Long-term safety of ospemifene (52-week extension) in the treatment of vulvar and vaginal atrophy in hysterectomized postmenopausal women. Maturitas. 2014;77(3):274–281.
Preventing infection after cesarean delivery: Evidence-based guidance
Cesarean delivery is now the most commonly performed major operation in hospitals across the United States. Approximately 30% of the 4 million deliveries that occur each year are by cesarean. Endometritis and wound infection (superficial and deep surgical site infection) are the most common postoperative complications of cesarean delivery. These 2 infections usually can be treated in a straightforward manner with antibiotics or surgical drainage. In some cases, however, they can lead to serious sequelae, such as pelvic abscess, septic pelvic vein thrombophlebitis, and wound dehiscence/evisceration, thereby prolonging the patient’s hospitalization and significantly increasing medical expenses.
Accordingly, in the past 50 years many investigators have proposed various specific measures to reduce the risk of postcesarean infection. In this article, we critically evaluate 2 of these major interventions: methods of skin preparation and administration of prophylactic antibiotics. In part 2 of this series next month, we will review the evidence regarding preoperative bathing with an antiseptic, preoperative vaginal cleansing with an antiseptic solution, methods of placental extraction, closure of the deep subcutaneous layer of the abdomen, and closure of the skin.
CASE Cesarean delivery required for nonprogressing labor
A 26-year-old obese primigravid woman, body mass index (BMI) 37 kg/m2, at 40 weeks’ gestation has been in labor for 20 hours. Her membranes have been ruptured for 16 hours. Her cervix is completely effaced and is 7 cm dilated. The fetal head is at −1 cm station. Her cervical examination findings have not changed in 4 hours despite adequate uterine contractility documented by intrauterine pressure catheter. You are now ready to proceed with cesarean delivery, and you want to do everything possible to prevent the patient from developing a postoperative infection.
What are the best practices for postcesarean infection prevention in this patient?

Skin preparation
Adequate preoperative skin preparation is an important first step in preventing postcesarean infection.
How should you prepare the patient’s skin for surgery?
Two issues to address when preparing the abdominal wall for surgery are hair removal and skin cleansing. More than 40 years ago, Cruse and Foord definitively answered the question about hair removal.1 In a landmark cohort investigation of more than 23,000 patients having many different types of operative procedures, they demonstrated that shaving the hair on the evening before surgery resulted in a higher rate of wound infection than clipping the hair, removing the hair with a depilatory cream just before surgery, or not removing the hair at all.
Three recent investigations have thoughtfully addressed the issue of skin cleansing. Darouiche and colleagues conducted a prospective, randomized, multicenter trial comparing chlorhexidine-alcohol with povidone-iodine for skin preparation before surgery.2 Their investigation included 849 patients having many different types of surgical procedures, only a minority of which were in obstetric and gynecologic patients. They demonstrated fewer superficial wound infections in patients in the chlorhexidine-alcohol group (4.2% vs 8.6%, P = .008). Of even greater importance, patients in the chlorhexidine-alcohol group had fewer deep wound infections (1% vs 3%, P = .005).
Ngai and co-workers recently reported the results of a randomized controlled trial (RCT) in which women undergoing nonurgent cesarean delivery had their skin cleansed with povidone-iodine with alcohol, chlorhexidine with alcohol, or the sequential combination of both solutions.3 The overall rate of surgical site infection was just 4.3%. The 3 groups had comparable infection rates and, accordingly, the authors were unable to conclude that one type of skin preparation was superior to the other.
The most informative recent investigation was by Tuuli and colleagues, who evaluated 1,147 patients having cesarean delivery assigned to undergo skin preparation with either chlorhexidine-alcohol or iodine-alcohol.4 Unlike the study by Ngai and co-workers, in this study approximately 40% of the patients in each treatment arm had unscheduled, urgent cesarean deliveries.3,4 Overall, the rate of infection in the chlorhexidine-alcohol group was 4.0% compared with 7.3% in the iodine-alcohol group (relative risk [RR], 0.55; 95% confidence interval [CI], 0.34–0.90, P = .02).
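The relative risk and confidence interval reported by Tuuli and colleagues can be reproduced from raw event counts using the standard log-transform method. The sketch below uses approximate per-arm counts (23/572 vs 42/575, back-calculated from the reported 4.0% and 7.3% rates, so they are illustrative rather than the trial's exact numbers):

```python
import math

def relative_risk(events_tx, n_tx, events_ctrl, n_ctrl, z=1.96):
    """Relative risk with a 95% CI via the log-transform (Katz) method."""
    rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
    # Standard error of ln(RR) for two independent proportions
    se = math.sqrt(1/events_tx - 1/n_tx + 1/events_ctrl - 1/n_ctrl)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Approximate counts back-calculated from the reported 4.0% vs 7.3% rates
rr, lo, hi = relative_risk(23, 572, 42, 575)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # close to the reported 0.55 (0.34-0.90)
```

With these assumed counts the function returns values matching the published RR of 0.55 (95% CI, 0.34–0.90).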
What the evidence says
Based on the evidence cited above, we advise removing hair at the incision site with clippers or depilatory cream just before the start of surgery. The abdomen should then be cleansed with a chlorhexidine-alcohol solution (Level I Evidence, Level 1A Recommendation; TABLE).

Antibiotic prophylaxis
Questions to consider regarding antibiotic prophylaxis for cesarean delivery include appropriateness of treatment, antibiotic(s) selection, timing of administration, dose, and special circumstances.
Should you give the patient prophylactic antibiotics?
Prophylactic antibiotics are justified for surgical procedures whenever 3 major criteria are met5:
- the surgical site is inevitably contaminated with bacteria
- in the absence of prophylaxis, the frequency of infection at the operative site is unacceptably high
- operative site infections have the potential to lead to serious, potentially life-threatening sequelae.
Without a doubt, all 3 of these criteria are fulfilled when considering either urgent or nonurgent cesarean delivery. When cesarean delivery follows a long labor complicated by ruptured membranes, multiple internal vaginal examinations, and internal fetal monitoring, the operative site is inevitably contaminated with hundreds of thousands of pathogenic bacteria. Even when cesarean delivery is scheduled to occur before the onset of labor and ruptured membranes, a high concentration of vaginal organisms is introduced into the uterine and pelvic cavities coincident with making the hysterotomy incision.6
In the era before prophylactic antibiotics were used routinely, postoperative infection rates in some highly indigent patient populations approached 85%.5 Finally, as noted previously, postcesarean endometritis may progress to pelvic abscess formation, septic pelvic vein thrombophlebitis, and septic shock; wound infections may be complicated by dehiscence and evisceration.
When should you administer antibiotics: Before the surgical incision or after cord clamping?
More than 50 years ago, Burke conducted the classic sequence of basic science experiments that forms the foundation for use of prophylactic antibiotics.7 Using a guinea pig model, he showed that prophylactic antibiotics exert their most pronounced effect when they are administered before the surgical incision is made and before bacterial contamination occurs. Prophylaxis that is delayed more than 4 hours after the start of surgery will likely be ineffective.
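Burke's principle reduces to a simple timing rule: give prophylaxis before the incision, because a dose delayed more than about 4 hours after the start of surgery is unlikely to help. A minimal sketch of that rule (the function name and the intermediate "diminishing benefit" label are my own, not Burke's terminology):

```python
def prophylaxis_timing(hours_after_incision: float) -> str:
    """Classify antibiotic timing per Burke's experimental findings.

    Negative values mean the dose was given before the incision,
    which is when prophylaxis is most effective.
    """
    if hours_after_incision <= 0:
        return "optimal: given before incision"
    if hours_after_incision <= 4:
        return "diminishing benefit"
    return "likely ineffective (>4 h after start of surgery)"

print(prophylaxis_timing(-0.5))  # dose 30 min before incision
print(prophylaxis_timing(5))
```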
Interestingly, however, when clinicians first began using prophylactic antibiotics for cesarean delivery, some investigators expressed concern about the possible exposure of the neonate to antibiotics just before delivery—specifically, whether this exposure would increase the frequency of evaluations for suspected sepsis or would promote resistance among organisms that would make neonatal sepsis more difficult to treat.
Gordon and colleagues published an important report in 1979 that showed that preoperative administration of ampicillin did not increase the frequency of immediate or delayed neonatal infections.8 However, delaying the administration of ampicillin until after the umbilical cord was clamped was just as effective in preventing post‑cesarean endometritis. Subsequently, Cunningham and co-workers showed that preoperative administration of prophylactic antibiotics significantly increased the frequency of sepsis workups in exposed neonates compared with infants with no preoperative antibiotic exposure (28% vs 15%; P<.025).9 Based on these 2 reports, obstetricians adopted a policy of delaying antibiotic administration until after the infant’s umbilical cord was clamped.
In 2007, Sullivan and colleagues challenged this long-standing practice.10 In a carefully designed prospective, randomized, double-blind trial, they showed that patients who received preoperative cefazolin had a significant reduction in the frequency of endometritis compared with women who received the same antibiotic after cord clamping (1% vs 5%; RR, 0.2; 95% CI, 0.2–0.94). The rate of wound infection was lower in the preoperative antibiotic group (3% vs 5%), but this difference did not reach statistical significance. The total infection-related morbidity was significantly reduced in women who received antibiotics preoperatively (4.0% vs 11.5%; RR, 0.4; 95% CI, 0.18–0.87). Additionally, there was no increase in the frequency of proven or suspected neonatal infection in the infants exposed to antibiotics before delivery.
Subsequent to the publication by Sullivan and colleagues, other reports have confirmed that administration of antibiotics prior to surgery is superior to administration after clamping of the umbilical cord.10–12 Thus, we have come full circle back to Burke’s principle established more than a half century ago.7
Which antibiotic(s) should you administer for prophylaxis, and how many doses?
In an earlier review, one of us (PD) examined the evidence regarding choice of antibiotics and number of doses, concluding that a single dose of a first-generation cephalosporin, such as cefazolin, was the preferred regimen.5 The single dose was comparable in effectiveness to 2- or 3-dose regimens and to single- or multiple-dose regimens of broader-spectrum agents. For more than 20 years now, the standard of care for antibiotic prophylaxis has been a single 1- to 2-g dose of cefazolin.
Several recent reports, however, have raised the question of whether the prophylactic effect could be enhanced if the spectrum of activity of the antibiotic regimen was broadened to include an agent effective against Ureaplasma species.
Tita and colleagues evaluated an indigent patient population with an inherently high rate of postoperative infection; they showed that adding azithromycin 500 mg to cefazolin significantly reduced the rate of postcesarean endometritis.13 In a follow-up report from the same institution, Tita and co-workers demonstrated that adding azithromycin also significantly reduced the frequency of wound infection.14 In both of these investigations, the antibiotics were administered after cord clamping.
In a subsequent report, Ward and Duff15 showed that the combination of azithromycin plus cefazolin administered preoperatively resulted in a very low rate of both endometritis and wound infection in a population similar to that studied by Tita et al.13,14
Very recently, Tita and associates published the results of the Cesarean Section Optimal Antibiotic Prophylaxis (C/SOAP) trial conducted at 14 US hospitals.16 This study included 2,013 women undergoing cesarean delivery during labor or after membrane rupture who were randomly assigned to receive intravenous azithromycin 500 mg (n = 1,019) or placebo (n = 994). All women also received standard antibiotic prophylaxis with cefazolin. The primary outcome (a composite of endometritis, wound infection, or other infection within 6 weeks) was significantly lower in the azithromycin group than in the placebo group (6.1% vs 12.0%, P<.001). In addition, there were significant differences between the treatment groups in the rates of endometritis (3.8% in the azithromycin group vs 6.1% in the placebo group, P = .02) as well as in the rates of wound infection (2.4% vs 6.6%, respectively, P<.001). Of additional note, there were no differences between the 2 groups in the composite neonatal outcome of death and serious neonatal complications (14.3% vs 13.6%, P = .63). The investigators concluded that extended-spectrum prophylaxis with adjunctive azithromycin safely reduces infection rates without raising the risk of neonatal adverse outcomes.
What the evidence says
We conclude that all patients, even those having a scheduled cesarean before the onset of labor or ruptured membranes, should receive prophylactic antibiotics in a single dose administered preoperatively rather than after cord clamping (Level I Evidence, Level 1A Recommendation; TABLE). In high-risk populations (eg, women in labor with ruptured membranes who are having an urgent cesarean), for whom the baseline risk of infection is high, administer the combination of cefazolin plus azithromycin in lieu of cefazolin alone (Level I Evidence, Level 1A Recommendation; TABLE).
If the patient has a history of an immediate hypersensitivity reaction to beta-lactam antibiotics, we recommend the combination of clindamycin (900 mg) plus gentamicin (1.5 mg/kg) as a single infusion prior to surgery. We base this recommendation on the need to provide reasonable coverage against a broad range of pathogens. Clindamycin covers gram-positive aerobes, such as staphylococci species and group B streptococci, and anaerobes; gentamicin covers aerobic gram-negative bacilli. A single agent, such as clindamycin or metronidazole, does not provide the broad-based coverage necessary for effective prophylaxis (Level III Evidence, Level 1C Recommendation; TABLE).
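The regimen selection above, including the weight-based gentamicin dose, can be expressed as a small decision function. This is an illustrative sketch of the recommendation as stated in the text, not a clinical dosing calculator; the function name and return format are my own:

```python
def prophylaxis_regimen(weight_kg: float, beta_lactam_allergy: bool) -> str:
    """Return the prophylactic regimen described in the text.

    Default: single-dose cefazolin. With an immediate-type beta-lactam
    allergy: clindamycin 900 mg plus gentamicin 1.5 mg/kg as a single
    preoperative infusion.
    """
    if beta_lactam_allergy:
        gentamicin_mg = round(1.5 * weight_kg, 1)  # weight-based dose
        return f"clindamycin 900 mg + gentamicin {gentamicin_mg} mg IV preoperatively"
    return "cefazolin 1-2 g IV preoperatively"

print(prophylaxis_regimen(80, beta_lactam_allergy=True))
# -> clindamycin 900 mg + gentamicin 120.0 mg IV preoperatively
```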
If the patient is overweight or obese, should you modify the antibiotic dose?
The prevalence of obesity in the United States continues to increase. One-third of all US reproductive-aged women are obese, and 6% of women are extremely obese.17 Obesity increases the risk of postcesarean infection 3- to 5- fold.18 Because both pregnancy and obesity increase the total volume of a drug’s distribution, achieving adequate antibiotic tissue concentrations may be hindered by a dilutional effect. Furthermore, pharmacokinetic studies consistently have shown that the tissue concentration of an antibiotic—which, ideally, should be above the minimum inhibitory concentration (MIC) for common bacteria—determines the susceptibility of those tissues to infection, regardless of whether the serum concentration of the antibiotic is in the therapeutic range.19
These concerns have led to several recent investigations evaluating different doses of cefazolin for obese patients. Pevzner and colleagues conducted a prospective cohort study of 29 women having a scheduled cesarean delivery.20 The patients were divided into 3 groups: lean (BMI <30 kg/m2), obese (BMI 30.0–39.9 kg/m2), and extremely obese (BMI >40 kg/m2). All women received a 2-g dose of cefazolin 30 to 60 minutes before surgery. Cefazolin concentrations in adipose tissue obtained at the time of skin incision were inversely proportional to maternal BMI (r, −0.67; P<.001). All specimens demonstrated a therapeutic concentration (>1 µg/g) of cefazolin for gram-positive cocci, but 20% of the obese women and 33% of the extremely obese women did not achieve the MIC (>4 µg/g) for gram-negative bacilli (P = .29 and P = .14, respectively). At the time of skin closure, 20% of obese women and 44% of extremely obese women did not have tissue concentrations that exceeded the MIC for gram-negative bacteria.
Swank and associates conducted a prospective cohort study that included 28 women.18 They demonstrated that, after a 2-g dose of cefazolin, only 20% of the obese women (BMI 30–40 kg/m2) and 0% of the extremely obese women (BMI >40 kg/m2) achieved an adipose tissue concentration that exceeded the MIC for gram-negative rods (8 µg/mL). However, 100% and 71.4%, respectively, achieved such a tissue concentration after a 3-g dose. When the women were stratified by actual weight, there was a statistically significant difference between those who weighed less than 120 kg and those who weighed more than 120 kg. Seventy-nine percent of the former had a tissue concentration of cefazolin greater than 8 µg/mL compared with 0% of the women who weighed more than 120 kg. Based on these observations, the authors recommended a 3-g dose of cefazolin for women who weigh more than 120 kg.
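Swank and associates' recommendation amounts to a single weight threshold. A sketch of that rule only (note that other investigators, discussed next, did not find that the higher dose was necessary, so this should not be read as settled practice):

```python
def cefazolin_dose_g(weight_kg: float) -> int:
    """Cefazolin prophylaxis dose per Swank et al's weight-based
    recommendation: 3 g for women weighing more than 120 kg,
    otherwise the standard 2 g."""
    return 3 if weight_kg > 120 else 2

print(cefazolin_dose_g(95))   # -> 2
print(cefazolin_dose_g(130))  # -> 3
```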
In a double-blind RCT with 26 obese women (BMI ≥30 kg/m2), Young and colleagues demonstrated that, at the time of hysterotomy and fascial closure, significantly higher concentrations of cefazolin were found in the adipose tissue of obese women who received a 3-g dose of antibiotic compared with those who received a 2-g dose.21 However, all concentrations of cefazolin were consistently above the MIC of cefazolin for gram-positive cocci (1 µg/g) and gram-negative bacilli (4 µg/g). Further, Maggio and co-workers conducted a double-blind RCT comparing a 2-g dose of cefazolin versus a 3-g dose in 57 obese women (BMI ≥30 kg/m2).22 They found no statistically significant difference in the percentage of women who had tissue concentrations of cefazolin greater than the MIC for gram-positive cocci (8 µg/g). All samples were above the MIC of cefazolin for gram-negative bacilli (2 µg/g). Based on these data, these investigators did not recommend increasing the dose of cefazolin from 2 g to 3 g in obese patients.21,22
The studies discussed above are difficult to compare for 3 reasons. First, each study used a different MIC of cefazolin for both gram-positive and gram-negative bacteria. Second, the authors sampled different maternal tissues or serum at varying times during the cesarean delivery. Third, the studies did not specifically investigate, or were not powered sufficiently to address, the more important clinical outcome of surgical site infection. In a recent historical cohort study, Ward and Duff were unable to show that increasing the dose of cefazolin to 2 g in all women with a BMI <30 kg/m2 and to 3 g in all women with a BMI >30 kg/m2 reduced the rate of endometritis and wound infection below the level already achieved with combined prophylaxis with cefazolin (1 g) plus azithromycin (500 mg).15
Sutton and colleagues recently assessed the pharmacokinetics of azithromycin when used as prophylaxis for cesarean delivery.23 They studied 30 women who had a scheduled cesarean delivery and who received a 500-mg intravenous dose of azithromycin that was initiated 15, 30, or 60 minutes before the surgical incision and then infused over 1 hour. They obtained maternal plasma samples multiple times during the first 8 hours after surgery. They also obtained samples of amniotic fluid, placenta, myometrium, adipose tissue, and umbilical cord blood intraoperatively. The median concentration of azithromycin in adipose tissue was 102 ng/g, which is below the MIC50 for Ureaplasma species (250 ng/mL). The median concentration in myometrial tissue was 402 ng/g. The concentration in maternal plasma consistently exceeded the MIC50 for Ureaplasma species.
What the evidence says
All women, regardless of weight, should receive preoperative cefazolin prophylaxis; the available data do not clearly demonstrate that increasing the cefazolin dose from 2 g to 3 g in obese women improves the clinically important outcomes of endometritis and wound infection.
CASE Resolved
For the 26-year-old obese laboring patient about to undergo cesarean delivery, reasonable steps for prevention of infection include removing the hair at the incision site with clippers or depilatory cream immediately prior to the start of surgery; cleansing the abdomen with a chlorhexidine-alcohol solution; and administering cefazolin (2 g) plus azithromycin (500 mg) preoperatively.
- Cruse PJ, Foord R. A five‑year prospective study of 23,649 surgical wounds. Arch Surg. 1973;107(2):206–210.
- Darouiche RO, Wall MJ Jr, Itani KM, et al. Chlorhexidine‑alcohol versus povidone‑iodine for surgical‑site antisepsis. N Engl J Med. 2010;362(1):18–26.
- Ngai IM, Van Arsdale A, Govindappagari S, et al. Skin preparation for prevention of surgical site infection after cesarean delivery. Obstet Gynecol. 2015;126(6):1251–1257.
- Tuuli MG, Liu J, Stout MJ, et al. A randomized trial comparing skin antiseptic agents at cesarean delivery. N Engl J Med. 2016;374(7):647–655.
- Duff P. Prophylactic antibiotics for cesarean delivery: a simple cost‑effective strategy for prevention of postoperative morbidity. Am J Obstet Gynecol. 1987;157(4 pt 1):794–798.
- Dinsmoor MJ, Gilbert S, Landon MB, et al; Eunice Kennedy Shriver National Institute of Child Health and Human Development Maternal‑Fetal Medicine Units Network. Perioperative antibiotic prophylaxis for nonlaboring cesarean delivery. Obstet Gynecol. 2009;114(4):752–756.
- Burke JF. The effective period of preventive antibiotic action in experimental incisions and dermal lesions. Surgery. 1961;50:161–168.
- Gordon HR, Phelps D, Blanchard K. Prophylactic cesarean section antibiotics: maternal and neonatal morbidity before or after cord clamping. Obstet Gynecol. 1979;53(2):151–156.
- Cunningham FG, Leveno KJ, DePalma RT, Roark M, Rosenfeld CR. Perioperative antimicrobials for cesarean delivery: before or after cord clamping? Obstet Gynecol. 1983;62(2):151–154.
- Sullivan SA, Smith T, Chang E, Hulsey T, Vandorsten JP, Soper D. Administration of cefazolin prior to skin incision is superior to cefazolin at cord clamping in preventing postcesarean infectious morbidity: a randomized controlled trial. Am J Obstet Gynecol. 2007;196(5):455.e1–e5.
- Costantine MM, Rahman M, Ghulmiyah L, et al. Timing of perioperative antibiotics for cesarean delivery: a metaanalysis. Am J Obstet Gynecol. 2008;199(3):301.e1–e6.
- Owens SM, Brozanski BS, Meyn LA, Wiesenfeld HC. Antimicrobial prophylaxis for cesarean delivery before skin incision. Obstet Gynecol. 2009;114(3):573–579.
- Tita AT, Hauth JC, Grimes A, Owen J, Stamm AM, Andrews WW. Decreasing incidence of postcesarean endometritis with extended‑spectrum antibiotic prophylaxis. Obstet Gynecol. 2008;111(1):51–56.
- Tita AT, Owen J, Stamm AM, Grimes A, Hauth JC, Andrews WW. Impact of extended‑spectrum antibiotic prophylaxis on incidence of postcesarean surgical wound infection. Am J Obstet Gynecol. 2008;199(3):303.e1–e3.
- Ward E, Duff P. A comparison of 3 antibiotic regimens for prevention of postcesarean endometritis: an historical cohort study. Am J Obstet Gynecol. 2016;214(6):751.e1–e4.
- Tita AT, Szychowski JM, Boggess K, et al; C/SOAP Trial Consortium. Adjunctive azithromycin prophylaxis for cesarean delivery. N Engl J Med. 2016;375(13):1231–1241.
- Ogden CL, Carroll MD, Curtin LR, McDowell MA, Tabak CJ, Flegal KM. Prevalence of overweight and obesity in the United States, 1999–2004. JAMA. 2006;295(13):1549–1555.
- Swank ML, Wing DA, Nicolau DP, McNulty JA. Increased 3‑gram cefazolin dosing for cesarean delivery prophylaxis in obese women. Am J Obstet Gynecol. 2015;213(3):415.e1–e8.
- Liu P, Derendorf H. Antimicrobial tissue concentrations. Infect Dis Clin North Am. 2003;17(3):599–613.
- Pevzner L, Swank M, Krepel C, Wing DA, Chan K, Edmiston CE Jr. Effects of maternal obesity on tissue concentrations of prophylactic cefazolin during cesarean delivery. Obstet Gynecol. 2011;117(4):877–882.
- Young OM, Shaik IH, Twedt R, et al. Pharmacokinetics of cefazolin prophylaxis in obese gravidae at time of cesarean delivery. Am J Obstet Gynecol. 2015;213(4):541.e1–e7.
- Maggio L, Nicolau DP, DaCosta M, Rouse DJ, Hughes BL. Cefazolin prophylaxis in obese women undergoing cesarean delivery: a randomized controlled trial. Obstet Gynecol. 2015;125(5):1205–1210.
- Sutton AL, Acosta EP, Larson KB, Kerstner‑Wood CD, Tita AT, Biggio JR. Perinatal pharmacokinetics of azithromycin for cesarean prophylaxis. Am J Obstet Gynecol. 2015;212(6):812.e1–e6.
What the evidence says
We conclude that all patients, even those having a scheduled cesarean before the onset of labor or ruptured membranes, should receive prophylactic antibiotics in a single dose administered preoperatively rather than after cord clamping (Level I Evidence, Level 1A Recommendation; TABLE). In high-risk populations (eg, women in labor with ruptured membranes who are having an urgent cesarean), for whom the baseline risk of infection is high, administer the combination of cefazolin plus azithromycin in lieu of cefazolin alone (Level I Evidence, Level 1A Recommendation; TABLE).
If the patient has a history of an immediate hypersensitivity reaction to beta-lactam antibiotics, we recommend the combination of clindamycin (900 mg) plus gentamicin (1.5 mg/kg) as a single infusion prior to surgery. We base this recommendation on the need to provide reasonable coverage against a broad range of pathogens. Clindamycin covers gram-positive aerobes, such as staphylococci species and group B streptococci, and anaerobes; gentamicin covers aerobic gram-negative bacilli. A single agent, such as clindamycin or metronidazole, does not provide the broad-based coverage necessary for effective prophylaxis (Level III Evidence, Level 1C Recommendation; TABLE).
If the patient is overweight or obese, should you modify the antibiotic dose?
The prevalence of obesity in the United States continues to increase. One-third of all US reproductive-aged women are obese, and 6% of women are extremely obese.17 Obesity increases the risk of postcesarean infection 3- to 5- fold.18 Because both pregnancy and obesity increase the total volume of a drug’s distribution, achieving adequate antibiotic tissue concentrations may be hindered by a dilutional effect. Furthermore, pharmacokinetic studies consistently have shown that the tissue concentration of an antibiotic—which, ideally, should be above the minimum inhibitory concentration (MIC) for common bacteria—determines the susceptibility of those tissues to infection, regardless of whether the serum concentration of the antibiotic is in the therapeutic range.19
These concerns have led to several recent investigations evaluating different doses of cefazolin for obese patients. Pevzner and colleagues conducted a prospective cohort study of 29 women having a scheduled cesarean delivery.20 The patients were divided into 3 groups: lean (BMI <30 kg m2), obese (BMI 30.0–39.9 kg m2), and extremely obese (BMI >40 kg m2). All women received a 2-g dose of cefazolin 30 to 60 minutes before surgery. Cefazolin concentrations in adipose tissue obtained at the time of skin incision were inversely proportional to maternal BMI (r, −0.67; P<.001). All specimens demonstrated a therapeutic concentration (>1 µg/g) of cefazolin for gram-positive cocci, but 20% of the obese women and 33% of the extremely obese women did not achieve the MIC (>4 µg/g) for gram-negative bacilli (P = .29 and P = .14, respectively). At the time of skin closure, 20% of obese women and 44% of extremely obese women did not have tissue concentrations that exceeded the MIC for gram-negative bacteria.
Swank and associates conducted a prospective cohort study that included 28 women.18 They demonstrated that, after a 2-g dose of cefazolin, only 20% of the obese women (BMI 30–40 kg m2) and 0% of the extremely obese women (BMI >40 kg m2) achieved an adipose tissue concentration that exceeded the MIC for gram-negative rods (8 µg/mL). However, 100% and 71.4%, respectively, achieved such a tissue concentration after a 3-g dose. When the women were stratified by actual weight, there was a statistically significant difference between those who weighed less than 120 kg and those who weighed more than 120 kg. Seventy-nine percent of the former had a tissue concentration of cefazolin greater than 8 µg/mL compared with 0% of the women who weighed more than 120 kg. Based on these observations, the authors recommended a 3-g dose of cefazolin for women who weigh more than 120 kg.
In a double-blind RCT with 26 obese women (BMI ≥30 kg m2), Young and colleagues demonstrated that, at the time of hysterotomy and fascial closure, significantly higher concentrations of cefazolin were found in the adipose tissue of obese women who received a 3-g dose of antibiotic compared with those who received a 2-g dose.21 However, all concentrations of cefazolin were consistently above the MIC of cefazolin for gram-positive cocci (1 µg/g) and gram-negative bacilli (4 µg/g). Further, Maggio and co-workers conducted a double-blind RCT comparing a 2-g dose of cefazolin versus a 3-g dose in 57 obese women (BMI ≥30 kg m2).22 They found no statistically significant difference in the percentage of women who had tissue concentrations of cefazolin greater than the MIC for gram-positive cocci (8 µg/g). All samples were above the MIC of cefazolin for gram-negative bacilli (2 µg/g). Based on these data, these investigators did not recommend increasing the dose of cefazolin from 2 g to 3 g in obese patients.21,22
The studies discussed above are difficult to compare for 3 reasons. First, each study used a different MIC of cefazolin for both gram-positive and gram-negative bacteria. Second, the authors sampled different maternal tissues or serum at varying times during the cesarean delivery. Third, the studies did not specifically investigate, or were not powered sufficiently to address, the more important clinical outcome of surgical site infection. In a recent historical cohort study, Ward and Duff were unable to show that increasing the dose of cefazolin to 2 g in all women with a BMI <30 kg m2 and to 3 g in all women with a BMI >30 kg m2 reduced the rate of endometritis and wound infection below the level already achieved with combined prophylaxis with cefazolin (1 g) plus azithromycin (500 mg).15
Sutton and colleagues recently assessed the pharmacokinetics of azithromycin when used as prophylaxis for cesarean delivery.23 They studied 30 women who had a scheduled cesarean delivery and who received a 500-mg intravenous dose of azithromycin that was initiated 15, 30, or 60 minutes before the surgical incision and then infused over 1 hour. They obtained maternal plasma samples multiple times during the first 8 hours after surgery. They also obtained samples of amniotic fluid, placenta, myometrium, adipose tissue, and umbilical cord blood intraoperatively. The median concentration of azithromycin in adipose tissue was 102 ng/g, which is below the MIC50 for Ureaplasma species (250 ng/mL). The median concentration in myometrial tissue was 402 ng/g. The concentration in maternal plasma consistently exceeded the MIC50 for Ureaplasma species.
What the evidence says
All women, regardless of weight,
CASE Resolved
For the 26-year-old obese laboring patient about to undergo cesarean delivery, reasonable steps for prevention of infection include removing the hair at the incision site with clippers or depilatory cream immediately prior to the start of surgery; cleansing the abdomen with a chlorhexidine-alcohol solution; and administering cefazolin (2 g) plus azithromycin (500 mg) preoperatively.
Share your thoughts! Send your Letter to the Editor to [email protected]. Please include your name and the city and state in which you practice.
Cesarean delivery is now the most commonly performed major operation in hospitals across the United States. Approximately 30% of the 4 million deliveries that occur each year are by cesarean. Endometritis and wound infection (superficial and deep surgical site infection) are the most common postoperative complications of cesarean delivery. These 2 infections usually can be treated in a straightforward manner with antibiotics or surgical drainage. In some cases, however, they can lead to serious sequelae, such as pelvic abscess, septic pelvic vein thrombophlebitis, and wound dehiscence/evisceration, thereby prolonging the patient’s hospitalization and significantly increasing medical expenses.
Accordingly, in the past 50 years many investigators have proposed various specific measures to reduce the risk of postcesarean infection. In this article, we critically evaluate 2 of these major interventions: methods of skin preparation and administration of prophylactic antibiotics. In part 2 of this series next month, we will review the evidence regarding preoperative bathing with an antiseptic, preoperative vaginal cleansing with an antiseptic solution, methods of placental extraction, closure of the deep subcutaneous layer of the abdomen, and closure of the skin.
CASE Cesarean delivery required for nonprogressing labor
A 26-year-old obese primigravid woman, body mass index (BMI) 37 kg/m2, at 40 weeks’ gestation has been in labor for 20 hours. Her membranes have been ruptured for 16 hours. Her cervix is completely effaced and is 7 cm dilated. The fetal head is at −1 station. Her cervical examination findings have not changed in 4 hours despite adequate uterine contractility documented by intrauterine pressure catheter. You are now ready to proceed with cesarean delivery, and you want to do everything possible to prevent the patient from developing a postoperative infection.
What are the best practices for postcesarean infection prevention in this patient?

Skin preparation
Adequate preoperative skin preparation is an important first step in preventing postcesarean infection.
How should you prepare the patient’s skin for surgery?
Two issues to address when preparing the abdominal wall for surgery are hair removal and skin cleansing. More than 40 years ago, Cruse and Foord definitively answered the question about hair removal.1 In a landmark cohort investigation of more than 23,000 patients having many different types of operative procedures, they demonstrated that shaving the hair on the evening before surgery resulted in a higher rate of wound infection than clipping the hair, removing the hair with a depilatory cream just before surgery, or not removing the hair at all.
Three recent investigations have thoughtfully addressed the issue of skin cleansing. Darouiche and colleagues conducted a prospective, randomized, multicenter trial comparing chlorhexidine-alcohol with povidone-iodine for skin preparation before surgery.2 Their investigation included 849 patients having many different types of surgical procedures, only a minority of which were in obstetric and gynecologic patients. They demonstrated fewer superficial wound infections in patients in the chlorhexidine-alcohol group (4.2% vs 8.6%, P = .008). Of even greater importance, patients in the chlorhexidine-alcohol group had fewer deep wound infections (1% vs 3%, P = .005).
Ngai and co-workers recently reported the results of a randomized controlled trial (RCT) in which women undergoing nonurgent cesarean delivery had their skin cleansed with povidone-iodine with alcohol, chlorhexidine with alcohol, or the sequential combination of both solutions.3 The overall rate of surgical site infection was just 4.3%. The 3 groups had comparable infection rates and, accordingly, the authors were unable to conclude that any one type of skin preparation was superior to the others.
The most informative recent investigation was by Tuuli and colleagues, who evaluated 1,147 patients having cesarean delivery assigned to undergo skin preparation with either chlorhexidine-alcohol or iodine-alcohol.4 Unlike the study by Ngai and co-workers, in this study approximately 40% of the patients in each treatment arm had unscheduled, urgent cesarean deliveries.3,4 Overall, the rate of infection in the chlorhexidine-alcohol group was 4.0% compared with 7.3% in the iodine-alcohol group (relative risk [RR], 0.55; 95% confidence interval [CI], 0.34–0.90, P = .02).
What the evidence says
Based on the evidence cited above, we advise removing hair at the incision site with clippers or depilatory cream just before the start of surgery. The abdomen should then be cleansed with a chlorhexidine-alcohol solution (Level I Evidence, Level 1A Recommendation; TABLE).

Antibiotic prophylaxis
Questions to consider regarding antibiotic prophylaxis for cesarean delivery include appropriateness of treatment, antibiotic(s) selection, timing of administration, dose, and special circumstances.
Should you give the patient prophylactic antibiotics?
Prophylactic antibiotics are justified for surgical procedures whenever 3 major criteria are met5:
- the surgical site is inevitably contaminated with bacteria
- in the absence of prophylaxis, the frequency of infection at the operative site is unacceptably high
- operative site infections have the potential to lead to serious, potentially life-threatening sequelae.
Without a doubt, all 3 of these criteria are fulfilled when considering either urgent or nonurgent cesarean delivery. When cesarean delivery follows a long labor complicated by ruptured membranes, multiple internal vaginal examinations, and internal fetal monitoring, the operative site is inevitably contaminated with hundreds of thousands of pathogenic bacteria. Even when cesarean delivery is scheduled to occur before the onset of labor and ruptured membranes, a high concentration of vaginal organisms is introduced into the uterine and pelvic cavities coincident with making the hysterotomy incision.6
In the era before prophylactic antibiotics were used routinely, postoperative infection rates in some highly indigent patient populations approached 85%.5 Finally, as noted previously, postcesarean endometritis may progress to pelvic abscess formation, septic pelvic vein thrombophlebitis, and septic shock; wound infections may be complicated by dehiscence and evisceration.
When should you administer antibiotics: Before the surgical incision or after cord clamping?
More than 50 years ago, Burke conducted the classic sequence of basic science experiments that forms the foundation for use of prophylactic antibiotics.7 Using a guinea pig model, he showed that prophylactic antibiotics exert their most pronounced effect when they are administered before the surgical incision is made and before bacterial contamination occurs. Prophylaxis that is delayed more than 4 hours after the start of surgery will likely be ineffective.
Interestingly, however, when clinicians first began using prophylactic antibiotics for cesarean delivery, some investigators expressed concern about the possible exposure of the neonate to antibiotics just before delivery—specifically, whether this exposure would increase the frequency of evaluations for suspected sepsis or would promote resistance among organisms that would make neonatal sepsis more difficult to treat.
Gordon and colleagues published an important report in 1979 that showed that preoperative administration of ampicillin did not increase the frequency of immediate or delayed neonatal infections.8 However, delaying the administration of ampicillin until after the umbilical cord was clamped was just as effective in preventing post‑cesarean endometritis. Subsequently, Cunningham and co-workers showed that preoperative administration of prophylactic antibiotics significantly increased the frequency of sepsis workups in exposed neonates compared with infants with no preoperative antibiotic exposure (28% vs 15%; P<.025).9 Based on these 2 reports, obstetricians adopted a policy of delaying antibiotic administration until after the infant’s umbilical cord was clamped.
In 2007, Sullivan and colleagues challenged this long-standing practice.10 In a carefully designed prospective, randomized, double-blind trial, they showed that patients who received preoperative cefazolin had a significant reduction in the frequency of endometritis compared with women who received the same antibiotic after cord clamping (1% vs 5%; RR, 0.2; 95% CI, 0.2–0.94). The rate of wound infection was lower in the preoperative antibiotic group (3% vs 5%), but this difference did not reach statistical significance. The total infection-related morbidity was significantly reduced in women who received antibiotics preoperatively (4.0% vs 11.5%; RR, 0.4; 95% CI, 0.18–0.87). Additionally, there was no increase in the frequency of proven or suspected neonatal infection in the infants exposed to antibiotics before delivery.
Subsequent to the publication by Sullivan and colleagues, other reports have confirmed that administration of antibiotics prior to surgery is superior to administration after clamping of the umbilical cord.10–12 Thus, we have come full circle back to Burke’s principle established more than a half century ago.7
Which antibiotic(s) should you administer for prophylaxis, and how many doses?
In an earlier review, one of us (PD) examined the evidence regarding choice of antibiotics and number of doses, concluding that a single dose of a first-generation cephalosporin, such as cefazolin, was the preferred regimen.5 The single dose was comparable in effectiveness to 2- or 3-dose regimens and to single- or multiple-dose regimens of broader-spectrum agents. For more than 20 years now, the standard of care for antibiotic prophylaxis has been a single 1- to 2-g dose of cefazolin.
Several recent reports, however, have raised the question of whether the prophylactic effect could be enhanced if the spectrum of activity of the antibiotic regimen was broadened to include an agent effective against Ureaplasma species.
Tita and colleagues evaluated an indigent patient population with an inherently high rate of postoperative infection; they showed that adding azithromycin 500 mg to cefazolin significantly reduced the rate of postcesarean endometritis.13 In a follow-up report from the same institution, Tita and co-workers demonstrated that adding azithromycin also significantly reduced the frequency of wound infection.14 In both of these investigations, the antibiotics were administered after cord clamping.
In a subsequent report, Ward and Duff15 showed that the combination of azithromycin plus cefazolin administered preoperatively resulted in a very low rate of both endometritis and wound infection in a population similar to that studied by Tita et al.13,14
Very recently, Tita and associates published the results of the Cesarean Section Optimal Antibiotic Prophylaxis (C/SOAP) trial conducted at 14 US hospitals.16 This study included 2,013 women undergoing cesarean delivery during labor or after membrane rupture who were randomly assigned to receive intravenous azithromycin 500 mg (n = 1,019) or placebo (n = 994). All women also received standard antibiotic prophylaxis with cefazolin. The primary outcome (a composite of endometritis, wound infection, or other infection within 6 weeks) was significantly lower in the azithromycin group than in the placebo group (6.1% vs 12.0%, P<.001). In addition, there were significant differences between the treatment groups in the rates of endometritis (3.8% in the azithromycin group vs 6.1% in the placebo group, P = .02) as well as in the rates of wound infection (2.4% vs 6.6%, respectively, P<.001). Of additional note, there were no differences between the 2 groups in the composite neonatal outcome of death and serious neonatal complications (14.3% vs 13.6%, P = .63). The investigators concluded that extended-spectrum prophylaxis with adjunctive azithromycin safely reduces infection rates without raising the risk of neonatal adverse outcomes.
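The direction and significance of the C/SOAP primary outcome can be checked with a standard relative-risk calculation. The event counts below are approximated from the reported percentages and group sizes (roughly 62/1,019 vs 119/994), not taken from the paper’s tables, so this is an illustrative sketch rather than a reanalysis.

```python
import math

def relative_risk(events_a, n_a, events_b, n_b):
    """Relative risk of group A vs group B with a 95% CI
    (Katz log-normal approximation)."""
    risk_a, risk_b = events_a / n_a, events_b / n_b
    rr = risk_a / risk_b
    # Standard error of log(RR)
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lower = math.exp(math.log(rr) - 1.96 * se)
    upper = math.exp(math.log(rr) + 1.96 * se)
    return rr, lower, upper

# Counts approximated from the reported rates (6.1% of 1,019; 12.0% of 994)
rr, lower, upper = relative_risk(62, 1019, 119, 994)
# RR is about 0.51, and the 95% CI excludes 1.0, consistent with P<.001
```

Because the upper confidence bound falls below 1.0, the azithromycin group’s risk reduction is statistically significant at the conventional level, matching the trial’s reported result.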
What the evidence says
We conclude that all patients, even those having a scheduled cesarean before the onset of labor or ruptured membranes, should receive prophylactic antibiotics in a single dose administered preoperatively rather than after cord clamping (Level I Evidence, Level 1A Recommendation; TABLE). In high-risk populations (eg, women in labor with ruptured membranes who are having an urgent cesarean), for whom the baseline risk of infection is high, administer the combination of cefazolin plus azithromycin in lieu of cefazolin alone (Level I Evidence, Level 1A Recommendation; TABLE).
If the patient has a history of an immediate hypersensitivity reaction to beta-lactam antibiotics, we recommend the combination of clindamycin (900 mg) plus gentamicin (1.5 mg/kg) as a single infusion prior to surgery. We base this recommendation on the need to provide reasonable coverage against a broad range of pathogens. Clindamycin covers gram-positive aerobes, such as staphylococci species and group B streptococci, and anaerobes; gentamicin covers aerobic gram-negative bacilli. A single agent, such as clindamycin or metronidazole, does not provide the broad-based coverage necessary for effective prophylaxis (Level III Evidence, Level 1C Recommendation; TABLE).
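The selection logic in the two preceding recommendations can be sketched as a small decision function. The function name, inputs, and dictionary structure are ours and purely illustrative; this is not a clinical decision tool, and the doses simply restate those given in the text.

```python
def prophylaxis_regimen(beta_lactam_allergy: bool, high_risk: bool,
                        weight_kg: float) -> dict:
    """Illustrative sketch of the regimen selection described above.
    Doses are taken from the recommendations in the text; not for clinical use."""
    if beta_lactam_allergy:
        # Clindamycin 900 mg plus weight-based gentamicin (1.5 mg/kg),
        # given as a single infusion prior to surgery
        return {"clindamycin_mg": 900, "gentamicin_mg": round(1.5 * weight_kg, 1)}
    regimen = {"cefazolin_g": 2}  # single preoperative dose
    if high_risk:
        # e.g., women in labor with ruptured membranes having an urgent cesarean
        regimen["azithromycin_mg"] = 500
    return regimen
```

For example, a beta-lactam-allergic 80-kg patient would map to clindamycin 900 mg plus gentamicin 120 mg, while a high-risk patient without allergy maps to cefazolin plus azithromycin.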
If the patient is overweight or obese, should you modify the antibiotic dose?
The prevalence of obesity in the United States continues to increase. One-third of all US reproductive-aged women are obese, and 6% of women are extremely obese.17 Obesity increases the risk of postcesarean infection 3- to 5-fold.18 Because both pregnancy and obesity increase a drug’s volume of distribution, achieving adequate antibiotic tissue concentrations may be hindered by a dilutional effect. Furthermore, pharmacokinetic studies consistently have shown that the tissue concentration of an antibiotic (which, ideally, should be above the minimum inhibitory concentration [MIC] for common bacteria) determines the susceptibility of those tissues to infection, regardless of whether the serum concentration of the antibiotic is in the therapeutic range.19
These concerns have led to several recent investigations evaluating different doses of cefazolin for obese patients. Pevzner and colleagues conducted a prospective cohort study of 29 women having a scheduled cesarean delivery.20 The patients were divided into 3 groups: lean (BMI <30 kg/m2), obese (BMI 30.0–39.9 kg/m2), and extremely obese (BMI >40 kg/m2). All women received a 2-g dose of cefazolin 30 to 60 minutes before surgery. Cefazolin concentrations in adipose tissue obtained at the time of skin incision were inversely proportional to maternal BMI (r, −0.67; P<.001). All specimens demonstrated a therapeutic concentration (>1 µg/g) of cefazolin for gram-positive cocci, but 20% of the obese women and 33% of the extremely obese women did not achieve the MIC (>4 µg/g) for gram-negative bacilli (P = .29 and P = .14, respectively). At the time of skin closure, 20% of obese women and 44% of extremely obese women did not have tissue concentrations that exceeded the MIC for gram-negative bacteria.
Swank and associates conducted a prospective cohort study that included 28 women.18 They demonstrated that, after a 2-g dose of cefazolin, only 20% of the obese women (BMI 30–40 kg/m2) and 0% of the extremely obese women (BMI >40 kg/m2) achieved an adipose tissue concentration that exceeded the MIC for gram-negative rods (8 µg/mL). However, 100% and 71.4%, respectively, achieved such a tissue concentration after a 3-g dose. When the women were stratified by actual weight, there was a statistically significant difference between those who weighed less than 120 kg and those who weighed more than 120 kg. Seventy-nine percent of the former had a tissue concentration of cefazolin greater than 8 µg/mL compared with 0% of the women who weighed more than 120 kg. Based on these observations, the authors recommended a 3-g dose of cefazolin for women who weigh more than 120 kg.
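Swank and associates’ proposal reduces to a single weight threshold, which can be stated as a one-line rule. This is a sketch of their proposed dosing rule only; as the trials discussed below show, the clinical benefit of the larger dose has not been confirmed.

```python
def cefazolin_dose_g(weight_kg: float) -> int:
    """Weight-based cefazolin dosing rule proposed by Swank et al.
    (illustrative only): 3 g above 120 kg, otherwise the standard 2 g."""
    return 3 if weight_kg > 120 else 2
```

Under this rule, a 130-kg patient would receive 3 g and a 90-kg patient the standard 2 g.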
In a double-blind RCT with 26 obese women (BMI ≥30 kg/m2), Young and colleagues demonstrated that, at the time of hysterotomy and fascial closure, significantly higher concentrations of cefazolin were found in the adipose tissue of obese women who received a 3-g dose of antibiotic compared with those who received a 2-g dose.21 However, all concentrations of cefazolin were consistently above the MIC of cefazolin for gram-positive cocci (1 µg/g) and gram-negative bacilli (4 µg/g). Further, Maggio and co-workers conducted a double-blind RCT comparing a 2-g dose of cefazolin versus a 3-g dose in 57 obese women (BMI ≥30 kg/m2).22 They found no statistically significant difference in the percentage of women who had tissue concentrations of cefazolin greater than the MIC for gram-positive cocci (8 µg/g). All samples were above the MIC of cefazolin for gram-negative bacilli (2 µg/g). Based on these data, these investigators did not recommend increasing the dose of cefazolin from 2 g to 3 g in obese patients.21,22
The studies discussed above are difficult to compare for 3 reasons. First, each study used a different MIC of cefazolin for both gram-positive and gram-negative bacteria. Second, the authors sampled different maternal tissues or serum at varying times during the cesarean delivery. Third, the studies did not specifically investigate, or were not powered sufficiently to address, the more important clinical outcome of surgical site infection. In a recent historical cohort study, Ward and Duff were unable to show that increasing the dose of cefazolin to 2 g in all women with a BMI <30 kg/m2 and to 3 g in all women with a BMI >30 kg/m2 reduced the rate of endometritis and wound infection below the level already achieved with combined prophylaxis with cefazolin (1 g) plus azithromycin (500 mg).15
Sutton and colleagues recently assessed the pharmacokinetics of azithromycin when used as prophylaxis for cesarean delivery.23 They studied 30 women who had a scheduled cesarean delivery and who received a 500-mg intravenous dose of azithromycin that was initiated 15, 30, or 60 minutes before the surgical incision and then infused over 1 hour. They obtained maternal plasma samples multiple times during the first 8 hours after surgery. They also obtained samples of amniotic fluid, placenta, myometrium, adipose tissue, and umbilical cord blood intraoperatively. The median concentration of azithromycin in adipose tissue was 102 ng/g, which is below the MIC50 for Ureaplasma species (250 ng/mL). The median concentration in myometrial tissue was 402 ng/g. The concentration in maternal plasma consistently exceeded the MIC50 for Ureaplasma species.
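The key comparison in Sutton and colleagues’ study is simply each measured concentration against the Ureaplasma MIC50. The values below restate the medians reported above; note, as in the source, that tissue concentrations are in ng/g while the MIC50 is in ng/mL, so the comparison is approximate.

```python
MIC50_UREAPLASMA = 250  # ng/mL, MIC50 for Ureaplasma species cited above

# Median azithromycin concentrations reported by Sutton et al. (ng/g)
tissue_levels = {"adipose": 102, "myometrium": 402}

above_mic50 = {tissue: level >= MIC50_UREAPLASMA
               for tissue, level in tissue_levels.items()}
# Adipose tissue falls below the MIC50; myometrium exceeds it
```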
What the evidence says
All women, regardless of weight, should receive a single preoperative 2-g dose of cefazolin; the available evidence does not clearly support routinely increasing the dose to 3 g in obese patients.
CASE Resolved
For the 26-year-old obese laboring patient about to undergo cesarean delivery, reasonable steps for prevention of infection include removing the hair at the incision site with clippers or depilatory cream immediately prior to the start of surgery; cleansing the abdomen with a chlorhexidine-alcohol solution; and administering cefazolin (2 g) plus azithromycin (500 mg) preoperatively.
- Cruse PJ, Foord R. A five‑year prospective study of 23,649 surgical wounds. Arch Surg. 1973;107(2):206–210.
- Darouiche RO, Wall MJ Jr, Itani KM, et al. Chlorhexidine‑alcohol versus povidone‑iodine for surgical‑site antisepsis. N Engl J Med. 2010;362(1):18–26.
- Ngai IM, Van Arsdale A, Govindappagari S, et al. Skin preparation for prevention of surgical site infection after cesarean delivery. Obstet Gynecol. 2015;126(6):1251–1257.
- Tuuli MG, Liu J, Stout MJ, et al. A randomized trial comparing skin antiseptic agents at cesarean delivery. N Engl J Med. 2016;374(7):647–655.
- Duff P. Prophylactic antibiotics for cesarean delivery: a simple cost‑effective strategy for prevention of postoperative morbidity. Am J Obstet Gynecol. 1987;157(4 pt 1):794–798.
- Dinsmoor MJ, Gilbert S, Landon MB, et al; Eunice Kennedy Shriver National Institute of Child Health and Human Development Maternal‑Fetal Medicine Units Network. Perioperative antibiotic prophylaxis for nonlaboring cesarean delivery. Obstet Gynecol. 2009;114(4):752–756.
- Burke JF. The effective period of preventive antibiotic action in experimental incisions and dermal lesions. Surgery. 1961;50:161–168.
- Gordon HR, Phelps D, Blanchard K. Prophylactic cesarean section antibiotics: maternal and neonatal morbidity before or after cord clamping. Obstet Gynecol. 1979;53(2):151–156.
- Cunningham FG, Leveno KJ, DePalma RT, Roark M, Rosenfeld CR. Perioperative antimicrobials for cesarean delivery: before or after cord clamping? Obstet Gynecol. 1983;62(2):151–154.
- Sullivan SA, Smith T, Chang E, Hulsey T, Vandorsten JP, Soper D. Administration of cefazolin prior to skin incision is superior to cefazolin at cord clamping in preventing postcesarean infectious morbidity: a randomized controlled trial. Am J Obstet Gynecol. 2007;196(5):455.e1–e5.
- Costantine MM, Rahman M, Ghulmiyah L, et al. Timing of perioperative antibiotics for cesarean delivery: a metaanalysis. Am J Obstet Gynecol. 2008;199(3):301.e1–e6.
- Owens SM, Brozanski BS, Meyn LA, Wiesenfeld HC. Antimicrobial prophylaxis for cesarean delivery before skin incision. Obstet Gynecol. 2009;114(3):573–579.
- Tita AT, Hauth JC, Grimes A, Owen J, Stamm AM, Andrews WW. Decreasing incidence of postcesarean endometritis with extended‑spectrum antibiotic prophylaxis. Obstet Gynecol. 2008;111(1):51–56.
- Tita AT, Owen J, Stamm AM, Grimes A, Hauth JC, Andrews WW. Impact of extended‑spectrum antibiotic prophylaxis on incidence of postcesarean surgical wound infection. Am J Obstet Gynecol. 2008;199(3):303.e1–e3.
- Ward E, Duff P. A comparison of 3 antibiotic regimens for prevention of postcesarean endometritis: an historical cohort study. Am J Obstet Gynecol. 2016;214(6):751.e1–e4.
- Tita AT, Szychowski JM, Boggess K, et al; C/SOAP Trial Consortium. Adjunctive azithromycin prophylaxis for cesarean delivery. N Engl J Med. 2016;375(13):1231–1241.
- Ogden CL, Carroll MD, Curtin LR, McDowell MA, Tabak CJ, Flegel KM. Prevalence of overweight and obesity in the United States, 1999–2004. JAMA. 2006:295(13):1549–1555.
- Swank ML, Wing DA, Nicolau DP, McNulty JA. Increased 3‑gram cefazolin dosing for cesarean delivery prophylaxis in obese women. Am J Obstet Gynecol. 2015;213(3):415.e1–e8.
- Liu P, Derendorf H. Antimicrobial tissue concentrations. Infect Dis Clin North Am. 2003:17(3):599–613.
- Pevzner L, Swank M, Krepel C, Wing DA, Chan K, Edmiston CE Jr. Effects of maternal obesity on tissue concentrations of prophylactic cefazolin during cesarean delivery. Obstet Gynecol. 2011;117(4):877–882.
- Young OM, Shaik IH, Twedt R, et al. Pharmacokinetics of cefazolin prophylaxis in obese gravidae at time of cesarean delivery. Am J Obstet Gynecol. 2015;213(4):541.e1–e7.
- Maggio L, Nicolau DP, DaCosta M, Rouse DJ, Hughes BL. Cefazolin prophylaxis in obese women undergoing cesarean delivery: a randomized controlled trial. Obstet Gynecol. 2015;125(5):1205–1210.
- Sutton AL, Acosta EP, Larson KB, Kerstner‑Wood CD, Tita AT, Biggio JR. Perinatal pharmacokinetics of azithromycin for cesarean prophylaxis. Am J Obstet Gynecol. 2015;212(6):812. e1–e6.
- Cruse PJ, Foord R. A five‑year prospective study of 23,649 surgical wounds. Arch Surg. 1973;107(2):206–210.
- Darouiche RO, Wall MJ Jr, Itani KM, et al. Chlorhexidine‑alcohol versus povidone‑iodine for surgical‑site antisepsis. N Engl J Med. 2010;362(1):18–26.
- Ngai IM, Van Arsdale A, Govindappagari S, et al. Skin preparation for prevention of surgical site infection after cesarean delivery. Obstet Gynecol. 2015;126(6):1251–1257.
Current Concepts in Lip Augmentation
Historically, a variety of tools have been used to alter one’s appearance for cultural or religious purposes or to conform to standards of beauty. As a defining feature of the face, the lips provide a unique opportunity for facial aesthetic enhancement. There has been a paradigm shift in medicine favoring preventative health and a desire to slow and even reverse the aging process.1 Acknowledging that product technology, skill sets, and cultural ideals continually evolve, this article highlights perioral anatomy, explains aging of the lower face, and reviews techniques to achieve perioral rejuvenation through volume restoration and muscle control.
Perioral Anatomy
The layers of the lips include the epidermis, subcutaneous tissue, orbicularis oris muscle fibers, and mucosa. The upper lip extends from the base of the nose to the mucosa inferiorly and to the nasolabial folds laterally. The curvilinear lower lip extends from the mucosa to the mandible inferiorly and to the oral commissures laterally.2 Circumferential at the vermilion-cutaneous junction, a raised area of pale skin known as the white roll accentuates the vermilion border and provides an important landmark during lip augmentation.3 At the upper lip, this elevation of the vermilion joins at a V-shaped depression centrally to form the Cupid’s bow. The cutaneous upper lip has 2 raised vertical pillars known as the philtral columns, which are formed from decussating fibers of the orbicularis oris muscle.2 The resultant midline depression is the philtrum. These defining features of the upper lip are to be preserved during augmentation procedures (Figure 1).4

The superior and inferior labial arteries, both branches of the facial artery, supply the upper and lower lip, respectively. The anastomotic arch of the superior labial artery is susceptible to injury from deep injection of the upper lip between the muscle layer and mucosa; therefore, caution must be exercised in this area.5 Injections into the vermilion and lower lip can be safely performed with less concern for vascular compromise. The vermilion derives its red color from the translucency of capillaries in the superficial papillae.2 The capillary plexus at the papillae and rich sensory nerve network render the lip a highly vascular and sensitive structure.
Aging of the Lower Face
Subcutaneous fat atrophy, loss of elasticity, gravitational forces, and remodeling of the skeletal foundation all contribute to aging of the lower face. Starting as early as the third decade of life, intrinsic factors including hormonal changes and genetically determined processes produce alterations in skin quality and structure. Similarly, extrinsic aging through environmental influences, namely exposure to UV radiation and smoking, accelerates the loss of skin integrity.6
The decreased elasticity of the skin in combination with repeated contraction of the orbicularis oris muscle results in perioral rhytides.7 For women in particular, vertically oriented perioral rhytides develop above the vermilion; terminal hair follicles, thicker skin, and a greater density of subcutaneous fat are presumptive protective factors for males.8 With time, the cutaneous portion of the upper lip lengthens and there is redistribution of volume with effacement of the upper lip vermilion.9 Additionally, the demarcation of the vermilion becomes blurred secondary to pallor, flattening of the philtral columns, and loss of projection of the Cupid’s bow.10
Downturning of the oral commissures is observed secondary to a combination of gravity, bone resorption, and soft tissue volume loss. Hyperactivity of the depressor anguli oris muscle exacerbates the mesolabial folds, producing marionette lines and a saddened expression.7 With ongoing volume loss and ligament laxity, tissue redistributes near the jaws and chin, giving rise to jowls. Similarly, perioral volume loss and descent of the malar fat-pad deepen the nasolabial folds in the aging midface.6
The main objective of perioral rejuvenation is to reinstate a harmonious refreshed look to the lower face; however, aesthetic analysis should occur within the context of the face as a whole, as the lips should complement the surrounding perioral cosmetic unit and overall skeletal foundation of the face. To accomplish this goal, the dermatologist’s armamentarium contains a broad variety of approaches including restriction of muscle movement, volume restoration, and surface contouring.
Volume Restoration
Treatment Options
In 2015, hyaluronic acid (HA) fillers constituted 80% of all injectable soft-tissue fillers, an 8% increase from 2014.11 Hyaluronic acid has achieved immense popularity as a temporary dermal filler given its biocompatibility, longevity, and reversibility via hyaluronidase.12
Hyaluronic acid is a naturally occurring glycosaminoglycan and a major component of the connective tissue matrix. The molecular composition affords HA its hydrophilic property, which augments dermal volume.7 Endogenous HA has a short half-life, and chemical modification by a cross-linking process extends longevity to 6 to 12 months. The various HA fillers are distinguished by method of purification, size of molecules, concentration and degree of cross-linking, and viscosity.7,13,14 These differences dictate overall clinical performance such as flow properties, longevity, and stability. As a general rule, a high-viscosity product is more appropriate for deeper augmentation; fillers with low viscosity are more appropriate for correction of shallow defects.1 Table 1 lists the HA fillers that are currently approved by the US Food and Drug Administration for lip augmentation and/or perioral rhytides in adults 21 years and older.15-17
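The viscosity rule of thumb above can be sketched as a simple selector. This is an illustrative sketch only: the indication labels in the lookup are assumptions chosen for the example, not product guidance from the cited sources.

```python
# Illustrative sketch of the viscosity rule of thumb above: higher-viscosity
# HA products for deeper augmentation, lower-viscosity products for shallow
# defects. The indication labels are assumptions, not clinical guidance.
def suggest_filler_viscosity(defect: str) -> str:
    deep_defects = {"deep volume loss", "marionette lines", "nasolabial folds"}
    shallow_defects = {"perioral rhytides", "vermilion border", "fine lines"}
    if defect in deep_defects:
        return "high viscosity"
    if defect in shallow_defects:
        return "low viscosity"
    raise ValueError(f"no rule-of-thumb guidance for: {defect}")

print(suggest_filler_viscosity("perioral rhytides"))  # low viscosity
```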
Randomized controlled trials comparing the efficacy, longevity, and tolerability of different HA products are lacking in the literature and, where present, have strong industry influence.18,19 The advent of assessment scales has provided an objective evaluation of perioral and lip augmentation, facilitating comparisons between products in both clinical research and practice.20

Semipermanent biostimulatory dermal fillers such as calcium hydroxylapatite and poly-L-lactic acid are not recommended for lip augmentation due to an increased incidence of submucosal nodule formation.6,14,21 Likewise, permanent fillers are not recommended given their irreversibility and risk of nodule formation around the lips.14,22 Nonetheless, liquid silicone (purified polydimethylsiloxane) administered via a microdroplet technique (0.01 mL of silicone at a time, no more than 1 mL per lip per session) has been used off label as a permanent filling agent for lip augmentation with limited complications.23 Even so, concerns about its reported risks continue to limit its use.22
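The microdroplet limits quoted above imply a hard ceiling on droplets per session, which can be checked with simple arithmetic (illustrative only, not clinical guidance):

```python
# Worked arithmetic for the microdroplet technique described above:
# 0.01 mL of silicone per droplet, no more than 1 mL per lip per session.
DROPLET_VOLUME_ML = 0.01
MAX_VOLUME_PER_LIP_ML = 1.0

# round() guards against floating-point error in the division
max_droplets_per_lip = round(MAX_VOLUME_PER_LIP_ML / DROPLET_VOLUME_ML)
print(f"maximum droplets per lip per session: {max_droplets_per_lip}")  # 100
```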
Similarly, surgical lip implants such as expanded polytetrafluoroethylene are an option for a subset of patients desiring permanent enhancement but are less commonly utilized given the side-effect profile, irreversibility, and relatively invasive nature of the procedure.22 Lastly, autologous fat transfer has been used in correction of the nasolabial and mesolabial folds as well as in lip augmentation; however, irregular surface contours and unpredictable longevity secondary to postinjection resorption (20%–90%) have limited its popularity.3,14,21
HA Injection Technique
With respect to HA fillers in the perioral area, numerous approaches have been described.10,22 The techniques in Table 2 provide a foundation for lip rejuvenation.

Several injection techniques exist, including serial puncture, linear threading, cross-hatching, and fanning in a retrograde or anterograde manner.24 A blunt microcannula (27 gauge, 38 mm) may be used in place of sharp needles and offers the benefit of increased patient comfort, reduced edema and ecchymosis, and shortened recovery period.25,26 Gentle massage of the product after injection can assist with an even contour. Lastly, a key determinant of successful outcomes is using an adequate volume of HA filler (1–2 mL for shaping the vermilion border and volumizing the lips).27 Figure 2 highlights a clinical example of HA filler for lip augmentation.

Fortunately, most complications encountered with HA lip augmentation are mild and transient. The most commonly observed side effects include injection-site reactions such as pain, erythema, and edema. Moreover, most adverse effects are related to injection technique. All HA fillers are prone to the Tyndall effect, a consequence of too superficial an injection plane. Patients with a history of recurrent herpes simplex virus infections should receive prophylactic antiviral therapy.12
Muscle Control
An emerging concept in rejuvenation of the lower face recognizes not only restoration of volume but also control of muscle movement. Local injection of botulinum toxin type A induces relaxation of hyperfunctional facial muscles through temporary inhibition of neurotransmitter release.6 The potential for paralysis of the oral cavity may limit the application of botulinum toxin type A in that region.7 Nonetheless, the off-label potential of botulinum toxin type A has expanded to include several targets in the lower face. The orbicularis oris muscle is targeted to soften perioral rhytides. Conservative dosing (1–2 U per lip quadrant or approximately 5 U total) and superficial injection are emphasized in this area.27 Similarly, the depressor anguli oris muscle is targeted by injection of 4 U bilaterally to soften the marionette lines. In the chin area, the mentalis muscle can be targeted by injection of 2 U deep into each belly of the muscle to reduce the mental crease and dimpling.28 Combination treatment with dermal filler and neurotoxin demonstrates effects that last longer than either modality alone without additional adverse events.29 With combination therapy, guidelines suggest treating with filler first.27
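The per-site doses quoted above can be tallied into a total for a combined lower-face plan. This is illustrative arithmetic only, not clinical advice; the midpoint of 1.25 U per lip quadrant is an assumption taken from the ~5 U total cited above.

```python
# Sum of the per-site botulinum toxin type A doses quoted above for a
# combined lower-face treatment plan (illustrative arithmetic only).
doses_units = {
    "orbicularis oris, 4 lip quadrants": 4 * 1.25,  # 1-2 U/quadrant, ~5 U total
    "depressor anguli oris, bilateral": 2 * 4.0,    # 4 U per side
    "mentalis, 2 bellies": 2 * 2.0,                 # 2 U per belly
}
total_units = sum(doses_units.values())
print(f"total dose across lower-face sites: {total_units} U")
```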
Conclusion
A greater understanding of the extrinsic and intrinsic factors that contribute to the structural and surface changes of the aging face coupled with a preference for minimally invasive procedures has revolutionized the dermatologist’s approach to perioral rejuvenation. Serving as a focal point of the face, the lips and perioral skin are well poised to benefit from this paradigm shift. A multifaceted approach utilizing dermal fillers and neurotoxins may be most appropriate and has demonstrated optimal outcomes in facial aesthetics.
- Buck DW, Alam M, Kim JYS. Injectable fillers for facial rejuvenation: a review. J Plast Reconstr Aesthet Surg. 2009;62:11-18.
- Guareschi M, Stella E. Lips. In: Goisis M, ed. Injections in Aesthetic Medicine. Milan, Italy: Springer; 2014:125-136.
- Byrne PJ, Hilger PA. Lip augmentation. Facial Plast Surg. 2004;20:31-38.
- Niamtu J. Rejuvenation of the lip and perioral areas. In: Bell WH, Guerroro CA, eds. Distraction Osteogenesis of the Facial Skeleton. Ontario, Canada: BC Decker Inc; 2007:38-48.
- Tansatit T, Apinuntrum P, Phetudom T. A typical pattern of the labial arteries with implication for lip augmentation with injectable fillers. Aesthet Plast Surg. 2014;38:1083-1089.
- Sadick NS, Karcher C, Palmisano L. Cosmetic dermatology of the aging face. Clin Dermatol. 2009;27(suppl):S3-S12.
- Ali MJ, Ende K, Maas CS. Perioral rejuvenation and lip augmentation. Facial Plast Surg Clin N Am. 2007;15:491-500.
- Chien AL, Qi J, Cheng N, et al. Perioral wrinkles are associated with female gender, aging, and smoking: development of a gender-specific photonumeric scale. J Am Acad Dermatol. 2016;74:924-930.
- Iblher N, Stark GB, Penna V. The aging perioral region—do we really know what is happening? J Nutr Health Aging. 2012;16:581-585.
- Sarnoff DS, Gotkin RH. Six steps to the “perfect” lip. J Drugs Dermatol. 2012;11:1081-1088.
- American Society of Plastic Surgeons. 2015 Cosmetic plastic surgery statistics. https://d2wirczt3b6wjm.cloudfront.net/News/Statistics/2015/cosmetic-procedure-trends-2015.pdf. Published February 26, 2015. Accessed October 5, 2016.
- Abduljabbar MH, Basendwh MA. Complications of hyaluronic acid fillers and their managements. J Dermatol Surg. 2016;20:1-7.
- Luebberding S, Alexiades-Armenakas M. Facial volume augmentation in 2014: overview of different filler options. J Drugs Dermatol. 2013;12:1339-1344.
- Huang Attenello N, Maas CS. Injectable fillers: review of material and properties. Facial Plast Surg. 2015;31:29-34.
- Eccleston D, Murphy DK. Juvéderm Volbella in the perioral area: a 12-month prospective, multicenter, open-label study. Clin Cosmet Investig Dermatol. 2012;5:167-172.
- Raspaldo H, Chantrey J, Belhaouari L, et al. Lip and perioral enhancement: a 12-month prospective, randomized, controlled study. J Drugs Dermatol. 2015;14:1444-1452.
- Soft tissue fillers approved by the center for devices and radiological health. US Food and Drug Administration website. http://www.fda.gov/MedicalDevices/ProductsandMedicalProcedures/CosmeticDevices/WrinkleFillers/ucm227749.htm. Updated July 27, 2015. Accessed October 5, 2016.
- Butterwick K, Marmur E, Narurkar V, et al. HYC-24L demonstrates greater effectiveness with less pain than CPM-22.5 for treatment of perioral lines in a randomized controlled trial. Dermatol Surg. 2015;41:1351-1360.
- San Miguel Moragas J, Reddy RR, Hernández Alfaro F, et al. Systematic review of “filling” procedures for lip augmentation regarding types of material, outcomes and complications. J Craniomaxillofac Surg. 2015;43:883-906.
- Cohen JL, Thomas J, Paradkar D, et al. An interrater and intrarater reliability study of 3 photographic scales for the classification of perioral aesthetic features. Dermatol Surg. 2014;40:663-670.
- Broder KW, Cohen SR. An overview of permanent and semipermanent fillers. Plast Reconstr Surg. 2006;118(3 suppl):7S-14S.
- Sarnoff DS, Saini R, Gotkin RH. Comparison of filling agents for lip augmentation. Aesthet Surg J. 2008;28:556-563.
- Moscona RA, Fodor L. A retrospective study on liquid injectable silicone for lip augmentation: long-term results and patient satisfaction. J Plast Reconstr Aesthet Surg. 2010;63:1694-1698.
- Bertucci V, Lynde CB. Current concepts in the use of small-particle hyaluronic acid. Plast Reconstr Surg. 2015;136(5 suppl):132S-138S.
- Wilson AJ, Taglienti AJ, Chang CS, et al. Current applications of facial volumization with fillers. Plast Reconstr Surg. 2016;137:E872-E889.
- Dewandre L, Caperton C, Fulton J. Filler injections with the blunt-tip microcannula compared to the sharp hypodermic needle. J Drugs Dermatol. 2012;11:1098-1103.
- Carruthers JD, Glogau RG, Blitzer A; Facial Aesthetics Consensus Group Faculty. Advances in facial rejuvenation: botulinum toxin type A, hyaluronic acid dermal fillers, and combination therapies-consensus recommendations. Plast Reconstr Surg. 2008;121(5 suppl):5S-30S.
- Wu DC, Fabi SG, Goldman MP. Neurotoxins: current concepts in cosmetic use on the face and neck-lower face. Plast Reconstr Surg. 2015;136(5 suppl):76S-79S.
- Carruthers A, Carruthers J, Monheit GD, et al. Multicenter, randomized, parallel-group study of the safety and effectiveness of onabotulinumtoxin A and hyaluronic acid dermal fillers (24-mg/mL smooth, cohesive gel) alone and in combination for lower facial rejuvenation. Dermatol Surg. 2010;36:2121-2134.
Decreased skin elasticity in combination with repeated contraction of the orbicularis oris muscle results in perioral rhytides.7 For women in particular, vertically oriented perioral rhytides develop above the vermilion; terminal hair follicles, thicker skin, and a greater density of subcutaneous fat are presumptive protective factors in men.8 With time, the cutaneous portion of the upper lip lengthens and there is redistribution of volume with effacement of the upper lip vermilion.9 Additionally, the demarcation of the vermilion becomes blurred secondary to pallor, flattening of the philtral columns, and loss of projection of the Cupid’s bow.10
Downturning of the oral commissures is observed secondary to a combination of gravity, bone resorption, and soft tissue volume loss. Hyperactivity of the depressor anguli oris muscle exacerbates the mesolabial folds, producing marionette lines and a saddened expression.7 With ongoing volume loss and ligament laxity, tissue redistributes near the jaws and chin, giving rise to jowls. Similarly, perioral volume loss and descent of the malar fat-pad deepen the nasolabial folds in the aging midface.6
The main objective of perioral rejuvenation is to reinstate a harmonious refreshed look to the lower face; however, aesthetic analysis should occur within the context of the face as a whole, as the lips should complement the surrounding perioral cosmetic unit and overall skeletal foundation of the face. To accomplish this goal, the dermatologist’s armamentarium contains a broad variety of approaches including restriction of muscle movement, volume restoration, and surface contouring.
Volume Restoration
Treatment Options
In 2015, hyaluronic acid (HA) fillers constituted 80% of all injectable soft-tissue fillers, an 8% increase from 2014.11 Hyaluronic acid has achieved immense popularity as a temporary dermal filler given its biocompatibility, longevity, and reversibility via hyaluronidase.12
Hyaluronic acid is a naturally occurring glycosaminoglycan that comprises the connective tissue matrix. The molecular composition affords HA its hydrophilic property, which augments dermal volume.7 Endogenous HA has a short half-life, and chemical modification by a cross-linking process extends longevity by 6 to 12 months. The various HA fillers are distinguished by method of purification, size of molecules, concentration and degree of cross-linking, and viscosity.7,13,14 These differences dictate overall clinical performance such as flow properties, longevity, and stability. As a general rule, a high-viscosity product is more appropriate for deeper augmentation; fillers with low viscosity are more appropriate for correction of shallow defects.1 Table 1 lists the HA fillers that are currently approved by the US Food and Drug Administration for lip augmentation and/or perioral rhytides in adults 21 years and older.15-17
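The general viscosity rule above can be summarized as a simple lookup. This is an illustrative sketch only: the depth categories and function name are our own, not drawn from any product labeling or guideline.

```python
def suggest_viscosity(defect_depth: str) -> str:
    """Rough filler-viscosity category for a given defect depth,
    following the general rule stated in the text: high viscosity
    for deeper augmentation, low viscosity for shallow defects.
    Categories are hypothetical, for illustration only."""
    depth_to_viscosity = {
        "deep": "high-viscosity HA",       # deeper volume augmentation
        "moderate": "medium-viscosity HA",
        "shallow": "low-viscosity HA",     # fine, shallow defects
    }
    try:
        return depth_to_viscosity[defect_depth]
    except KeyError:
        raise ValueError(f"unknown defect depth: {defect_depth!r}")

print(suggest_viscosity("deep"))     # high-viscosity HA
print(suggest_viscosity("shallow"))  # low-viscosity HA
```

Any real product choice additionally depends on concentration, cross-linking, and the specific indication, as the text notes.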
Randomized controlled trials comparing the efficacy, longevity, and tolerability of different HA products are lacking in the literature and, where present, have strong industry influence.18,19 The advent of assessment scales has provided an objective evaluation of perioral and lip augmentation, facilitating comparisons between products in both clinical research and practice.20

Semipermanent biostimulatory dermal fillers such as calcium hydroxylapatite and poly-L-lactic acid are not recommended for lip augmentation due to an increased incidence of submucosal nodule formation.6,14,21 Likewise, permanent fillers are not recommended given their irreversibility and risk of nodule formation around the lips.14,22 Nonetheless, liquid silicone (purified polydimethylsiloxane) administered via a microdroplet technique (0.01 mL of silicone at a time, no more than 1 cc per lip per session) has been used off label as a permanent filling agent for lip augmentation with limited complications.23 Even so, concerns about its reported risks continue to limit its use.22
Similarly, surgical lip implants such as expanded polytetrafluoroethylene are an option for a subset of patients desiring permanent enhancement but are less commonly utilized given the side-effect profile, irreversibility, and relatively invasive nature of the procedure.22 Lastly, autologous fat transfer has been used in correction of the nasolabial and mesolabial folds as well as in lip augmentation; however, irregular surface contours and unpredictable longevity secondary to postinjection resorption (20%–90%) have limited its popularity.3,14,21
HA Injection Technique
With respect to HA fillers in the perioral area, numerous approaches have been described.10,22 The techniques in Table 2 provide a foundation for lip rejuvenation.

Several injection techniques exist, including serial puncture, linear threading, cross-hatching, and fanning in a retrograde or anterograde manner.24 A blunt microcannula (27 gauge, 38 mm) may be used in place of sharp needles and offers the benefit of increased patient comfort, reduced edema and ecchymosis, and shortened recovery period.25,26 Gentle massage of the product after injection can assist with an even contour. Lastly, a key determinant of successful outcomes is using an adequate volume of HA filler (1–2 mL for shaping the vermilion border and volumizing the lips).27 Figure 2 highlights a clinical example of HA filler for lip augmentation.

Fortunately, most complications encountered with HA lip augmentation are mild and transient. The most commonly observed side effects include injection-site reactions such as pain, erythema, and edema. Similarly, most adverse effects are related to injection technique. All HA fillers are prone to the Tyndall effect, a consequence of too superficial an injection plane. Patients with history of recurrent herpes simplex virus infections should receive prophylactic antiviral therapy.12
Muscle Control
An emerging concept in rejuvenation of the lower face recognizes not only restoration of volume but also control of muscle movement. Local injection of botulinum toxin type A induces relaxation of hyperfunctional facial muscles through temporary inhibition of neurotransmitter release.6 The potential for paralysis of the oral cavity may limit the application of botulinum toxin type A in that region.7 Nonetheless, the off-label potential of botulinum toxin type A has expanded to include several targets in the lower face. The orbicularis oris muscle is targeted to soften perioral rhytides. Conservative dosing (1–2 U per lip quadrant or approximately 5 U total) and superficial injection are emphasized in this area.27 Similarly, the depressor anguli oris muscle is targeted by injection of 4 U bilaterally to soften the marionette lines. In the chin area, the mentalis muscle can be targeted by injection of 2 U deep into each belly of the muscle to reduce the mental crease and dimpling.28 Combination treatment with dermal filler and neurotoxin demonstrates effects that last longer than either modality alone without additional adverse events.29 With combination therapy, guidelines suggest treating with filler first.27
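The conservative lower-face doses mentioned above can be collected into a small reference table. The doses reflect only this article's summary of the cited reviews; real-world dosing is individualized and this use is off label.

```python
# Conservative botulinum toxin type A doses per injection site, as
# summarized in the text (units of toxin, low-high range). The key
# names are our own shorthand, for illustration only.
DOSES_U = {
    "orbicularis oris (per lip quadrant)": (1, 2),  # ~5 U total
    "depressor anguli oris (per side)": (4, 4),
    "mentalis (per muscle belly)": (2, 2),
}

def dose_range(site: str) -> str:
    """Format the dose range for a site as a human-readable string."""
    lo, hi = DOSES_U[site]
    return f"{lo} U" if lo == hi else f"{lo}-{hi} U"

for site in DOSES_U:
    print(f"{site}: {dose_range(site)}")
```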
Conclusion
A greater understanding of the extrinsic and intrinsic factors that contribute to the structural and surface changes of the aging face coupled with a preference for minimally invasive procedures has revolutionized the dermatologist’s approach to perioral rejuvenation. Serving as a focal point of the face, the lips and perioral skin are well poised to benefit from this paradigm shift. A multifaceted approach utilizing dermal fillers and neurotoxins may be most appropriate and has demonstrated optimal outcomes in facial aesthetics.
- Buck DW, Alam M, Kim JYS. Injectable fillers for facial rejuvenation: a review. J Plast Reconstr Aesthet Surg. 2009;62:11-18.
- Guareschi M, Stella E. Lips. In: Goisis M, ed. Injections in Aesthetic Medicine. Milan, Italy: Springer; 2014:125-136.
- Byrne PJ, Hilger PA. Lip augmentation. Facial Plast Surg. 2004;20:31-38.
- Niamtu J. Rejuvenation of the lip and perioral areas. In: Bell WH, Guerroro CA, eds. Distraction Osteogenesis of the Facial Skeleton. Ontario, Canada: BC Decker Inc; 2007:38-48.
- Tansatit T, Apinuntrum P, Phetudom T. A typical pattern of the labial arteries with implication for lip augmentation with injectable fillers. Aesthet Plast Surg. 2014;38:1083-1089.
- Sadick NS, Karcher C, Palmisano L. Cosmetic dermatology of the aging face. Clin Dermatol. 2009;27(suppl):S3-S12.
- Ali MJ, Ende K, Mass CS. Perioral rejuvenation and lip augmentation. Facial Plast Surg Clin N Am. 2007;15:491-500.
- Chien AL, Qi J, Cheng N, et al. Perioral wrinkles are associated with female gender, aging, and smoking: development of a gender-specific photonumeric scale. J Am Acad Dermatol. 2016;74:924-930.
- Iblher N, Stark GB, Penna V. The aging perioral region—do we really know what is happening? J Nutr Health Aging. 2012;16:581-585.
- Sarnoff DS, Gotkin RH. Six steps to the “perfect” lip. J Drugs Dermatol. 2012;11:1081-1088.
- American Society of Plastic Surgeons. 2015 Cosmetic plastic surgery statistics. https://d2wirczt3b6wjm.cloudfront.net/News/Statistics/2015/cosmetic-procedure-trends-2015.pdf. Published February 26, 2015. Accessed October 5, 2016.
- Abduljabbar MH, Basendwh MA. Complications of hyaluronic acid fillers and their managements. J Dermatol Surg. 2016;20:1-7.
- Luebberding S, Alexiades-Armenakas M. Facial volume augmentation in 2014: overview of different filler options. J Drugs Dermatol. 2013;12:1339-1344.
- Huang Attenello N, Mass CS. Injectable fillers: review of material and properties. Facial Plast Surg. 2015;31:29-34.
- Eccleston D, Murphy DK. Juvéderm Volbella in the perioral area: a 12-month prospective, multicenter, open-label study. Clin Cosmet Investig Dermatol. 2012;5:167-172.
- Raspaldo H, Chantrey J, Belhaouari L, et al. Lip and perioral enhancement: a 12-month prospective, randomized, controlled study. J Drugs Dermatol. 2015;14:1444-1452.
- Soft tissue fillers approved by the center for devices and radiological health. US Food and Drug Administration website. http://www.fda.gov/MedicalDevices/ProductsandMedicalProcedures/CosmeticDevices/WrinkleFillers/ucm227749.htm. Updated July 27, 2015. Accessed October 5, 2016.
- Butterwick K, Marmur E, Narurkar V, et al. HYC-24L demonstrates greater effectiveness with less pain than CPM-22.5 for treatment of perioral lines in a randomized controlled trial. Dermatol Surg. 2015;41:1351-1360.
- San Miguel Moragas J, Reddy RR, Hernández Alfaro F, et al. Systematic review of “filling” procedures for lip augmentation regarding types of material, outcomes and complications. J Craniomaxillofac Surg. 2015;43:883-906.
- Cohen JL, Thomas J, Paradkar D, et al. An interrater and intrarater reliability study of 3 photographic scales for the classification of perioral aesthetic features. Dermatol Surg. 2014;40:663-670.
- Broder KW, Cohen SR. An overview of permanent and semipermanent fillers. Plast Reconstr Surg. 2006;118(3 suppl):7S-14S.
- Sarnoff DS, Saini R, Gotkin RH. Comparison of filling agents for lip augmentation. Aesthet Surg J. 2008;28:556-563.
- Moscona RA, Fodor L. A retrospective study on liquid injectable silicone for lip augmentation: long-term results and patient satisfaction. J Plast Reconstr Aesthet Surg. 2010;63:1694-1698.
- Bertucci V, Lynde CB. Current concepts in the use of small-particle hyaluronic acid. Plast Reconstr Surg. 2015;136(5 suppl):132S-138S.
- Wilson AJ, Taglienti AJ, Chang CS, et al. Current applications of facial volumization with fillers. Plast Reconstr Surg. 2016;137:E872-E889.
- Dewandre L, Caperton C, Fulton J. Filler injections with the blunt-tip microcannula compared to the sharp hypodermic needle. J Drugs Dermatol. 2012;11:1098-1103.
- Carruthers JD, Glogau RG, Blitzer A; Facial Aesthetics Consensus Group Faculty. Advances in facial rejuvenation: botulinum toxin type A, hyaluronic acid dermal fillers, and combination therapies-consensus recommendations. Plast Reconstr Surg. 2008;121(5 suppl):5S-30S.
- Wu DC, Fabi SG, Goldman MP. Neurotoxins: current concepts in cosmetic use on the face and neck-lower face. Plast Reconstr Surg. 2015;136(5 suppl):76S-79S.
- Carruthers A, Carruthers J, Monheit GD, et al. Multicenter, randomized, parallel-group study of the safety and effectiveness of onabotulinumtoxin A and hyaluronic acid dermal fillers (24-mg/mL smooth, cohesive gel) alone and in combination for lower facial rejuvenation. Dermatol Surg. 2010;36:2121-2134.
Practice Points
- Hyaluronic acid (HA) fillers are approved by the US Food and Drug Administration for lip augmentation and/or treatment of perioral rhytides in adults 21 years and older.
- Most complications encountered with HA lip augmentation are mild and transient and can include injection-site reactions such as pain, erythema, and edema.
- Combination treatment with dermal fillers and neurotoxins (off label) may demonstrate effects that last longer than either modality alone without additional adverse events.
Pelvic fracture pattern predicts the need for hemorrhage control
WAIKOLOA, HAWAII – Blunt trauma patients admitted in shock with anterior posterior compression III or vertical shear fracture patterns, or patients with open pelvic fracture are at greatest risk of severe bleeding requiring pelvic hemorrhage control intervention, results from a multicenter trial demonstrated.
Thirty years ago, researchers defined a classification of pelvic fracture based on the pattern of force applied to the pelvis, Todd W. Costantini, MD, said at the annual meeting of the American Association for the Surgery of Trauma. They identified three main force patterns: lateral compression, anterior posterior compression, and vertical shear (Radiology. 1986 Aug;160[2]:445-51).
In a recently published study, Dr. Costantini and his associates found wide variability in the use of various pelvic hemorrhage control methods (J Trauma Acute Care Surg. 2016 May;80 [5]:717-25). “While angioembolization alone and external fixator placement alone were the most common methods used, there were various combinations of these methods used at different times by different institutions,” he said.
These results prompted the researchers to prospectively evaluate the correlation between pelvic fracture pattern and modern care of pelvic hemorrhage control at 11 Level I trauma centers over a two-year period. Inclusion criteria for the study, which was sponsored by the AAST Multi-institutional Trials Committee, were patients over the age of 18, blunt mechanism of injury, and shock on admission, which was defined as an admission systolic blood pressure of less than 90 mm Hg, or heart rate greater than 120 beats per minute, or base deficit greater than 5. Exclusion criteria included isolated hip fracture, pregnancy, and lack of pelvic imaging.
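The study's admission-shock definition is an explicit three-part rule, which can be transcribed directly. The function name and signature below are our own, for illustration.

```python
def meets_shock_criteria(sbp_mmhg: float, hr_bpm: float, base_deficit: float) -> bool:
    """Admission shock as defined in the study: systolic blood pressure
    < 90 mm Hg, OR heart rate > 120 beats per minute, OR base deficit > 5.
    Any one criterion suffices."""
    return sbp_mmhg < 90 or hr_bpm > 120 or base_deficit > 5

print(meets_shock_criteria(85, 100, 2))   # True  (hypotensive)
print(meets_shock_criteria(110, 125, 0))  # True  (tachycardic)
print(meets_shock_criteria(120, 90, 3))   # False (no criterion met)
```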
The researchers evaluated the pelvic fracture pattern for each patient in the study. “Each pelvic image was evaluated by a trauma surgeon, orthopedic surgeon, or radiologist and classified using the Young-Burgess Classification system,” Dr. Costantini said. Next, they used univariate and multivariate logistic regression analysis to analyze predictors for hemorrhage control intervention and mortality. The objective was to determine whether pelvic fracture pattern would predict the need for a hemorrhage control intervention.
Of the 46,716 trauma patients admitted over the two-year period, 1,339 sustained a pelvic fracture. Of these, 178 met criteria for shock. The researchers excluded 15 patients due to lack of pelvic imaging, which left 163 patients in the final analysis. Their mean age was 44 years and 58% were male. On admission, their mean systolic blood pressure was 93 mm Hg, their mean heart rate was 117 beats per minute, and their median Injury Severity Score was 28. The mean hospital length of stay was 12 days and the mortality rate was 30%. The three most common mechanisms of injury were motor vehicle crash (42%), followed by pedestrian versus auto (23%), and falls (18%).
Compared with patients who did not require hemorrhage control intervention, those who did received more transfusion of packed red blood cells (13 vs. 7 units, respectively; P less than .01) and fresh frozen plasma (10 vs. 5 units; P = .01). In addition, 67% of patients with open pelvic fracture required a hemorrhage control intervention. The rate of mortality was similar between the patients who required a pelvic hemorrhage control intervention and those who did not (34% vs. 28%; P = .47).
The three most common types of pelvic fracture patterns were lateral compression I (36%) and II (23%), followed by vertical shear (13%). Patients with lateral compression I and II fractures were least likely to require hemorrhage control intervention (22% and 19%, respectively). However, on univariate analysis, patients with anterior posterior compression III fractures and those with vertical shear fractures were more likely to require a pelvic hemorrhage control intervention, compared with those who sustained other types of pelvic fractures (83% and 55%, respectively).
On multivariate analysis, the three main independent predictors of need for a hemorrhage control intervention were anterior posterior compression III fracture (odds ratio, 109.43; P less than .001), open pelvic fracture (OR, 7.36; P = .014), and vertical shear fracture (OR, 6.99; P = .002). Pelvic fracture pattern did not predict mortality on multivariate analysis.
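In a multivariate logistic model, the reported odds ratios act multiplicatively on a patient's odds of needing intervention. The sketch below illustrates that arithmetic using the ORs reported above; the baseline odds value is hypothetical, chosen only to make the calculation concrete.

```python
# Odds ratios for hemorrhage control intervention reported in the
# multivariate analysis above.
ORS = {
    "APC III fracture": 109.43,
    "open pelvic fracture": 7.36,
    "vertical shear fracture": 6.99,
}

def adjusted_odds(baseline_odds: float, *predictors: str) -> float:
    """Scale baseline odds by each predictor's OR (log-odds are
    additive in a logistic model, so odds multiply)."""
    odds = baseline_odds
    for p in predictors:
        odds *= ORS[p]
    return odds

def odds_to_prob(odds: float) -> float:
    """Convert odds to probability: p = odds / (1 + odds)."""
    return odds / (1 + odds)

base = 0.10  # hypothetical baseline odds, for illustration only
o = adjusted_odds(base, "vertical shear fracture")
print(round(odds_to_prob(o), 3))  # 0.411
```

The very wide OR for APC III fractures (109.43) means that under this model, essentially any plausible baseline odds yields a near-certain predicted need for intervention, consistent with the 83% univariate rate reported above.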
The invited discussant, Joseph M. Galante, MD, trauma medical director for the University of California, Davis Health System, characterized the study as important, “because it examines all forms of hemorrhage control, not just arterioembolism in the treatment of pelvic fractures,” he said. “The ability to predict who will need hemorrhage control allows for earlier mobilization to resources, both in the operating room or interventional suite and in the resuscitation bay.”
Dr. Costantini reported having no financial disclosures.
WAIKOLOA, HAWAII – Blunt trauma patients admitted in shock with anterior posterior compression III or vertical shear fracture patterns, or patients with open pelvic fracture are at greatest risk of severe bleeding requiring pelvic hemorrhage control intervention, results from a multicenter trial demonstrated.
Thirty years ago, researchers defined a classification of pelvic fracture based on a pattern of force applied to the pelvis, Todd W. Costantini, MD, said at the annual meeting of the American Association for the Surgery of Trauma. They identified three main force patterns, including lateral compression, anterior posterior compression, and vertical shear (Radiology. 1986 Aug;160 [2]:445-51).
In a recently published study, Dr. Costantini and his associates found wide variability in the use of various pelvic hemorrhage control methods (J Trauma Acute Care Surg. 2016 May;80 [5]:717-25). “While angioembolization alone and external fixator placement alone were the most common methods used, there were various combinations of these methods used at different times by different institutions,” he said.
These results prompted the researchers to prospectively evaluate the correlation between pelvic fracture pattern and modern care of pelvic hemorrhage control at 11 Level I trauma centers over a two year period. Inclusion criteria for the study, which was sponsored by the AAST Multi-institutional Trials Committee, were patients over the age of 18, blunt mechanism of injury, and shock on admission, which was defined as an admission systolic blood pressure of less than 90 mm Hg, or heart rate greater than 120, or base deficit greater than 5. Exclusion criteria included isolated hip fracture, pregnancy, and lack of pelvic imaging.
The researchers evaluated the pelvic fracture pattern for each patient in the study. “Each pelvic image was evaluated by a trauma surgeon, orthopedic surgeon, or radiologist and classified using the Young-Burgess Classification system,” Dr. Costantini said. Next, they used univariate and multivariate logistic regression analysis to analyze predictors for hemorrhage control intervention and mortality. The objective was to determine whether pelvic fracture pattern would predict the need for a hemorrhage control intervention.
Of the 46,716 trauma patients admitted over the two-year period, 1,339 sustained a pelvic fracture. Of these, 178 met criteria for shock. The researchers excluded 15 patients due to lack of pelvic imaging, which left 163 patients in the final analysis. Their mean age was 44 years and 58% were male. On admission, their mean systolic blood pressure was 93 mm Hg, their mean heart rate was 117 beats per minute, and their median Injury Severity Score was 28. The mean hospital length of stay was 12 days and the mortality rate was 30%. The three most common mechanisms of injury were motor vehicle crash (42%), pedestrian versus auto (23%), and falls (18%).
Compared with patients who did not require hemorrhage control intervention, those who did received more units of packed red blood cells (13 vs. 7 units; P less than .01) and fresh frozen plasma (10 vs. 5 units; P = .01). In addition, 67% of patients with open pelvic fracture required a hemorrhage control intervention. Mortality was similar between the patients who required a pelvic hemorrhage control intervention and those who did not (34% vs. 28%; P = .47).
The three most common pelvic fracture patterns were lateral compression I (36%) and II (23%), followed by vertical shear (13%). Patients with lateral compression I and II fractures were least likely to require hemorrhage control intervention (22% and 19%, respectively). By contrast, on univariate analysis, patients with anterior-posterior compression III fractures and those with vertical shear fractures were more likely to require a pelvic hemorrhage control intervention, compared with those who sustained other types of pelvic fractures (83% and 55%, respectively).
On multivariate analysis, the three main independent predictors of need for a hemorrhage control intervention were anterior-posterior compression III fracture (odds ratio, 109.43; P less than .001), open pelvic fracture (OR, 7.36; P = .014), and vertical shear fracture (OR, 6.99; P = .002). Pelvic fracture pattern did not predict mortality on multivariate analysis.
The invited discussant, Joseph M. Galante, MD, trauma medical director for the University of California, Davis Health System, characterized the study as important “because it examines all forms of hemorrhage control, not just angioembolization, in the treatment of pelvic fractures.” He added, “The ability to predict who will need hemorrhage control allows for earlier mobilization of resources, both in the operating room or interventional suite and in the resuscitation bay.”
Dr. Costantini reported having no financial disclosures.
The Highs and Lows of Medical Marijuana
Marijuana has been used medicinally worldwide for thousands of years.1,2 In the early 1990s, the discovery of cannabinoid receptors in the central and peripheral nervous systems began to propagate interest in other potential therapeutic values of marijuana.3 Since then, marijuana has been used by patients experiencing chemotherapy-induced anorexia, nausea and vomiting, pain, and forms of spasticity. Use among patients with glaucoma and HIV/AIDS has also been widely reported.
In light of this—and of increasing efforts to legalize medical marijuana use across the United States—clinicians should be cognizant of the substance’s negative effects, as well as its potential health benefits. Marijuana has significant systemic effects and associated risks of which patients and health care providers should be aware. Questions remain regarding the safety, efficacy, and long-term impact of use. Use of marijuana for medical purposes requires a careful examination of the risks and benefits.
PHARMACOLOGY
Marijuana contains approximately 60 cannabinoids, two of which have been specifically identified as primary components. The first, delta-9 tetrahydrocannabinol (THC), is believed to be the most psychoactive.4,5 THC was identified in 1964 and is responsible for the well-documented symptoms of euphoria, appetite stimulation, impaired memory and cognition, and analgesia. The THC content in marijuana products varies widely and has increased over time, complicating research on the long-term effects of marijuana use.5,6
The second compound, cannabidiol (CBD), is a serotonin receptor agonist that lacks psychoactive effects. Potential benefits of CBD include antiemetic and anxiolytic properties, as well as anti-inflammatory effects. There is some evidence to suggest that CBD might also have antipsychotic properties.1,4
AVAILABLE FORMULATIONS
Two synthetic forms of THC have been approved by the FDA since 1985 for medicinal use: nabilone (categorized as a Schedule II drug) and dronabinol (Schedule III). Both are cannabinoid receptor agonists approved for treating chemotherapy-induced nausea and vomiting. They are recommended for use after failure of standard therapies, such as 5-HT3 receptor antagonists, but overall interest has decreased since the advent of agents such as ondansetron.2,4
Nabiximols, an oral buccal spray, is a combination of THC and CBD. It was approved in Canada in 2005 for pain management in cancer patients and for multiple sclerosis–related pain and spasticity. It is not currently available in the US.2,4
Medical marijuana use is currently legal in 25 states and the District of Columbia.7,8 However, state laws regarding the criteria for medical use are vague and varied. For example, not all states require that clinicians review the risks and benefits of marijuana use with patients. Even for those that do, the lack of clinical trials on the safety and efficacy of marijuana makes it difficult for clinicians to properly educate themselves and their patients.9
LIMITATIONS OF RESEARCH
Why the lack of data? In 1937, a federal tax restricted marijuana prescription in the US, and in 1942, marijuana was removed from the US Pharmacopeia.2,4 The Controlled Substances Act in 1970 designated marijuana as a Schedule I drug, a categorization for drugs with high potential for abuse and no currently accepted medical use.9 Following this designation, research on marijuana was nearly halted in the US. Several medical organizations have subsequently called for reclassification to Schedule II in order to facilitate scientific research into marijuana’s medicinal benefits and risks.
Research is also limited by the comorbid use of tobacco and other drugs in study subjects, the variation of cannabinoid levels among products, and differences in the route of administration—particularly smoking versus oral or buccal routes.5 Conducting marijuana research with the same rigor applied to pharmaceuticals would serve not only the medical community but also the legislators who regulate the substance.
Despite these obstacles, there is some available evidence on medical use of marijuana. A review of the associated risks and potential uses for the substance follows.
RISKS ASSOCIATED WITH MARIJUANA USE
Acute effects
Most symptoms of marijuana intoxication are attributed to the THC component and occur due to the presence of cannabinoid receptors in the central nervous system (see Table 1).5,10 Additional objective signs of acute or chronic intoxication include conjunctival injection, tachycardia, cannabis odor, yellowing of fingertips (from smoking), cough, and food cravings.10
A more recently identified effect of long-term marijuana use is cannabinoid hyperemesis syndrome, a paradoxical condition in which individuals experience nausea, vomiting, and abdominal pain that are characteristically relieved by hot showers or baths.6,8
Since there is a near absence of cannabinoid receptors in the brain stem, marijuana does not stimulate the autonomic nervous system. It is therefore believed that marijuana use cannot be fatal. Corroborating this theory, no deaths have been reported from marijuana overdose.2,11
Withdrawal symptoms
Approximately 10% of regular marijuana users become physically and psychologically dependent on the substance. Once tolerance develops, withdrawal symptoms occur with cessation of use (see Table 2).2,5,10 Symptoms peak within the first week following cessation and may last up to two weeks. Sleep disturbances may occur for more than one month.10
Unlike with other substances of abuse, there are no pharmaceutical agents to treat marijuana withdrawal; rather, treatment is supportive. Marijuana users often resume use following a period of cessation in order to avoid withdrawal.
Chronic effects
Dental/oral. Smoking marijuana is associated with an increased risk for dental caries, periodontal disease, and oral infections.1 Premalignant oral lesions, such as leukoplakia and erythroplakia, have also been reported. Patient education on the risks and need for proper oral hygiene is vital, as are regular dental examinations.
Respiratory. There are several known pulmonary implications of smoking marijuana, and therefore, this route of administration is not recommended for medicinal use. Respiratory effects of marijuana smoke are similar to those seen with tobacco: cough, dyspnea, sputum production, wheezing, bronchitis, pharyngitis, and hoarseness.4 Increased rates of pneumonia and other respiratory infections have also been identified.6 Research on long-term marijuana smoking has revealed hyperinflation and increased airway resistance.6 At this time, evidence is inconclusive as to whether smoking marijuana leads to chronic obstructive pulmonary disease.1
Studies have compared the chemical content of tobacco and marijuana and found similar components, including carcinogens, but data regarding concentrations of these chemicals are conflicting.1,4 It is unknown whether vaping (a trending practice in which a device is used to heat the substance prior to inhalation) reduces this risk.4
Unfortunately, data regarding the carcinogenic effects of long-term marijuana smoking are inconclusive; some studies have shown potential protective effects.4-6 Other evidence suggests that the risk is lower in comparison to tobacco smoking.6
Cardiovascular. The effects of marijuana on the cardiovascular system are not fully understood. Known symptoms include tachycardia, peripheral vasodilation, hypotension, and syncope.4 There is some evidence that marijuana use carries an increased risk for angina in patients with previously established heart disease.5 Patients, especially those with known cardiovascular disease, should be educated about these risks.
Reproductive. There are several identified reproductive consequences of marijuana use. Research has found decreased sperm count and gynecomastia in men and impaired ovulation in women.4 Studies on marijuana use in pregnancy consistently reveal low birth weight—this effect is, however, less than that seen with tobacco smoking.5 Other complications or developmental abnormalities may occur, but there is currently a lack of evidence to support further conclusions.
Neurologic. The use of marijuana results in short-term memory loss and other cognitive impairments. There is conflicting evidence as to whether long-term effects remain after cessation.5,6 Because acute intoxication impairs motor skills, it is associated with increased rates of motor vehicle accidents.6 Driving while under the influence of marijuana should be cautioned against.
Psychiatric. Marijuana use is associated with the onset and exacerbation of acute psychosis. However, its role as a causal factor in schizophrenia has not been established.4,10 There is some evidence to suggest that CBD has antipsychotic properties, warranting further research. An amotivational syndrome has also been affiliated with chronic marijuana use; affected individuals exhibit a lack of goal-directed behavior, which may result in work or school dysfunction.10 Several studies have supported an association between marijuana use and risk for depression and anxiety. Due to the extensive risk factors for these disorders, including genetic and environmental, causality has yet to be established.5,6
Conditions for Which Marijuana May Offer Therapeutic Benefits
Glaucoma
Research has demonstrated that marijuana decreases intraocular pressure, and many patients with glaucoma use it. However, it is not recommended as first-line treatment.
The beneficial effects of smoked marijuana are short-lived, requiring patients to dose repeatedly throughout the day. Use is also often discontinued due to adverse effects including dry mouth, dizziness, confusion, and anxiety.8
Topical preparations of THC have not been successfully developed due to the low water solubility of cannabis and minimal penetration through the cornea to the intraocular space.8 Standard treatments available for glaucoma are more effective and without obvious psychoactive effects.6
Nausea
One of the first medical uses of marijuana was for nausea. Because cannabinoid receptors help govern food intake, marijuana is known to stimulate appetite, making its use in reducing chemotherapy-associated nausea and vomiting widespread.2,6 Despite the variation in state laws regarding medical use of marijuana, cancer is included as a qualifying illness in every state that allows it.8 Cannabis-based medications may be useful for treating refractory nausea secondary to chemotherapy; however, dronabinol and nabilone are not recommended as first-line therapies.12
HIV/AIDS
Short-term evidence suggests that patients with HIV and/or AIDS benefit from marijuana use through improved appetite, weight gain, lessened pain, and improved quality of life.6,13 Studies with small sample sizes have been conducted using smoked marijuana and dronabinol.8 Long-term studies are needed to compare the use of marijuana with other nutritional and caloric supplements. Overall, reliable research regarding the therapeutic value of marijuana in these patients is inconclusive, and therefore no recommendations for incorporating marijuana into the treatment regimen have been made.8
Multiple sclerosis
For centuries, marijuana has been used for pain relief. The discovery of cannabinoid receptors in high concentrations throughout pain pathways of the brain supports the notion that marijuana plays a role in analgesia. While response to acute pain is poor, there is evidence to suggest that various cannabis formulations relieve chronic neuropathic pain and spasticity, as seen in multiple sclerosis.3,6
Subjective improvements in pain and spasticity were seen with the use of oral cannabis extract, THC, and nabiximols.11 Smoked marijuana is of uncertain efficacy and is not recommended for use in this patient population; it has been shown to potentially worsen cognition.8,11
Seizures
Research into the role of marijuana in decreasing seizure frequency is inconclusive.11 Large studies with human subjects are lacking, and most data thus far have come from animals and case studies.8 Some case reports have suggested a decrease in seizures with marijuana use, but further investigation is needed.6
At this time, it is not appropriate to recommend marijuana for patients with seizure disorders, but the use of cannabidiol might be more promising. Studies are ongoing.14
Alzheimer disease
Alzheimer disease is the most common cause of dementia.8 Despite known adverse effects on memory and cognition with acute use, studies have shown that marijuana might inhibit the development of amyloid beta plaques in Alzheimer disease.4 Further research on dronabinol has not provided sufficient data to support its use, and no studies utilizing smoked marijuana have been performed.8 Therefore, no recommendations exist for the use of marijuana in this patient population, and further research is warranted.
Ongoing research
There are some additional areas of potential therapeutic use of marijuana. Limited evidence has revealed that marijuana has anti-inflammatory properties, leading researchers to examine its use for autoimmune diseases, such as rheumatoid arthritis and Crohn disease. Studies investigating marijuana’s potential ability to inhibit cancer growth and metastasis are ongoing.
Unfortunately, research in patients with Parkinson disease has not shown improvement in dyskinesias.11 Studies on other movement disorders, such as Tourette syndrome and Huntington disease, have not shown symptom improvement with marijuana use. Research on these conditions and others is ongoing.
CONCLUSION
Marijuana use has negative effects on a variety of body systems, but it also may provide therapeutic benefit in certain patient populations. Clinicians and patients are currently hampered by the dearth of reliable information on its safety and efficacy (resulting from federal restrictions and other factors). Comparative studies between marijuana and established standards of care are needed, as is additional research to identify therapeutic effects that could be maximized and ways to minimize or eliminate negative sequelae.
1. Greydanus DE, Hawver EK, Greydanus MM, Merrick J. Cannabis: effective and safe analgesic? J Pain Manage. 2014;7(3):209-233.
2. Bostwick JM. Blurred boundaries: the therapeutics and politics of medical marijuana. Mayo Clin Proc. 2012;87(2):172-186.
3. Karst M, Wippermann S, Ahrens J. Role of cannabinoids in the treatment of pain and (painful) spasticity. Drugs. 2010;70(18):2409-2438.
4. Owen KP, Sutter ME, Albertson TE. Marijuana: respiratory tract effects. Clin Rev Allergy Immunol. 2014;46(1):65-81.
5. Hall W, Degenhardt L. Adverse health effects of non-medical cannabis use. Lancet. 2009;374(9698):1383-1391.
6. Volkow ND, Baler RD, Compton WM, Weiss SRB. Adverse health effects of marijuana use. N Engl J Med. 2014;370(23):2219-2227.
7. National Conference of State Legislatures. State medical marijuana laws (updated 7/20/2016). www.ncsl.org/research/health/state-medical-marijuana-laws.aspx. Accessed September 7, 2016.
8. Belendiuk KA, Baldini LL, Bonn-Miller MO. Narrative review of the safety and efficacy of marijuana for the treatment of commonly state-approved medical and psychiatric disorders. Addict Sci Clin Pract. 2015;10(1):1-10.
9. Hoffmann DE, Weber E. Medical marijuana and the law. N Engl J Med. 2010;362(16):1453-1457.
10. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 5th ed. Arlington, VA: American Psychiatric Publishing; 2013.
11. Koppel B, Brust J, Fife T, et al. Systematic review: efficacy and safety of medical marijuana in selected neurologic disorders. Neurology. 2014;82(17):1556-1563.
12. Smith LA, Azariah F, Lavender VT, Stoner NS, Bettiol S. Cannabinoids for nausea and vomiting in adults with cancer receiving chemotherapy. Cochrane Database Syst Rev. 2015;(11):CD009464.
13. Lutge EE, Gray A, Siegfried N. The medical use of cannabis for reducing morbidity and mortality in patients with HIV/AIDS. Cochrane Database Syst Rev. 2013;(4):CD005175.
14. Gloss D, Vickrey B. Cannabinoids for epilepsy. Cochrane Database Syst Rev. 2012;(6):CD009270.
Marijuana has been used medicinally worldwide for thousands of years.1,2 In the early 1990s, the discovery of cannabinoid receptors in the central and peripheral nervous systems began to propagate interest in other potential therapeutic values of marijuana.3 Since then, marijuana has been used by patients experiencing chemotherapy-induced anorexia, nausea and vomiting, pain, and forms of spasticity. Use among patients with glaucoma and HIV/AIDS has also been widely reported.
In light of this—and of increasing efforts to legalize medical marijuana use across the United States—clinicians should be cognizant of the substance’s negative effects, as well as its potential health benefits. Marijuana has significant systemic effects and associated risks of which patients and health care providers should be aware. Questions remain regarding the safety, efficacy, and long-term impact of use. Use of marijuana for medical purposes requires a careful examination of the risks and benefits.
PHARMACOKINETICS
Marijuana contains approximately 60 cannabinoids, two of which have been specifically identified as primary components. The first, delta-9 tetrahydrocannabinol (THC), is believed to be the most psychoactive.4,5 THC was identified in 1964 and is responsible for the well-documented symptoms of euphoria, appetite stimulation, impaired memory and cognition, and analgesia. The THC content in marijuana products varies widely and has increased over time, complicating research on the long-term effects of marijuana use.5,6
The second compound, cannabidiol (CBD), is a serotonin receptor agonist that lacks psychoactive effects. Potential benefits of CBD include antiemetic and anxiolytic properties, as well as anti-inflammatory effects. There is some evidence to suggest that CBD might also have antipsychotic properties.1,4
AVAILABLE FORMULATIONS
Two synthetic forms of THC have been approved by the FDA since 1985 for medicinal use: nabilone (categorized as a Schedule II drug) and dronabinol (Schedule III). Both are cannabinoid receptor agonists approved for treating chemotherapy-induced nausea and vomiting. They are recommended for use after failure of standard therapies, such as 5-HT3 receptor antagonists, but overall interest has decreased since the advent of agents such as ondansetron.2,4
Nabiximols, an oral buccal spray, is a combination of THC and CBD. It was approved in Canada in 2005 for pain management in cancer patients and for multiple sclerosis–related pain and spasticity. It is not currently available in the US.2,4
Marijuana use is currently legal in 25 states and the District of Columbia.7,8 However, state laws regarding the criteria for medical use are vague and varied. For example, not all states require that clinicians review risks and benefits of marijuana use with patients. Even for those that do, the lack of clinical trials on the safety and efficacy of marijuana make it difficult for clinicians to properly educate themselves and their patients.9
LIMITATIONS OF RESEARCH
Why the lack of data? In 1937, a federal tax restricted marijuana prescription in the US, and in 1942, marijuana was removed from the US Pharmacopeia.2,4 The Controlled Substances Act in 1970 designated marijuana as a Schedule I drug, a categorization for drugs with high potential for abuse and no currently accepted medical use.9 Following this designation, research on marijuana was nearly halted in the US. Several medical organizations have subsequently called for reclassification to Schedule II in order to facilitate scientific research into marijuana’s medicinal benefits and risks.
Research is also limited due to the comorbid use of tobacco and other drugs in study subjects, the variation of cannabinoid levels among products, and differences in the route of administration—particularly smoking versus oral or buccal routes.5 Conducting marijuana research in a fashion similar to pharmaceuticals would not only serve the medical community but also the legislative faction.
Despite these obstacles, there is some available evidence on medical use of marijuana. A review of the associated risks and potential uses for the substance follows.
RISKS ASSOCIATED WITH MARIJUANA USE
Acute effects
Most symptoms of marijuana intoxication are attributed to the THC component and occur due to the presence of cannabinoid receptors in the central nervous system (see Table 1).5,10 Additional objective signs of acute or chronic intoxication include conjunctival injection, tachycardia, cannabis odor, yellowing of fingertips (from smoking), cough, and food cravings.10
A more recently identified effect of long-term marijuana use is a paradoxical hyperemesis syndrome, in which individuals experience nausea, vomiting, and abdominal pain. They obtain relief with hot showers or baths.6,8
Since there is a near absence of cannabinoid receptors in the brain stem, marijuana does not stimulate the autonomic nervous system. It is therefore believed that marijuana use cannot be fatal. Corroborating this theory, no deaths have been reported from marijuana overdose.2,11
Withdrawal symptoms
Approximately 10% of regular marijuana users become physically and psychologically dependent on the substance. Once tolerance develops, withdrawal symptoms occur with cessation of use (see Table 2).2,5,10 Symptoms peak within the first week following cessation and may last up to two weeks. Sleep disturbances may occur for more than one month.10
Unlike with other substances of abuse, there are no pharmaceutical agents to treat marijuana withdrawal; rather, treatment is supportive. Marijuana users often resume use following a period of cessation in order to avoid withdrawal.
Chronic effects
Dental/oral. Smoking marijuana is associated with an increased risk for dental caries, periodontal disease, and oral infections.1 Premalignant oral lesions, such as leukoplakia and erythroplakia, have also been reported. Patient education on the risks and need for proper oral hygiene is vital, as are regular dental examinations.
Respiratory. There are several known pulmonary implications of smoking marijuana, and therefore, this route of administration is not recommended for medicinal use. Respiratory effects of marijuana smoke are similar to those seen with tobacco: cough, dyspnea, sputum production, wheezing, bronchitis, pharyngitis, and hoarseness.4 Increased rates of pneumonia and other respiratory infections have also been identified.6 Research on long-term marijuana smoking has revealed hyperinflation and airway resistance.6 At this time, evidence is inconclusive as to whether smoking marijuana leads to chronic obstructive pulmonary disease.1
Studies have compared the chemical content of tobacco and marijuana and found similar components, including carcinogens, but data regarding concentrations of these chemicals are conflicting.1,4 It is unknown whether vaping (a trending practice in which a device is used to heat the substance prior to inhalation) reduces this risk.4
Unfortunately, data regarding the carcinogenic effects of long-term marijuana smoking are inconclusive; some studies have shown potential protective effects.4-6 Other evidence suggests that the risk is lower in comparison to tobacco smoking.6
Cardiovascular. The effects of marijuana on the cardiovascular system are not fully understood. Known symptoms include tachycardia, peripheral vasodilation, hypotension, and syncope.4 There is some evidence that marijuana use carries an increased risk for angina in patients with previously established heart disease.5 Patients, especially those with known cardiovascular disease, should be educated about these risks.
Reproductive. There are several identified reproductive consequences of marijuana use. Research has found decreased sperm count and gynecomastia in men and impaired ovulation in women.4 Studies on marijuana use in pregnancy consistently reveal low birth weight—this effect is, however, less than that seen with tobacco smoking.5 Other complications or developmental abnormalities may occur, but there is currently a lack of evidence to support further conclusions.
Neurologic. The use of marijuana results in short-term memory loss and other cognitive impairments. There is conflicting evidence as to whether long-term effects remain after cessation.5,6 Because acute intoxication impairs motor skills, it is associated with increased rates of motor vehicle accidents.6 Driving while under the influence of marijuana should be cautioned against.
Psychiatric. Marijuana use is associated with the onset and exacerbation of acute psychosis. However, its role as a causal factor in schizophrenia has not been established.4,10 There is some evidence to suggest that CBD has antipsychotic properties, warranting further research. An amotivational syndrome has also been affiliated with chronic marijuana use; affected individuals exhibit a lack of goal-directed behavior, which may result in work or school dysfunction.10 Several studies have supported an association between marijuana use and risk for depression and anxiety. Due to the extensive risk factors for these disorders, including genetic and environmental, causality has yet to be established.5,6
Conditions for Which Marijuana May Offer Therapeutic Benefits
Glaucoma
Research has demonstrated that marijuana decreases intraocular pressure, and many patients with glaucoma use marijuana. However, it is not recommended as firstline treatment.
The beneficial effects of smoked marijuana are short-lived, requiring patients to dose repeatedly throughout the day. Use is also often discontinued due to adverse effects including dry mouth, dizziness, confusion, and anxiety.8
Topical preparations of THC have not been successfully developed due to the low water solubility of cannabis and minimal penetration through the cornea to the intraocular space.8 Standard treatments available for glaucoma are more effective and without obvious psychoactive effects.6
Nausea
One of the first medical uses of marijuana was for nausea. Due to the presence of cannabinoid receptors that govern food intake, marijuana is known to stimulate appetite, making its use in reducing chemotherapy-associated nausea and vomiting widespread.2,6 Despite the variation in state laws regarding medical use of marijuana, cancer is included as a qualifying illness in every state that allows it.8 Cannabis-based medications may be useful for treating refractory nausea secondary to chemotherapy; however, dronabinol and nabilone are not recommended as firstline therapies.12
HIV/AIDS
Marijuana has been used medicinally worldwide for thousands of years.1,2 In the early 1990s, the discovery of cannabinoid receptors in the central and peripheral nervous systems sparked interest in other potential therapeutic uses of marijuana.3 Since then, marijuana has been used by patients experiencing chemotherapy-induced anorexia, nausea and vomiting, pain, and forms of spasticity. Use among patients with glaucoma and HIV/AIDS has also been widely reported.
In light of this—and of increasing efforts to legalize medical marijuana use across the United States—clinicians should be cognizant of the substance’s negative effects, as well as its potential health benefits. Marijuana has significant systemic effects and associated risks of which patients and health care providers should be aware. Questions remain regarding the safety, efficacy, and long-term impact of use. Use of marijuana for medical purposes requires a careful examination of the risks and benefits.
PHARMACOKINETICS
Marijuana contains approximately 60 cannabinoids, two of which have been specifically identified as primary components. The first, delta-9 tetrahydrocannabinol (THC), is believed to be the most psychoactive.4,5 THC was identified in 1964 and is responsible for the well-documented symptoms of euphoria, appetite stimulation, impaired memory and cognition, and analgesia. The THC content in marijuana products varies widely and has increased over time, complicating research on the long-term effects of marijuana use.5,6
The second compound, cannabidiol (CBD), is a serotonin receptor agonist that lacks psychoactive effects. Potential benefits of CBD include antiemetic and anxiolytic properties, as well as anti-inflammatory effects. There is some evidence to suggest that CBD might also have antipsychotic properties.1,4
AVAILABLE FORMULATIONS
Two synthetic forms of THC have been approved by the FDA since 1985 for medicinal use: nabilone (categorized as a Schedule II drug) and dronabinol (Schedule III). Both are cannabinoid receptor agonists approved for treating chemotherapy-induced nausea and vomiting. They are recommended for use after failure of standard therapies, such as 5-HT3 receptor antagonists, but overall interest has decreased since the advent of agents such as ondansetron.2,4
Nabiximols, an oral buccal spray, is a combination of THC and CBD. It was approved in Canada in 2005 for pain management in cancer patients and for multiple sclerosis–related pain and spasticity. It is not currently available in the US.2,4
Medical marijuana use is currently legal in 25 states and the District of Columbia.7,8 However, state laws regarding the criteria for medical use are vague and varied. For example, not all states require that clinicians review the risks and benefits of marijuana use with patients. Even in those that do, the lack of clinical trials on the safety and efficacy of marijuana makes it difficult for clinicians to properly educate themselves and their patients.9
LIMITATIONS OF RESEARCH
Why the lack of data? In 1937, a federal tax restricted marijuana prescription in the US, and in 1942, marijuana was removed from the US Pharmacopeia.2,4 The Controlled Substances Act in 1970 designated marijuana as a Schedule I drug, a categorization for drugs with high potential for abuse and no currently accepted medical use.9 Following this designation, research on marijuana was nearly halted in the US. Several medical organizations have subsequently called for reclassification to Schedule II in order to facilitate scientific research into marijuana’s medicinal benefits and risks.
Research is also limited by the comorbid use of tobacco and other drugs among study subjects, the variation in cannabinoid levels among products, and differences in route of administration, particularly smoking versus oral or buccal routes.5 Conducting marijuana research in a fashion similar to that for pharmaceuticals would serve not only the medical community but also legislators.
Despite these obstacles, there is some available evidence on medical use of marijuana. A review of the associated risks and potential uses for the substance follows.
RISKS ASSOCIATED WITH MARIJUANA USE
Acute effects
Most symptoms of marijuana intoxication are attributed to the THC component and occur due to the presence of cannabinoid receptors in the central nervous system (see Table 1).5,10 Additional objective signs of acute or chronic intoxication include conjunctival injection, tachycardia, cannabis odor, yellowing of fingertips (from smoking), cough, and food cravings.10
A more recently identified effect of long-term marijuana use is a paradoxical hyperemesis syndrome, in which individuals experience nausea, vomiting, and abdominal pain that is characteristically relieved by hot showers or baths.6,8
Because cannabinoid receptors are nearly absent from the brain stem, marijuana does not depress the centers that control vital autonomic functions such as respiration. It is therefore believed that marijuana use cannot be fatal; corroborating this theory, no deaths from marijuana overdose have been reported.2,11
Withdrawal symptoms
Approximately 10% of regular marijuana users become physically and psychologically dependent on the substance. Once tolerance develops, withdrawal symptoms occur with cessation of use (see Table 2).2,5,10 Symptoms peak within the first week following cessation and may last up to two weeks. Sleep disturbances may occur for more than one month.10
Unlike with other substances of abuse, there are no pharmaceutical agents to treat marijuana withdrawal; rather, treatment is supportive. Marijuana users often resume use following a period of cessation in order to avoid withdrawal.
Chronic effects
Dental/oral. Smoking marijuana is associated with an increased risk for dental caries, periodontal disease, and oral infections.1 Premalignant oral lesions, such as leukoplakia and erythroplakia, have also been reported. Patient education on the risks and need for proper oral hygiene is vital, as are regular dental examinations.
Respiratory. There are several known pulmonary implications of smoking marijuana, and therefore, this route of administration is not recommended for medicinal use. Respiratory effects of marijuana smoke are similar to those seen with tobacco: cough, dyspnea, sputum production, wheezing, bronchitis, pharyngitis, and hoarseness.4 Increased rates of pneumonia and other respiratory infections have also been identified.6 Research on long-term marijuana smoking has revealed hyperinflation and airway resistance.6 At this time, evidence is inconclusive as to whether smoking marijuana leads to chronic obstructive pulmonary disease.1
Studies have compared the chemical content of tobacco and marijuana and found similar components, including carcinogens, but data regarding concentrations of these chemicals are conflicting.1,4 It is unknown whether vaping (a trending practice in which a device is used to heat the substance prior to inhalation) reduces this risk.4
Unfortunately, data regarding the carcinogenic effects of long-term marijuana smoking are inconclusive; some studies have shown potential protective effects.4-6 Other evidence suggests that the risk is lower in comparison to tobacco smoking.6
Cardiovascular. The effects of marijuana on the cardiovascular system are not fully understood. Known symptoms include tachycardia, peripheral vasodilation, hypotension, and syncope.4 There is some evidence that marijuana use carries an increased risk for angina in patients with previously established heart disease.5 Patients, especially those with known cardiovascular disease, should be educated about these risks.
Reproductive. There are several identified reproductive consequences of marijuana use. Research has found decreased sperm count and gynecomastia in men and impaired ovulation in women.4 Studies of marijuana use in pregnancy consistently reveal low birth weight, although this effect is smaller than that seen with tobacco smoking.5 Other complications or developmental abnormalities may occur, but there is currently insufficient evidence to support further conclusions.
Neurologic. The use of marijuana results in short-term memory loss and other cognitive impairments. There is conflicting evidence as to whether long-term effects remain after cessation.5,6 Because acute intoxication impairs motor skills, it is associated with increased rates of motor vehicle accidents.6 Driving while under the influence of marijuana should be cautioned against.
Psychiatric. Marijuana use is associated with the onset and exacerbation of acute psychosis, but its role as a causal factor in schizophrenia has not been established.4,10 There is some evidence to suggest that CBD has antipsychotic properties, warranting further research. An amotivational syndrome has also been associated with chronic marijuana use; affected individuals exhibit a lack of goal-directed behavior, which may result in work or school dysfunction.10 Several studies support an association between marijuana use and risk for depression and anxiety, but because these disorders have extensive risk factors, both genetic and environmental, causality has yet to be established.5,6
CONDITIONS FOR WHICH MARIJUANA MAY OFFER THERAPEUTIC BENEFITS
Glaucoma
Research has demonstrated that marijuana decreases intraocular pressure, and many patients with glaucoma use marijuana. However, it is not recommended as first-line treatment.
The beneficial effects of smoked marijuana are short-lived, requiring patients to dose repeatedly throughout the day. Use is also often discontinued due to adverse effects including dry mouth, dizziness, confusion, and anxiety.8
Topical preparations of THC have not been successfully developed due to the low water solubility of cannabis and minimal penetration through the cornea to the intraocular space.8 Standard treatments available for glaucoma are more effective and without obvious psychoactive effects.6
Nausea
One of the first medical uses of marijuana was for nausea. Due to the presence of cannabinoid receptors that govern food intake, marijuana is known to stimulate appetite, making its use in reducing chemotherapy-associated nausea and vomiting widespread.2,6 Despite the variation in state laws regarding medical use of marijuana, cancer is included as a qualifying illness in every state that allows it.8 Cannabis-based medications may be useful for treating refractory nausea secondary to chemotherapy; however, dronabinol and nabilone are not recommended as first-line therapies.12
HIV/AIDS
Short-term evidence suggests that patients with HIV and/or AIDS benefit from marijuana use through improved appetite, weight gain, lessened pain, and improved quality of life.6,13 Studies with small sample sizes have been conducted using smoked marijuana and dronabinol.8 Long-term studies are needed to compare the use of marijuana with other nutritional and caloric supplements. Overall, reliable research regarding the therapeutic value of marijuana in these patients is inconclusive, and therefore no recommendations for incorporating marijuana into the treatment regimen have been made.8
Multiple sclerosis
For centuries, marijuana has been used for pain relief. The discovery of cannabinoid receptors in high concentrations throughout pain pathways of the brain supports the notion that marijuana plays a role in analgesia. While response to acute pain is poor, there is evidence to suggest that various cannabis formulations relieve chronic neuropathic pain and spasticity, as seen in multiple sclerosis.3,6
Subjective improvements in pain and spasticity were seen with the use of oral cannabis extract, THC, and nabiximols.11 Smoked marijuana is of uncertain efficacy and is not recommended for use in this patient population; it has been shown to potentially worsen cognition.8,11
Seizures
Research into the role of marijuana in decreasing seizure frequency is inconclusive.11 Large studies with human subjects are lacking, and most data thus far have come from animals and case studies.8 Some case reports have suggested a decrease in seizures with marijuana use, but further investigation is needed.6
At this time, it is not appropriate to recommend marijuana for patients with seizure disorders, but the use of cannabidiol might be more promising. Studies are ongoing.14
Alzheimer disease
Alzheimer disease is the most common cause of dementia.8 Despite known adverse effects on memory and cognition with acute use, studies have shown that marijuana might inhibit the development of amyloid beta plaques in Alzheimer disease.4 Further research on dronabinol has not provided sufficient data to support its use, and no studies utilizing smoked marijuana have been performed.8 Therefore, no recommendations exist for the use of marijuana in this patient population, and further research is warranted.
Ongoing research
There are some additional areas of potential therapeutic use of marijuana. Limited evidence has revealed that marijuana has anti-inflammatory properties, leading researchers to examine its use for autoimmune diseases, such as rheumatoid arthritis and Crohn disease. Studies investigating marijuana’s potential ability to inhibit cancer growth and metastasis are ongoing.
Unfortunately, research in patients with Parkinson disease has not shown improvement in dyskinesias.11 Studies on other movement disorders, such as Tourette syndrome and Huntington disease, have not shown symptom improvement with marijuana use. Research on these conditions and others is ongoing.
CONCLUSION
Marijuana use has negative effects on a variety of body systems, but it also may provide therapeutic benefit in certain patient populations. Clinicians and patients are currently hampered by the dearth of reliable information on its safety and efficacy (resulting from federal restrictions and other factors). Comparative studies between marijuana and established standards of care are needed, as is additional research to identify therapeutic effects that could be maximized and ways to minimize or eliminate negative sequelae.
1. Greydanus DE, Hawver EK, Greydanus MM, Merrick J. Cannabis: effective and safe analgesic? J Pain Manage. 2014;7(3):209-233.
2. Bostwick JM. Blurred boundaries: the therapeutics and politics of medical marijuana. Mayo Clin Proc. 2012;87(2):172-186.
3. Karst M, Wippermann S, Ahrens J. Role of cannabinoids in the treatment of pain and (painful) spasticity. Drugs. 2010;70(18):2409-2438.
4. Owen KP, Sutter ME, Albertson TE. Marijuana: respiratory tract effects. Clin Rev Allergy Immunol. 2014;46(1):65-81.
5. Hall W, Degenhardt L. Adverse health effects of non-medical cannabis use. Lancet. 2009;374(9698):1383-1391.
6. Volkow ND, Baler RD, Compton WM, Weiss SRB. Adverse health effects of marijuana use. N Engl J Med. 2014;370(23):2219-2227.
7. National Conference of State Legislatures. State medical marijuana laws (updated 7/20/2016). www.ncsl.org/research/health/state-medical-marijuana-laws.aspx. Accessed September 7, 2016.
8. Belendiuk KA, Baldini LL, Bonn-Miller MO. Narrative review of the safety and efficacy of marijuana for the treatment of commonly state-approved medical and psychiatric disorders. Addict Sci Clin Pract. 2015;10(1):1-10.
9. Hoffmann DE, Weber E. Medical marijuana and the law. N Engl J Med. 2010;362(16):1453-1457.
10. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 5th ed. Arlington, VA: American Psychiatric Publishing; 2013.
11. Koppel B, Brust J, Fife T, et al. Systematic review: efficacy and safety of medical marijuana in selected neurologic disorders. Neurology. 2014;82(17):1556-1563.
12. Smith LA, Azariah F, Lavender VT, Stoner NS, Bettiol S. Cannabinoids for nausea and vomiting in adults with cancer receiving chemotherapy. Cochrane Database Syst Rev. 2015;(11):CD009464.
13. Lutge EE, Gray A, Siegfried N. The medical use of cannabis for reducing morbidity and mortality in patients with HIV/AIDS. Cochrane Database Syst Rev. 2013;(4):CD005175.
14. Gloss D, Vickrey B. Cannabinoids for epilepsy. Cochrane Database Syst Rev. 2012;(6):CD009270.