An Anniversary Postponed and a Diagnosis Delayed: Vietnam and PTSD
Many events, both personal and public, have been deferred during the 15-plus months of the pandemic. Almost everyone has an example of a friend or family member who, had there been no COVID-19, would have been sitting in what President Biden, in his memorial speech for the 500,000 victims of the virus, called the “empty chair” at a holiday gathering.2 For many in our country, part of the agonizing effort to awaken from the long nightmare of the pandemic is to resume the rhythm of national, local, and personal rituals that mark the year with meaning and offer rest and rejuvenation from the daily toil of duty. There are family dinners now cautiously resumed thanks to vaccinations; small celebrations of belated birthdays in family pods; socially distanced outdoor gatherings, suspended in the cold communicable winter, now gingerly possible with the warmth of spring.
As a nation, one of the events we put on hold was the commemoration of the Vietnam War. On March 16, 2021, following guidance from the Centers for Disease Control and Prevention, the US Department of Veterans Affairs (VA) announced it was postponing commemoration events “until further notice.”3 Annually, the VA partners with the US Department of Defense and state and local organizations to recognize “the service and sacrifices made by the nearly 3 million service members who served in Vietnam.”4
In 2012, President Barack Obama signed a proclamation establishing a 13-year commemoration of the 50th anniversary of the Vietnam War.5 Five years later, President Donald Trump signed the Vietnam War Veterans Recognition Act of 2017, designating March 29 annually as National Vietnam War Veterans Day.6 Though many of the events planned for March and April could not take place, the Vietnam War Commemoration (https://www.vietnamwar50th.com) offers information and ideas for honoring and supporting Vietnam War veterans. As Memorial Day approaches in this year of so much loss and heroism, I encourage you to find a way to thank Vietnam veterans, who may have received the opposite of gratitude when they first returned home.
As my small contribution to the commemoration, this editorial will focus on the psychiatric disorder of memory, posttraumatic stress disorder (PTSD), and how the Vietnam War brought definition—albeit delayed—to the agonizing diagnosis that too many veterans experience.
The clinical entity now known as PTSD is ancient. Narrative descriptions of the disorder appear in the Mesopotamian Epic of Gilgamesh and in Deuteronomy 20:1-9.7 American and European military physicians have given various names to the destructive effects of combat on body and mind, from “soldier’s heart” in the American Civil War, to “shell shock” in World War I, to “battle fatigue” during World War II.8 These were all descriptive diagnoses field practitioners used to grasp the psychosomatic decompensation they observed in service members who had been exposed to the horrors of war. The VA was the impetus and agent of the earliest attempts at scientific definition. The American Psychiatric Association further developed this nosology in 1952 with the diagnosis of gross stress reaction in the first edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-I).9
The combat experience shaped the definition: the stressor had to be extreme (the closest civilian comparison would be a natural disaster); the reaction could occur only in a previously normal individual, since in anyone with a premorbid illness it would be attributed to the existing psychiatric condition; and if it did not remit within 6 months, another primary psychiatric diagnosis had to be assigned.
From our vantage point, this set of criteria is obviously woefully inadequate, yet it was at least the beginning of formal recognition of what veterans endured in wartime, and real progress compared with what happened next. When DSM-I was revised in 1968 as DSM-II, the diagnosis of gross stress reaction was eliminated without explanation. The researcher Andreasen and others speculate that its disappearance can be attributed to the diagnosis’s association with war in a country that had been at peace since the end of the Korean War in 1953.10 Yet military historians among my readers will immediately counter that the Vietnam War began 2 years later and that the year of the revision saw major combat operations.
Many veterans living with the psychological and physical suffering of their service in Vietnam, and the organizations that supported them, advocated for the psychiatric profession to formally acknowledge post-Vietnam syndrome.11 Five years after the end of the Vietnam War, the experts who authored DSM-III decided to include a new stress-induced diagnosis.12 Although the manual did not limit the traumatic experience to combat in Vietnam, as some veterans wanted, there is no doubt that the criteria reflect the extensive research validating the illness narratives of thousands of service men and women.
The authors of the DSM-III criteria clearly had war in mind when they stipulated that the stressor had to be outside the range of usual human experience and likely to trigger significant symptoms in almost anyone, and when they specified chronic symptoms as those lasting more than 6 months. Despite the controversy surrounding the diagnosis, Vietnam veterans helped bring PTSD into official psychiatric nomenclature in a more recognizable form, one that began to capture the intensity of their reexperiencing of the trauma, the psychosocial difficulties caused by numbing, and the pervasive interference of hyperarousal and hypervigilance with many areas of life.13
The National Vietnam Veterans Longitudinal Study examined the course of PTSD over 25 years, using the newly formulated diagnostic criteria.14 Results reported to Congress in 2012 showed that 11% of men and 7% of women who had served in the war theater were still struggling with PTSD 40 years after the war. Of those, 37% met criteria for major depressive disorder. Male veterans who still met criteria for PTSD in 1987 were twice as likely to have died as the comparator group of veterans without PTSD. Two-thirds of veterans with PTSD from war zone exposure had discussed behavioral health or substance misuse concerns with a health care provider, and 37% of those were receiving VA care.14
Given these disturbing data, perhaps the best way we can pay homage to aging Vietnam veterans is to support continued research into effective, evidence-based treatments for PTSD, along with funding to train and recruit mental health practitioners for all 3 branches of federal health care who can deliver that care compassionately and competently.
1. The Vietnam War: a new film by Ken Burns and Lynn Novick, to air fall 2017 on PBS. Press release. Updated August 17, 2020. Accessed April 26, 2021. https://www.pbs.org/about/about-pbs/blogs/news/the-vietnam-war-a-new-film-by-ken-burns-and-lynn-novick-to-air-fall-2017-on-pbs
2. The White House Briefing Room. Remarks by President Biden on the more than 500,000 American lives lost to COVID-19. Published February 22, 2021. Accessed April 26, 2021. https://www.whitehouse.gov/briefing-room/speeches-remarks/2021/02/22/remarks-by-president-biden-on-the-more-than-500000-american-lives-lost-to-covid-19/
3. US Department of Veterans Affairs. Vantage Point. VA postpones 50th anniversary of the Vietnam War commemoration events. Published March 16, 2021. Accessed April 26, 2021. https://blogs.va.gov/VAntage/72694/va-postpones-50th-anniversary-vietnam-war-commemoration-events
4. US Department of Defense. Nation observes Vietnam War Veterans Day. Published March 29, 2021. Accessed April 26, 2021. https://www.defense.gov/Explore/Features/Story/Article/2545524/nation-observes-vietnam-war-veterans-day
5. The White House. Commemoration of the 50th anniversary of the Vietnam War. Published May 25, 2012. Accessed April 26, 2021. https://obamawhitehouse.archives.gov/the-press-office/2012/05/25/presidential-proclamation-commemoration-50th-anniversary-vietnam-war
6. Vietnam War Veterans Recognition Act of 2017. Public Law 115-15. Washington, DC: US Government Publishing Office; 2017.
7. Crocq M-A, Crocq L. From shell shock and war neurosis to posttraumatic stress disorder: a history of psychotraumatology. Dialogues Clin Neurosci. 2000;2(1):47-55. doi:10.31887/DCNS.2000.2.1/macrocq
8. US Department of Veterans Affairs. History of PTSD in veterans: Civil War to DSM-5. Accessed April 26, 2021. https://www.ptsd.va.gov/understand/what/history_ptsd.asp
9. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 1st ed. Washington, DC: American Psychiatric Association; 1952.
10. Andreasen NC. Posttraumatic stress disorder: a history and a critique. Ann N Y Acad Sci. 2010;1208:67-71. doi:10.1111/j.1749-6632.2010.05699.x
11. Shatan CF. Post-Vietnam syndrome. The New York Times. Published May 6, 1972. Accessed April 26, 2021. https://www.nytimes.com/1972/05/06/archives/postvietnam-syndrome.html
12. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 3rd ed (DSM-III). Washington, DC: American Psychiatric Association; 1980.
13. Kinzie JD, Goetz RR. A century of controversy surrounding posttraumatic stress-spectrum syndromes: the impact on DSM-III and DSM-IV. J Trauma Stress. 1996;9(2):156-179. doi:10.1007/BF02110653
14. Schlenger WE, Corry NH. Four decades later: Vietnam veterans and PTSD. The VVA Veteran. January/February 2015. Accessed April 25, 2021. http://vvaveteran.org/35-1/35-1_longitudinalstudy.html
Factors Associated with Radiation Toxicity and Survival in Patients with Presumed Early-Stage Non-Small Cell Lung Cancer Receiving Empiric Stereotactic Ablative Radiotherapy
Stereotactic ablative radiotherapy (SABR) has become the standard of care for inoperable early-stage non-small cell lung cancer (NSCLC). Many patients are unable to undergo a biopsy safely because of poor pulmonary function or underlying emphysema and are instead treated empirically with radiotherapy if they meet criteria. In these patients, local control can be achieved with SABR with minimal toxicity.1 Considering that median overall survival (OS) among patients with untreated stage I NSCLC has been reported to be as low as 9 months, early treatment with SABR could increase survival to 29 to 60 months.2-4
The RTOG 0236 trial showed a median OS of 48 months and the randomized phase III CHISEL trial showed a median OS of 60 months; however, these survival data were reported in patients who were able to safely undergo a biopsy and had confirmed NSCLC.4,5 For patients without a diagnosis confirmed by biopsy and who are treated with empiric SABR, patient factors that influence radiation toxicity and OS are not well defined.
It is not clear whether empiric radiation improves survival or whether treatment causes a decline in lung function, considering that underlying chronic lung disease precludes these patients from biopsy. The purpose of this study was to evaluate the factors associated with radiation toxicity from empiric SABR and to evaluate OS in this population without a biopsy-confirmed diagnosis.
Methods
This was a single-center retrospective review of patients treated at the radiation oncology department of the Kansas City Veterans Affairs Medical Center from August 2014 to February 2019. Data were collected on 69 patients with pulmonary nodules, identified by chest computed tomography (CT) and/or positron emission tomography (PET)-CT, that were highly suspicious for primary NSCLC.
These patients were presented at a multidisciplinary meeting that involved pulmonologists, oncologists, radiation oncologists, and thoracic surgeons. Patients were deemed poor candidates for biopsy because severe underlying emphysema put them at high risk for pneumothorax with a percutaneous needle biopsy, or because poor lung function made them unable to tolerate general anesthesia for navigational bronchoscopy or surgical biopsy. These patients were diagnosed with presumed stage I NSCLC using the following criteria: a minimum of 2 sequential CT scans showing an enlarging nodule; absence of metastases on PET-CT; a single fluorodeoxyglucose-avid nodule with a minimum standardized uptake value of 2.5; and no clinical history or physical examination findings consistent with small cell lung cancer or infection.
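For concreteness, the consensus criteria above can be expressed as a single eligibility check. The following is a minimal Python sketch; the dataclass, its field names, and the helper function are hypothetical illustrations, not part of the study's records system.

from dataclasses import dataclass

MIN_SUV = 2.5  # minimum standardized uptake value required on PET-CT

@dataclass
class NoduleWorkup:
    cts_showing_growth: int   # sequential CT scans documenting enlargement
    metastases_on_pet: bool   # any metastases seen on PET-CT
    suv_max: float            # FDG avidity of the single nodule
    sclc_or_infection: bool   # history/examination suggests SCLC or infection

def meets_presumed_stage1_criteria(w: NoduleWorkup) -> bool:
    # All four criteria must hold before referral for empiric SABR.
    return (w.cts_showing_growth >= 2
            and not w.metastases_on_pet
            and w.suv_max >= MIN_SUV
            and not w.sclc_or_infection)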
After a consensus was reached that patients met these criteria, individuals were referred for empiric SABR. Follow-up visits occurred at 1 month, 3 months, and every 6 months thereafter. Variables analyzed included patient demographics; pre- and posttreatment pulmonary function tests (PFTs), when available; pretreatment oxygen use; tumor size and location (peripheral, central, or ultra-central); radiation doses; and grade of toxicity as defined by the US Department of Health and Human Services Common Terminology Criteria for Adverse Events (CTCAE) version 5.0, with dyspnea and cough both counted as pulmonary toxicity and events classified as acute (≤ 90 days) or late (> 90 days) (Table 1).
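The acute vs late distinction is purely a timing split at 90 days; a two-line sketch (hypothetical function name) makes the window explicit:

def toxicity_window(days_after_sabr: int) -> str:
    # CTCAE events are graded clinically; only the timing split is shown here.
    return "acute" if days_after_sabr <= 90 else "late"

assert toxicity_window(30) == "acute" and toxicity_window(120) == "late"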
SPSS versions 24 and 26 were used for statistical analysis. Medians and ranges were obtained for continuous variables with a normal distribution. Kaplan-Meier log-rank testing was used to analyze OS. χ2 and Mann-Whitney U tests were used to analyze associations between independent variables and OS. Analyses of significant findings were repeated with operable patients excluded.
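The study used SPSS, but the same pipeline can be sketched in Python for readers who want to reproduce the approach on their own data; the column names and the tiny synthetic cohort below are illustrative assumptions (requires pandas, scipy, and lifelines):

import pandas as pd
from scipy.stats import chi2_contingency, mannwhitneyu
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Illustrative cohort: months of follow-up, vital status, and two predictors
df = pd.DataFrame({
    "os_months":     [18, 7, 30, 12, 45, 9, 24, 40],
    "died":          [1, 1, 0, 1, 0, 1, 0, 0],   # 1 = deceased at data cutoff
    "operable":      [0, 0, 1, 0, 1, 0, 1, 0],
    "acute_tox_ge2": [1, 1, 0, 0, 0, 1, 0, 0],
})

# Kaplan-Meier estimate of OS for the whole cohort
kmf = KaplanMeierFitter()
kmf.fit(df["os_months"], event_observed=df["died"])
print("median OS:", kmf.median_survival_time_)

# Log-rank test of OS by operability
op, inop = df[df["operable"] == 1], df[df["operable"] == 0]
lr = logrank_test(op["os_months"], inop["os_months"],
                  event_observed_A=op["died"], event_observed_B=inop["died"])
print("log-rank P =", lr.p_value)

# Chi-square test for a categorical predictor vs vital status
chi2, p, dof, _ = chi2_contingency(pd.crosstab(df["acute_tox_ge2"], df["died"]))
print("chi-square P =", p)

# Mann-Whitney U test for a continuous variable by vital status
u, p = mannwhitneyu(df.loc[df["died"] == 1, "os_months"],
                    df.loc[df["died"] == 0, "os_months"],
                    alternative="two-sided")
print("Mann-Whitney P =", p)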
Results
The median follow-up was 18 months (range, 1-54), and the median age was 71 years (range, 59-95) (Table 2). Most patients (97.1%) were male. The majority (79.4%) had an Eastern Cooperative Oncology Group performance status of 0 or 1, indicating fully active or restricted in physically strenuous activity but ambulatory and able to perform light work. All patients were current or former smokers, with an average history of 69.4 pack-years. Only 11.6% of patients had operable disease but received empiric SABR because they declined surgery. Four patients did not have pretreatment spirometry available, and 37 did not have pretreatment diffusing capacity for carbon monoxide (DLCO) data.
Most patients had a pretreatment forced expiratory volume in 1 second (FEV1) and DLCO < 60% of predicted (60% and 84% of patients, respectively). The median tumor diameter was 2 cm. Of the 68.2% of patients who did not have chronic hypoxemic respiratory failure before SABR, 16% developed a new requirement for supplemental oxygen. Sixty-two tumors (89.9%) were peripheral. There were 4 local recurrences (5.7%), 10 regional (different lobe and nodal) failures (14.3%), and 15 distant metastases (21.4%).
Nineteen of 67 patients (26.3%) had acute toxicity, of which 9 cases were acute grade ≥ 2 toxicity; information regarding toxicity was missing for 2 patients. Thirty-two of 65 patients (49.9%) had late toxicity, of which 20 (30.8%) were late grade ≥ 2 toxicity. The main factor associated with development of acute toxicity was pretreatment oxygen dependence (P = .047); this association was not significant when comparing only inoperable patients. Twenty patients (29.9%) developed some type of acute toxicity, with pulmonary toxicity the most common (22.4%) (Table 3). All patients with acute toxicity also developed late toxicity, except for 1 who died before 3 months. Most deaths in our sample were from causes other than the malignancy or treatment, such as sepsis, deconditioning after a fall, and cardiovascular complications. Both acute toxicity and acute toxicity of grade ≥ 2 were significantly associated with late toxicity (P < .001 for both), in operable and inoperable patients alike (P < .001).
Development of any acute toxicity grade ≥ 2 was significantly associated with oxygen dependence at baseline (P = .003), central tumor location (P < .001), and a new oxygen requirement (P = .02). Within the inoperable cohort, only central tumor location remained significant (P = .001). There were no significant differences in outcome based on pulmonary function testing (FEV1, forced vital capacity, or DLCO) or the analyzed PFT subgroups (FEV1 < 1.0 L, FEV1 < 1.5 L, FEV1 < 30%, and FEV1 < 35%).
At the time of data collection, 30 patients were deceased (43.5%). There was a statistically significant association between OS and operability (P = .03; Table 4, Figure 1). Decreased OS was significantly associated with acute toxicity (P = .001) and acute toxicity grade ≥ 2 (P = .005; Figures 2 and 3). For the inoperable patients, both acute toxicity (P < .001) and acute toxicity grade ≥ 2 (P = .026) remained significant.
Discussion
SABR is an effective treatment for inoperable early-stage NSCLC; however, its therapeutic ratio in a frailer population that cannot safely undergo biopsy is not well established. Additionally, the prevalence of benign disease in patients with solitary pulmonary nodules has been reported to be between 9% and 21%.6 Haidar and colleagues examined 55 patients who received empiric SABR and found a median OS of 30.2 months, with an 8.7% risk of local failure, a 13% risk of regional failure, 8.7% acute toxicity, and 13% chronic toxicity.7 Data from Harkenrider and colleagues (n = 34) revealed similar results, with a 2-year OS of 85%, local control of 97.1%, and regional control of 80%; the authors noted no grade ≥ 3 acute toxicities and an 8.8% incidence of grade ≥ 3 late toxicities.1 These findings are concordant with our study results, supporting the safety and efficacy of SABR. Furthermore, a National Cancer Database analysis of observation vs empiric SABR found an OS of 10.1 months and 29 months, respectively, with a hazard ratio of 0.64 (P < .001).3 Additionally, Fischer-Valuck and colleagues (n = 88) compared biopsy-confirmed vs unbiopsied patients treated with SABR and found no difference in 3-year local progression-free survival (93.1% vs 94.1%), regional lymph node and distant metastasis-free survival (92.5% vs 87.4%), or OS (59.9% vs 58.9%).8 With a median OS of ≤ 1 year for untreated stage I NSCLC, these studies support treating patients with empiric SABR.4
Other researchers have sought parameters to identify patients for whom radiation therapy would be too toxic. Guckenberger and colleagues aimed to establish a lower limit of pretreatment pulmonary function below which patients should be excluded; they found only a 7% incidence of grade ≥ 2 adverse effects, and toxicity did not increase with lower pulmonary function.9 They concluded that SABR was safe even for patients with poor pulmonary function. Other institutions have confirmed these findings and have likewise been unable to find a PFT cutoff that would exclude patients from empiric SABR.10,11 An analysis from the RTOG 0236 trial also noted that poor baseline PFTs could not predict pulmonary toxicity or survival, and it demonstrated only minimal decreases in patients’ FEV1 (5.8%) and DLCO (6%) at 2 years.12
Our study sought to identify a cutoff for FEV1 or DLCO that could be associated with increased toxicity. We also evaluated the incidence of acute toxicities grade ≥ 2 by stratifying patients according to FEV1 into subgroups: FEV1 < 1.0 L, FEV1 < 1.5 L, FEV1 < 30% of predicted, and FEV1 < 35% of predicted. However, similar to other studies, we did not find any value significantly associated with increased toxicity that could preclude empiric SABR. One possible reason is that patients with extremely poor lung function, as judged clinically, are offered no treatment at all, so data on these patients are unavailable. In contrast to other studies, our study found that oxygen dependence before treatment was significantly associated with development of acute toxicities. The exact mechanism for this association is unknown and could not be elucidated by baseline PFTs; one possible explanation is that SABR could lead to oxygen free radical generation. In addition, our study indicated that those who developed acute toxicities had worse OS.
Limitations
Our study is limited by the caveats of a retrospective design and by its small sample size, although that size is in line with the reported literature (samples ranging from 33 to 88 patients).1,7,8 Other limitations are that data on pretreatment DLCO were missing for 37 patients and that the smaller inoperable cohort lacked statistical power, which limits the analysis of these factors with regard to anticipated morbidity from SABR. Also, because these data were collected within the US Department of Veterans Affairs, only 3% of our sample was female.
Conclusions
Empiric SABR for patients with presumed early-stage NSCLC appears to be safe and might positively impact OS. Development of any acute toxicity grade ≥ 2 was significantly associated with dependence on supplemental oxygen before treatment, central tumor location, and development of a new oxygen requirement. Poor pretreatment pulmonary function showed no such association: we could not identify an FEV1 or DLCO cutoff that should preclude patients from empiric SABR. Considering the poor survival of untreated early-stage NSCLC, coupled with the efficacy and safety of empiric SABR for those with presumed disease, definitive SABR should be offered selectively within this patient population.
Acknowledgments
Drs. Park, Whiting, and Castillo contributed to data collection. Drs. Park, Govindan, and Castillo contributed to the statistical analysis and wrote the first draft and final manuscript. Drs. Park, Govindan, Huang, and Reddy contributed to the discussion section.
1. Harkenrider MM, Bertke MH, Dunlap NE. Stereotactic body radiation therapy for unbiopsied early-stage lung cancer: a multi-institutional analysis. Am J Clin Oncol. 2014;37(4):337-342. doi:10.1097/COC.0b013e318277d822
2. Raz DJ, Zell JA, Ou SH, Gandara DR, Anton-Culver H, Jablons DM. Natural history of stage I non-small cell lung cancer: implications for early detection. Chest. 2007;132(1):193-199. doi:10.1378/chest.06-3096
3. Nanda RH, Liu Y, Gillespie TW, et al. Stereotactic body radiation therapy versus no treatment for early stage non-small cell lung cancer in medically inoperable elderly patients: a National Cancer Data Base analysis. Cancer. 2015;121(23):4222-4230. doi:10.1002/cncr.29640
4. Ball D, Mai GT, Vinod S, et al. Stereotactic ablative radiotherapy versus standard radiotherapy in stage 1 non-small-cell lung cancer (TROG 09.02 CHISEL): a phase 3, open-label, randomised controlled trial. Lancet Oncol. 2019;20(4):494-503. doi:10.1016/S1470-2045(18)30896-9
5. Timmerman R, Paulus R, Galvin J, et al. Stereotactic body radiation therapy for inoperable early stage lung cancer. JAMA. 2010;303(11):1070-1076. doi:10.1001/jama.2010.261
6. Smith MA, Battafarano RJ, Meyers BF, Zoole JB, Cooper JD, Patterson GA. Prevalence of benign disease in patients undergoing resection for suspected lung cancer. Ann Thorac Surg. 2006;81(5):1824-1828. doi:10.1016/j.athoracsur.2005.11.010
7. Haidar YM, Rahn DA 3rd, Nath S, et al. Comparison of outcomes following stereotactic body radiotherapy for nonsmall cell lung cancer in patients with and without pathological confirmation. Ther Adv Respir Dis. 2014;8(1):3-12. doi:10.1177/1753465813512545
8. Fischer-Valuck BW, Boggs H, Katz S, Durci M, Acharya S, Rosen LR. Comparison of stereotactic body radiation therapy for biopsy-proven versus radiographically diagnosed early-stage non-small cell lung cancer: a single-institution experience. Tumori. 2015;101(3):287-293. doi:10.5301/tj.5000279
9. Guckenberger M, Kestin LL, Hope AJ, et al. Is there a lower limit of pretreatment pulmonary function for safe and effective stereotactic body radiotherapy for early-stage non-small cell lung cancer? J Thorac Oncol. 2012;7:542-551. doi:10.1097/JTO.0b013e31824165d7
10. Wang J, Cao J, Yuan S, et al. Poor baseline pulmonary function may not increase the risk of radiation-induced lung toxicity. Int J Radiat Oncol Biol Phys. 2013;85(3):798-804. doi:10.1016/j.ijrobp.2012.06.040
11. Henderson M, McGarry R, Yiannoutsos C, et al. Baseline pulmonary function as a predictor for survival and decline in pulmonary function over time in patients undergoing stereotactic body radiotherapy for the treatment of stage I non-small-cell lung cancer. Int J Radiat Oncol Biol Phys. 2008;72(2):404-409. doi:10.1016/j.ijrobp.2007.12.051
12. Stanic S, Paulus R, Timmerman RD, et al. No clinically significant changes in pulmonary function following stereotactic body radiation therapy for early-stage peripheral non-small cell lung cancer: an analysis of RTOG 0236. Int J Radiat Oncol Biol Phys. 2014;88(5):1092-1099. doi:10.1016/j.ijrobp.2013.12.050
Stereotactic ablative radiotherapy (SABR) has become the standard of care for inoperable early-stage non-small cell lung cancer (NSCLC). Many patients are unable to undergo a biopsy safely because of poor pulmonary function or underlying emphysema and are then empirically treated with radiotherapy if they meet criteria. In these patients, local control can be achieved with SABR with minimal toxicity.1 Considering that median overall survival (OS) among patients with untreated stage I NSCLC has been reported to be as low as 9 months, early treatment with SABR could lead to increased survival of 29 to 60 months.2-4
The RTOG 0236 trial showed a median OS of 48 months and the randomized phase III CHISEL trial showed a median OS of 60 months; however, these survival data were reported in patients who were able to safely undergo a biopsy and had confirmed NSCLC.4,5 For patients without a diagnosis confirmed by biopsy and who are treated with empiric SABR, patient factors that influence radiation toxicity and OS are not well defined.
It is not clear if empiric radiation benefits survival or if treatment causes decline in lung function, considering that underlying chronic lung disease precludes these patients from biopsy. The purpose of this study was to evaluate the factors associated with radiation toxicity with empiric SABR and to evaluate OS in this population without a biopsy-confirmed diagnosis.
Methods
This was a single center retrospective review of patients treated at the radiation oncology department at the Kansas City Veterans Affairs Medical Center from August 2014 to February 2019. Data were collected on 69 patients with pulmonary nodules identified by chest computed tomography (CT) and/or positron emission tomography (PET)-CT that were highly suspicious for primary NSCLC.
These patients were presented at a multidisciplinary meeting that involved pulmonologists, oncologists, radiation oncologists, and thoracic surgeons. Patients were deemed to be poor candidates for biopsy because of severe underlying emphysema, which would put them at high risk for pneumothorax with a percutaneous needle biopsy, or were unable to tolerate general anesthesia for navigational bronchoscopy or surgical biopsy because of poor lung function. These patients were diagnosed with presumed stage I NSCLC using the criteria: minimum of 2 sequential CT scans with enlarging nodule; absence of metastases on PET-CT; the single nodule had to be fluorodeoxyglucose avid with a minimum standardized uptake value of 2.5, and absence of clinical history or physical examination consistent with small cell lung cancer or infection.
After a consensus was reached that patients met these criteria, individuals were referred for empiric SABR. Follow-up visits were at 1 month, 3 months, and every 6 months. Variables analyzed included: patient demographics, pre- and posttreatment pulmonary function tests (PFT) when available, pre-treatment oxygen use, tumor size and location (peripheral, central, or ultra-central), radiation doses, and grade of toxicity as defined by Human and Health Services Common Terminology Criteria for Adverse Events version 5.0 (dyspnea and cough both counted as pulmonary toxicity): acute ≤ 90 days and late > 90 days (Table 1).
SPSS versions 24 and 26 were used for statistical analysis. Median and range were obtained for continuous variables with a normal distribution. Kaplan-Meier log-rank testing was used to analyze OS. χ2 and Mann-Whitney U tests were used to analyze association between independent variables and OS. Analysis of significant findings were repeated with operable patients excluded for further analysis.
Results
The median follow-up was 18 months (range, 1 to 54). The median age was 71 years (range, 59 to 95) (Table 2). Most patients (97.1%) were male. The majority of patients (79.4%) had a 0 or 1 for the Eastern Cooperative Oncology group performance status, indicating fully active or restricted in physically strenuous activity but ambulatory and able to perform light work. All patients were either current or former smokers with an average pack-year history of 69.4. Only 11.6% of patients had operable disease, but received empiric SABR because they declined surgery. Four patients did not have pretreatment spirometry available and 37 did not have pretreatment diffusing capacity for carbon monoxide (DLCO) data.
Most patients had a pretreatment forced expiratory volume during the first seconds (FEV1) value and DLCO < 60% of predicted (60% and 84% of the patients, respectively). The median tumor diameter was 2 cm. Of the 68.2% of patients who did not have chronic hypoxemic respiratory failure before SABR, 16% developed a new requirement for supplemental oxygen. Sixty-two tumors (89.9%) were peripheral. There were 4 local recurrences (5.7%), 10 regional (different lobe and nodal) failures (14.3%), and 15 distant metastases (21.4%).
Nineteen of 67 patients (26.3%) had acute toxicity of which 9 had acute grade ≥ 2 toxicity; information regarding toxicity was missing on 2 patients. Thirty-two of 65 (49.9%) patients had late toxicity of which 20 (30.8%) had late grade ≥ 2 toxicity. The main factor associated with development of acute toxicity was pretreatment oxygendependence (P = .047). This was not significant when comparing only inoperable patients. Twenty patients (29.9%) developed some type of acute toxicity; pulmonary toxicity was most common (22.4%) (Table 3). All patients with acute toxicity also developed late toxicity except for 1 who died before 3 months. Predominantly, the deaths in our sample were from causes other than the malignancy or treatment, such as sepsis, deconditioning after a fall, cardiovascular complications, etc. Acute toxicity of grade ≥ 2 was significantly associated with late toxicity (P < .001 for both) in both operable and inoperable patients (P < .001).
Development of any acute toxicity grade ≥ 2 was significantly associated with oxygendependence at baseline (P = .003), central location (P < .001), and new oxygen requirement (P = .02). Only central tumor location was found to be significant (P = .001) within the inoperable cohort. There were no significant differences in outcome based on pulmonary function testing (FEV1, forced vital capacity, or DLCO) or the analyzed PFT subgroups (FEV1 < 1.0 L, FEV1 < 1.5 L, FEV1 < 30%, and FEV1 < 35%).
At the time of data collection, 30 patients were deceased (43.5%). There was a statistically significant association between OS and operability (P = .03; Table 4, Figure 1). Decreased OS was significantly associated with acute toxicity (P = .001) and acute toxicity grade ≥ 2 (P = .005; Figures 2 and 3). For the inoperable patients, both acute toxicity (P < .001) and acute toxicity grade ≥ 2 (P = .026) remained significant.
Discussion
SABR is an effective treatment for inoperable early-stage NSCLC, however its therapeutic ratio in a more frail population who cannot withstand biopsy is not well established. Additionally, the prevalence of benign disease in patients with solitary pulmonary nodules can be between 9% and 21%.6 Haidar and colleagues looked at 55 patients who received empiric SABR and found a median OS of 30.2 months with an 8.7% risk of local failure, 13% risk of regional failure with 8.7% acute toxicity, and 13% chronic toxicity.7 Data from Harkenrider and colleagues (n = 34) revealed similar results with a 2-year OS of 85%, local control of 97.1%, and regional control of 80%. The authors noted no grade ≥ 3 acute toxicities and an incidence of grade ≥ 3 late toxicities of 8.8%.1 These findings are concordant with our study results, confirming the safety and efficacy of SABR. Furthermore, a National Cancer Database analysis of observation vs empiric SABR found an OS of 10.1 months and 29 months respectively, with a hazard ratio of 0.64 (P < .001).3 Additionally, Fischer-Valuck and colleagues (n = 88) compared biopsy confirmed vs unbiopsied patients treated with SABR and found no difference in the 3-year local progression-free survival (93.1% vs 94.1%), regional lymph node metastasis and distant metastases free survival (92.5% vs 87.4%), or OS (59.9% vs 58.9%).8 With a median OS of ≤ 1 year for untreated stage I NSCLC,these studies support treating patients with empiric SABR.4
Other researchers have sought parameters to identify patients for whom radiation therapy would be too toxic. Guckenberger and colleagues aimed to establish a lower limit of pretreatment PFT to exclude patients and found only a 7% incidence of grade ≥ 2 adverse effects and toxicity did not increase with lower pulmonary function.9 They concluded that SABR was safe even for patients with poor pulmonary function. Other institutions have confirmed such findings and have been unable to find a cut-off PFT to exclude patients from empiric SABR.10,11 An analysis from the RTOG 0236 trial also noted that poor baseline PFT could not predict pulmonary toxicity or survival. Additionally, the study demonstrated only minimal decreases in patients’ FEV1 (5.8%) and DLCO (6%) at 2 years.12
Our study sought to identify a cut-off on FEV1 or DLCO that could be associated with increased toxicity. We also evaluated the incidence of acute toxicities grade ≥ 2 by stratifying patients according to FEV1 into subgroups: FEV1 < 1.0 L, FEV1 < 1.5 L, FEV1 < 30% of predicted and FEV1 < 35% of predicted. However, similar to other studies, we did not find any value that was significantly associated with increased toxicity that could preclude empiric SABR. One possible reason is that no treatment is offered for patients with extremely poor lung function as deemed by clinical judgement, therefore data on these patients is unavailable. In contradiction to other studies, our study found that oxygen dependence before treatment was significantly associated with development of acute toxicities. The exact mechanism for this association is unknown and could not be elucidated by baseline PFT. One possible explanation is that SABR could lead to oxygen free radical generation. In addition, our study indicated that those who developed acute toxicities had worse OS.
Limitations
Our study is limited by caveats of a retrospective study and its small sample size, but is in line with the reported literature (ranging from 33 to 88 patients).1,7,8 Another limitation is that data on pretreatment DLCO was missing in 37 patients and the lack of statistical robustness in terms of the smaller inoperable cohort, which limits the analyses of these factors in regards to anticipated morbidity from SABR. Also, given this is data collected from the US Department of Veterans Affairs, only 3% of our sample was female.
Conclusions
Empiric SABR for patients with presumed early-stage NSCLC appears to be safe and might positively impact OS. Development of any acute toxicity grade ≥ 2 was significantly associated with dependence on supplemental oxygen before treatment, central tumor location, and development of new oxygen requirement. No association was found in patients with poor pulmonary function before treatment because we could not find a FEV1 or DLCO cutoff that could preclude patients from empiric SABR. Considering the poor survival of untreated early-stage NSCLC, coupled with the efficacy and safety of empiric SABR for those with presumed disease, definitive SABR should be offered selectively within this patient population.
Acknowledgments
Drs. Park, Whiting and Castillo contributed to data collection. Drs. Park, Govindan and Castillo contributed to the statistical analysis and writing the first draft and final manuscript. Drs. Park, Govindan, Huang, and Reddy contributed to the discussion section.
Stereotactic ablative radiotherapy (SABR) has become the standard of care for inoperable early-stage non-small cell lung cancer (NSCLC). Many patients are unable to undergo a biopsy safely because of poor pulmonary function or underlying emphysema and are then empirically treated with radiotherapy if they meet criteria. In these patients, local control can be achieved with SABR with minimal toxicity.1 Considering that median overall survival (OS) among patients with untreated stage I NSCLC has been reported to be as low as 9 months, early treatment with SABR could lead to increased survival of 29 to 60 months.2-4
The RTOG 0236 trial showed a median OS of 48 months and the randomized phase III CHISEL trial showed a median OS of 60 months; however, these survival data were reported in patients who were able to safely undergo a biopsy and had confirmed NSCLC.4,5 For patients without a diagnosis confirmed by biopsy and who are treated with empiric SABR, patient factors that influence radiation toxicity and OS are not well defined.
It is not clear if empiric radiation benefits survival or if treatment causes decline in lung function, considering that underlying chronic lung disease precludes these patients from biopsy. The purpose of this study was to evaluate the factors associated with radiation toxicity with empiric SABR and to evaluate OS in this population without a biopsy-confirmed diagnosis.
Methods
This was a single center retrospective review of patients treated at the radiation oncology department at the Kansas City Veterans Affairs Medical Center from August 2014 to February 2019. Data were collected on 69 patients with pulmonary nodules identified by chest computed tomography (CT) and/or positron emission tomography (PET)-CT that were highly suspicious for primary NSCLC.
These patients were presented at a multidisciplinary meeting that involved pulmonologists, oncologists, radiation oncologists, and thoracic surgeons. Patients were deemed to be poor candidates for biopsy because of severe underlying emphysema, which would put them at high risk for pneumothorax with a percutaneous needle biopsy, or were unable to tolerate general anesthesia for navigational bronchoscopy or surgical biopsy because of poor lung function. These patients were diagnosed with presumed stage I NSCLC using the following criteria: a minimum of 2 sequential CT scans showing an enlarging nodule; absence of metastases on PET-CT; a single fluorodeoxyglucose-avid nodule with a minimum standardized uptake value of 2.5; and absence of clinical history or physical examination findings consistent with small cell lung cancer or infection.
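To make the selection logic concrete, here is a minimal sketch that encodes the four stated criteria as a screening function. The record fields and their names are illustrative assumptions for this sketch, not part of the study's data model.

```python
from dataclasses import dataclass

@dataclass
class NoduleWorkup:
    """Hypothetical summary of a patient's imaging workup (fields are illustrative)."""
    sequential_cts_showing_growth: int  # sequential CT scans showing an enlarging nodule
    metastases_on_pet_ct: bool
    single_nodule: bool
    max_suv: float                      # maximum standardized uptake value on PET-CT
    findings_suggest_sclc_or_infection: bool

def meets_presumed_stage_i_criteria(w: NoduleWorkup) -> bool:
    """Return True only if all four criteria described above are satisfied."""
    return (w.sequential_cts_showing_growth >= 2
            and not w.metastases_on_pet_ct
            and w.single_nodule
            and w.max_suv >= 2.5
            and not w.findings_suggest_sclc_or_infection)
```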
After a consensus was reached that patients met these criteria, individuals were referred for empiric SABR. Follow-up visits were at 1 month, 3 months, and every 6 months. Variables analyzed included: patient demographics, pre- and posttreatment pulmonary function tests (PFT) when available, pretreatment oxygen use, tumor size and location (peripheral, central, or ultra-central), radiation doses, and grade of toxicity as defined by the US Department of Health and Human Services Common Terminology Criteria for Adverse Events version 5.0 (dyspnea and cough both counted as pulmonary toxicity), classified as acute (≤ 90 days) or late (> 90 days) (Table 1).
SPSS versions 24 and 26 were used for statistical analysis. Median and range were obtained for continuous variables with a normal distribution. Kaplan-Meier log-rank testing was used to analyze OS. χ2 and Mann-Whitney U tests were used to analyze associations between independent variables and OS. Analyses of significant findings were repeated with operable patients excluded.
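For readers who want to reproduce this style of analysis outside SPSS, a rough Python equivalent is sketched below using the lifelines and SciPy libraries. The input arrays are fabricated placeholders; this illustrates the named tests, not the authors' code or data.

```python
import numpy as np
from lifelines.statistics import logrank_test
from scipy.stats import chi2_contingency, mannwhitneyu

# Illustrative survival data for two groups: months to death or censoring,
# plus an event indicator (1 = died). Real values would come from chart review.
months_a = np.array([12, 30, 44, 18, 54])
died_a = np.array([1, 0, 0, 1, 0])
months_b = np.array([6, 9, 20, 15, 11])
died_b = np.array([1, 1, 0, 1, 1])

# Kaplan-Meier log-rank comparison of OS between the two groups
lr = logrank_test(months_a, months_b, event_observed_A=died_a, event_observed_B=died_b)
print(f"log-rank P = {lr.p_value:.3f}")

# Chi-square test for a categorical association (e.g., toxicity vs oxygen use)
table = np.array([[5, 14], [15, 33]])  # rows: toxicity yes/no; cols: O2 yes/no
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square P = {p:.3f}")

# Mann-Whitney U test for a continuous variable between outcome groups
u, p = mannwhitneyu([1.1, 1.4, 0.9], [1.8, 2.0, 1.6])
print(f"Mann-Whitney U P = {p:.3f}")
```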
Results
The median follow-up was 18 months (range, 1 to 54). The median age was 71 years (range, 59 to 95) (Table 2). Most patients (97.1%) were male. The majority of patients (79.4%) had a 0 or 1 Eastern Cooperative Oncology Group performance status, indicating fully active or restricted in physically strenuous activity but ambulatory and able to perform light work. All patients were either current or former smokers with an average pack-year history of 69.4. Only 11.6% of patients had operable disease but received empiric SABR because they declined surgery. Four patients did not have pretreatment spirometry available, and 37 did not have pretreatment diffusing capacity for carbon monoxide (DLCO) data.
Most patients had a pretreatment forced expiratory volume in the first second (FEV1) and DLCO < 60% of predicted (60% and 84% of the patients, respectively). The median tumor diameter was 2 cm. Of the 68.2% of patients who did not have chronic hypoxemic respiratory failure before SABR, 16% developed a new requirement for supplemental oxygen. Sixty-two tumors (89.9%) were peripheral. There were 4 local recurrences (5.7%), 10 regional (different lobe and nodal) failures (14.3%), and 15 distant metastases (21.4%).
Nineteen of 67 patients (26.3%) had acute toxicity, of whom 9 had acute grade ≥ 2 toxicity; information regarding toxicity was missing for 2 patients. Thirty-two of 65 (49.9%) patients had late toxicity, of whom 20 (30.8%) had late grade ≥ 2 toxicity. The main factor associated with development of acute toxicity was pretreatment oxygen dependence (P = .047); this was not significant when comparing only inoperable patients. Twenty patients (29.9%) developed some type of acute toxicity; pulmonary toxicity was most common (22.4%) (Table 3). All patients with acute toxicity also developed late toxicity except for 1 who died before 3 months. Most deaths in our sample were from causes other than the malignancy or its treatment, such as sepsis, deconditioning after a fall, and cardiovascular complications. Acute toxicity of grade ≥ 2 was significantly associated with late toxicity in both operable and inoperable patients (P < .001 for both).
Development of any acute toxicity grade ≥ 2 was significantly associated with oxygen dependence at baseline (P = .003), central location (P < .001), and new oxygen requirement (P = .02). Only central tumor location was found to be significant (P = .001) within the inoperable cohort. There were no significant differences in outcome based on pulmonary function testing (FEV1, forced vital capacity, or DLCO) or the analyzed PFT subgroups (FEV1 < 1.0 L, FEV1 < 1.5 L, FEV1 < 30%, and FEV1 < 35%).
At the time of data collection, 30 patients were deceased (43.5%). There was a statistically significant association between OS and operability (P = .03; Table 4, Figure 1). Decreased OS was significantly associated with acute toxicity (P = .001) and acute toxicity grade ≥ 2 (P = .005; Figures 2 and 3). For the inoperable patients, both acute toxicity (P < .001) and acute toxicity grade ≥ 2 (P = .026) remained significant.
Discussion
SABR is an effective treatment for inoperable early-stage NSCLC; however, its therapeutic ratio in a frailer population who cannot withstand biopsy is not well established. Additionally, the prevalence of benign disease in patients with solitary pulmonary nodules can be between 9% and 21%.6 Haidar and colleagues looked at 55 patients who received empiric SABR and found a median OS of 30.2 months, an 8.7% risk of local failure, a 13% risk of regional failure, 8.7% acute toxicity, and 13% chronic toxicity.7 Data from Harkenrider and colleagues (n = 34) revealed similar results, with a 2-year OS of 85%, local control of 97.1%, and regional control of 80%. The authors noted no grade ≥ 3 acute toxicities and an incidence of grade ≥ 3 late toxicities of 8.8%.1 These findings are concordant with our study results, confirming the safety and efficacy of SABR. Furthermore, a National Cancer Database analysis of observation vs empiric SABR found an OS of 10.1 and 29 months, respectively, with a hazard ratio of 0.64 (P < .001).3 Additionally, Fischer-Valuck and colleagues (n = 88) compared biopsy-confirmed vs unbiopsied patients treated with SABR and found no difference in 3-year local progression-free survival (93.1% vs 94.1%), regional lymph node and distant metastasis-free survival (92.5% vs 87.4%), or OS (59.9% vs 58.9%).8 With a median OS of ≤ 1 year for untreated stage I NSCLC, these studies support treating patients with empiric SABR.4
Other researchers have sought parameters to identify patients for whom radiation therapy would be too toxic. Guckenberger and colleagues aimed to establish a lower limit of pretreatment PFT to exclude patients; they found only a 7% incidence of grade ≥ 2 adverse effects, and toxicity did not increase with lower pulmonary function.9 They concluded that SABR was safe even for patients with poor pulmonary function. Other institutions have confirmed such findings and have been unable to find a PFT cutoff to exclude patients from empiric SABR.10,11 An analysis from the RTOG 0236 trial also noted that poor baseline PFT could not predict pulmonary toxicity or survival. Additionally, that study demonstrated only minimal decreases in patients’ FEV1 (5.8%) and DLCO (6%) at 2 years.12
Our study sought to identify a cutoff for FEV1 or DLCO that could be associated with increased toxicity. We also evaluated the incidence of acute toxicities grade ≥ 2 by stratifying patients according to FEV1 into subgroups: FEV1 < 1.0 L, FEV1 < 1.5 L, FEV1 < 30% of predicted, and FEV1 < 35% of predicted. However, similar to other studies, we did not find any value that was significantly associated with increased toxicity and that could preclude empiric SABR. One possible reason is that patients with extremely poor lung function, as deemed by clinical judgment, are not offered treatment, so data on these patients are unavailable. In contrast to other studies, our study found that oxygen dependence before treatment was significantly associated with the development of acute toxicities. The exact mechanism for this association is unknown and could not be elucidated by baseline PFT. One possible explanation is that SABR could lead to oxygen free radical generation. In addition, our study indicated that those who developed acute toxicities had worse OS.
Limitations
Our study is limited by the caveats of a retrospective design and by its small sample size, although the latter is in line with the reported literature (33 to 88 patients).1,7,8 Another limitation is that pretreatment DLCO data were missing for 37 patients, and the smaller inoperable cohort lacked statistical robustness, which limits the analyses of these factors with regard to anticipated morbidity from SABR. Also, because these data were collected from the US Department of Veterans Affairs, only 3% of our sample was female.
Conclusions
Empiric SABR for patients with presumed early-stage NSCLC appears to be safe and might positively impact OS. Development of any acute toxicity grade ≥ 2 was significantly associated with dependence on supplemental oxygen before treatment, central tumor location, and development of a new oxygen requirement. No association was found with poor pulmonary function before treatment, as we could not identify an FEV1 or DLCO cutoff that would preclude patients from empiric SABR. Considering the poor survival of untreated early-stage NSCLC, coupled with the efficacy and safety of empiric SABR for those with presumed disease, definitive SABR should be offered selectively within this patient population.
Acknowledgments
Drs. Park, Whiting, and Castillo contributed to data collection. Drs. Park, Govindan, and Castillo contributed to the statistical analysis and to writing the first draft and final manuscript. Drs. Park, Govindan, Huang, and Reddy contributed to the discussion section.
1. Harkenrider MM, Bertke MH, Dunlap NE. Stereotactic body radiation therapy for unbiopsied early-stage lung cancer: a multi-institutional analysis. Am J Clin Oncol. 2014;37(4):337-342. doi:10.1097/COC.0b013e318277d822
2. Raz DJ, Zell JA, Ou SH, Gandara DR, Anton-Culver H, Jablons DM. Natural history of stage I non-small cell lung cancer: implications for early detection. Chest. 2007;132(1):193-199. doi:10.1378/chest.06-3096
3. Nanda RH, Liu Y, Gillespie TW, et al. Stereotactic body radiation therapy versus no treatment for early stage non-small cell lung cancer in medically inoperable elderly patients: a National Cancer Data Base analysis. Cancer. 2015;121(23):4222-4230. doi:10.1002/cncr.29640
4. Ball D, Mai GT, Vinod S, et al. Stereotactic ablative radiotherapy versus standard radiotherapy in stage 1 non-small-cell lung cancer (TROG 09.02 CHISEL): a phase 3, open-label, randomised controlled trial. Lancet Oncol. 2019;20(4):494-503. doi:10.1016/S1470-2045(18)30896-9
5. Timmerman R, Paulus R, Galvin J, et al. Stereotactic body radiation therapy for inoperable early stage lung cancer. JAMA. 2010;303(11):1070-1076. doi:10.1001/jama.2010.261
6. Smith MA, Battafarano RJ, Meyers BF, Zoole JB, Cooper JD, Patterson GA. Prevalence of benign disease in patients undergoing resection for suspected lung cancer. Ann Thorac Surg. 2006;81(5):1824-1828. doi:10.1016/j.athoracsur.2005.11.010
7. Haidar YM, Rahn DA 3rd, Nath S, et al. Comparison of outcomes following stereotactic body radiotherapy for nonsmall cell lung cancer in patients with and without pathological confirmation. Ther Adv Respir Dis. 2014;8(1):3-12. doi:10.1177/1753465813512545
8. Fischer-Valuck BW, Boggs H, Katz S, Durci M, Acharya S, Rosen LR. Comparison of stereotactic body radiation therapy for biopsy-proven versus radiographically diagnosed early-stage non-small lung cancer: a single-institution experience. Tumori. 2015;101(3):287-293. doi:10.5301/tj.5000279
9. Guckenberger M, Kestin LL, Hope AJ, et al. Is there a lower limit of pretreatment pulmonary function for safe and effective stereotactic body radiotherapy for early-stage non-small cell lung cancer? J Thorac Oncol. 2012;7:542-551. doi:10.1097/JTO.0b013e31824165d7
10. Wang J, Cao J, Yuan S, et al. Poor baseline pulmonary function may not increase the risk of radiation-induced lung toxicity. Int J Radiat Oncol Biol Phys. 2013;85(3):798-804. doi:10.1016/j.ijrobp.2012.06.040
11. Henderson M, McGarry R, Yiannoutsos C, et al. Baseline pulmonary function as a predictor for survival and decline in pulmonary function over time in patients undergoing stereotactic body radiotherapy for the treatment of stage I non-small-cell lung cancer. Int J Radiat Oncol Biol Phys. 2008;72(2):404-409. doi:10.1016/j.ijrobp.2007.12.051
12. Stanic S, Paulus R, Timmerman RD, et al. No clinically significant changes in pulmonary function following stereotactic body radiation therapy for early- stage peripheral non-small cell lung cancer: an analysis of RTOG 0236. Int J Radiat Oncol Biol Phys. 2014;88(5):1092-1099. doi:10.1016/j.ijrobp.2013.12.050
Is empathy the limit to sociopathy?
Society is having a moment of reflection about the role of law enforcement and correctional facilities in addressing societal problems. During this moment, psychiatry is being asked by courts to arbitrate who qualifies and ultimately deserves certain judgments.
In particular, we are asked to use violence risk assessment tools and measures of antisocial disorders to assess how dangerous an individual may be. As such, we are tasked with pointing out the negative factors of defendants. Alternatively, psychiatry is also asked to explain, using biopsychosocial determinants, what led an individual to act in a deviant manner. As such, we are tasked with pointing out mitigating factors of defendants. In this article, we attempt to look at limitations in both paradigms to encourage a more prudent forensic approach.
Negative factors
The criteria in the Diagnostic and Statistical Manual of Mental Disorders (DSM) are not composed of rigid rules with validity markers to measure their veracity but leave room for clinical judgment, variance across individuals, and future research and treatment needs.
There are some benefits to having room for clinical judgment, but it can also lead to overdiagnosis.1 This problem is particularly reflected in the diagnosis of antisocial personality disorder (ASPD), the criteria for which include failure to conform to social norms, deceitfulness, impulsivity, irritability, recklessness, irresponsibility, and lack of remorse. Each of these criteria is ripe for subjective interpretation by an inexperienced or biased reviewer.
For example, it is common in our practice to see only two discrete events interpreted as a “pattern of behavior.” Such events could include two lapses in judgment to demonstrate a pattern of behavior meeting the criteria for ASPD. Using this logic, however, most Americans would meet those criteria. According to the National Survey of Drug Use and Health, the majority of Americans have tried illicit substances.2 We presume that many have tried illicit substances at least two times in their lives – in theory creating a pattern – and that subsequently they omitted that information on standard employment application forms. In doing so, they could easily be interpreted in court to have demonstrated failure to follow rules, deceitfulness in wrongfully filing an employment application, impulsivity in deciding to use drugs, recklessness in choosing to use drugs, irresponsibility for using drugs, and a lack of remorse by not acknowledging the use on an employment application, thereby meeting criteria for antisocial personality disorder.
The well-respected Hare Psychopathy Checklist contains similar opportunities for subjective interpretation by a biased evaluator. Conning, glibness, lack of guilt, lack of realistic goals, and irresponsibility are easily leveraged to pathologize an individual into an exaggerated sense of menace. Journalist Jon Ronson famously challenged those concepts in his book, “The Psychopath Test: A Journey Through the Madness Industry,” a New York Times bestseller. It is common in our practice to see evaluators list dozens of scales allegedly proving someone’s dangerousness, without recognizing the recurrent subjectivity involved in all those assessments.
Forensic evaluators arguing for conviction often rely on violence risk assessments to establish defendants’ propensity for future violence and to predict recidivism. There are numerous violence risk assessment tools, including the Violence Risk Scale,3 the HCR-20 version 3 (HCR-20 v3),4 and Correctional Offender Management Profiling for Alternative Sanctions (COMPAS). Yet, despite their perceived rigor and reliability as established assessments, their usefulness continues to be challenged.5 In 2018, Julia Dressel and Hany Farid, PhD, showed that people with little to no criminal justice expertise, given only the sex, age, and previous criminal history of defendants, were no less accurate than COMPAS.6 Those findings are concerning and should give us pause when we are tempted to rely on seemingly objective measures that can lead us astray. Not only can such reliance result in injudicious court decisions, but it can saddle defendants with a documented report of their perceived elevated risk for violence.
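To illustrate how simple the comparator in that study was, the sketch below fits a logistic regression on only two features, the kind of minimal linear predictor Dressel and Farid reported to be no less accurate than COMPAS. The data are fabricated placeholders; this is not their code or dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder data: [age, number of prior convictions] per defendant,
# with 1 = rearrested within two years. Entirely fabricated for illustration.
X = np.array([[23, 4], [45, 0], [31, 2], [52, 1], [19, 6], [38, 0]])
y = np.array([1, 0, 1, 0, 1, 0])

# A two-feature linear classifier of the sort found to match the
# accuracy of the commercial COMPAS tool.
model = LogisticRegression().fit(X, y)
print(model.predict_proba([[30, 3]])[0, 1])  # predicted risk for a new case
```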
In the forensic setting, ASPD is often treated like a lifelong diagnosis. This is in part because personality disorders have been defined since the DSM-III as “enduring patterns ... [that] continue throughout most of adult life.” Even if a defendant who is diagnosed with ASPD no longer behaves antisocially, a historical ASPD diagnosis is difficult to escape. Historical behavior is part of the diagnosis, and there are no guidelines to determine at what point a person can be rid of it or what redeeming qualities or circumstances make a prior diagnosis inappropriate.
Yet some evidence suggests that ASPD is one of the least reliable psychiatric diagnoses and that agreement between providers on the diagnosis is “questionable.”7 Robert D. Hare, PhD, himself has been described as believing that “an awful lot of people misuse his checklist.”8 And a recent study found no “evidence for the claim that [Hare Psychopathy Checklist] psychopaths are untreatable ... on the contrary, there was replicated evidence of positive treatment outcomes.”9 Unfortunately, legal structures often help enshrine an erroneous ASPD diagnosis by imposing harsher sentences on those diagnosed. Instead, we should recognize that ASPD can also be the culmination of biological as well as changing social and environmental circumstances.
Mitigating factors
On the other side, the defense expert also faces significant challenges, though the tools are different. In contrast to the prosecuting expert, who loads an arsenal of subjective assessment tools, the defense expert will point to childhood trauma and mental illness as extenuating explanations for a crime. Having suffered abuse as a child is advanced to justify someone’s subsequent violence. This problem is reflected in the diagnosis of posttraumatic stress disorder (PTSD). An unscrupulous expert may simply allow an evaluee to endorse symptoms without clinical correlates or rigorous validation to advance this narrative.
For example, psychiatrists commonly apply DSM criterion A for PTSD, “directly experiencing the traumatic event(s),” to lesser slights in life. Some experts suggest that a medical diagnosis, even if not life-threatening but perceived as such, could warrant the diagnosis.10 This would expand our understanding of trauma and its consequences significantly. Yet already, a survey of Detroit area residents in 1998 found that 89.6% of the interviewees reported having experienced a significant trauma and that the average number of traumatic experiences was 4.8.11 A diagnosis that can be applied to almost 90% of a population has unclear usefulness, especially if meant to diminish guilt and responsibility.
More recently, citing Adverse Childhood Experiences (ACEs) has been a common method of supporting mitigating evaluations. Using the ACEs questionnaires, researchers have supported the idea that social programs are a key player in an improved criminal justice system. The ACEs study identified 10 forms of childhood trauma in 17,000 patients, including abuse, neglect, abandonment, household dysfunction, and exposure to violence, that were strongly associated with negative psychological outcomes, engagement in high-risk behaviors, significant medical consequences, and even early death.12 However, similarly to past trauma, the prevalence of ACEs in the forensic population is the norm, not the exception.
Additional thoughts
Of particular concern is when diagnostic criteria intersect or seemingly contradict one another. For example, an act such as an outburst of anger may be interpreted by one evaluator as a sign of deviance, irritability, or recklessness – meeting antisocial personality disorder criteria – whereas another evaluator may interpret the same incident as hypervigilance, exaggerated startle response, or self-destructive behavior in PTSD.
An incident of not assisting someone in need may be interpreted as lack of remorse and glibness from antisocial characteristics or avoidance and detachment from others as a reaction to past trauma. Flashbacks from trauma can be interpreted by some as violent fantasies. Even the experience of trauma can be viewed as a risk factor for future violence. In some ways, our perspectives are influenced by our examination of someone’s history through the lens of sociopathy or empathy.
In summary
Psychiatry is entrusted by courts to comment on negative and mitigating factors. Negative factors hinge in part on our subjective impression of sociopathy, and mitigating factors hinge, in part, on our empathy for a defendant’s trauma. Psychiatry should recognize the limitations of both sides and humble itself in providing balanced evaluations to courts.
Dr. Badre is a clinical and forensic psychiatrist in San Diego. He holds teaching positions at the University of California, San Diego, and the University of San Diego. He teaches medical education, psychopharmacology, ethics in psychiatry, and correctional care. Dr. Badre can be reached at his website, BadreMD.com. Dr. Amendolara is a first-year psychiatry resident at University of California, San Diego. He spent years advocating for survivors of rape and domestic violence at the Crime Victims Treatment Center in New York and conducted public health research at Lourdes Center for Public Health in Camden, N.J. Dr. Amendolara has no disclosures. Dr. Ngo is a second-year child neurology resident at University of California, Los Angeles. She received a master’s degree in narrative medicine from Columbia University, New York. She has no disclosures.
References
1. Frances A. Saving Normal: An Insider’s Revolt Against Out-of-Control Psychiatric Diagnosis, DSM-5, Big Pharma and the Medicalization of Ordinary Life. Harper Collins, 2013.
2. Key Substance Use and Mental Health Indicators in the United States. National Survey on Drug Use and Health. 2018.
3. Wong SCP and Gordon A. Psychol Public Policy Law. 2006;12(3):279-309.
4. Douglas KS et al. Mental Health Law & Policy Institute. About the Historical Clinical Risk Management-20, Version 3.
5. Angwin J et al. ProPublica. 2016 May 23.
6. Dressel J and Farid H. Sci Adv. 2018;4(1). doi: 10.1126/sciadv.aao5580.
7. Freedman R et al. Am J Psychiatry. 2013 Jan;170(1):1-5.
8. Lillie B. The complexities of the psychopath test: A Q&A with Jon Ronson. TEDBlog. 2012 Aug 15.
9. Larsen RR et al. Psychol Public Policy Law. 2020;26(3):297-311.
10. Cordova MJ. Psychiatric Times. 2020 Jul 31;37(7).
11. Breslau N et al. Arch Gen Psychiatry. 1998;55(7):626-32.
12. Reavis JA et al. Perm J. 2013 Spring;17(2):44-8.
Impact of an Oral Antineoplastic Renewal Clinic on Medication Possession Ratio and Cost-Savings
Evaluations of oral antineoplastic agent (OAN) adherence patterns have identified correlations between nonadherence or overadherence and poorer disease-related outcomes. Multiple studies have focused on imatinib use in chronic myeloid leukemia (CML) due to its continuous, long-term use. A study by Ganesan and colleagues found that 5-year event-free survival was significantly lower among participants nonadherent to imatinib than among adherent participants (59.8% vs 76.7%).1 This study also found that 44% of patients who were adherent to imatinib achieved complete cytogenetic response vs only 26% of patients who were nonadherent. In another study of imatinib for CML, major molecular response (MMR) was strongly correlated with adherence, and no patients with adherence < 80% were able to achieve MMR.2 Similarly, in studies of tamoxifen for breast cancer, < 80% adherence resulted in a 10% decrease in survival compared with those who were more adherent.3,4
In addition to the clinical implications of nonadherence, there can be a significant cost associated with suboptimal use of these medications. The price of a single dose of OAN medication may cost as much as $440.5
The benefits of multidisciplinary care teams have been identified in many studies.6,7 While studies in oncology are limited, pharmacists provide vital contributions to the oncology multidisciplinary team when managing OANs, as these health care professionals have expert knowledge of the medications, potential adverse events (AEs), and necessary monitoring parameters.8 In one study, patients seen in a pharmacist-led oral chemotherapy management program experienced improved clinical outcomes and response to therapy when compared with preintervention patients (early molecular response, 88.9% vs 54.8%, P = .01; major molecular response, 83.3% vs 57.6%, P = .06).9 During the study, 318 AEs were reported, leading to 235 pharmacist interventions to ameliorate AEs and improve adherence.
The primary objective of this study was to measure the impact of a pharmacist-driven OAN renewal clinic on medication adherence. The secondary objective was to estimate cost-savings of this new service.
Methods
Prior to July 2014, several limitations were identified related to OAN prescribing and monitoring at the Richard L. Roudebush Veterans Affairs Medical Center in Indianapolis, Indiana (RLRVAMC). The prescription ordering process relied primarily on the patient, rather than the prescriber, to initiate refills. OAN prescriptions also lacked consistency in the number of refills or quantities dispensed. Furthermore, ordering of antineoplastic products was not limited to hematology/oncology providers. Patients were identified with significant supply on hand at the time of medication discontinuation, creating concerns about medication waste, tolerability, and nonadherence.
As a result, opportunities were identified to improve the prescribing process, recommended monitoring, toxicity and tolerability evaluation, medication reconciliation, and medication adherence. In July 2014, the RLRVAMC adopted a new chemotherapy order entry system capable of restricting prescriptions to hematology/oncology providers and limiting dispensed quantities and refill amounts. A comprehensive, pharmacist-driven OAN renewal clinic was implemented on September 1, 2014, with the goal of improving long-term adherence and tolerability and minimizing medication waste.
Patients were eligible for enrollment in the clinic if they had a cancer diagnosis and were concomitantly prescribed an OAN outlined in Table 1. All eligible patients were automatically enrolled in the clinic when they were deemed stable on their OAN by a hematology/oncology pharmacy specialist. Stability was defined as ≤ grade 1 symptoms associated with the toxicities of OAN therapy, managed with or without intervention, as defined by the Common Terminology Criteria for Adverse Events (CTCAE) version 4.03. Once enrolled in the renewal clinic, patients were called by an oncology pharmacy resident (PGY2) 1 week prior to any OAN refill due date. Patients were asked a series of 5 adherence and tolerability questions (Table 2) to determine whether criteria for renewal were met or further evaluation was needed. These questions were developed based on targeted information and published reports on monitoring adherence.10,11 Criteria for renewal included: < 10% self-reported missed doses of the OAN during the previous dispensing period; no hospitalizations or emergency department visits since the most recent hematology/oncology provider appointment; no changes to concomitant medication therapies; and no new or worsening medication-related AEs. Patients meeting all criteria were given a 30-day supply of the OAN. Prescribing, dispensing, and delivery of the OAN were facilitated by the pharmacist. Patient cases that did not meet criteria for renewal were escalated to the hematology/oncology provider or oncology clinical pharmacy specialist for further evaluation.
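As a minimal sketch, the renewal decision described above reduces to a single boolean check. The parameter names are illustrative assumptions; the actual clinic relied on the 5 scripted questions and pharmacist judgment rather than software.

```python
def eligible_for_renewal(missed_dose_fraction: float,
                         hospitalized_or_ed_since_last_visit: bool,
                         concomitant_med_changes: bool,
                         new_or_worse_adverse_events: bool) -> bool:
    """Apply the clinic's four renewal criteria described in the text.

    Returns True when a 30-day OAN supply can be renewed by the pharmacist;
    False means the case is escalated to the hematology/oncology provider
    or the oncology clinical pharmacy specialist.
    """
    return (missed_dose_fraction < 0.10
            and not hospitalized_or_ed_since_last_visit
            and not concomitant_med_changes
            and not new_or_worse_adverse_events)

# Example: a patient who missed 1 of 30 doses with no other issues
print(eligible_for_renewal(1 / 30, False, False, False))  # True -> renew
```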
Study Design and Setting
This was a pre/post retrospective cohort, quality improvement study of patients enrolled in the RLRVAMC OAN pharmacist renewal clinic. The study was deemed exempt from institutional review board review by the US Department of Veterans Affairs (VA) Research and Development Department.
Study Population
Patients were included in the preimplementation group if they had received at least 2 prescriptions of an eligible OAN with a monthly duration > 21 days between September 1, 2013, and August 31, 2014. Patients were included in the postimplementation group if they had received at least 2 prescriptions of the studied OANs between September 1, 2014, and January 31, 2015. Patients were excluded if they had filled < 2 prescriptions of an OAN; were managed by a non-VA oncologist or hematologist; or received an OAN other than those listed in Table 1.
Data Collection
For all patients in both the pre- and postimplementation cohorts, a standardized data collection tool was used to collect the following via electronic health record review by a PGY2 oncology resident: age, race, gender, oral antineoplastic agent, refill dates, days’ supply, estimated unit cost per dose, cancer diagnosis, distance from the RLRVAMC, copay status, presence of hospitalizations/ED visits/dosage reductions, discontinuation rates, reasons for discontinuation, and total number of current prescriptions. The presence or absence of dosage reductions was collected to identify concerns for tolerability, but only the original dose for the preimplementation group and the dosage at the time of clinic enrollment for the postimplementation group were included in the analysis.
Outcomes and Statistical Analyses
The primary outcome was medication adherence, defined as the median medication possession ratio (MPR) before and after implementation of the clinic. Secondary outcomes included the proportion of patients who were adherent before vs after implementation and the estimated cost-savings of the new service after implementation. MPR was used to estimate medication adherence by dividing the cumulative days’ supply of medication on hand by the number of days on therapy.12 The number of days on therapy was determined as the difference between the start date of the new medication regimen and the discontinuation date of the same regimen. Patients were grouped by adherence into one of the following categories: < 0.8, 0.8 to 0.89, 0.9 to 1.1, and > 1.1. Patients were considered adherent if they reported taking ≥ 90% (MPR ≥ 0.9) of prescribed doses, adopted from the study by Anderson and colleagues.12 A patient with an MPR > 1, likely due to filling prior to the anticipated refill date, was considered 100% adherent (MPR = 1). If a patient switched OANs during the study, both agents were included as separate entities.
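A minimal sketch of the MPR calculation follows, assuming fill records are available as (fill date, days’ supply) pairs; the function mirrors the definition above but is illustrative rather than the study’s actual computation.

```python
from datetime import date

def medication_possession_ratio(fills: list[tuple[date, int]],
                                start: date, stop: date) -> float:
    """Cumulative days' supply dispensed divided by days on therapy.

    `fills` holds (fill_date, days_supply) pairs; `start`/`stop` bound the
    regimen. Values > 1 indicate oversupply (overadherence).
    """
    days_on_therapy = (stop - start).days
    total_supply = sum(days for _, days in fills)
    return total_supply / days_on_therapy

# Example: three 30-day fills over a 100-day course of therapy
fills = [(date(2014, 9, 1), 30), (date(2014, 10, 1), 30), (date(2014, 11, 1), 30)]
mpr = medication_possession_ratio(fills, date(2014, 9, 1), date(2014, 12, 10))
print(f"MPR = {mpr:.2f}")  # 90 / 100 = 0.90 -> adherent (0.9 to 1.1)
```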
A conservative estimate of cost-savings was made by multiplying the RLRVAMC cost per unit of medication at the time of the initial prescription fill by the number of units taken each day and by the total days’ supply on hand at the time of therapy discontinuation. Patients with an MPR < 1 at the time of therapy discontinuation were assumed to have zero remaining units on hand, and zero cost-savings were estimated. Waste, for purposes of cost-savings, was calculated for all MPR values > 1. Additional supply anticipated to be on hand from dose reductions was not included in the estimated cost of unused medication.
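The waste estimate can be sketched the same way. For illustration, the days’ supply on hand at discontinuation is derived here from the MPR, whereas the study computed supply on hand directly from fill records; all inputs are hypothetical.

```python
def estimated_waste(unit_cost: float, units_per_day: float,
                    mpr: float, days_on_therapy: int) -> float:
    """Conservative cost of unused medication at therapy discontinuation.

    Patients with MPR <= 1 are assumed to have no remaining supply, so
    waste is counted only for MPR > 1 (oversupply).
    """
    if mpr <= 1:
        return 0.0
    excess_days = (mpr - 1) * days_on_therapy  # days of supply beyond need
    return unit_cost * units_per_day * excess_days

# Example: $440/unit (the per-dose cost cited above), 1 unit/day,
# MPR of 1.2 over a 150-day course
print(f"${estimated_waste(440, 1, 1.2, 150):,.0f}")  # $13,200 of unused drug
```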
Descriptive statistics compared demographic characteristics between the pre- and postimplementation groups. MPR data were not normally distributed, which required the use of nonparametric Mann-Whitney U tests to compare pre- and postMPRs. Pearson χ2 compared the proportion of adherent patients between groups while descriptive statistics were used to estimate cost savings. Significance was determined based on a P value < .05. IBM SPSS Statistics software was used for all statistical analyses. As this was a complete sample of all eligible subjects, no sample size calculation was performed.
Results
In the preimplementation period, 246 patients received an OAN and 61 patients received an OAN in the postimplementation period (Figure 1). Of the 246 patients in the preimplementation period, 98 were eligible and included in the preimplementation group. Similarly, of the 61 patients in the postimplementation period, 35 patients met inclusion criteria for the postimplementation group. The study population was predominantly male with an average age of approximately 70 years in both groups (Table 3). More than 70% of the population in each group was White. No statistically significant differences between groups were identified. The most commonly prescribed OAN in the preimplementation group were abiraterone, imatinib, and enzalutamide (Table 3). In the postimplementation group, the most commonly prescribed agents were abiraterone, imatinib, pazopanib, and dasatinib. No significant differences were observed in prescribing of individual agents between the pre- and postimplementation groups or other characteristics that may affect adherence including patient copay status, number of concomitant medications, and driving distance from the RLRVAMC.
Thirty-six (36.7%) patients in the preimplementation group were considered nonadherent (MPR < 0.9) and 18 (18.4%) had an MPR < 0.8. Fifteen (15.3%) patients in the preimplementation clinic were considered overadherent (MPR > 1.1). Forty-seven (47.9%) patients in the preimplementation group were considered adherent (MPR 0.9 - 1.1) while all 35 (100%) patients in the postimplementation group were considered adherent (MPR 0.9 - 1.1). No non- or overadherent patients were identified in the postimplementation group (Figure 2). The median MPR for all patients in the preimplementation group was 0.94 compared with 1.06 (P < .001) in the postimplementation group.
Thirty-five (35.7%) patients had therapy discontinued or held in the preimplementation group compared with 2 (5.7%) patients in the postimplementation group (P < .001). Reasons for discontinuation in the preimplementation group included disease progression (n = 27), death (n = 3), lost to follow up (n = 2), and intolerability of therapy (n = 3). Both patients that discontinued therapy in the postimplementation group did so due to disease progression. Of the 35 patients who had their OAN discontinued or held in the preimplementation group, 14 patients had excess supply on hand at time of discontinuation. The estimated value of the unused medication was $37,890. Nine (25%) of the 35 patients who discontinued therapy had a dosage reduction during the course of therapy and the additional supply was not included in the cost estimate. Similarly, 1 of the 2 patients in the postimplementation group had their OAN discontinued during study. The cost of oversupply of medication at the time of therapy discontinuation was estimated at $1,555. No patients in the postimplementation group had dose reductions. After implementation of the OAN renewal clinic, the total cost savings between pre ($37,890) and postimplementation ($1,555) groups was $36,355.
Discussion
OANs are widely used therapies, with more than 25 million doses administered per year in the United States alone.12 The use of these agents will continue to grow as more targeted agents become available and patients request more convenient treatment options. The role for hematology/oncology clinical pharmacy services must adapt to this increased usage of OANs, including increasing pharmacist involvement in medication education, adherence and tolerability assessments, and proactive drug interaction monitoring.However, additional research is needed to determine optimal management strategies.
Our study aimed to compare OAN adherence among patients at a tertiary care VA hospital before and after implementation of a renewal clinic. The preimplementation population had a median MPR of 0.94 compared with 1.06 in the postimplementation group (P < .001). Although an ideal MPR is 1.0, we aimed for a slightly higher MPR to allow a supply buffer in the event of prescription delivery delays, as more than 90% of prescriptions are mailed to patients from a regional mail-order pharmacy. Importantly, the median MPRs do not adequately convey the impact from this clinic. The proportion of patients who were considered adherent to OANs increased from 47.9% in the preimplementation to 100% in the postimplementation period. These finding suggest that the clinical pharmacist role to assess and encourage adherence through monitoring tolerability of these OANs improved the overall medication taking experience of these patients.
Upon initial evaluation of adherence pre- and postimplementation, median adherence rates in both groups appeared to be above goal at 0.94 and 1.06 respectively. Patients in the postimplementation group intentionally received a 5- to 7-day supply buffer to account for potential prescription delivery delays due to holidays and inclement weather. This would indicate that the patients in the postimplementation group would have 15% oversupply due to the 5-day supply buffer. After correcting for patients with confounding reasons for excess (dose reductions, breaks in treatment, etc.), the median MPR in the prerefill clinic group decreased to 0.9 and the MPR in the postrefill clinic group increased slightly to 1.08. Although the median adherence rate in both the pre- and postimplementation groups were above goal of 0.90, 36% of the patients in the preimplementation group were considered nonadherent (MPR < 0.9) compared with no patients in the postimplementation group. Therefore, our intervention to improve patient adherence appeared to be beneficial at our institution.
In addition to improving adherence, one of the goals of the renewal clinic was to minimize excess supply at the time of therapy discontinuation. This was accomplished by aligning medication fills with medical visits and objective monitoring, as well as limiting supply to no more than 30 days. Of the patients in the postimplementation group, only 1 patient had remaining medication at the time of therapy discontinuation compared with 14 patients in the preimplementation group. The estimated cost savings from excess supply was $36,335. Limiting the amount of unused supply not only saves money for the patient and the institution, but also decreases opportunity for improper hazardous waste disposal and unnecessary exposure of hazardous materials to others.
Our results show the pharmacist intervention in the coordination of renewals improved adherence, minimized medication waste, and saved money. The cost of pharmacist time participating in the refill clinic was not calculated. Each visit was completed in approximately 5 minutes, with subsequent documentation and coordination taking an additional 5 to 10 minutes. During the launch of this service, the oncology pharmacy resident provided all coverage of the clinic. Oversite of the resident was provided by hematology/oncology clinical pharmacy specialists. We have continued to utilize pharmacy resident coverage since that time to meet education needs and keep the estimated cost per visit low. Another option in the case that pharmacy residents are not available would be utilization of a pharmacy technician, intern, or professional student to conduct the adherence and tolerability phone assessments. Our escalation protocol allows intervention by clinical pharmacy specialist and/or other health care providers when necessary. Trainees have only required basic training on how to use the protocol.
Limitations
Due to this study’s retrospective design, an inherent limitation is dependence on prescriber and refill records for documentation of initiation and discontinuation dates. Therefore, only the association of impact of pharmacist intervention on medication adherence can be determined as opposed to causation. We did not take into account discrepancies in day supply secondary to ‘held’ therapies, dose reductions, or doses supplied during an inpatient admission, which may alter estimates of MPR and cost-savings data. Patients in the postimplementation group intentionally received a 5 to 7-day supply buffer to account for potential prescription delivery delays due to holidays and inclement weather. This would indicate that the patients in the postimplementation group would have 15% oversupply due to the 5-day supply buffer, thereby skewing MPR values. This study did not account for cost avoidance resulting from early identification and management of toxicity. Finally, the postimplementation data only spans 4 months and a longer duration of time is needed to more accurately determine sustainability of renewal clinic interventions and provide comprehensive evaluation of cost-avoidance.
Conclusion
Implementation of an OAN renewal clinic was associated with an increase in MPR, improved proportion of patients considered adherent, and an estimated $36,335 cost-savings. However, prospective evaluation and a longer study duration are needed to determine causality of improved adherence and cost-savings associated with a pharmacist-driven OAN renewal clinic.
1. Ganesan P, Sagar TG, Dubashi B, et al. Nonadherence to imatinib adversely affects event free survival in chronic phase chronic myeloid leukemia. Am J Hematol 2011; 86: 471-474. doi:10.1002/ajh.22019
2. Marin D, Bazeos A, Mahon FX, et al. Adherence is the critical factor for achieving molecular responses in patients with chronic myeloid leukemia who achieve complete cytogenetic responses on imatinib. J Clin Oncol 2010; 28: 2381-2388. doi:10.1200/JCO.2009.26.3087
3. McCowan C, Shearer J, Donnan PT, et al. Cohort study examining tamoxifen adherence and its relationship to mortality in women with breast cancer. Br J Cancer 2008; 99: 1763-1768. doi:10.1038/sj.bjc.6604758
4. Lexicomp Online. Sunitinib. Hudson, Ohio: Lexi-Comp, Inc; August 20, 2019.
5. Babiker A, El Husseini M, Al Nemri A, et al. Health care professional development: Working as a team to improve patient care. Sudan J Paediatr. 2014;14(2):9-16.
6. Spence MM, Makarem AF, Reyes SL, et al. Evaluation of an outpatient pharmacy clinical services program on adherence and clinical outcomes among patients with diabetes and/or coronary artery disease. J Manag Care Spec Pharm. 2014;20(10):1036-1045. doi:10.18553/jmcp.2014.20.10.1036
7. Holle LM, Puri S, Clement JM. Physician-pharmacist collaboration for oral chemotherapy monitoring: Insights from an academic genitourinary oncology practice. J Oncol Pharm Pract 2015; doi:10.1177/1078155215581524
8. Muluneh B, Schneider M, Faso A, et al. Improved Adherence Rates and Clinical Outcomes of an Integrated, Closed-Loop, Pharmacist-Led Oral Chemotherapy Management Program. Journal of Oncology Practice. 2018;14(6):371-333. doi:10.1200/JOP.17.00039.
9. Font R, Espinas JA, Gil-Gil M, et al. Prescription refill, patient self-report and physician report in assessing adherence to oral endocrine therapy in early breast cancer patients: a retrospective cohort study in Catalonia, Spain. British Journal of Cancer. 2012 ;107(8):1249-1256. doi:10.1038/bjc.2012.389.
10. Anderson KR, Chambers CR, Lam N, et al. Medication adherence among adults prescribed imatinib, dasatinib, or nilotinib for the treatment of chronic myeloid leukemia. J Oncol Pharm Practice. 2015;21(1):19–25. doi:10.1177/1078155213520261
11. Weingart SN, Brown E, Bach PB, et al. NCCN Task Force Report: oral chemotherapy. J Natl Compr Canc Netw. 2008;6(3): S1-S14.
Evaluations of oral antineoplastic agent (OAN) adherence patterns have identified correlations between nonadherence or overadherence and poorer disease-related outcomes. Multiple studies have focused on imatinib use in chronic myeloid leukemia (CML) because of its continuous, long-term use. Ganesan and colleagues found that nonadherence to imatinib was associated with a significantly lower 5-year event-free survival rate (59.8% in nonadherent vs 76.7% in adherent participants).1 The same study found that 44% of adherent patients achieved a complete cytogenetic response vs only 26% of nonadherent patients. In another study of imatinib for CML, major molecular response (MMR) was strongly correlated with adherence, and no patient with adherence < 80% achieved MMR.2 Similarly, in studies of tamoxifen for breast cancer, adherence < 80% was associated with a 10% decrease in survival compared with higher adherence.3,4
In addition to the clinical implications of nonadherence, suboptimal use of these medications carries a significant cost: a single dose of an OAN may cost as much as $440.5
The benefits of multidisciplinary care teams have been demonstrated in many studies.6,7 Although oncology-specific studies are limited, pharmacists provide vital contributions to the oncology multidisciplinary team when managing OANs, as they have expert knowledge of the medications, potential adverse events (AEs), and necessary monitoring parameters.8 In one study, patients enrolled in a pharmacist-led oral chemotherapy management program experienced improved clinical outcomes and response to therapy compared with preintervention patients (early molecular response, 88.9% vs 54.8%, P = .01; major molecular response, 83.3% vs 57.6%, P = .06).9 During the study, 318 AEs were reported, leading to 235 pharmacist interventions to ameliorate AEs and improve adherence.
The primary objective of this study was to measure the impact of a pharmacist-driven OAN renewal clinic on medication adherence. The secondary objective was to estimate cost-savings of this new service.
Methods
Prior to July 2014, several limitations were identified related to OAN prescribing and monitoring at the Richard L. Roudebush Veterans Affairs Medical Center in Indianapolis, Indiana (RLRVAMC). The prescription ordering process relied primarily on the patient, rather than the prescriber, to initiate refills. OAN prescriptions also lacked consistency in the number of refills and quantities dispensed. Furthermore, ordering of antineoplastic products was not limited to hematology/oncology providers. Patients were identified with a significant supply on hand at the time of medication discontinuation, raising concerns about medication waste, tolerability, and nonadherence.
As a result, opportunities were identified to improve the prescribing process, recommended monitoring, toxicity and tolerability evaluation, medication reconciliation, and medication adherence. In July 2014, the RLRVAMC adopted a new chemotherapy order entry system capable of restricting prescriptions to hematology/oncology providers and limiting dispensed quantities and refill amounts. A comprehensive pharmacist-driven OAN renewal clinic was implemented on September 1, 2014, with the goals of improving long-term adherence and tolerability and minimizing medication waste.
Patients were eligible for enrollment in the clinic if they had a cancer diagnosis and were concomitantly prescribed an OAN outlined in Table 1. All eligible patients were automatically enrolled in the clinic once deemed stable on their OAN by a hematology/oncology pharmacy specialist. Stability was defined as ≤ grade 1 symptoms associated with the toxicities of OAN therapy, managed with or without intervention, as defined by the Common Terminology Criteria for Adverse Events (CTCAE) version 4.03. Once enrolled in the renewal clinic, patients were called by a postgraduate year 2 (PGY2) oncology pharmacy resident 1 week prior to any OAN refill due date. Patients were asked a series of 5 adherence and tolerability questions (Table 2) to evaluate whether renewal criteria were met or further evaluation was needed. These questions were developed based on targeted information and published reports on monitoring adherence.10,11 Criteria for renewal included: < 10% self-reported missed doses of the OAN during the previous dispensing period; no hospitalizations or emergency department visits since the most recent hematology/oncology provider appointment; no changes to concomitant medication therapies; and no new or worsening medication-related AEs. Patients meeting all criteria were given a 30-day supply of the OAN, with prescribing, dispensing, and delivery facilitated by the pharmacist. Cases that did not meet renewal criteria were escalated to the hematology/oncology provider or oncology clinical pharmacy specialist for further evaluation (a sketch of this decision logic follows).
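For readers who want to operationalize a similar workflow, the renewal decision described above can be expressed as a simple screening function. This is an illustrative sketch only, not the clinic's actual software; the field names and the encoding of the 10% missed-dose threshold are our assumptions based on the criteria listed in the text.

```python
from dataclasses import dataclass

@dataclass
class RenewalAssessment:
    """Responses gathered during the pre-refill phone call (hypothetical schema)."""
    doses_missed: int          # self-reported missed doses in the last dispensing period
    doses_prescribed: int      # doses expected in the same period
    hospitalized_or_ed: bool   # any hospitalization/ED visit since last provider visit
    med_changes: bool          # any changes to concomitant medications
    new_or_worse_aes: bool     # any new or worsening medication-related AEs

def meets_renewal_criteria(a: RenewalAssessment) -> bool:
    """Return True only if all four renewal criteria from the protocol are met."""
    missed_fraction = a.doses_missed / a.doses_prescribed
    return (
        missed_fraction < 0.10        # < 10% self-reported missed doses
        and not a.hospitalized_or_ed  # no hospitalizations or ED visits
        and not a.med_changes         # no concomitant medication changes
        and not a.new_or_worse_aes    # no new or worsening AEs
    )

# Example: a patient who missed 2 of 30 doses and reports no other issues
patient = RenewalAssessment(2, 30, False, False, False)
print("renew 30-day supply" if meets_renewal_criteria(patient)
      else "escalate to provider/clinical pharmacy specialist")
```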
Study Design and Setting
This was a pre/post retrospective cohort, quality improvement study of patients enrolled in the RLRVAMC OAN pharmacist renewal clinic. The study was deemed exempt from institutional review board (IRB) review by the US Department of Veterans Affairs (VA) Research and Development Department.
Study Population
Patients were included in the preimplementation group if they had received at least 2 prescriptions of an eligible OAN, dispensed in monthly durations of > 21 days, between September 1, 2013, and August 31, 2014. Patients were included in the postimplementation group if they had received at least 2 prescriptions of the studied OANs between September 1, 2014, and January 31, 2015. Patients were excluded if they had filled < 2 OAN prescriptions, were managed by a non-VA oncologist or hematologist, or received an OAN other than those listed in Table 1.
Data Collection
For all patients in both cohorts, a standardized data collection tool was used to collect the following via electronic health record review by a PGY2 oncology resident: age, race, gender, oral antineoplastic agent, refill dates, days' supply, estimated unit cost per dose, cancer diagnosis, distance from the RLRVAMC, copay status, hospitalizations, ED visits, dosage reductions, discontinuation rates, reasons for discontinuation, and total number of current prescriptions. Dosage reductions were collected to flag tolerability concerns, but only the original dose (preimplementation group) or the dose at the time of clinic enrollment (postimplementation group) was included in the analysis.
Outcomes and Statistical Analyses
The primary outcome was medication adherence, defined as the median medication possession ratio (MPR) before and after implementation of the clinic. Secondary outcomes included the change in the proportion of adherent patients from before to after implementation and the estimated cost savings of the clinic. MPR was used to estimate medication adherence and was calculated as the cumulative days' supply of medication on hand divided by the number of days on therapy.12 The number of days on therapy was the difference between the start date of the medication regimen and the discontinuation date of the same regimen. Patients were grouped by adherence into one of the following categories: < 0.8, 0.8 to 0.89, 0.9 to 1.1, and > 1.1. Patients were considered adherent if they took ≥ 90% of prescribed doses (MPR ≥ 0.9), a threshold adapted from the study by Anderson and colleagues.12 A patient with an MPR > 1, likely due to filling before the anticipated refill date, was treated as 100% adherent (MPR = 1). If a patient switched OANs during the study, both agents were included as separate entities. A worked example of the calculation follows.
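As a concrete illustration of the MPR calculation and the adherence categories described above (a hypothetical sketch; the function names, variable names, and example values are ours, not the study's):

```python
from datetime import date

def mpr(cumulative_days_supply: float, start_date: date, stop_date: date) -> float:
    """Medication possession ratio: cumulative days' supply dispensed
    divided by days on therapy (discontinuation date minus start date)."""
    days_on_therapy = (stop_date - start_date).days
    return cumulative_days_supply / days_on_therapy

def adherence_category(value: float) -> str:
    """Bucket an MPR into the study's reported categories."""
    if value < 0.8:
        return "MPR < 0.8"
    if value < 0.9:
        return "MPR 0.8-0.89"
    if value <= 1.1:
        return "MPR 0.9-1.1 (adherent)"
    return "MPR > 1.1 (overadherent)"

# Example: 350 days of dispensed supply over a 365-day course of therapy
value = mpr(350, date(2013, 9, 1), date(2014, 9, 1))
print(round(value, 2), "->", adherence_category(value))  # 0.96 -> MPR 0.9-1.1 (adherent)
# Note: for the adherent-proportion analysis, the study capped MPR > 1 at 1.
```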
A conservative estimate of cost savings was made by multiplying the RLRVAMC unit cost of the medication at the time of the initial prescription fill by the number of units taken each day and by the days' supply on hand at the time of therapy discontinuation. Patients with an MPR < 1 at discontinuation were assumed to have zero remaining units on hand, so zero cost savings were estimated for them; waste, for purposes of cost savings, was calculated only for MPR values > 1. Additional supply anticipated from dose reductions was not included in the estimated cost of unused medication. A minimal illustration follows.
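The waste estimate reduces to a single expression. The sketch below is illustrative only (the unit cost and dose counts are invented numbers) and mirrors the rule that only patients with leftover supply at discontinuation, i.e., MPR > 1, contribute to the estimate:

```python
def wasted_cost(unit_cost: float, units_per_day: float, days_supply_on_hand: float) -> float:
    """Estimated value of unused medication at therapy discontinuation.
    By the study's rule, patients with MPR < 1 are assumed to have
    zero units on hand and therefore contribute $0."""
    return unit_cost * units_per_day * days_supply_on_hand

# Example with invented numbers: $120/unit, 2 units daily, 10 days of supply left
print(f"${wasted_cost(120, 2, 10):,.2f}")  # $2,400.00
```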
Descriptive statistics compared demographic characteristics between the pre- and postimplementation groups. Because MPR data were not normally distributed, nonparametric Mann-Whitney U tests were used to compare pre- and postimplementation MPRs. A Pearson χ2 test compared the proportion of adherent patients between groups, and descriptive statistics were used to estimate cost savings. Significance was set at P < .05. IBM SPSS Statistics software was used for all statistical analyses. As this was a complete sample of all eligible subjects, no sample size calculation was performed. A sketch of these comparisons follows.
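A minimal reproduction of the statistical comparisons, assuming per-patient MPR values are available as arrays. This uses SciPy rather than the SPSS software the study used; the MPR arrays are placeholder values, while the 2×2 counts come from the Results section below.

```python
import numpy as np
from scipy import stats

# Illustrative per-patient MPR values only -- not the study's data.
pre_mpr = np.array([0.72, 0.85, 0.94, 1.02, 1.15, 0.88, 0.96])
post_mpr = np.array([0.95, 1.00, 1.04, 1.06, 1.08])

# Mann-Whitney U test, since MPRs were not normally distributed
u_stat, p_mpr = stats.mannwhitneyu(pre_mpr, post_mpr, alternative="two-sided")

# Pearson chi-square on adherent vs not-adherent counts reported in Results:
# preimplementation, 47 of 98 adherent; postimplementation, 35 of 35 adherent
table = np.array([[47, 51],
                  [35, 0]])
chi2, p_prop, dof, expected = stats.chi2_contingency(table, correction=False)

print(f"MPR comparison: U = {u_stat:.1f}, P = {p_mpr:.3f}")
print(f"Adherent proportion: chi2 = {chi2:.1f}, P = {p_prop:.4f}")
```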
Results
In the preimplementation period, 246 patients received an OAN, compared with 61 patients in the postimplementation period (Figure 1). Of the 246 patients in the preimplementation period, 98 were eligible and included in the preimplementation group; of the 61 patients in the postimplementation period, 35 met inclusion criteria for the postimplementation group. The study population was predominantly male, with an average age of approximately 70 years in both groups (Table 3). More than 70% of the population in each group was White. No statistically significant differences between groups were identified. The most commonly prescribed OANs in the preimplementation group were abiraterone, imatinib, and enzalutamide (Table 3); in the postimplementation group, they were abiraterone, imatinib, pazopanib, and dasatinib. No significant differences were observed between the pre- and postimplementation groups in the prescribing of individual agents or in other characteristics that may affect adherence, including copay status, number of concomitant medications, and driving distance from the RLRVAMC.
Thirty-six (36.7%) patients in the preimplementation group were considered nonadherent (MPR < 0.9), and 18 (18.4%) had an MPR < 0.8. Fifteen (15.3%) patients in the preimplementation group were considered overadherent (MPR > 1.1). Forty-seven (47.9%) patients in the preimplementation group were considered adherent (MPR 0.9-1.1), while all 35 (100%) patients in the postimplementation group were considered adherent; no non- or overadherent patients were identified in the postimplementation group (Figure 2). The median MPR was 0.94 in the preimplementation group compared with 1.06 in the postimplementation group (P < .001).
Thirty-five (35.7%) patients had therapy discontinued or held in the preimplementation group compared with 2 (5.7%) patients in the postimplementation group (P < .001). Reasons for discontinuation in the preimplementation group included disease progression (n = 27), death (n = 3), loss to follow-up (n = 2), and intolerability of therapy (n = 3). Both patients who discontinued therapy in the postimplementation group did so because of disease progression. Of the 35 patients whose OAN was discontinued or held in the preimplementation group, 14 had excess supply on hand at the time of discontinuation, with an estimated value of $37,890. Nine (25.7%) of the 35 patients who discontinued therapy had a dosage reduction during the course of therapy; the resulting additional supply was not included in the cost estimate. In the postimplementation group, 1 of the 2 patients who discontinued had oversupply at the time of discontinuation, estimated at $1,555; no postimplementation patients had dose reductions. The difference in estimated waste between the preimplementation ($37,890) and postimplementation ($1,555) groups therefore represents a cost savings of $36,335.
Discussion
OANs are widely used therapies, with more than 25 million doses administered per year in the United States alone.12 Use of these agents will continue to grow as more targeted agents become available and patients request more convenient treatment options. Hematology/oncology clinical pharmacy services must adapt to this increased use of OANs, including greater pharmacist involvement in medication education, adherence and tolerability assessment, and proactive drug interaction monitoring. However, additional research is needed to determine optimal management strategies.
Our study compared OAN adherence among patients at a tertiary care VA hospital before and after implementation of a renewal clinic. The preimplementation population had a median MPR of 0.94 compared with 1.06 in the postimplementation group (P < .001). Although an ideal MPR is 1.0, we aimed slightly higher to build in a supply buffer against prescription delivery delays, as more than 90% of prescriptions are mailed to patients from a regional mail-order pharmacy. Importantly, the median MPRs alone do not convey the full impact of the clinic: the proportion of patients considered adherent to OANs increased from 47.9% before implementation to 100% after implementation. These findings suggest that the clinical pharmacist's role in assessing and encouraging adherence while monitoring OAN tolerability improved patients' overall medication-taking experience.
On initial evaluation, median adherence in both groups appeared to be above goal, at 0.94 and 1.06, respectively. Patients in the postimplementation group intentionally received a 5- to 7-day supply buffer to account for potential prescription delivery delays due to holidays and inclement weather; a 5-day buffer alone corresponds to roughly 15% oversupply. After correcting for patients with confounding reasons for excess supply (eg, dose reductions, breaks in treatment), the median MPR decreased to 0.90 in the prerefill clinic group and increased slightly to 1.08 in the postrefill clinic group. Although median adherence in both groups was above the 0.90 goal, 36.7% of patients in the preimplementation group were considered nonadherent (MPR < 0.9) compared with none in the postimplementation group. Therefore, our intervention to improve patient adherence appeared to be beneficial at our institution.
In addition to improving adherence, one of the goals of the renewal clinic was to minimize excess supply at the time of therapy discontinuation. This was accomplished by aligning medication fills with medical visits and objective monitoring, as well as limiting each supply to no more than 30 days. Only 1 patient in the postimplementation group had remaining medication at the time of therapy discontinuation, compared with 14 patients in the preimplementation group. The estimated cost savings from excess supply was $36,335. Limiting unused supply not only saves money for the patient and the institution but also decreases the opportunity for improper hazardous waste disposal and unnecessary exposure of others to hazardous materials.
Our results show that pharmacist intervention in the coordination of renewals improved adherence, minimized medication waste, and saved money. The cost of pharmacist time spent in the refill clinic was not calculated. Each visit was completed in approximately 5 minutes, with subsequent documentation and coordination taking an additional 5 to 10 minutes. During the launch of this service, the oncology pharmacy resident provided all clinic coverage, with oversight from hematology/oncology clinical pharmacy specialists. We have continued to use pharmacy resident coverage since that time to meet educational needs and keep the estimated cost per visit low. If pharmacy residents are unavailable, a pharmacy technician, intern, or student pharmacist could conduct the adherence and tolerability phone assessments; our escalation protocol allows intervention by a clinical pharmacy specialist or other health care provider when necessary. Trainees have required only basic training on how to use the protocol.
Limitations
Due to this study’s retrospective design, an inherent limitation is dependence on prescriber and refill records for documentation of initiation and discontinuation dates. Therefore, only an association between pharmacist intervention and medication adherence, rather than causation, can be determined. We did not account for discrepancies in days' supply secondary to held therapies, dose reductions, or doses supplied during an inpatient admission, which may alter MPR estimates and cost-savings data. Patients in the postimplementation group intentionally received a 5- to 7-day supply buffer to account for potential prescription delivery delays due to holidays and inclement weather; the resulting oversupply of roughly 15% skews MPR values upward. This study also did not account for cost avoidance resulting from early identification and management of toxicity. Finally, the postimplementation data span only 4 months; a longer duration is needed to more accurately determine the sustainability of renewal clinic interventions and to comprehensively evaluate cost avoidance.
Conclusion
Implementation of an OAN renewal clinic was associated with an increased MPR, a higher proportion of patients considered adherent, and an estimated $36,335 in cost savings. However, prospective evaluation over a longer study duration is needed to establish causality for the improved adherence and cost savings associated with a pharmacist-driven OAN renewal clinic.
1. Ganesan P, Sagar TG, Dubashi B, et al. Nonadherence to imatinib adversely affects event free survival in chronic phase chronic myeloid leukemia. Am J Hematol. 2011;86:471-474. doi:10.1002/ajh.22019
2. Marin D, Bazeos A, Mahon FX, et al. Adherence is the critical factor for achieving molecular responses in patients with chronic myeloid leukemia who achieve complete cytogenetic responses on imatinib. J Clin Oncol. 2010;28:2381-2388. doi:10.1200/JCO.2009.26.3087
3. McCowan C, Shearer J, Donnan PT, et al. Cohort study examining tamoxifen adherence and its relationship to mortality in women with breast cancer. Br J Cancer. 2008;99:1763-1768. doi:10.1038/sj.bjc.6604758
4. Lexicomp Online. Sunitinib. Hudson, OH: Lexi-Comp Inc; August 20, 2019.
5. Babiker A, El Husseini M, Al Nemri A, et al. Health care professional development: working as a team to improve patient care. Sudan J Paediatr. 2014;14(2):9-16.
6. Spence MM, Makarem AF, Reyes SL, et al. Evaluation of an outpatient pharmacy clinical services program on adherence and clinical outcomes among patients with diabetes and/or coronary artery disease. J Manag Care Spec Pharm. 2014;20(10):1036-1045. doi:10.18553/jmcp.2014.20.10.1036
7. Holle LM, Puri S, Clement JM. Physician-pharmacist collaboration for oral chemotherapy monitoring: insights from an academic genitourinary oncology practice. J Oncol Pharm Pract. 2015. doi:10.1177/1078155215581524
8. Muluneh B, Schneider M, Faso A, et al. Improved adherence rates and clinical outcomes of an integrated, closed-loop, pharmacist-led oral chemotherapy management program. J Oncol Pract. 2018;14(6):371-333. doi:10.1200/JOP.17.00039
9. Font R, Espinas JA, Gil-Gil M, et al. Prescription refill, patient self-report and physician report in assessing adherence to oral endocrine therapy in early breast cancer patients: a retrospective cohort study in Catalonia, Spain. Br J Cancer. 2012;107(8):1249-1256. doi:10.1038/bjc.2012.389
10. Anderson KR, Chambers CR, Lam N, et al. Medication adherence among adults prescribed imatinib, dasatinib, or nilotinib for the treatment of chronic myeloid leukemia. J Oncol Pharm Pract. 2015;21(1):19-25. doi:10.1177/1078155213520261
11. Weingart SN, Brown E, Bach PB, et al. NCCN Task Force report: oral chemotherapy. J Natl Compr Canc Netw. 2008;6(suppl 3):S1-S14.
Albuterol, Acidosis, and Aneurysms
A patient with a complicated medical history on admission for dyspnea was administered nebulizer therapy but after 72 hours developed asymptomatic acute kidney injury and anion-gap metabolic acidosis.
An 88-year-old male veteran with a medical history of chronic obstructive pulmonary disease (COPD) on home oxygen, chronic alcohol use, squamous cell carcinoma of the lung status post left upper lobectomy, and a 5.7 cm thoracic aortic aneurysm was admitted to the inpatient medical service for progressive dyspnea and productive cough. The patient had been in his usual state of health until 2 days before presentation. A chest computed tomography scan showed a right lower lobe infiltrate, concerning for pneumonia, and a stable thoracic aortic aneurysm (Figure). On admission, the patient was started on IV ceftriaxone 2 g daily for pneumonia and scheduled albuterol and ipratropium nebulizer treatments.
The patient responded well to therapy, and his cough and dyspnea improved. However, 72 hours after admission, he developed an asymptomatic acute kidney injury (AKI) and anion-gap metabolic acidosis. His serum creatinine increased from a baseline of 0.6 mg/dL to 1.2 mg/dL, his anion gap was 21 mmol/L, and his bicarbonate decreased from 23 mmol/L to 17 mmol/L. His condition was further complicated by new-onset hypertension (153/111 mm Hg). His calculated fractional excretion of sodium (FENa) was 0.5%, and his lactate level was elevated at 3.6 mmol/L. On further questioning, he reported alcohol use the night prior; however, his β-hydroxybutyrate was negative, and his serum alcohol level was undetectable. Meanwhile, the patient continued to receive antibiotics and scheduled nebulizer treatments. Although his AKI resolved with initial fluid resuscitation, his repeat lactate levels continued to trend upward to a peak of 4.0 mmol/L.
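For context, the anion gap referenced above follows the standard definition shown below; the case does not report the sodium and chloride values, so the 21 mmol/L result cannot be re-derived here.

```latex
\text{Anion gap} = [\mathrm{Na^{+}}] - \bigl([\mathrm{Cl^{-}}] + [\mathrm{HCO_3^{-}}]\bigr)
```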
- What is your diagnosis?
- How would you treat this patient?
Although IV fluids resolved his AKI, which was prerenal in etiology given the calculated FENa of 0.5%, his lactate continued to trend upward to a peak of 4.0 mmol/L, complicated by blood pressure (BP) persistently > 150/100 mm Hg. Given his thoracic aneurysm, his BP was treated with metoprolol tartrate and amlodipine 10 mg daily. The patient remained asymptomatic with no evidence of ischemia or sepsis.
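The fractional excretion of sodium used to classify the AKI as prerenal is the standard formula shown below for reference (not a calculation reported in the case); values < 1%, such as this patient's 0.5%, favor a prerenal etiology.

```latex
\mathrm{FE_{Na}} = \frac{U_{\mathrm{Na}} \times P_{\mathrm{Cr}}}{P_{\mathrm{Na}} \times U_{\mathrm{Cr}}} \times 100\%
```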
We suspected the nebulizer treatments to be the etiology of the patient’s hyperlactatemia and subsequent anion-gap metabolic acidosis. His scheduled albuterol and ipratropium nebulizer treatments were discontinued, and the patient experienced rapid resolution of his anion gap and hyperlactatemia to 1.2 mmol/L over 24 hours. On discontinuation of the nebulization therapy, mild wheezing was noted on physical examination. The patient reported no symptoms and was at his baseline. The patient finished his antibiotic course for his community-acquired pneumonia and was discharged in stable condition with instructions to continue his previously established home COPD medication regimen of umeclidinium/vilanterol 62.5/25 mcg daily and albuterol metered-dose inhaler as needed.
Discussion
Short-acting β-agonists, such as albuterol, are widely used in COPD and are a guideline-recommended treatment in maintenance and exacerbation of asthma and COPD.1 Short-acting β-agonist adverse effects (AEs) include nausea, vomiting, tremors, headache, and tachycardia; abnormal laboratory results include hypocalcemia, hypokalemia, hypophosphatemia, hypomagnesemia, and hyperglycemia.2,3 Albuterol-induced hyperlactatemia and lactic acidosis also are known but often overlooked and underreported AEs.
In a randomized controlled trial, researchers identified a positive correlation between nebulized albuterol use and hyperlactatemia in patients treated for acute asthma exacerbation.4 One systematic review found that up to 20% of patients receiving high-dose IV or nebulized selective β2-agonists may experience hyperlactatemia.5 However, aerosolized administration of albuterol is less likely than IV administration to result in AEs and laboratory abnormalities, given its decreased systemic absorption.3
Hyperlactatemia and lactic acidosis are associated with increased morbidity and mortality.6 Lactic acidosis is classified as either type A or type B. Type A lactic acidosis is characterized by hypoperfusion: ischemic injury drives anaerobic metabolism and elevates lactate. Conditions such as septic, cardiogenic, and hypovolemic shock are often associated with type A lactic acidosis. Type B lactic acidosis, by contrast, encompasses all nonhypoperfusion-related elevations in lactate, including those due to malignancy, ethanol intoxication, and medications.7,8
In this case, the diagnosis was elusive because the patient had multiple comorbidities. His history included COPD, which is associated with elevated lactate levels.5 However, his initial laboratory workup showed no anion gap, indicating the absence of an underlying acidotic process on admission. Because the patient was admitted for pneumonia, a known infectious source, and then developed an acute elevation in lactate, sepsis had to be excluded, and it was effectively ruled out. The patient also reported alcohol use during his admission, which confounded his presentation but was unlikely to account for his lactic acidosis, given the unremarkable β-hydroxybutyrate and serum alcohol levels.
Furthermore, the patient had an enlarged thoracic aortic aneurysm and remained hypertensive above the BP goal of 130/80 mm Hg for patients with thoracoabdominal aneurysms.9 In this context, lactic acidosis with hemodynamic instability might have indicated tissue hypoperfusion secondary to a ruptured aneurysm or aortic dissection. Fortunately, the patient did not manifest any signs or symptoms suggestive of rupture or dissection. Last, on discontinuation of the nebulizer therapy, the patient’s hyperlactatemia resolved within 24 hours, strongly supporting albuterol-induced lactic acidosis as the diagnosis.
As a β-agonist, albuterol stimulates β-adrenergic receptors, which increases lipolysis and glycolysis. These biochemical reactions increase production of pyruvate, which feeds both aerobic and anaerobic metabolism. As pyruvate accumulates, the capacity for aerobic metabolism is saturated and the excess is shunted toward anaerobic metabolism, producing elevated lactate levels and lactic acidosis.8,10,11
Nevertheless, albuterol-induced lactic acidosis is a diagnosis of exclusion.6 Because hyperlactatemia and lactic acidosis are associated with increased morbidity and mortality, it is prudent to first rule out life-threatening etiologies, particularly in an older patient with multiple comorbidities. This case also highlights that acutely ill patients receiving scheduled albuterol nebulization therapy may be more susceptible to hyperlactatemia and lactic acidosis as acute AEs.
Conclusions
We encourage heightened clinical suspicion for albuterol-induced lactic acidosis in acutely ill patients with COPD receiving albuterol therapy once life-threatening etiologies have been ruled out, and consideration of discontinuing scheduled nebulizer therapy when this diagnosis is suspected.
1. Global Initiative for Chronic Obstructive Lung Disease. Pocket Guide to COPD Diagnosis, Management, and Prevention: A Guide for Health Care Professionals (2020 Report). Global Initiative for Chronic Obstructive Lung Disease, Inc; 2020. Accessed April 16, 2021. https://goldcopd.org/wp-content/uploads/2019/12/GOLD-2020-FINAL-ver1.2-03Dec19_WMV.pdf
2. Jat KR, Khairwa A. Levalbuterol versus albuterol for acute asthma: a systematic review and meta-analysis. Pulm Pharmacol Ther. 2013;26(2):239-248. doi:10.1016/j.pupt.2012.11.003
3. Ahrens RC, Smith GD. Albuterol: an adrenergic agent for use in the treatment of asthma pharmacology, pharmacokinetics and clinical use. Pharmacotherapy. 1984;4(3):105-121. doi:10.1002/j.1875-9114.1984.tb03330.x
4. Lewis LM, Ferguson I, House SL, et al. Albuterol administration is commonly associated with increases in serum lactate in patients with asthma treated for acute exacerbation of asthma. Chest. 2014;145(1):53-59. doi:10.1378/chest.13-0930
5. Liedtke AG, Lava SAG, Milani GP, et al. Selective β2-adrenoceptor agonists and relevant hyperlactatemia: systematic review and meta-analysis. J Clin Med. 2019;9(1):71. doi:10.3390/jcm9010071
6. Smith ZR, Horng M, Rech MA. Medication-induced hyperlactatemia and lactic acidosis: a systematic review of the literature. Pharmacotherapy. 2019;39(9):946-963. doi:10.1002/phar.2316
7. Hockstein M, Diercks D. Significant lactic acidosis from albuterol. Clin Pract Cases Emerg Med. 2018;2(2):128-131. doi:10.5811/cpcem.2018.1.36024
8. Foucher CD, Tubben RE. Lactic acidosis. In: StatPearls. StatPearls Publishing; 2020. Updated November 21, 2020. Accessed April 16, 2021. https://www.ncbi.nlm.nih.gov/books/NBK470202
9. Aronow WS. Treatment of thoracic aortic aneurysm. Ann Transl Med. 2018;6(3):66. doi:10.21037/atm.2018.01.07
10. Lau E, Mazer J, Carino G. Inhaled β-agonist therapy and respiratory muscle fatigue as under-recognised causes of lactic acidosis. BMJ Case Rep. 2013;2013:bcr2013201015. Published October 14, 2013. doi:10.1136/bcr-2013-201015
11. Ramakrishna KN, Virk J, Gambhir HS. Albuterol-induced lactic acidosis. Am J Ther. 2019;26(5):e635-e636. doi:10.1097/MJT.0000000000000843
Reduction of Opioid Use With Enhanced Recovery Program for Total Knee Arthroplasty
Total knee arthroplasty (TKA) is one of the most common surgical procedures in the United States. The volume of TKAs is projected to substantially increase over the next 30 years.1 Adequate pain control after TKA is critically important to achieve early mobilization, shorten the length of hospital stay, and reduce postoperative complications. The evolution and inclusion of multimodal pain-management protocols have had a major impact on the clinical outcomes for TKA patients.2,3
Pain-management protocols typically use several modalities to control pain throughout the perioperative period. Multimodal opioid and nonopioid oral medications are administered during the pre- and postoperative periods and often involve a combination of acetaminophen, gabapentinoids, and cyclooxygenase-2 inhibitors.4 Peripheral nerve blocks and central neuraxial blockades are widely used and have been shown to be effective in reducing postoperative pain as well as overall opioid consumption.5,6 Finally, intraoperative periarticular injections have been shown to reduce postoperative pain and opioid consumption as well as improve patient satisfaction scores.7-9 These strategies are routinely used in TKA with the goal of minimizing overall opioid consumption and adverse events, reducing perioperative complications, and improving patient satisfaction.
Periarticular injections during surgery are an integral part of multimodal pain-management protocols, though no consensus has been reached on the proper injection formulation or technique. Liposomal bupivacaine is a local anesthetic depot formulation approved by the US Food and Drug Administration for surgical patients. Reported results regarding the efficacy of liposomal bupivacaine injection in patients undergoing TKA have been discrepant. Several studies have reported no added benefit of liposomal bupivacaine compared with a mixture of local anesthetics,10,11 whereas other studies have demonstrated superior pain relief.12 Many factors may contribute to the discrepant data, such as injection technique, infiltration volume, and the assessment tools used to measure efficacy and safety.13
The US Department of Veterans Affairs (VA) Veterans Health Administration (VHA) provides care to a large patient population. Many of the patients in that system have high-risk profiles, including medical comorbidities; exposure to chronic pain and opioid use; and psychological and central nervous system injuries, including posttraumatic stress disorder and traumatic brain injury. Hadlandsmyth and colleagues reported increased risk of prolonged opioid use in VA patients after TKA surgery.14 They found that 20% of the patients were still on long-term opioids more than 90 days after TKA.
The purpose of this study was to evaluate the efficacy of implementing a comprehensive enhanced recovery after surgery (ERAS) protocol at a regional VA medical center. We hypothesized that the addition of liposomal bupivacaine to a multidisciplinary ERAS protocol would reduce the length of hospital stay and opioid consumption without any deleterious effects on postoperative outcomes.
Methods
A postoperative recovery protocol was implemented in 2013 at VA North Texas Health Care System (VANTHCS) in Dallas, but many patients continued to have difficulty achieving satisfactory pain control and experienced prolonged lengths of stay and extended postoperative opioid consumption. A multimodal pain-management protocol and a multidisciplinary perioperative case-management protocol were therefore implemented in 2016 to further improve the clinical outcomes of patients undergoing TKA. The senior surgeon (JM) organized a multidisciplinary team of health care providers to identify and implement potential solutions. This task force met weekly and consisted of surgeons, anesthesiologists, certified registered nurse anesthetists, orthopedic physician assistants, a nurse coordinator, a physical therapist, and an occupational therapist, as well as operating room, postanesthesia care unit (PACU), and surgical ward nurses. In addition, staff from home health agencies and social services attended the weekly meetings.
We conducted a retrospective review of all patients who had undergone unilateral TKA from 2013 to 2018 at VANTHCS. This was a consecutive, unselected cohort. All patients were under the care of a single surgeon using identical implant systems and identical surgical techniques. This study was approved by the institutional review board at VANTHCS. Patients were divided into 2 distinct and consecutive cohorts. The standard of care (SOC) group included all patients from 2013 to 2016. The ERAS group included all patients after the institution of the standardized protocol until the end of the study period.
Data on patient demographics, American Society of Anesthesiologists risk classification, and preoperative functional status were extracted. Anesthesia techniques included either general endotracheal anesthesia or subarachnoid block with monitored anesthesia care. The quantities of opioids given during surgery, in the PACU, during the inpatient stay, as discharge prescriptions, and as refills of narcotic prescriptions up to 3 months postsurgery were recorded. All opioids were converted into morphine equivalent dosages (MED) so that they could be analyzed consistently using the statistical methods described below.15 The VHA is a closed health care delivery system; therefore, all prescriptions ordered by surgical providers were recorded in the electronic health record.
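As an illustration of the conversion step described above, the sketch below sums a mixed set of opioid doses as oral morphine equivalents. The conversion factors are commonly published approximations, not necessarily the values in the synthesis the authors cite (reference 15), and the drug names and doses are hypothetical.

```python
# Illustrative oral-morphine-equivalent factors; published values vary by
# source, so treat these as assumptions rather than the study's own table.
ORAL_MED_FACTORS = {
    "hydrocodone_po": 1.0,     # oral hydrocodone ~ oral morphine, mg for mg
    "oxycodone_po": 1.5,
    "hydromorphone_iv": 20.0,  # IV hydromorphone is far more potent
}

def total_med(doses: list[tuple[str, float]]) -> float:
    """Sum a list of (drug, mg) doses as oral morphine equivalents."""
    return sum(mg * ORAL_MED_FACTORS[drug] for drug, mg in doses)

# Hypothetical course: 0.75 mg IV hydromorphone in the PACU plus two
# 10 mg oxycodone tablets on the ward.
print(total_med([("hydromorphone_iv", 0.75),
                 ("oxycodone_po", 10),
                 ("oxycodone_po", 10)]))  # 45.0 mg MED
```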
ERAS Protocol
The SOC cohort was predominantly managed with general endotracheal anesthesia; the ERAS group was predominantly managed with subarachnoid blocks (Table 1). Preoperatively under the ERAS protocol, patients were administered oral gabapentin 300 mg, acetaminophen 650 mg, and oxycodone 20 mg, as well as IV ondansetron 4 mg. Intraoperatively, minimal opioids were used. In the PACU, patients received hydromorphone 0.25 mg IV as needed every 15 minutes, up to 1 mg/h; the nursing staff was trained to titrate the medication using visual analog pain scale scores. During the inpatient stay, patients received 1 g IV acetaminophen every 6 hours for 3 doses and oral acetaminophen as needed thereafter. Other medications in the multimodal pain-management protocol included gabapentin 300 mg twice daily, meloxicam 15 mg daily, and oxycodone 10 mg every 4 hours as needed. Rescue medication for insufficient pain relief was hydromorphone 0.25 mg IV every 15 minutes for a visual analog pain scale score > 8. On discharge, patients received a prescription for 30 tablets of hydrocodone 10 mg.
Periarticular Injections
Intraoperatively, all patients in the SOC and ERAS groups received periarticular injections; liposomal bupivacaine was added to the standard injection mixture for the ERAS group. For the SOC group, a total volume of 100 mL was divided into 10 separate 10-mL syringes; for the ERAS group, a total volume of 140 mL was divided into 14 separate 10-mL syringes. The SOC group injections were performed with an 18-gauge needle, and the periarticular soft tissues were grossly infiltrated. The ERAS group injections were performed with more attention to anatomical detail: injection sites included the posterior joint capsule, the medial compartment, the lateral compartment, the tibial fat pad, the quadriceps and patellar tendons, the femoral and tibial periosteum circumferentially, and the anterior joint capsule. Each needle-stick in the ERAS group delivered 1 to 1.5 mL through a 22-gauge needle to each compartment of the knee.
Outcome Variable
The primary outcome measure was total oral MED intraoperatively, in the PACU, during the hospital inpatient stay, in the hospital discharge prescription, and during the 3-month period after hospital discharge. Incidence of nausea and vomiting during the inpatient stay and any narcotic use at 6 months postsurgery were secondary binary outcomes.
Statistical Analysis
Demographic data and the clinical characteristics for the entire group were described using the sample mean and SD for continuous variables and the frequency and percentage for categorical variables. Differences between the 2 cohorts were analyzed using a 2-independent-sample t test and Fisher exact test.
Total oral MED for each phase of care was estimated using a separate Poisson model because the data were not normally distributed. A log-linear regression model was used to evaluate the main effect of the ERAS vs the SOC cohort on total oral MED used. Finally, separate multiple logistic regression models were used to estimate the odds of postoperative nausea and vomiting and of narcotic use at 6 months postsurgery between the cohorts; the adjusted odds ratio (OR) was estimated from each logistic model. Age, sex, body mass index, preoperative functional independence score, narcotic use within 3 months prior to surgery, anesthesia type (subarachnoid block with monitored anesthesia care vs general endotracheal anesthesia), and postoperative complications (yes/no) were included as covariates in each model. Length of hospital stay and the above-mentioned factors were also included as covariates in the models estimating total oral MED during the hospital stay, on hospital discharge, during the 3-month period after hospital discharge, and at 6 months following hospital discharge.
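For readers who prefer a worked sketch of this modeling approach, the following Python/statsmodels code fits the same kind of log-linear Poisson model and adjusted logistic model. The analyses were run in SAS (see below), so this is an equivalent formulation rather than the authors’ code, and all column names are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("tka_cohort.csv")  # hypothetical extract, one row per TKA

# Log-linear Poisson model for total inpatient oral MED; the exponentiated
# coefficient on C(group) is the adjusted rate ratio for ERAS vs SOC.
poisson_fit = smf.glm(
    formula=("med_inpatient ~ C(group) + age + C(sex) + bmi"
             " + preop_function + preop_narcotics + C(anesthesia_type)"
             " + C(complication) + los_hours"),
    data=df,
    family=sm.families.Poisson(),
).fit()

# Multiple logistic regression for a binary secondary outcome; the
# exponentiated coefficient is the adjusted odds ratio.
logit_fit = smf.logit(
    "narcotics_6mo ~ C(group) + age + C(sex) + bmi + preop_function"
    " + preop_narcotics + C(anesthesia_type) + C(complication)",
    data=df,
).fit()
print(poisson_fit.summary())
print(logit_fit.summary())
```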
Statistical analysis was done using SAS version 9.4. The level of significance was set at α = 0.05 (2 tailed), and we implemented the false discovery rate (FDR) procedure to control false positives over multiple tests.16
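The FDR values reported in the Results come from this step-up procedure (reference 16). A minimal sketch, assuming a small set of p-values from the pairwise comparisons:

```python
def benjamini_hochberg(pvals: list[float], q: float = 0.05) -> list[bool]:
    """Benjamini-Hochberg step-up: return True where H0 is rejected at FDR q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k with p_(k) <= (k / m) * q ...
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k = rank
    # ... and reject the k smallest p-values.
    rejected = [False] * m
    for rank, i in enumerate(order, start=1):
        rejected[i] = rank <= k
    return rejected

# P values like those reported below: every comparison but the .29 one survives.
print(benjamini_hochberg([.0001, .0002, .0001, .0001, .29, .013, .019]))
```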
Results
Two hundred forty-nine patients underwent 296 elective unilateral TKAs from 2013 through 2018. Thirty-one patients had both knees replaced in staged unilateral procedures under the SOC protocol, and 5 patients had both knees replaced under the ERAS protocol. Eleven of the patients who eventually had both knees replaced had 1 operation under each protocol. The SOC group included 196 TKAs, and the ERAS group included 100 TKAs. Of the 196 SOC patients, 94% were male; the mean age was 68.2 years (range, 48-86), and the length of hospital stay ranged from 36.6 to 664.3 hours. Of the 100 ERAS patients, 96% were male (Table 2); the mean age was 66.7 years (range, 48-85), and the length of hospital stay ranged from 12.5 to 45 hours.
Perioperative Opioid Use
Of the SOC patients, 99.0% received narcotics intraoperatively (range, 0-198 mg MED), and 74.5% received narcotics during PACU recovery (range, 0-141 mg MED). The total oral MED during the hospital stay for the SOC patients ranged from 10 to 2,946 mg. Of the ERAS patients, 86% received no narcotics during surgery (range, 0-110 mg MED), and 98% received no narcotics during PACU recovery (range, 0-65 mg MED). The total oral MED during the hospital stay for the ERAS patients ranged from 10 to 240 mg.
The MED used was significantly lower for the ERAS patients than for the SOC patients during surgery (10.5 mg vs 57.4 mg, P = .0001, FDR = .0002), in the PACU (1.3 mg vs 13.6 mg, P = .0002, FDR = .0004), during the inpatient stay (66.7 mg vs 169.5 mg, P = .0001, FDR = .0002), and on hospital discharge (419.3 mg vs 776.7 mg, P = .0001, FDR = .0002). However, there was no significant difference in the total MED prescriptions filled between patients on the ERAS protocol and those who received SOC during the 3-month period after hospital discharge (858.3 mg vs 1126.1 mg, P = .29, FDR = .29)(Table 3).
Finally, the logistic regression analysis, adjusting for the covariates, demonstrated that ERAS patients were less likely than SOC patients to be taking narcotics at 6 months following hospital discharge (OR, 0.23; P = .013; FDR = .018) and less likely to have postoperative nausea and vomiting (OR, 0.18; P = .019; FDR = .02). There was no statistically significant difference in complication rates between the SOC and ERAS groups (11.2% and 5.0%, respectively; overall, 9.1%; P = .09)(Table 4).
Discussion
Orthopedic surgery has been associated with long-term opioid use and misuse. Orthopedic surgeons are frequently among the highest prescribers of narcotics. According to Volkow and colleagues, orthopedic surgeons were the fourth largest prescribers of opioids in 2009, behind primary care physicians, internists, and dentists.17 The opioid crisis in the United States is well recognized. In 2017, > 70,000 deaths occurred due to drug overdoses, with 68% involving a prescription or illicit opioid. The Centers for Disease Control and Prevention has estimated a total economic burden of $78.5 billion per year as a direct result of misused prescribed opioids.18 This includes the cost of health care, lost productivity, addiction treatment, and the impact on the criminal justice system.
The current opioid crisis places further emphasis on opioid-reducing or opioid-sparing techniques in patients undergoing TKA. The efficacy of liposomal bupivacaine for intraoperative periarticular injection, and whether it should be included in multimodal protocols, is debated in the literature. Researchers have argued that liposomal bupivacaine is not superior to regular bupivacaine and that its increased cost is therefore not justified.19,20 A meta-analysis by Zhao and colleagues showed no difference in pain control and functional recovery between liposomal bupivacaine and control.21 In a randomized clinical trial, Schroer and colleagues matched liposomal bupivacaine against regular bupivacaine and found no difference in pain scores and similar narcotic use during hospitalization.22
Other studies evaluating liposomal bupivacaine have demonstrated postoperative benefits in pain relief and reduced opioid consumption.23 In a multicenter randomized controlled trial, Barrington and colleagues noted improved pain control at 6 and 12 hours after surgery with liposomal bupivacaine as a periarticular injection vs ropivacaine, though results were similar when compared with intrathecal morphine.24 Snyder and colleagues reported higher patient satisfaction with pain control and overall experience, as well as decreased MED consumption in the PACU and on postoperative days 0 to 2, when using liposomal bupivacaine vs a multidrug cocktail for periarticular injection.25
The PILLAR trial, an industry-sponsored study, was designed to compare the effects of local infiltration anesthesia with and without liposomal bupivacaine, with emphasis on a meticulous, standardized infiltration technique. In our study, we used a similar technique with an expanded injection volume of 140 mL delivered throughout the knee in a series of 14 syringes; each needle-stick delivered 1 to 1.5 mL through a 22-gauge needle to each compartment of the knee. Infiltration technique has varied across the literature on periarticular injections.
In our experience, a standard infiltration technique is critical to the effective delivery of liposomal bupivacaine throughout all compartments of the knee and to obtaining reproducible pain control. The importance of injection technique cannot be overemphasized, and variations can be seen in studies published to date.26 Well-designed trials are needed to address this key component.
There have been limited data focused on the veteran population regarding postoperative pain-management strategies and recovery pathways with or without liposomal bupivacaine. In a retrospective review, Sakamoto and colleagues found that VA patients undergoing TKA had reduced opioid use in the first 24 hours after primary TKA with the use of intraoperative liposomal bupivacaine.27 The VA population has been shown to be at high risk for opioid misuse. The prevalence of comorbidities such as traumatic brain injury, posttraumatic stress disorder, and depression in the VA population also places these patients at risk for polypharmacy of central nervous system–acting medications.28 These risks emphasize the importance of multimodal strategies, which can limit or eliminate narcotics in the perioperative period. The implementation of our ERAS protocol reduced opioid use intraoperatively, in the PACU, and during the inpatient hospital stay.
While the financial implications of our recovery protocol were not a primary focus of this study, the protocol has notable benefits for overall inpatient cost to the VHA. According to the Health Economics Resource Center, the average daily cost of a VA inpatient surgical bed increased from $4,831 in 2013 to $6,220 in 2018.29 The mean reduction in length of stay between our cohorts was 44.5 hours, which translates to a substantial financial savings per patient after protocol implementation. A more detailed financial analysis would be needed to evaluate the impact of other aspects of our protocol, such as the elimination of patient-controlled analgesia and the reduction in total narcotics prescribed in the postoperative global period.
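A back-of-the-envelope version of that savings calculation, using only the 2018 HERC bed-day cost cited above; daily costs are averages, so the figure is illustrative:

```python
daily_cost_2018 = 6220       # $ per VA inpatient surgical bed-day (HERC, 2018)
los_reduction_hours = 44.5   # mean length-of-stay reduction between cohorts

savings_per_patient = los_reduction_hours / 24 * daily_cost_2018
print(f"~${savings_per_patient:,.0f} per patient")  # ~$11,533
```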
Limitations
The limitations of this study include its retrospective design. Because the VHA patient population is older and predominantly male compared with the general population, the study may be subject to selection bias, and the findings may be less applicable to populations that include more women. In a recent study by Perruccio and colleagues, sex was found to moderate the effects of comorbidities, low back pain, and depressive symptoms on postoperative pain in patients undergoing TKA.30
Although we cannot know with certainty whether filled outpatient narcotic prescriptions were used for pain control, it is reasonable to assume that patients dealing with continued postoperative or chronic pain will fill these prescriptions or seek refills. The data on prescriptions and refills in the 3-month postoperative period include all narcotic prescriptions filled by any VHA prescriber and are not limited to our orthopedic team. We were not able to obtain accurate pill counts for discharge prescriptions or subsequent refills throughout the VA system; we were able to report only on total prescriptions filled in the first 3 months following TKA.
We calculated total oral MEDs to better understand the amount of narcotics being distributed throughout our population of patients. We believe this provides important information about the overall narcotic burden in the veteran population. There was no significant difference between the SOC and ERAS groups regarding oral MED prescribed in the 3-month postoperative period; however, at the 6-month follow-up visit, only 16% of patients in the ERAS group were taking any type of narcotic vs 37.2% in the SOC group (P = .0002).
Conclusions
A multidisciplinary ERAS protocol implemented at VANTHCS was effective in reducing length of stay and opioid burden throughout all phases of surgical care in patients undergoing primary TKA. Patient and nursing education appear to be critical components of a successful multimodal pain protocol. Reducing the narcotic burden has valuable financial and medical benefits in this at-risk population.
1. Inacio MCS, Paxton EW, Graves SE, Namba RS, Nemes S. Projected increase in total knee arthroplasty in the United States - an alternative projection model. Osteoarthritis Cartilage. 2017;25(11):1797-1803. doi:10.1016/j.joca.2017.07.022
2. Chou R, Gordon DB, de Leon-Casasola OA, et al. Management of Postoperative pain: a clinical practice guideline from the American Pain Society, the American Society of Regional Anesthesia and Pain Medicine, and the American Society of Anesthesiologists’ Committee on Regional Anesthesia, Executive Committee, and Administrative Council [published correction appears in J Pain. 2016 Apr;17(4):508-10. Dosage error in article text]. J Pain. 2016;17(2):131-157. doi:10.1016/j.jpain.2015.12.008
3. Moucha CS, Weiser MC, Levin EJ. Current Strategies in anesthesia and analgesia for total knee arthroplasty. J Am Acad Orthop Surg. 2016;24(2):60-73. doi:10.5435/JAAOS-D-14-00259
4. Parvizi J, Miller AG, Gandhi K. Multimodal pain management after total joint arthroplasty. J Bone Joint Surg Am. 2011;93(11):1075-1084. doi:10.2106/JBJS.J.01095
5. Jenstrup MT, Jæger P, Lund J, et al. Effects of adductor-canal-blockade on pain and ambulation after total knee arthroplasty: a randomized study. Acta Anaesthesiol Scand. 2012;56(3):357-364. doi:10.1111/j.1399-6576.2011.02621.x
6. Macfarlane AJ, Prasad GA, Chan VW, Brull R. Does regional anesthesia improve outcome after total knee arthroplasty?. Clin Orthop Relat Res. 2009;467(9):2379-2402. doi:10.1007/s11999-008-0666-9
7. Parvataneni HK, Shah VP, Howard H, Cole N, Ranawat AS, Ranawat CS. Controlling pain after total hip and knee arthroplasty using a multimodal protocol with local periarticular injections: a prospective randomized study. J Arthroplasty. 2007;22(6)(suppl 2):33-38. doi:10.1016/j.arth.2007.03.034
8. Busch CA, Shore BJ, Bhandari R, et al. Efficacy of periarticular multimodal drug injection in total knee arthroplasty. A randomized trial. J Bone Joint Surg Am. 2006;88(5):959-963. doi:10.2106/JBJS.E.00344
9. Lamplot JD, Wagner ER, Manning DW. Multimodal pain management in total knee arthroplasty: a prospective randomized controlled trial. J Arthroplasty. 2014;29(2):329-334. doi:10.1016/j.arth.2013.06.005
10. Hyland SJ, Deliberato DG, Fada RA, Romanelli MJ, Collins CL, Wasielewski RC. Liposomal bupivacaine versus standard periarticular injection in total knee arthroplasty with regional anesthesia: a prospective randomized controlled trial. J Arthroplasty. 2019;34(3):488-494. doi:10.1016/j.arth.2018.11.026
11. Barrington JW, Lovald ST, Ong KL, Watson HN, Emerson RH Jr. Postoperative pain after primary total knee arthroplasty: comparison of local injection analgesic cocktails and the role of demographic and surgical factors. J Arthroplasty. 2016;31(9) (suppl):288-292. doi:10.1016/j.arth.2016.05.002
12. Bramlett K, Onel E, Viscusi ER, Jones K. A randomized, double-blind, dose-ranging study comparing wound infiltration of DepoFoam bupivacaine, an extended-release liposomal bupivacaine, to bupivacaine HCl for postsurgical analgesia in total knee arthroplasty. Knee. 2012;19(5):530-536. doi:10.1016/j.knee.2011.12.004
13. Mont MA, Beaver WB, Dysart SH, Barrington JW, Del Gaizo D. Local infiltration analgesia with liposomal bupivacaine improves pain scores and reduces opioid use after total knee arthroplasty: results of a randomized controlled trial. J Arthroplasty. 2018;33(1):90-96. doi:10.1016/j.arth.2017.07.024
14. Hadlandsmyth K, Vander Weg MW, McCoy KD, Mosher HJ, Vaughan-Sarrazin MS, Lund BC. Risk for prolonged opioid use following total knee arthroplasty in veterans. J Arthroplasty. 2018;33(1):119-123. doi:10.1016/j.arth.2017.08.022
15. Nielsen S, Degenhardt L, Hoban B, Gisev N. A synthesis of oral morphine equivalents (OME) for opioid utilisation studies. Pharmacoepidemiol Drug Saf. 2016;25(6):733-737. doi:10.1002/pds.3945
16. Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Statist Soc B. 1995;57(1):289-300. doi:10.1111/j.2517-6161.1995.tb02031.x
17. Volkow ND, McLellan TA, Cotto JH, Karithanom M, Weiss SRB. Characteristics of opioid prescriptions in 2009. JAMA. 2011;305(13):1299-1301. doi:10.1001/jama.2011.401
18. Scholl L, Seth P, Kariisa M, Wilson N, Baldwin G. Drug and opioid-involved overdose deaths - United States, 2013-2017. MMWR Morb Mortal Wkly Rep. 2018;67(5152):1419-1427. doi:10.15585/mmwr.mm675152e1
19. Pichler L, Poeran J, Zubizarreta N, et al. Liposomal bupivacaine does not reduce inpatient opioid prescription or related complications after knee arthroplasty: a database analysis. Anesthesiology. 2018;129(4):689-699. doi:10.1097/ALN.0000000000002267
20. Jain RK, Porat MD, Klingenstein GG, Reid JJ, Post RE, Schoifet SD. The AAHKS Clinical Research Award: liposomal bupivacaine and periarticular injection are not superior to single-shot intra-articular injection for pain control in total knee arthroplasty. J Arthroplasty. 2016;31(9)(suppl):22-25. doi:10.1016/j.arth.2016.03.036
21. Zhao B, Ma X, Zhang J, Ma J, Cao Q. The efficacy of local liposomal bupivacaine infiltration on pain and recovery after total joint arthroplasty: a systematic review and meta-analysis of randomized controlled trials. Medicine (Baltimore). 2019;98(3):e14092. doi:10.1097/MD.0000000000014092
22. Schroer WC, Diesfeld PG, LeMarr AR, Morton DJ, Reedy ME. Does extended-release liposomal bupivacaine better control pain than bupivacaine after total knee arthroplasty (TKA)? A prospective, randomized clinical trial. J Arthroplasty. 2015;30(9)(suppl):64-67. doi:10.1016/j.arth.2015.01.059
23. Ma J, Zhang W, Yao S. Liposomal bupivacaine infiltration versus femoral nerve block for pain control in total knee arthroplasty: a systematic review and meta-analysis. Int J Surg. 2016;36(pt A):44-55. doi:10.1016/j.ijsu.2016.10.007
24. Barrington JW, Emerson RH, Lovald ST, Lombardi AV, Berend KR. No difference in early analgesia between liposomal bupivacaine injection and intrathecal morphine after TKA. Clin Orthop Relat Res. 2017;475(1):94-105. doi:10.1007/s11999-016-4931-z
25. Snyder MA, Scheuerman CM, Gregg JL, Ruhnke CJ, Eten K. Improving total knee arthroplasty perioperative pain management using a periarticular injection with bupivacaine liposomal suspension. Arthroplast Today. 2016;2(1):37-42. doi:10.1016/j.artd.2015.05.005
26. Kuang MJ, Du Y, Ma JX, He W, Fu L, Ma XL. The efficacy of liposomal bupivacaine using periarticular injection in total knee arthroplasty: a systematic review and meta-analysis. J Arthroplasty. 2017;32(4):1395-1402. doi:10.1016/j.arth.2016.12.025
27. Sakamoto B, Keiser S, Meldrum R, Harker G, Freese A. Efficacy of liposomal bupivacaine infiltration on the management of total knee arthroplasty. JAMA Surg. 2017;152(1):90-95. doi:10.1001/jamasurg.2016.3474
28. Collett GA, Song K, Jaramillo CA, Potter JS, Finley EP, Pugh MJ. Prevalence of central nervous system polypharmacy and associations with overdose and suicide-related behaviors in Iraq and Afghanistan war veterans in VA care 2010-2011. Drugs Real World Outcomes. 2016;3(1):45-52. doi:10.1007/s40801-015-0055-0
29. US Department of Veterans Affairs. HERC inpatient average cost data. Updated April 2, 2021. Accessed April 16, 2021. https://www.herc.research.va.gov/include/page.asp?id=inpatient#herc-inpat-avg-cost
30. Perruccio AV, Fitzpatrick J, Power JD, et al. Sex-modified effects of depression, low back pain, and comorbidities on pain after total knee arthroplasty for osteoarthritis. Arthritis Care Res (Hoboken). 2020;72(8):1074-1080. doi:10.1002/acr.24002
Statistical analysis was done using SAS version 9.4. The level of significance was set at α = 0.05 (2 tailed), and we implemented the false discovery rate (FDR) procedure to control false positives over multiple tests.16
Results
Two hundred forty-nine patients had 296 elective unilateral TKAs in this study from 2013 through 2018. Thirty-one patients had both unilateral TKAs under the SOC protocol; 5 patients had both unilateral TKAs under the ERAS protocol. Eleven of the patients who eventually had both knees replaced had 1 operation under each protocol The SOC group included 196 TKAs and the ERAS group included 100 TKAs. Of the 196 SOC patients, 94% were male. The mean age was 68.2 years (range, 48-86). The length of hospital stay ranged from 36.6 to 664.3 hours. Of the 100 ERAS patients, 96% were male (Table 2). The mean age was 66.7 years (range, 48-85). The length of hospital stay ranged from 12.5 to 45 hours.
Perioperative Opioid Use
Of the SOC patients, 99.0% received narcotics intraoperatively (range, 0-198 mg MED), and 74.5% received narcotics during PACU recovery (range, 0-141 mg MED). The total oral MED during the hospital stay for the SOC patients ranged from 10 to 2,946 mg. Of the ERAS patients, 86% received no narcotics during surgery (range, 0-110 mg MED), and 98% received no narcotics during PACU recovery (range, 0-65 mg MED). The total oral MED during the hospital stay for the ERAS patients ranged from 10 to 240 mg.
The MED used was significantly lower for the ERAS patients than it was for the SOC patients during surgery (10.5 mg vs 57.4 mg, P = .0001, FDR = .0002) and in the PACU (1.3 mg vs 13.6 mg, P = .0002, FDR = .0004), during the inpatient stay (66.7 mg vs 169.5 mg, P = .0001, FDR = .0002), and on hospital discharge (419.3 mg vs 776.7 mg, P = .0001, FDR = .0002). However, there was no significant difference in the total MED prescriptions filled between patients on the ERAS protocol vs those who received SOC during the 3-month period after hospital discharge (858.3 mg vs 1126.1 mg, P = .29, FDR = .29)(Table 3).
Finally, the logistic regression analysis, adjusting for the covariates demonstrated that the ERAS patients were less likely to take narcotics at 6 months following hospital discharge (OR, 0.23; P = .013; FDR = .018) and less likely to have postoperative nausea and vomiting (OR, 0.18; P = .019; FDR = .02) than SOC patients. There was no statistically significant difference between complication rates for the SOC and ERAS groups, which were 11.2% and 5.0%, respectively, with an overall complication rate of 9.1% (P = .09)(Table 4).
Discussion
Orthopedic surgery has been associated with long-term opioid use and misuse. Orthopedic surgeons are frequently among the highest prescribers of narcotics. According to Volkow and colleagues, orthopedic surgeons were the fourth largest prescribers of opioids in 2009, behind primary care physicians, internists, and dentists.17 The opioid crisis in the United States is well recognized. In 2017, > 70,000 deaths occurred due to drug overdoses, with 68% involving a prescription or illicit opioid. The Centers for Disease Control and Prevention has estimated a total economic burden of $78.5 billion per year as a direct result of misused prescribed opioids.18 This includes the cost of health care, lost productivity, addiction treatment, and the impact on the criminal justice system.
The current opioid crisis places further emphasis on opioid-reducing or sparing techniques in patients undergoing TKA. The use of liposomal bupivacaine for intraoperative periarticular injection is debated in the literature regarding its efficacy and whether it should be included in multimodal protocols. Researchers have argued that liposomal bupivacaine is not superior to regular bupivacaine and because of its increased cost is not justified.19,20 A meta-analysis from Zhao and colleagues showed no difference in pain control and functional recovery when comparing liposomal bupivacaine and control.21 In a randomized clinical trial, Schroer and colleagues matched liposomal bupivacaine against regular bupivacaine and found no difference in pain scores and similar narcotic use during hospitalization.22
Studies evaluating liposomal bupivacaine have demonstrated postoperative benefits in pain relief and potential opioid consumption.23 In a multicenter randomized controlled trial, Barrington and colleagues noted improved pain control at 6 and 12 hours after surgery with liposomal bupivacaine as a periarticular injection vs ropivacaine, though results were similar when compared with intrathecal morphine.24 Snyder and colleagues reported higher patient satisfaction in pain control and overall experience as well as decreased MED consumption in the PACU and on postoperative days 0 to 2 when using liposomal bupivacaine vs a multidrug cocktail for periarticular injection.25
The PILLAR trial, an industry-sponsored study, was designed to compare the effects of local infiltration anesthesia with and without liposomal bupivacaine with emphasis on a meticulous standardized infiltration technique. In our study, we used a similar technique with an expanded volume of injection solution to 140 ml that was delivered throughout the knee in a series of 14 syringes. Each needle-stick delivered 1 to 1.5 ml through a 22-gauge needle to each compartment of the knee. Infiltration technique has varied among the literature focused on periarticular injections.
In our experience, a standard infiltration technique is critical to the effective delivery of liposomal bupivacaine throughout all compartments of the knee and to obtaining reproducible pain control. The importance of injection technique cannot be overemphasized, and variations can be seen in studies published to date.26 Well-designed trials are needed to address this key component.
There have been limited data focused on the veteran population regarding postoperative pain-management strategies and recovery pathways either with or without liposomal bupivacaine. In a retrospective review, Sakamoto and colleagues found VA patients undergoing TKA had reduced opioid use in the first 24 hours after primary TKA with the use of intraoperative liposomal bupivacaine.27 The VA population has been shown to be at high risk for opioid misuse. The prevalence of comorbidities such as traumatic brain injury, posttraumatic stress disorder, and depression in the VA population also places them at risk for polypharmacy of central nervous system–acting medications.28 This emphasizes the importance of multimodal strategies, which can limit or eliminate narcotics in the perioperative period. The implementation of our ERAS protocol reduced opioid use during intraoperative, PACU, and inpatient hospital stay.
While the financial implications of our recovery protocol were not a primary focus of this study, there are many notable benefits on the overall inpatient cost to the VHA. According to the Health Economics Resource Center, the average daily cost of stay while under VA care for an inpatient surgical bed increased from $4,831 in 2013 to $6,220 in 2018.29 Our reduction in length of stay between our cohorts is 44.5 hours, which translates to a substantial financial savings per patient after protocol implementation. A more detailed look at the financial aspect of our protocol would need to be performed to evaluate the financial impact of other aspects of our protocol, such as the elimination of patient-controlled anesthesia and the reduction in total narcotics prescribed in the postoperative global period.
Limitations
The limitations of this study include its retrospective study design. With the VHA patient population, it may be subject to selection bias, as the population is mostly older and predominantly male compared with that of the general population. This could potentially influence the efficacy of our protocol on a population of patients with more women. In a recent study by Perruccio and colleagues, sex was found to moderate the effects of comorbidities, low back pain, and depressive symptoms on postoperative pain in patients undergoing TKA.30
With regard to outpatient narcotic prescriptions, although we cannot fully know whether these filled prescriptions were used for pain control, it is a reasonable assumption that patients who are dealing with continued postoperative or chronic pain issues will fill these prescriptions or seek refills. It is important to note that the data on prescriptions and refills in the 3-month postoperative period include all narcotic prescriptions filled by any VHA prescriber and are not specifically limited to our orthopedic team. For outpatient narcotic use, we were not able to access accurate pill counts for any discharge prescriptions or subsequent refills that were given throughout the VA system. We were able to report on total prescriptions filled in the first 3 months following TKA.
We calculated total oral MEDs to better understand the amount of narcotics being distributed throughout our population of patients. We believe this provides important information about the overall narcotic burden in the veteran population. There was no significant difference between the SOC and ERAS groups regarding oral MED prescribed in the 3-month postoperative period; however, at the 6-month follow-up visit, only 16% of patients in the ERAS group were taking any type of narcotic vs 37.2% in the SOC group (P = .0002).
Conclusions
A multidisciplinary ERAS protocol implemented at VANTHCS was effective in reducing length of stay and opioid burden throughout all phases of surgical care in our patients undergoing primary TKA. Patient and nursing education seem to be critical components to the implementation of a successful multimodal pain protocol. Reducing the narcotic burden has valuable financial and medical benefits in this at-risk population.
Total knee arthroplasty (TKA) is one of the most common surgical procedures in the United States. The volume of TKAs is projected to substantially increase over the next 30 years.1 Adequate pain control after TKA is critically important to achieve early mobilization, shorten the length of hospital stay, and reduce postoperative complications. The evolution and inclusion of multimodal pain-management protocols have had a major impact on the clinical outcomes for TKA patients.2,3
Pain-management protocols typically use several modalities to control pain throughout the perioperative period. Multimodal opioid and nonopioid oral medications are administered during the pre- and postoperative periods and often involve a combination of acetaminophen, gabapentinoids, and cyclooxygenase-2 inhibitors.4 Peripheral nerve blocks and central neuraxial blockades are widely used and have been shown to be effective in reducing postoperative pain as well as overall opioid consumption.5,6 Finally, intraoperative periarticular injections have been shown to reduce postoperative pain and opioid consumption as well as improve patient satisfaction scores.7-9 These strategies are routinely used in TKA with the goal of minimizing overall opioid consumption and adverse events, reducing perioperative complications, and improving patient satisfaction.
Periarticular injections during surgery are an integral part of multimodal pain-management protocols, though no consensus has been reached on the proper injection formulation or technique. Liposomal bupivacaine is a local anesthetic depot formulation approved by the US Food and Drug Administration for surgical patients. Reported results on the efficacy of liposomal bupivacaine injection in patients undergoing TKA have been discrepant. Several studies have reported no added benefit of liposomal bupivacaine compared with a mixture of local anesthetics,10,11 while other studies have demonstrated superior pain relief.12 Many factors may contribute to the discrepant data, such as injection technique, infiltration volume, and the assessment tools used to measure efficacy and safety.13
The US Department of Veterans Affairs (VA) Veterans Health Administration (VHA) provides care to a large patient population. Many of the patients in that system have high-risk profiles, including medical comorbidities; exposure to chronic pain and opioid use; and psychological and central nervous system injuries, including posttraumatic stress disorder and traumatic brain injury. Hadlandsmyth and colleagues reported increased risk of prolonged opioid use in VA patients after TKA surgery.14 They found that 20% of the patients were still on long-term opioids more than 90 days after TKA.
The purpose of this study was to evaluate the efficacy of implementing a comprehensive enhanced recovery after surgery (ERAS) protocol at a regional VA medical center. We hypothesized that adding liposomal bupivacaine to a multidisciplinary ERAS protocol would reduce the length of hospital stay and opioid consumption without any deleterious effects on postoperative outcomes.
Methods
A postoperative recovery protocol was implemented in 2013 at VA North Texas Health Care System (VANTHCS) in Dallas, but many patients continued to have difficulty achieving satisfactory pain control and experienced prolonged lengths of stay and extended postoperative opioid consumption. A multimodal pain-management protocol and a multidisciplinary perioperative case-management protocol were therefore implemented in 2016 to further improve the clinical outcomes of patients undergoing TKA. The senior surgeon (JM) organized a multidisciplinary team of health care providers to identify and implement potential solutions. This task force met weekly and consisted of surgeons, anesthesiologists, certified registered nurse anesthetists, orthopedic physician assistants, a nurse coordinator, a physical therapist, and an occupational therapist, as well as operating room, postanesthesia care unit (PACU), and surgical ward nurses. Staff from home health agencies and social services also attended the weekly meetings.
We conducted a retrospective review of all patients who had undergone unilateral TKA from 2013 to 2018 at VANTHCS. This was a consecutive, unselected cohort. All patients were under the care of a single surgeon using identical implant systems and identical surgical techniques. This study was approved by the institutional review board at VANTHCS. Patients were divided into 2 distinct and consecutive cohorts. The standard of care (SOC) group included all patients from 2013 to 2016. The ERAS group included all patients after the institution of the standardized protocol until the end of the study period.
Data on patient demographics, American Society of Anesthesiologists risk classification, and preoperative functional status were extracted. Anesthesia techniques included either general endotracheal anesthesia or subarachnoid block with monitored anesthesia care. The quantities of opioids given during surgery, in the PACU, during the inpatient stay, as discharge prescriptions, and as refills of narcotic prescriptions up to 3 months postsurgery were recorded. All opioids were converted to oral morphine equivalent dosages (MED) for analysis with the statistical methods described below.15 The VHA is a closed health care delivery system; therefore, all prescriptions ordered by surgery providers were recorded in the electronic health record.
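To make the MED conversion concrete, below is a minimal sketch of the calculation. The drug names and conversion factors are illustrative assumptions (commonly cited oral morphine milligram equivalent factors), not the study's actual lookup table; the study used the synthesis of oral morphine equivalents by Nielsen and colleagues.15

```python
# Illustrative sketch of opioid-to-MED conversion. The factors below are
# commonly cited oral morphine milligram equivalents, shown for
# illustration only; the study used the Nielsen et al synthesis.
MED_FACTORS = {
    "morphine_oral": 1.0,       # reference drug
    "oxycodone_oral": 1.5,
    "hydrocodone_oral": 1.0,
    "hydromorphone_oral": 4.0,
    "tramadol_oral": 0.1,
}

def to_med(drug: str, dose_mg: float) -> float:
    """Convert one opioid dose (mg) to oral morphine equivalents."""
    return dose_mg * MED_FACTORS[drug]

def total_med(doses) -> float:
    """Sum oral MED over an iterable of (drug, dose_mg) records."""
    return sum(to_med(drug, mg) for drug, mg in doses)

# Example: a 10 mg oral oxycodone dose contributes 15 mg oral MED.
assert to_med("oxycodone_oral", 10) == 15.0
```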
ERAS Protocol
The SOC cohort was predominantly managed with general endotracheal anesthesia; the ERAS group was predominantly managed with subarachnoid blocks (Table 1). Preoperatively under the ERAS protocol, patients were administered oral gabapentin 300 mg, acetaminophen 650 mg, and oxycodone 20 mg, and IV ondansetron 4 mg. Intraoperatively, minimal opioids were used. In the PACU, patients received hydromorphone (Dilaudid) 0.25 mg IV as needed every 15 minutes, to a maximum of 1 mg/h; the nursing staff was trained to titrate the medication using visual analog pain scale scores. During the inpatient stay, patients received 1 g IV acetaminophen every 6 hours for 3 doses, followed by oral acetaminophen as needed. Other medications in the multimodal pain-management protocol included gabapentin 300 mg twice daily, meloxicam 15 mg daily, and oxycodone 10 mg every 4 hours as needed. The rescue medication for insufficient pain relief was hydromorphone 0.25 mg IV every 15 minutes for visual analog pain scale scores > 8. On discharge, patients received a prescription for 30 tablets of hydrocodone 10 mg.
Periarticular Injections
Intraoperatively, all patients in the SOC and ERAS groups received periarticular injections; liposomal bupivacaine was added to the standard injection mixture for the ERAS group. For the SOC group, a total volume of 100 mL was divided into 10 separate 10-mL syringes; for the ERAS group, a total volume of 140 mL was divided into 14 separate 10-mL syringes. The SOC group injections were performed with an 18-gauge needle, and the periarticular soft tissues were grossly infiltrated. The ERAS group injections were performed with more attention to anatomical detail. Injection sites for the ERAS group included the posterior joint capsule, the medial compartment, the lateral compartment, the tibial fat pad, the quadriceps and patellar tendons, the femoral and tibial periosteum circumferentially, and the anterior joint capsule. Each needle-stick in the ERAS group delivered 1 to 1.5 mL through a 22-gauge needle to each compartment of the knee.
Outcome Variables
The primary outcome measure was total oral MED intraoperatively, in the PACU, during the hospital inpatient stay, in the hospital discharge prescription, and during the 3-month period after hospital discharge. Incidence of nausea and vomiting during the inpatient stay and any narcotic use at 6 months postsurgery were secondary binary outcomes.
Statistical Analysis
Demographic data and the clinical characteristics for the entire group were described using the sample mean and SD for continuous variables and the frequency and percentage for categorical variables. Differences between the 2 cohorts were analyzed using a 2-independent-sample t test and Fisher exact test.
Total oral MED in each phase of care was estimated with a separate Poisson model because the data were not normally distributed. A log-linear regression model was used to evaluate the main effect of the ERAS vs SOC cohort on total oral MED used. Finally, separate multiple logistic regression models were used to estimate the odds of postoperative nausea and vomiting and of narcotic use at 6 months postsurgery between the cohorts; adjusted odds ratios (ORs) were estimated from the logistic models. Age, sex, body mass index, preoperative functional independence score, narcotic use within 3 months prior to surgery, anesthesia type (subarachnoid block with monitored anesthesia care vs general endotracheal anesthesia), and postoperative complications (yes/no) were included as covariates in each model. Length of hospital stay was also included as a covariate in the models estimating total oral MED during the hospital stay, on hospital discharge, during the 3-month period after hospital discharge, and at 6 months following hospital discharge.
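As a hedged sketch of these models using Python's statsmodels rather than the SAS implementation the authors describe (column names such as total_med, ponv, and narcotics_6mo are hypothetical placeholders for the study variables):

```python
# Sketch of the regression models described above, using statsmodels in
# place of SAS 9.4. Column names (total_med, ponv, narcotics_6mo, etc.)
# are hypothetical placeholders for the study variables.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("tka_cohort.csv")  # hypothetical analysis file

covars = ("eras + age + sex + bmi + preop_function + "
          "preop_narcotics + anesthesia + complication")

# Log-linear (Poisson) model for total oral MED in one phase of care.
med_fit = smf.glm(f"total_med ~ {covars}", data=df,
                  family=sm.families.Poisson()).fit()

# Logistic models for the binary secondary outcomes; exponentiated
# coefficients give adjusted odds ratios, as reported in the Results.
ponv_fit = smf.logit(f"ponv ~ {covars}", data=df).fit()
six_mo_fit = smf.logit(f"narcotics_6mo ~ {covars} + length_of_stay",
                       data=df).fit()
print(np.exp(six_mo_fit.params))  # adjusted ORs
```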
Statistical analysis was done using SAS version 9.4. The level of significance was set at α = 0.05 (2 tailed), and we implemented the false discovery rate (FDR) procedure to control false positives over multiple tests.16
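The FDR adjustment cited above is the Benjamini-Hochberg procedure.16 A minimal sketch follows; the P values in the example are placeholders patterned on the phase-of-care comparisons, not the study's actual inputs.

```python
# Benjamini-Hochberg FDR adjustment (reference 16): sort the m P values,
# scale the i-th smallest by m/i, then enforce monotonicity from the top.
import numpy as np

def benjamini_hochberg(p_values):
    """Return BH-adjusted P values (q-values) for an array of P values."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    scaled = p[order] * m / np.arange(1, m + 1)
    adjusted = np.minimum.accumulate(scaled[::-1])[::-1]  # monotone step
    out = np.empty(m)
    out[order] = np.clip(adjusted, 0.0, 1.0)
    return out

# Placeholder P values patterned on the phase-of-care comparisons:
print(benjamini_hochberg([0.0001, 0.0002, 0.0001, 0.0001, 0.29]))
```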
Results
Two hundred forty-nine patients underwent 296 elective unilateral TKAs from 2013 through 2018. Thirty-one patients had both knees replaced under the SOC protocol; 5 patients had both knees replaced under the ERAS protocol. Eleven patients who eventually had both knees replaced had 1 operation under each protocol. The SOC group included 196 TKAs, and the ERAS group included 100 TKAs. Of the 196 SOC patients, 94% were male; the mean age was 68.2 years (range, 48-86), and the length of hospital stay ranged from 36.6 to 664.3 hours. Of the 100 ERAS patients, 96% were male (Table 2); the mean age was 66.7 years (range, 48-85), and the length of hospital stay ranged from 12.5 to 45 hours.
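As a consistency check on these counts, worked directly from the figures above: 31 + 5 + 11 = 47 patients had both knees replaced, so

$$249\ \text{patients} + 47\ \text{second knees} = 296\ \text{TKAs} = 196_{\mathrm{SOC}} + 100_{\mathrm{ERAS}}.$$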
Perioperative Opioid Use
Of the SOC patients, 99.0% received narcotics intraoperatively (range, 0-198 mg MED), and 74.5% received narcotics during PACU recovery (range, 0-141 mg MED). The total oral MED during the hospital stay for the SOC patients ranged from 10 to 2,946 mg. Of the ERAS patients, 86% received no narcotics during surgery (range, 0-110 mg MED), and 98% received no narcotics during PACU recovery (range, 0-65 mg MED). The total oral MED during the hospital stay for the ERAS patients ranged from 10 to 240 mg.
The MED used was significantly lower for ERAS patients than for SOC patients during surgery (10.5 mg vs 57.4 mg; P = .0001; FDR = .0002), in the PACU (1.3 mg vs 13.6 mg; P = .0002; FDR = .0004), during the inpatient stay (66.7 mg vs 169.5 mg; P = .0001; FDR = .0002), and on hospital discharge (419.3 mg vs 776.7 mg; P = .0001; FDR = .0002). However, there was no significant difference in total MED prescriptions filled during the 3-month period after hospital discharge between patients on the ERAS protocol and those who received SOC (858.3 mg vs 1,126.1 mg; P = .29; FDR = .29) (Table 3).
Finally, the logistic regression analysis, adjusting for the covariates, demonstrated that ERAS patients were less likely than SOC patients to be taking narcotics at 6 months following hospital discharge (OR, 0.23; P = .013; FDR = .018) and less likely to have postoperative nausea and vomiting (OR, 0.18; P = .019; FDR = .02). There was no statistically significant difference in complication rates between the SOC and ERAS groups (11.2% and 5.0%, respectively; overall rate, 9.1%; P = .09) (Table 4).
Discussion
Orthopedic surgery has been associated with long-term opioid use and misuse. Orthopedic surgeons are frequently among the highest prescribers of narcotics. According to Volkow and colleagues, orthopedic surgeons were the fourth largest prescribers of opioids in 2009, behind primary care physicians, internists, and dentists.17 The opioid crisis in the United States is well recognized. In 2017, > 70,000 deaths occurred due to drug overdoses, with 68% involving a prescription or illicit opioid. The Centers for Disease Control and Prevention has estimated a total economic burden of $78.5 billion per year as a direct result of misused prescribed opioids.18 This includes the cost of health care, lost productivity, addiction treatment, and the impact on the criminal justice system.
The current opioid crisis places further emphasis on opioid-reducing or opioid-sparing techniques in patients undergoing TKA. The efficacy of liposomal bupivacaine for intraoperative periarticular injection, and whether it should be included in multimodal protocols, is debated in the literature. Researchers have argued that liposomal bupivacaine is not superior to regular bupivacaine and that its increased cost is therefore not justified.19,20 A meta-analysis by Zhao and colleagues showed no difference in pain control and functional recovery between liposomal bupivacaine and control.21 In a randomized clinical trial, Schroer and colleagues matched liposomal bupivacaine against regular bupivacaine and found no difference in pain scores and similar narcotic use during hospitalization.22
Other studies evaluating liposomal bupivacaine have demonstrated postoperative benefits in pain relief and potential reductions in opioid consumption.23 In a multicenter randomized controlled trial, Barrington and colleagues noted improved pain control at 6 and 12 hours after surgery with liposomal bupivacaine as a periarticular injection vs ropivacaine, though results were similar when compared with intrathecal morphine.24 Snyder and colleagues reported higher patient satisfaction in pain control and overall experience, as well as decreased MED consumption in the PACU and on postoperative days 0 to 2, when using liposomal bupivacaine vs a multidrug cocktail for periarticular injection.25
The PILLAR trial, an industry-sponsored study, was designed to compare the effects of local infiltration anesthesia with and without liposomal bupivacaine, with emphasis on a meticulous, standardized infiltration technique.13 In our study, we used a similar technique, expanding the volume of injection solution to 140 mL delivered throughout the knee in a series of 14 syringes. Each needle-stick delivered 1 to 1.5 mL through a 22-gauge needle to each compartment of the knee. Infiltration technique has varied across the literature on periarticular injections.
In our experience, a standardized infiltration technique is critical to effective delivery of liposomal bupivacaine throughout all compartments of the knee and to obtaining reproducible pain control. The importance of injection technique cannot be overemphasized, and variations can be seen in studies published to date.26 Well-designed trials are needed to address this key component.
There have been limited data on postoperative pain-management strategies and recovery pathways, with or without liposomal bupivacaine, in the veteran population. In a retrospective review, Sakamoto and colleagues found that VA patients undergoing primary TKA had reduced opioid use in the first 24 hours after surgery with intraoperative liposomal bupivacaine.27 The VA population has been shown to be at high risk for opioid misuse. The prevalence of comorbidities such as traumatic brain injury, posttraumatic stress disorder, and depression in the VA population also places these patients at risk for polypharmacy of central nervous system–acting medications.28 This emphasizes the importance of multimodal strategies that can limit or eliminate narcotics in the perioperative period. The implementation of our ERAS protocol reduced opioid use during the intraoperative, PACU, and inpatient phases of care.
While the financial implications of our recovery protocol were not a primary focus of this study, the protocol has notable effects on overall inpatient cost to the VHA. According to the Health Economics Resource Center, the average daily cost of a VA inpatient surgical bed increased from $4,831 in 2013 to $6,220 in 2018.29 The reduction in length of stay between our cohorts was 44.5 hours, which translates to substantial financial savings per patient after protocol implementation. A more detailed analysis would be needed to evaluate the financial impact of other aspects of our protocol, such as the elimination of patient-controlled anesthesia and the reduction in total narcotics prescribed in the postoperative global period.
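For rough scale only, a back-of-envelope estimate from the figures above (not the formal cost analysis the authors call for):

$$\frac{44.5\ \text{h}}{24\ \text{h/day}} \approx 1.85\ \text{days}, \qquad 1.85\ \text{days} \times \$6{,}220/\text{day} \approx \$11{,}500\ \text{per patient at the 2018 rate}.$$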
Limitations
The limitations of this study include its retrospective design. Because the VHA patient population is older and more predominantly male than the general population, the study may be subject to selection bias, and the efficacy of our protocol could differ in populations that include more women. In a recent study by Perruccio and colleagues, sex moderated the effects of comorbidities, low back pain, and depressive symptoms on postoperative pain in patients undergoing TKA.30
With regard to outpatient narcotic prescriptions, we cannot know with certainty that filled prescriptions were used for pain control, but it is reasonable to assume that patients dealing with continued postoperative or chronic pain will fill these prescriptions or seek refills. Of note, the data on prescriptions and refills in the 3-month postoperative period include all narcotic prescriptions filled by any VHA prescriber and are not limited to our orthopedic team. We were not able to access accurate pill counts for discharge prescriptions or subsequent refills given throughout the VA system; we were, however, able to report total prescriptions filled in the first 3 months following TKA.
We calculated total oral MEDs to better understand the amount of narcotics being distributed throughout our population of patients. We believe this provides important information about the overall narcotic burden in the veteran population. There was no significant difference between the SOC and ERAS groups regarding oral MED prescribed in the 3-month postoperative period; however, at the 6-month follow-up visit, only 16% of patients in the ERAS group were taking any type of narcotic vs 37.2% in the SOC group (P = .0002).
Conclusions
A multidisciplinary ERAS protocol implemented at VANTHCS was effective in reducing length of stay and opioid burden throughout all phases of surgical care in our patients undergoing primary TKA. Patient and nursing education appear to be critical components of a successful multimodal pain protocol. Reducing the narcotic burden has valuable financial and medical benefits in this at-risk population.
1. Inacio MCS, Paxton EW, Graves SE, Namba RS, Nemes S. Projected increase in total knee arthroplasty in the United States - an alternative projection model. Osteoarthritis Cartilage. 2017;25(11):1797-1803. doi:10.1016/j.joca.2017.07.022
2. Chou R, Gordon DB, de Leon-Casasola OA, et al. Management of postoperative pain: a clinical practice guideline from the American Pain Society, the American Society of Regional Anesthesia and Pain Medicine, and the American Society of Anesthesiologists’ Committee on Regional Anesthesia, Executive Committee, and Administrative Council [published correction appears in J Pain. 2016;17(4):508-510. Dosage error in article text]. J Pain. 2016;17(2):131-157. doi:10.1016/j.jpain.2015.12.008
3. Moucha CS, Weiser MC, Levin EJ. Current strategies in anesthesia and analgesia for total knee arthroplasty. J Am Acad Orthop Surg. 2016;24(2):60-73. doi:10.5435/JAAOS-D-14-00259
4. Parvizi J, Miller AG, Gandhi K. Multimodal pain management after total joint arthroplasty. J Bone Joint Surg Am. 2011;93(11):1075-1084. doi:10.2106/JBJS.J.01095
5. Jenstrup MT, Jæger P, Lund J, et al. Effects of adductor-canal-blockade on pain and ambulation after total knee arthroplasty: a randomized study. Acta Anaesthesiol Scand. 2012;56(3):357-364. doi:10.1111/j.1399-6576.2011.02621.x
6. Macfarlane AJ, Prasad GA, Chan VW, Brull R. Does regional anesthesia improve outcome after total knee arthroplasty?. Clin Orthop Relat Res. 2009;467(9):2379-2402. doi:10.1007/s11999-008-0666-9
7. Parvataneni HK, Shah VP, Howard H, Cole N, Ranawat AS, Ranawat CS. Controlling pain after total hip and knee arthroplasty using a multimodal protocol with local periarticular injections: a prospective randomized study. J Arthroplasty. 2007;22(6)(suppl 2):33-38. doi:10.1016/j.arth.2007.03.034
8. Busch CA, Shore BJ, Bhandari R, et al. Efficacy of periarticular multimodal drug injection in total knee arthroplasty. A randomized trial. J Bone Joint Surg Am. 2006;88(5):959-963. doi:10.2106/JBJS.E.00344
9. Lamplot JD, Wagner ER, Manning DW. Multimodal pain management in total knee arthroplasty: a prospective randomized controlled trial. J Arthroplasty. 2014;29(2):329-334. doi:10.1016/j.arth.2013.06.005
10. Hyland SJ, Deliberato DG, Fada RA, Romanelli MJ, Collins CL, Wasielewski RC. Liposomal bupivacaine versus standard periarticular injection in total knee arthroplasty with regional anesthesia: a prospective randomized controlled trial. J Arthroplasty. 2019;34(3):488-494. doi:10.1016/j.arth.2018.11.026
11. Barrington JW, Lovald ST, Ong KL, Watson HN, Emerson RH Jr. Postoperative pain after primary total knee arthroplasty: comparison of local injection analgesic cocktails and the role of demographic and surgical factors. J Arthroplasty. 2016;31(9)(suppl):288-292. doi:10.1016/j.arth.2016.05.002
12. Bramlett K, Onel E, Viscusi ER, Jones K. A randomized, double-blind, dose-ranging study comparing wound infiltration of DepoFoam bupivacaine, an extended-release liposomal bupivacaine, to bupivacaine HCl for postsurgical analgesia in total knee arthroplasty. Knee. 2012;19(5):530-536. doi:10.1016/j.knee.2011.12.004
13. Mont MA, Beaver WB, Dysart SH, Barrington JW, Del Gaizo D. Local infiltration analgesia with liposomal bupivacaine improves pain scores and reduces opioid use after total knee arthroplasty: results of a randomized controlled trial. J Arthroplasty. 2018;33(1):90-96. doi:10.1016/j.arth.2017.07.024
14. Hadlandsmyth K, Vander Weg MW, McCoy KD, Mosher HJ, Vaughan-Sarrazin MS, Lund BC. Risk for prolonged opioid use following total knee arthroplasty in veterans. J Arthroplasty. 2018;33(1):119-123. doi:10.1016/j.arth.2017.08.022
15. Nielsen S, Degenhardt L, Hoban B, Gisev N. A synthesis of oral morphine equivalents (OME) for opioid utilisation studies. Pharmacoepidemiol Drug Saf. 2016;25(6):733-737. doi:10.1002/pds.3945
16. Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Statist Soc B. 1995;57(1):289-300. doi:10.1111/j.2517-6161.1995.tb02031.x
17. Volkow ND, McLellan TA, Cotto JH, Karithanom M, Weiss SRB. Characteristics of opioid prescriptions in 2009. JAMA. 2011;305(13):1299-1301. doi:10.1001/jama.2011.401
18. Scholl L, Seth P, Kariisa M, Wilson N, Baldwin G. Drug and opioid-involved overdose deaths - United States, 2013-2017. MMWR Morb Mortal Wkly Rep. 2018;67(5152):1419-1427. doi:10.15585/mmwr.mm675152e1
19. Pichler L, Poeran J, Zubizarreta N, et al. Liposomal bupivacaine does not reduce inpatient opioid prescription or related complications after knee arthroplasty: a database analysis. Anesthesiology. 2018;129(4):689-699. doi:10.1097/ALN.0000000000002267
20. Jain RK, Porat MD, Klingenstein GG, Reid JJ, Post RE, Schoifet SD. The AAHKS Clinical Research Award: liposomal bupivacaine and periarticular injection are not superior to single-shot intra-articular injection for pain control in total knee arthroplasty. J Arthroplasty. 2016;31(9)(suppl):22-25. doi:10.1016/j.arth.2016.03.036
21. Zhao B, Ma X, Zhang J, Ma J, Cao Q. The efficacy of local liposomal bupivacaine infiltration on pain and recovery after total joint arthroplasty: a systematic review and meta-analysis of randomized controlled trials. Medicine (Baltimore). 2019;98(3):e14092. doi:10.1097/MD.0000000000014092
22. Schroer WC, Diesfeld PG, LeMarr AR, Morton DJ, Reedy ME. Does extended-release liposomal bupivacaine better control pain than bupivacaine after total knee arthroplasty (TKA)? A prospective, randomized clinical trial. J Arthroplasty. 2015;30(9)(suppl):64-67. doi:10.1016/j.arth.2015.01.059
23. Ma J, Zhang W, Yao S. Liposomal bupivacaine infiltration versus femoral nerve block for pain control in total knee arthroplasty: a systematic review and meta-analysis. Int J Surg. 2016;36(Pt A):44-55. doi:10.1016/j.ijsu.2016.10.007
24. Barrington JW, Emerson RH, Lovald ST, Lombardi AV, Berend KR. No difference in early analgesia between liposomal bupivacaine injection and intrathecal morphine after TKA. Clin Orthop Relat Res. 2017;475(1):94-105. doi:10.1007/s11999-016-4931-z
25. Snyder MA, Scheuerman CM, Gregg JL, Ruhnke CJ, Eten K. Improving total knee arthroplasty perioperative pain management using a periarticular injection with bupivacaine liposomal suspension. Arthroplast Today. 2016;2(1):37-42. doi:10.1016/j.artd.2015.05.005
26. Kuang MJ, Du Y, Ma JX, He W, Fu L, Ma XL. The efficacy of liposomal bupivacaine using periarticular injection in total knee arthroplasty: a systematic review and meta-analysis. J Arthroplasty. 2017;32(4):1395-1402. doi:10.1016/j.arth.2016.12.025
27. Sakamoto B, Keiser S, Meldrum R, Harker G, Freese A. Efficacy of liposomal bupivacaine infiltration on the management of total knee arthroplasty. JAMA Surg. 2017;152(1):90-95. doi:10.1001/jamasurg.2016.3474
28. Collett GA, Song K, Jaramillo CA, Potter JS, Finley EP, Pugh MJ. Prevalence of central nervous system polypharmacy and associations with overdose and suicide-related behaviors in Iraq and Afghanistan war veterans in VA care 2010-2011. Drugs Real World Outcomes. 2016;3(1):45-52. doi:10.1007/s40801-015-0055-0
29. US Department of Veterans Affairs. HERC inpatient average cost data. Updated April 2, 2021. Accessed April 16, 2021. https://www.herc.research.va.gov/include/page.asp?id=inpatient#herc-inpat-avg-cost
30. Perruccio AV, Fitzpatrick J, Power JD, et al. Sex-modified effects of depression, low back pain, and comorbidities on pain after total knee arthroplasty for osteoarthritis. Arthritis Care Res (Hoboken). 2020;72(8):1074-1080. doi:10.1002/acr.24002
Reducing False-Positive Results With Fourth-Generation HIV Testing at a Veterans Affairs Medical Center
Since the first clinical reports of patients with AIDS in 1981, there have been improvements both in our understanding of how HIV causes AIDS and in the test methodologies used to diagnose the infection.1-3 Given the public health and clinical benefits of earlier diagnosis and treatment with available antiretroviral therapies, universal screening with opt-out consent has been a standard-of-practice recommendation from the Centers for Disease Control and Prevention (CDC) since 2006; it also has been recommended by the US Preventive Services Task Force and has been widely implemented.4-7
HIV Screening
While HIV screening assays have evolved to be accurate, with very high sensitivities and specificities, false-positive results remain a significant issue, as they have been historically.8-16 Using an HIV assay in a low-prevalence population predictably reduces the positive predictive value (PPV) of even an otherwise accurate assay.8-23 In light of this, laboratory HIV testing algorithms include confirmatory testing to increase the likelihood that the correct diagnosis is rendered.
The fourth-generation assay has been shown to be more sensitive and specific than the third-generation assay due to the added detection of p24 antigen and the refinement of the antigenic targets for antibody detection.6,8,11-13,18-20,22 Because of these improvements, increased sensitivity and specificity, with a reduction in both false positives and false negatives, have been reported in the general population.
It has been observed in the nonveteran population that switching from the older third-generation assay to a more sensitive and specific fourth-generation HIV screening assay reduces the false-positive screening rate.18,19,22 For instance, Muthukumar and colleagues demonstrated a false-positive rate of only 2 of 99 (2%) tested specimens for the fourth-generation ARCHITECT HIV Ag/Ab Combo assay vs 9 of 99 (9%) for the third-generation ADVIA Centaur HIV 1/O/2 Enhanced assay.18 In addition, fourth-generation HIV screening assays can shorten the window period by detecting HIV infection sooner after acute infection.19 Mitchell and colleagues demonstrated that even highly specific fourth-generation HIV assays, with specificities estimated at 99.7%, can have PPVs as low as 25.0% when used in a population with low HIV prevalence (eg, 0.1%).19 However, the veteran population has been documented to differ significantly from the general population on a number of variables, including severity of disease and susceptibility to infections; as a result, extrapolation of these data may be limited.24-26 To our knowledge, this article represents the first study directly examining the reduction in false-positive results after the switch from a third-generation to a fourth-generation HIV assay for the veteran patient population at a regional US Department of Veterans Affairs (VA) medical center (VAMC).8,11
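That figure is a direct consequence of Bayes' theorem. As a worked check, assuming for illustration a sensitivity of 100%, a specificity of 99.7%, and a prevalence of 0.1%:

$$\mathrm{PPV} = \frac{\mathrm{sens} \times \mathrm{prev}}{\mathrm{sens} \times \mathrm{prev} + (1 - \mathrm{spec})(1 - \mathrm{prev})} = \frac{1.0 \times 0.001}{1.0 \times 0.001 + 0.003 \times 0.999} \approx 0.25.$$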
Methods
Quality assurance documents on test volume were retrospectively reviewed to obtain the number of HIV screening tests performed by the laboratory at the Corporal Michael J. Crescenz VAMC (CMJCVAMC) in Philadelphia, Pennsylvania, between March 1, 2016 and February 28, 2017, prior to implementation of the fourth-generation assay. The study also included results from the first year of use of the fourth-generation assay (March 1, 2017 to February 28, 2018). In addition, paper quality assurance records of all positive screening results during those periods were reviewed and manually counted for the abstract presentation of these data.
For assurance of accuracy, a search of all HIV testing assays using Veterans Health Information Systems and Technology Architecture and FileMan also was performed, and the results were compared with records in the Computerized Patient Record System (CPRS). Any discrepancies in the numbers of test results generated by the 2 searches were investigated, and data for the manuscript were derived from records associating tests with particular patients. Only results from patient samples were considered for the electronic search; quality control samples that did not correspond to a true patient identified in CPRS, as well as same-time duplicate patient samples, were excluded from the calculations. Basic demographic data (age, ethnicity, and gender) were obtained from this FileMan search. The third-generation assay was the Ortho-Clinical Diagnostics Vitros, and the fourth-generation assay was the Abbott Architect.
To interpret the true HIV result of each sample with a reactive or positive screening result, the CDC laboratory HIV testing algorithm was followed and reviewed with a clinical pathologist or microbiologist director.12,13 All specimens interpreted as HIV positive by the pathologist or microbiologist director were discussed with the clinical health care provider at the time of the test, with results added to CPRS after all testing was complete and discussions had taken place. All initially reactive specimens (confirmed with retesting in duplicate on the screening platform, with at least 1 repeat reactive result) were further tested with the Bio-Rad Geenius HIV 1/2 Supplemental Assay, which detects both HIV-1 and HIV-2 antibodies. Specimens with reactive results on this supplemental assay were interpreted as positive for HIV based on the CDC laboratory HIV testing algorithm. Specimens with negative or indeterminate results on the supplemental assay then underwent HIV-1 nucleic acid testing (NAT) using the Roche Diagnostics COBAS AmpliPrep/COBAS TaqMan HIV-1 Test v2.0. Specimens with viral load detected on NAT were interpreted as positive for HIV infection, while specimens without detectable viral load were interpreted as negative for HIV-1 infection. Although there were no HIV-2 positive or indeterminate specimens during the study period, HIV-2 reactivity also would have been interpreted per the CDC laboratory HIV testing algorithm. Specimens with inadequate volume to complete all testing steps would have been interpreted as indeterminate for HIV, with a request for an additional specimen to complete testing. All testing platforms used for HIV testing in the laboratory had been properly validated prior to use.
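The decision flow described above can be summarized in code. This is a hedged sketch of the CDC algorithm as implemented at CMJCVAMC; the function signature and result strings are hypothetical, and actual interpretation is performed by the laboratory directors on the named instruments.

```python
# Hedged sketch of the laboratory decision flow described above.
# The function signature and result strings are hypothetical; actual
# testing follows the CDC algorithm on the instruments named in the text.
def interpret_hiv(screen_reactive: bool,
                  repeat_reactive: bool,
                  supplemental: str,  # "reactive" | "negative" | "indeterminate"
                  nat_detected: bool) -> str:
    if not screen_reactive:
        return "HIV negative (screen nonreactive)"
    if not repeat_reactive:
        # Initial reactivity not reproduced on duplicate retesting.
        return "HIV negative (not repeatedly reactive)"
    if supplemental == "reactive":
        return "HIV positive (antibody differentiation reactive)"
    # Negative or indeterminate differentiation assay reflexes to NAT.
    if nat_detected:
        return "HIV-1 positive (viral load detected; possible acute infection)"
    return "HIV-1 negative (false-positive screen)"
```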
The numbers of false-positive and indeterminate results were tabulated in Microsoft Excel by month throughout the study period, alongside the total number of HIV screening tests performed. Statistical significance was assessed with a 1-tailed homoscedastic t test calculated in Excel.
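A sketch of the same test in SciPy rather than Excel follows; the monthly counts shown are illustrative placeholders that merely sum to the reported yearly totals (28 and 7), not the study's actual month-by-month data.

```python
# 1-tailed homoscedastic (pooled-variance) t test on monthly
# false-positive counts. The counts are illustrative placeholders that
# sum to the reported totals (28 third-generation, 7 fourth-generation).
from scipy import stats

third_gen_fp = [2, 4, 1, 3, 0, 5, 2, 3, 1, 2, 4, 1]    # placeholder months
fourth_gen_fp = [0, 1, 0, 2, 0, 1, 0, 0, 1, 1, 0, 1]   # placeholder months

# equal_var=True gives the homoscedastic test; alternative="greater"
# makes it 1-tailed in the hypothesized direction (fewer false positives).
t, p = stats.ttest_ind(third_gen_fp, fourth_gen_fp,
                       equal_var=True, alternative="greater")
print(f"t = {t:.2f}, one-tailed P = {p:.4f}")
```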
Results
From March 1, 2016 to February 28, 2017, 7,516 specimens were screened for HIV using the third-generation assay, and 52 had positive screening results. On further review of these reactive specimens per the CDC laboratory testing algorithm, 24 were true positives and 28 were false positives, for a PPV of 46% (24/52) (Figure 1).
From March 1, 2017 to February 28, 2018, 7,802 specimens were screened for HIV using the fourth-generation assay, and 23 had positive screening results. On further review of these reactive specimens per the CDC laboratory testing algorithm, 16 were true positives and 7 were false positives, for a PPV of 70% (16/23).
The fourth-generation assay produced a lower false-positivity rate than the third-generation assay (0.09% vs 0.37%), a 75.7% decrease after implementation of fourth-generation testing. The decrease in the number of false-positive results per month was statistically significant (P = .002). The mean (SD) number of false-positive results was 2.3 (1.7) per month for the third-generation assay vs only 0.58 (0.9) per month for the fourth-generation assay. The decrease in the percentage of false positives per month also was statistically significant (P = .002) (Figure 2).
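These rates can be reproduced directly from the counts in the two preceding paragraphs; the quick arithmetic check below yields ≈76% for the relative reduction, in line with the reported 75.7% allowing for rounding of intermediate values.

```python
# Recompute the PPVs and false-positivity rates from the reported counts.
third = {"screened": 7516, "reactive": 52, "false_pos": 28}
fourth = {"screened": 7802, "reactive": 23, "false_pos": 7}

for name, g in (("3rd generation", third), ("4th generation", fourth)):
    ppv = (g["reactive"] - g["false_pos"]) / g["reactive"]
    fp_rate = g["false_pos"] / g["screened"]
    print(f"{name}: PPV = {ppv:.0%}, false-positive rate = {fp_rate:.2%}")
# -> 3rd generation: PPV = 46%, false-positive rate = 0.37%
# -> 4th generation: PPV = 70%, false-positive rate = 0.09%

drop = 1 - (fourth["false_pos"] / fourth["screened"]) / \
           (third["false_pos"] / third["screened"])
print(f"relative reduction = {drop:.1%}")  # ~76%, vs the reported 75.7%
```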
For population-based reference, we performed a FileMan search for basic demographic data on the patients whose specimens were screened by the third- or fourth-generation test (Table). Of the 7,516 specimens tested by the third-generation assay, 1,114 lacked readily available demographic information because the specimens originated outside the facility. For the 6,402 patients tested by the third-generation assay with demographic information, the age ranged from 25 to 97 years (mean, 57 years); this population was 88% male (n = 5,639), 50% African American (n = 3,220), and 43% White (n = 2,756). Of the 7,802 specimens tested by the fourth-generation assay, 993 lacked readily available demographic information because the specimens originated outside the facility. For the 6,809 patients tested by the fourth-generation assay with demographic information, the age ranged from 24 to 97 years (mean, 56 years); this population was 88% male (n = 5,971), 47% African American (n = 3,189), and 46% White (n = 3,149).
Discussion
Current practice guidelines from the CDC and the US Preventive Services Task Force recommend universal screening of the population for HIV infection.5,6 As the general population to be screened would normally have a low prevalence of HIV infection, the risk of a false positive on the initial screen is significant.17 Indeed, the CMJCVAMC experience has been that with the third-generation screening assay, the number of false-positive test results outnumbered the number of true-positive test results. Even with the fourth-generation assay, approximately one-third of the results were false positives. These results are similar to those observed in studies involving nonveteran populations in which the implementation of a fourth-generation screening assay led to significantly fewer false-positive results.18
For laboratories that do not follows CDC testing algorithm guidelines, each false-positive screening result represents a potential opportunity for a HIV misdiagnosis.Even in laboratories with proper procedures in place, false-positive results have consequences for the patients and for the cost-effectiveness of laboratory operations.9-11,18 As per CDC HIV testing guidelines, all positive screening results should be retested, which leads to additional use of technologist time and reagents. After this additional testing is performed and reviewed appropriately, only then can an appropriate final laboratory diagnosis be rendered that meets the standard of laboratory care.
Cost Savings
As observed at CMJCVAMC, the use of a fourth-generation assay with increased sensitivity/specificity led to a reduction in these false-positive results, which improved laboratory efficiency and avoided wasted resources for confirmatory tests.11,18 Cost savings at CMJCVAMC from the implementation of the fourth-generation assay would include technologist time and reagent cost. Generalizable technologist time costs at any institution would include the time needed to perform the confirmatory HIV-1/HIV-2 antibody differentiation assay (slightly less than 1 hour at CMJCVAMC per specimen) and the time needed to perform the viral load assay (about 6 hours to run a batch of 24 tests at CMJCVAMC). We calculated that confirmatory testing cost $184.51 per test at CMJCVAMC. Replacing the third-generation assay with the more sensitive and specific fourth-generation test saved an estimated $3,875 annually. This cost savings does not even consider savings in the pathologist/director’s time for reviewing HIV results after the completion of the algorithm or the clinician/patient costs or anxiety while waiting for results of the confirmatory sequence of tests.
As diagnosis of HIV can have a significant psychological impact on the patient, it is important to ensure the diagnosis conveyed is correct.27 The provision of an HIV diagnosis to a patient has been described as a traumatic stressor capable of causing psychological harm; this harm should ideally be avoided if the HIV diagnosis is not accurate. There can be a temptation, when presented with a positive or reactive screening test that is known to come from an instrument or assay with a very high sensitivity and specificity, to present this result as a diagnosis to the patient. However, a false diagnosis from a false-positive screen would not only be harmful, but given the low prevalence of the disease in the screened population, would happen fairly frequently; in some settings the number of false positives may actually outnumber the number of true positive test results.
Better screening assays with greater specificity (even fractions of a percentage, given that specificities are already > 99%) would help reduce the number of false positives and reduce the number of potential enticements to convey an incorrect diagnosis. Therefore, by adding an additional layer of safety through greater specificity, the fourth-generation assay implementation helped improve the diagnostic safety of the laboratory and reduced the significant error risk to the clinician who would ultimately bear responsibility for conveying the HIV diagnoses to the patient. Given the increased prevalence of psychological and physical ailments in veterans, it may be even more important to ensure the diagnosis is correct to avoid increased psychological harm.27,28
Veteran Population
For the general population, the fourth-generation assay has been shown to be more sensitive and specific when compared with the third-generation assay due to the addition of detection of p24 antigen and the refinement of the antigenic targets for the antibody detection.6,8,11-13,18-20,22 However, the veteran population that receives VA medical care differs significantly from the nonveteran general population. Compared with nonveterans, veterans tend to have generally poorer health status, more comorbid conditions, and greater need to use medical resources.24-26 In addition, veterans also may differ in sociodemographic status, race, ethnicity, and gender.24-26
VA research in the veteran population is unique, and veterans who use VA health care services are an even more highly selected subpopulation.26 Conclusions made from studies of the general population may not always be applicable to the veteran population treated by VA health care services due to these population differences. Therefore, specific studies tailored to this special veteran population in the specific VA health care setting are essential to ensure that the results of the general population truly and definitively apply to the veteran population.
While the false-positive risk is most closely associated with testing in a population of low prevalence, it also should be noted that false-positive screening results also can occur in high-risk individuals, such as an individual on preexposure prophylaxis (PrEP) for continuous behavior that places the individual at high risk of HIV acquisition.8,29 The false-positive result in these cases can lead to a conundrum for the clinician, and the differential diagnosis should consider both detection of very early infection as well as false positive. Interventions could include either stopping PrEP and treating for presumed early primary infection with HIV or continuing the PrEP. These interventions all have the potential to impact the patient whether through the production of resistant HIV virus due to the inadvertent provision of an inadequate treatment regimen, increased risk of infection if taken off PrEP as the patient may likely continue the behavior regardless, or the risks carried by the administration of additional antiretroviral therapies for the complete empiric therapy. Cases of an individual on PrEP who had a false-positive HIV screening test has been reported previously both within and outside the veteran population.8 Better screening tests with greater sensitivity/specificity can only help in guiding better patient care.
Limitations
This quality assurance study was limited to retrospectively identifying the improvement in the false-positive rate on the transition from the third-generation to the more advanced fourth-generation HIV screen. False-positive screen cases could be easily picked up on review of the confirmatory testing per the CDC laboratory HIV testing algorithm.12,13 This study also was a retrospective review of clinically ordered and indicated testing; as a result, without confirmatory testing performed on all negative screen cases, a false-negative rate would not be calculable.
This study also was restricted to only the population being treated in a VA health care setting. This population is known to be different from the general population.24-26
Conclusions
The switch to a fourth-generation assay resulted in a significant reduction in false-positive test results for veteran patients at CMJCVAMC. This reduction in false-positive screening not only reduced laboratory workload due to the necessary confirmatory testing and subsequent review, but also saved costs for technologist’s time and reagents. While this reduction in false-positive results has been documented in nonveteran populations, this is the first study specifically on a veteran population treated at a VAMC.8,11,18 This study confirms previously documented findings of improvement in the false-positive rate of HIV screening tests with the change from third-generation to fourth-generation assay for a veteran population.24
1. Feinberg MB. Changing the natural history of HIV disease. Lancet. 1996;348(9022):239-246. doi:10.1016/s0140-6736(96)06231-9.
2. Alexander TS. Human immunodeficiency virus diagnostic testing: 30 years of evolution. Clin Vaccine Immunol. 2016;23(4):249-253. Published 2016 Apr 4. doi:10.1128/CVI.00053-16
3. Mortimer PP, Parry JV, Mortimer JY. Which anti-HTLV III/LAV assays for screening and confirmatory testing?. Lancet. 1985;2(8460):873-877. doi:10.1016/s0140-6736(85)90136-9
4. Holmberg SD, Palella FJ Jr, Lichtenstein KA, Havlir DV. The case for earlier treatment of HIV infection [published correction appears in Clin Infect Dis. 2004 Dec 15;39(12):1869]. Clin Infect Dis. 2004;39(11):1699-1704. doi:10.1086/425743
5. US Preventive Services Task Force, Owens DK, Davidson KW, et al. Screening for HIV Infection: US Preventive Services Task Force Recommendation Statement. JAMA. 2019;321(23):2326-2336. doi:10.1001/jama.2019.6587
6. Branson BM, Handsfield HH, Lampe MA, et al. Revised recommendations for HIV testing of adults, adolescents, and pregnant women in health-care settings. MMWR Recomm Rep. 2006;55(RR-14):1-CE4.
7. Bayer R, Philbin M, Remien RH. The end of written informed consent for HIV testing: not with a bang but a whimper. Am J Public Health. 2017;107(8):1259-1265. doi:10.2105/AJPH.2017.303819
8. Petersen J, Jhala D. Its not HIV! The pitfall of unconfirmed positive HIV screening assays. Abstract presented at: Annual Meeting Pennsylvania Association of Pathologists; April 14, 2018.
9. Wood RW, Dunphy C, Okita K, Swenson P. Two “HIV-infected” persons not really infected. Arch Intern Med. 2003;163(15):1857-1859. doi:10.1001/archinte.163.15.1857
10. Permpalung N, Ungprasert P, Chongnarungsin D, Okoli A, Hyman CL. A diagnostic blind spot: acute infectious mononucleosis or acute retroviral syndrome. Am J Med. 2013;126(9):e5-e6. doi:10.1016/j.amjmed.2013.03.017
11. Dalal S, Petersen J, Luta D, Jhala D. Third- to fourth-generation HIV testing: reduction in false-positive results with the new way of testing, the Corporal Michael J. Crescenz Veteran Affairs Medical Center (CMCVAMC) Experience. Am J Clin Pathol.2018;150(suppl 1):S70-S71. doi:10.1093/ajcp/aqy093.172
12. Centers for Disease Control and Prevention. Laboratory testing for the diagnosis of HIV infection: updated recommendations. Published June 27, 2014. Accessed April 14, 2021. doi:10.15620/cdc.23447
13. Centers for Disease Control and Prevention. 2018 quick reference guide: recommended laboratory HIV testing algorithm for serum or plasma specimens. Updated January 2018. Accessed April 14, 202. https://stacks.cdc.gov/view/cdc/50872
14. Masciotra S, McDougal JS, Feldman J, Sprinkle P, Wesolowski L, Owen SM. Evaluation of an alternative HIV diagnostic algorithm using specimens from seroconversion panels and persons with established HIV infections. J Clin Virol. 2011;52(suppl 1):S17-S22. doi:10.1016/j.jcv.2011.09.011
15. Morton A. When lab tests lie … heterophile antibodies. Aust Fam Physician. 2014;43(6):391-393.
16. Spencer DV, Nolte FS, Zhu Y. Heterophilic antibody interference causing false-positive rapid human immunodeficiency virus antibody testing. Clin Chim Acta. 2009;399(1-2):121-122. doi:10.1016/j.cca.2008.09.030
17. Kim S, Lee JH, Choi JY, Kim JM, Kim HS. False-positive rate of a “fourth-generation” HIV antigen/antibody combination assay in an area of low HIV prevalence. Clin Vaccine Immunol. 2010;17(10):1642-1644. doi:10.1128/CVI.00258-10
18. Muthukumar A, Alatoom A, Burns S, et al. Comparison of 4th-generation HIV antigen/antibody combination assay with 3rd-generation HIV antibody assays for the occurrence of false-positive and false-negative results. Lab Med. 2015;46(2):84-e29. doi:10.1309/LMM3X37NSWUCMVRS
19. Mitchell EO, Stewart G, Bajzik O, Ferret M, Bentsen C, Shriver MK. Performance comparison of the 4th generation Bio-Rad Laboratories GS HIV Combo Ag/Ab EIA on the EVOLIS™ automated system versus Abbott ARCHITECT HIV Ag/Ab Combo, Ortho Anti-HIV 1+2 EIA on Vitros ECi and Siemens HIV-1/O/2 enhanced on Advia Centaur. J Clin Virol. 2013;58(suppl 1):e79-e84. doi:10.1016/j.jcv.2013.08.009
20. Dubravac T, Gahan TF, Pentella MA. Use of the Abbott Architect HIV antigen/antibody assay in a low incidence population. J Clin Virol. 2013;58(suppl 1):e76-e78. doi:10.1016/j.jcv.2013.10.020
21. Montesinos I, Eykmans J, Delforge ML. Evaluation of the Bio-Rad Geenius HIV-1/2 test as a confirmatory assay. J Clin Virol. 2014;60(4):399-401. doi:10.1016/j.jcv.2014.04.025
22. van Binsbergen J, Siebelink A, Jacobs A, et al. Improved performance of seroconversion with a 4th generation HIV antigen/antibody assay. J Virol Methods. 1999;82(1):77-84. doi:10.1016/s0166-0934(99)00086-5
23. CLSI. User Protocol for Evaluation of Qualitative Test Performance: Approved Guideline. Second ed. EP12-A2. CLSI; 2008:1-46.
24. Agha Z, Lofgren RP, VanRuiswyk JV, Layde PM. Are patients at Veterans Affairs medical centers sicker? A comparative analysis of health status and medical resource use. Arch Intern Med. 2000;160(21):3252-3257. doi:10.1001/archinte.160.21.3252
25. Eibner C, Krull H, Brown KM, et al. Current and projected characteristics and unique health care needs of the patient population served by the Department of Veterans Affairs. Rand Health Q. 2016;5(4):13. Published 2016 May 9.
26. Morgan RO, Teal CR, Reddy SG, Ford ME, Ashton CM. Measurement in Veterans Affairs Health Services Research: veterans as a special population. Health Serv Res. 2005;40(5, pt 2):1573-1583. doi:10.1111/j.1475-6773.2005.00448.x
27. Nightingale VR, Sher TG, Hansen NB. The impact of receiving an HIV diagnosis and cognitive processing on psychological distress and posttraumatic growth. J Trauma Stress. 2010;23(4):452-460. doi:10.1002/jts.20554
28. Spelman JF, Hunt SC, Seal KH, Burgo-Black AL. Post deployment care for returning combat veterans. J Gen Intern Med. 2012;27(9):1200-1209. doi:10.1007/s11606-012-2061-1
29. Ndase P, Celum C, Kidoguchi L, et al. Frequency of false positive rapid HIV serologic tests in African men and women receiving PrEP for HIV prevention: implications for programmatic roll-out of biomedical interventions. PLoS One. 2015;10(4):e0123005. Published 2015 Apr 17. doi:10.1371/journal.pone.0123005
Since the first clinical reports of patients with AIDS in 1981, there have been steady improvements both in our understanding of the pathogenesis of HIV in causing AIDS and in the test methodologies used to diagnose the illness.1-3 Given the public health and clinical benefits of earlier diagnosis and treatment with available antiretroviral therapies, universal screening with opt-out consent has been a standard of practice recommendation from the Centers for Disease Control and Prevention (CDC) since 2006; it also has been recommended by the US Preventive Services Task Force and has been widely implemented.4-7
HIV Screening
While HIV screening assays have evolved to be accurate, with very high sensitivities and specificities, false-positive results have been and remain a significant issue.8-16 Applying an HIV assay to a low-prevalence population predictably reduces the positive predictive value (PPV) of even an otherwise accurate assay.8-23 For this reason, laboratory HIV testing algorithms include confirmatory testing to increase the likelihood that the correct diagnosis is rendered.
The fourth-generation assay has been shown to be more sensitive and specific than the third-generation assay due to the addition of p24 antigen detection and the refinement of the antigenic targets for antibody detection.6,8,11-13,18-20,22 Because of these improvements, reductions in both false positives and false negatives have been reported in the general population.
It has been observed in the nonveteran population that switching from the older third-generation assay to a more sensitive and specific fourth-generation HIV screening assay reduces the false-positive screening rate.18,19,22 For instance, Muthukumar and colleagues demonstrated a false-positive rate of only 2 of 99 (2%) tested specimens for the fourth-generation ARCHITECT HIV Ag/Ab Combo assay vs 9 of 99 (9%) for the third-generation ADVIA Centaur HIV 1/O/2 Enhanced assay.18 Fourth-generation HIV screening assays also can shorten the window period by detecting HIV infection sooner after acute infection.19 Mitchell and colleagues demonstrated that even highly specific fourth-generation HIV assays, with specificities estimated at 99.7%, can have PPVs as low as 25.0% when used in a population of low HIV prevalence (such as 0.1%).19 However, the veteran population has been documented to differ significantly on a number of variables, including severity of disease and susceptibility to infections, so extrapolation of these data from the general population may be limited.24-26 To our knowledge, this article represents the first study directly examining the reduction in false-positive results with the switch from a third-generation to a fourth-generation HIV assay for the veteran patient population at a regional US Department of Veterans Affairs (VA) medical center (VAMC).8,11
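To make the prevalence effect concrete, the PPV can be computed directly from sensitivity, specificity, and prevalence. The short sketch below (Python; the 100% sensitivity is an assumption for simplicity, while the 99.7% specificity and 0.1% prevalence are the figures cited above) reproduces a PPV of about 25%.

    def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
        # Positive predictive value = true positives / all positive results.
        true_pos = sensitivity * prevalence
        false_pos = (1 - specificity) * (1 - prevalence)
        return true_pos / (true_pos + false_pos)

    # Specificity 99.7% and prevalence 0.1% as cited from Mitchell and
    # colleagues; sensitivity 100% is assumed for illustration.
    print(f"{ppv(1.00, 0.997, 0.001):.1%}")  # prints 25.0%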
Methods
Quality assurance documents on test volume were retrospectively reviewed to obtain the number of HIV screening tests performed by the laboratory at the Corporal Michael J. Crescenz VAMC (CMJCVAMC) in Philadelphia, Pennsylvania, between March 1, 2016 and February 28, 2017, prior to implementation of the fourth-generation assay. The study also included results from the first year of use of the fourth-generation assay (March 1, 2017 to February 28, 2018). In addition, paper quality assurance records of all positive screening results during those periods were reviewed and manually counted for the abstract presentation of these data.
For assurance of accuracy, a search of all HIV testing assays using the Veterans Health Information Systems and Technology Architecture and FileMan also was performed, and the results were compared with records in the Computerized Patient Record System (CPRS). Any discrepancies in the numbers of test results generated by the two searches were investigated, and data for the manuscript were derived from records associating tests with particular patients. Only results from patient samples were considered for the electronic search; quality control samples that did not correspond to a true patient identified in CPRS, as well as same-time duplicate samples from the same patient, were excluded from the calculations. Basic demographic data (age, ethnicity, and gender) were obtained from the FileMan search. The third-generation assay was the Ortho-Clinical Diagnostics Vitros, and the fourth-generation assay was the Abbott Architect.
To interpret the true HIV result of each sample with a reactive or positive screening result, the CDC laboratory HIV testing algorithm was followed, and results were reviewed with a clinical pathologist or microbiologist director.12,13 All specimens interpreted as HIV positive by the pathologist or microbiologist director were discussed with the clinical health care provider at the time of the test, with results added to CPRS only after all testing was complete and discussions had taken place. All initially reactive specimens (confirmed with retesting in duplicate on the screening platform, with at least 1 repeat reactive result) were further tested with the Bio-Rad Geenius HIV 1/2 Supplemental Assay, which detects both HIV-1 and HIV-2 antibodies. Specimens with reactive results on this supplemental assay were interpreted as positive for HIV based on the CDC laboratory HIV testing algorithm. Specimens with negative or indeterminate results on the supplemental assay then underwent HIV-1 nucleic acid testing (NAT) using the Roche Diagnostics COBAS AmpliPrep/COBAS TaqMan HIV-1 Test v2.0. Specimens with viral load detected on NAT were interpreted as positive for HIV infection, while specimens with no viral load detected were interpreted as negative for HIV-1 infection. Although there were no HIV-2 positive or indeterminate specimens during the study period, HIV-2 reactivity also would have been interpreted per the CDC laboratory HIV testing algorithm. Specimens with inadequate volume to complete all testing steps would be interpreted as indeterminate for HIV, with a request for an additional specimen to complete testing. All testing platforms used for HIV testing in the laboratory had been properly validated prior to use.
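The decision logic just described can be summarized schematically. The sketch below (Python; the function and result labels are illustrative and not part of any laboratory information system) follows an initially reactive, repeat-reactive screen through the supplemental antibody and NAT steps of the CDC algorithm.

    def interpret_reactive_screen(supplemental: str, nat_detected=None) -> str:
        # supplemental: result of the HIV-1/HIV-2 antibody supplemental assay
        #   ("reactive", "negative", or "indeterminate").
        # nat_detected: True/False HIV-1 NAT result, or None if NAT could not
        #   be completed (e.g., inadequate specimen volume).
        if supplemental == "reactive":
            return "HIV positive"
        # Negative or indeterminate supplemental results reflex to HIV-1 NAT.
        if nat_detected is None:
            return "indeterminate; request additional specimen"
        return "HIV positive" if nat_detected else "negative for HIV-1"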
The numbers of false-positive and indeterminate results, alongside the total number of HIV screening tests performed, were tabulated by month in Microsoft Excel throughout the study period. Statistical significance was assessed with a 1-tailed homoscedastic t test calculated in Excel.
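The same comparison can be reproduced outside of Excel. The sketch below (Python with scipy; the monthly counts shown are made up for illustration and are not the study data) applies a 1-tailed, equal-variance (homoscedastic) t test to monthly false-positive counts.

    from scipy import stats

    # Hypothetical monthly false-positive counts, for illustration only.
    third_gen = [3, 1, 2, 4, 2, 0, 5, 3, 2, 1, 2, 3]
    fourth_gen = [1, 0, 0, 1, 2, 0, 0, 1, 0, 1, 0, 1]

    # Homoscedastic t test; "less" makes it 1-tailed, testing whether the
    # fourth-generation mean is lower than the third-generation mean.
    t_stat, p_value = stats.ttest_ind(fourth_gen, third_gen,
                                      equal_var=True, alternative="less")
    print(f"t = {t_stat:.2f}, one-tailed p = {p_value:.4f}")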
Results
From March 1, 2016 to February 28, 2017, 7,516 specimens were screened for HIV using the third-generation assay, and 52 specimens had reactive screening results. On further review of these reactive specimens per the CDC laboratory testing algorithm, 24 were true positives and 28 were false positives, for a PPV of 46% (24/52) (Figure 1).
From March 1, 2017 to February 28, 2018, 7,802 specimens were screened for HIV using the fourth-generation assay, and 23 had reactive screening results. On further review of these reactive specimens per the CDC laboratory testing algorithm, 16 were true positives and 7 were false positives, for a PPV of 70% (16/23).
The fourth-generation assay produced a lower false-positivity rate than the third-generation assay (0.09% vs 0.37%, respectively), a 75.7% decrease after implementation of fourth-generation testing. The decrease in the number of false-positive test results per month with the fourth-generation test was statistically significant (P = .002); the mean (SD) number of false positives per month was 2.3 (1.7) for the third-generation assay vs 0.58 (0.9) for the fourth-generation assay. The decrease in the percentage of false positives per month also was statistically significant (P = .002) (Figure 2).
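The reported rates follow directly from the tabulated counts, as the short check below shows (Python, using only the figures reported above).

    counts = {
        "third-generation":  {"screened": 7516, "reactive": 52, "true_pos": 24},
        "fourth-generation": {"screened": 7802, "reactive": 23, "true_pos": 16},
    }
    for name, c in counts.items():
        false_pos = c["reactive"] - c["true_pos"]
        print(f"{name}: PPV {c['true_pos'] / c['reactive']:.0%}, "
              f"false-positive rate {false_pos / c['screened']:.2%}")
    # third-generation:  PPV 46%, false-positive rate 0.37%
    # fourth-generation: PPV 70%, false-positive rate 0.09%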
For population-based reference, a FileMan search retrieved basic demographic data for the patients whose specimens were screened by the third- or fourth-generation test (Table). Of the 7,516 specimens tested by the third-generation assay, 1,114 lacked readily available demographic information because the specimens originated outside the facility. For the remaining 6,402 patients, ages ranged from 25 to 97 years (mean, 57 years); this population was 88% male (n = 5,639), 50% African American (n = 3,220), and 43% White (n = 2,756). Of the 7,802 specimens tested by the fourth-generation assay, 993 lacked readily available demographic information because the specimens originated outside the facility. For the remaining 6,809 patients, ages ranged from 24 to 97 years (mean, 56 years); this population was 88% male (n = 5,971), 47% African American (n = 3,189), and 46% White (n = 3,149).
Discussion
Current practice guidelines from the CDC and the US Preventive Services Task Force recommend universal screening of the population for HIV infection.5,6 Because the general population to be screened normally has a low prevalence of HIV infection, the risk that an initial reactive screen is a false positive is significant.17 Indeed, the CMJCVAMC experience was that with the third-generation screening assay, false-positive results outnumbered true-positive results; even with the fourth-generation assay, approximately one-third of reactive results were false positives. These results are similar to those observed in nonveteran populations, in which implementation of a fourth-generation screening assay led to significantly fewer false-positive results.18
For laboratories that do not follow CDC testing algorithm guidelines, each false-positive screening result represents a potential opportunity for an HIV misdiagnosis. Even in laboratories with proper procedures in place, false-positive results have consequences for patients and for the cost-effectiveness of laboratory operations.9-11,18 Per CDC HIV testing guidelines, all positive screening results should be retested, which consumes additional technologist time and reagents. Only after this additional testing is performed and appropriately reviewed can a final laboratory diagnosis be rendered that meets the standard of laboratory care.
Cost Savings
As observed at CMJCVAMC, the use of a fourth-generation assay with increased sensitivity/specificity led to a reduction in these false-positive results, which improved laboratory efficiency and avoided wasted resources on confirmatory tests.11,18 Cost savings at CMJCVAMC from implementation of the fourth-generation assay include technologist time and reagent costs. Generalizable technologist time costs at any institution would include the time needed to perform the confirmatory HIV-1/HIV-2 antibody differentiation assay (slightly less than 1 hour per specimen at CMJCVAMC) and the time needed to perform the viral load assay (about 6 hours to run a batch of 24 tests at CMJCVAMC). We calculated that confirmatory testing cost $184.51 per test at CMJCVAMC. Replacing the third-generation assay with the more sensitive and specific fourth-generation test saved an estimated $3,875 annually. This estimate does not include savings in the pathologist/director's time for reviewing HIV results after completion of the algorithm, or the clinician and patient costs, and anxiety, incurred while awaiting results of the confirmatory sequence of tests.
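As a rough consistency check (an inference from the figures above, not a calculation given in the study): the fourth-generation year produced 21 fewer false positives than the third-generation year (28 vs 7), and 21 avoided confirmatory workups at $184.51 per test is approximately $3,875.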
Because an HIV diagnosis can have a significant psychological impact on the patient, it is important to ensure that the diagnosis conveyed is correct.27 The provision of an HIV diagnosis has been described as a traumatic stressor capable of causing psychological harm, harm that should be avoided when the diagnosis is not accurate. When presented with a positive or reactive screening result from an instrument or assay known to have very high sensitivity and specificity, there can be a temptation to present this result to the patient as a diagnosis. However, a false diagnosis from a false-positive screen would not only be harmful but, given the low prevalence of the disease in the screened population, would happen fairly frequently; in some settings, false positives may actually outnumber true-positive test results.
Better screening assays with greater specificity (even by fractions of a percentage point, given that specificities already exceed 99%) help reduce the number of false positives and, with them, the temptation to convey an incorrect diagnosis. By adding a layer of safety through greater specificity, implementation of the fourth-generation assay improved the diagnostic safety of the laboratory and reduced the risk of error for the clinician, who ultimately bears responsibility for conveying an HIV diagnosis to the patient. Given the increased prevalence of psychological and physical ailments among veterans, ensuring a correct diagnosis may be even more important to avoid added psychological harm.27,28
Veteran Population
In the general population, the fourth-generation assay has been shown to be more sensitive and specific than the third-generation assay due to the addition of p24 antigen detection and the refinement of the antigenic targets for antibody detection.6,8,11-13,18-20,22 However, the veteran population that receives VA medical care differs significantly from the nonveteran general population. Compared with nonveterans, veterans tend to have poorer health status, more comorbid conditions, and greater need for medical resources.24-26 Veterans also may differ in sociodemographic status, race, ethnicity, and gender.24-26
VA research in the veteran population is unique, and veterans who use VA health care services are an even more highly selected subpopulation.26 Because of these population differences, conclusions drawn from studies of the general population may not always apply to the veteran population treated by VA health care services. Studies tailored to the veteran population in the VA health care setting are therefore essential to confirm that findings from the general population truly apply to veterans.
While the false-positive risk is most closely associated with testing in low-prevalence populations, false-positive screening results also can occur in high-risk individuals, such as a person taking preexposure prophylaxis (PrEP) because of ongoing behavior that carries a high risk of HIV acquisition.8,29 A false-positive result in this setting creates a conundrum for the clinician: the differential diagnosis must consider both very early infection and a false positive. Options include stopping PrEP and treating presumed early primary HIV infection, or continuing PrEP. Each choice can affect the patient, whether through selection of resistant virus from an inadvertently inadequate treatment regimen, increased risk of infection if PrEP is stopped while the high-risk behavior continues, or the risks of the additional antiretroviral agents needed for complete empiric therapy. Cases of individuals on PrEP with false-positive HIV screening tests have been reported both within and outside the veteran population.8 Screening tests with greater sensitivity and specificity can only help guide better patient care.
Limitations
This quality assurance study was limited to retrospectively identifying the improvement in the false-positive rate after the transition from the third-generation to the fourth-generation HIV screen. False-positive screen cases were readily identified on review of confirmatory testing per the CDC laboratory HIV testing algorithm.12,13 Because this was a retrospective review of clinically ordered and indicated testing, confirmatory testing was not performed on negative screens, and a false-negative rate could not be calculated.
This study also was restricted to a population treated in a VA health care setting, which is known to differ from the general population.24-26
Conclusions
The switch to a fourth-generation assay resulted in a significant reduction in false-positive test results for veteran patients at CMJCVAMC. This reduction in false-positive screening not only reduced the laboratory workload of confirmatory testing and subsequent review, but also saved technologist time and reagent costs. While this reduction in false-positive results has been documented in nonveteran populations, this is the first study specifically examining a veteran population treated at a VAMC.8,11,18 It confirms, for a veteran population, the previously documented improvement in the false-positive rate of HIV screening tests with the change from a third-generation to a fourth-generation assay.24
1. Feinberg MB. Changing the natural history of HIV disease. Lancet. 1996;348(9022):239-246. doi:10.1016/s0140-6736(96)06231-9
2. Alexander TS. Human immunodeficiency virus diagnostic testing: 30 years of evolution. Clin Vaccine Immunol. 2016;23(4):249-253. Published 2016 Apr 4. doi:10.1128/CVI.00053-16
3. Mortimer PP, Parry JV, Mortimer JY. Which anti-HTLV III/LAV assays for screening and confirmatory testing? Lancet. 1985;2(8460):873-877. doi:10.1016/s0140-6736(85)90136-9
4. Holmberg SD, Palella FJ Jr, Lichtenstein KA, Havlir DV. The case for earlier treatment of HIV infection [published correction appears in Clin Infect Dis. 2004 Dec 15;39(12):1869]. Clin Infect Dis. 2004;39(11):1699-1704. doi:10.1086/425743
5. US Preventive Services Task Force, Owens DK, Davidson KW, et al. Screening for HIV Infection: US Preventive Services Task Force Recommendation Statement. JAMA. 2019;321(23):2326-2336. doi:10.1001/jama.2019.6587
6. Branson BM, Handsfield HH, Lampe MA, et al. Revised recommendations for HIV testing of adults, adolescents, and pregnant women in health-care settings. MMWR Recomm Rep. 2006;55(RR-14):1-CE4.
7. Bayer R, Philbin M, Remien RH. The end of written informed consent for HIV testing: not with a bang but a whimper. Am J Public Health. 2017;107(8):1259-1265. doi:10.2105/AJPH.2017.303819
8. Petersen J, Jhala D. It's not HIV! The pitfall of unconfirmed positive HIV screening assays. Abstract presented at: Annual Meeting Pennsylvania Association of Pathologists; April 14, 2018.
9. Wood RW, Dunphy C, Okita K, Swenson P. Two “HIV-infected” persons not really infected. Arch Intern Med. 2003;163(15):1857-1859. doi:10.1001/archinte.163.15.1857
10. Permpalung N, Ungprasert P, Chongnarungsin D, Okoli A, Hyman CL. A diagnostic blind spot: acute infectious mononucleosis or acute retroviral syndrome. Am J Med. 2013;126(9):e5-e6. doi:10.1016/j.amjmed.2013.03.017
11. Dalal S, Petersen J, Luta D, Jhala D. Third- to fourth-generation HIV testing: reduction in false-positive results with the new way of testing, the Corporal Michael J. Crescenz Veteran Affairs Medical Center (CMCVAMC) Experience. Am J Clin Pathol. 2018;150(suppl 1):S70-S71. doi:10.1093/ajcp/aqy093.172
12. Centers for Disease Control and Prevention. Laboratory testing for the diagnosis of HIV infection: updated recommendations. Published June 27, 2014. Accessed April 14, 2021. doi:10.15620/cdc.23447
13. Centers for Disease Control and Prevention. 2018 quick reference guide: recommended laboratory HIV testing algorithm for serum or plasma specimens. Updated January 2018. Accessed April 14, 2021. https://stacks.cdc.gov/view/cdc/50872
14. Masciotra S, McDougal JS, Feldman J, Sprinkle P, Wesolowski L, Owen SM. Evaluation of an alternative HIV diagnostic algorithm using specimens from seroconversion panels and persons with established HIV infections. J Clin Virol. 2011;52(suppl 1):S17-S22. doi:10.1016/j.jcv.2011.09.011
15. Morton A. When lab tests lie … heterophile antibodies. Aust Fam Physician. 2014;43(6):391-393.
16. Spencer DV, Nolte FS, Zhu Y. Heterophilic antibody interference causing false-positive rapid human immunodeficiency virus antibody testing. Clin Chim Acta. 2009;399(1-2):121-122. doi:10.1016/j.cca.2008.09.030
17. Kim S, Lee JH, Choi JY, Kim JM, Kim HS. False-positive rate of a “fourth-generation” HIV antigen/antibody combination assay in an area of low HIV prevalence. Clin Vaccine Immunol. 2010;17(10):1642-1644. doi:10.1128/CVI.00258-10
18. Muthukumar A, Alatoom A, Burns S, et al. Comparison of 4th-generation HIV antigen/antibody combination assay with 3rd-generation HIV antibody assays for the occurrence of false-positive and false-negative results. Lab Med. 2015;46(2):84-e29. doi:10.1309/LMM3X37NSWUCMVRS
19. Mitchell EO, Stewart G, Bajzik O, Ferret M, Bentsen C, Shriver MK. Performance comparison of the 4th generation Bio-Rad Laboratories GS HIV Combo Ag/Ab EIA on the EVOLIS™ automated system versus Abbott ARCHITECT HIV Ag/Ab Combo, Ortho Anti-HIV 1+2 EIA on Vitros ECi and Siemens HIV-1/O/2 enhanced on Advia Centaur. J Clin Virol. 2013;58(suppl 1):e79-e84. doi:10.1016/j.jcv.2013.08.009
20. Dubravac T, Gahan TF, Pentella MA. Use of the Abbott Architect HIV antigen/antibody assay in a low incidence population. J Clin Virol. 2013;58(suppl 1):e76-e78. doi:10.1016/j.jcv.2013.10.020
21. Montesinos I, Eykmans J, Delforge ML. Evaluation of the Bio-Rad Geenius HIV-1/2 test as a confirmatory assay. J Clin Virol. 2014;60(4):399-401. doi:10.1016/j.jcv.2014.04.025
22. van Binsbergen J, Siebelink A, Jacobs A, et al. Improved performance of seroconversion with a 4th generation HIV antigen/antibody assay. J Virol Methods. 1999;82(1):77-84. doi:10.1016/s0166-0934(99)00086-5
23. CLSI. User Protocol for Evaluation of Qualitative Test Performance: Approved Guideline. Second ed. EP12-A2. CLSI; 2008:1-46.
24. Agha Z, Lofgren RP, VanRuiswyk JV, Layde PM. Are patients at Veterans Affairs medical centers sicker? A comparative analysis of health status and medical resource use. Arch Intern Med. 2000;160(21):3252-3257. doi:10.1001/archinte.160.21.3252
25. Eibner C, Krull H, Brown KM, et al. Current and projected characteristics and unique health care needs of the patient population served by the Department of Veterans Affairs. Rand Health Q. 2016;5(4):13. Published 2016 May 9.
26. Morgan RO, Teal CR, Reddy SG, Ford ME, Ashton CM. Measurement in Veterans Affairs Health Services Research: veterans as a special population. Health Serv Res. 2005;40(5, pt 2):1573-1583. doi:10.1111/j.1475-6773.2005.00448.x
27. Nightingale VR, Sher TG, Hansen NB. The impact of receiving an HIV diagnosis and cognitive processing on psychological distress and posttraumatic growth. J Trauma Stress. 2010;23(4):452-460. doi:10.1002/jts.20554
28. Spelman JF, Hunt SC, Seal KH, Burgo-Black AL. Post deployment care for returning combat veterans. J Gen Intern Med. 2012;27(9):1200-1209. doi:10.1007/s11606-012-2061-1
29. Ndase P, Celum C, Kidoguchi L, et al. Frequency of false positive rapid HIV serologic tests in African men and women receiving PrEP for HIV prevention: implications for programmatic roll-out of biomedical interventions. PLoS One. 2015;10(4):e0123005. Published 2015 Apr 17. doi:10.1371/journal.pone.0123005
20. Dubravac T, Gahan TF, Pentella MA. Use of the Abbott Architect HIV antigen/antibody assay in a low incidence population. J Clin Virol. 2013;58(suppl 1):e76-e78. doi:10.1016/j.jcv.2013.10.020
21. Montesinos I, Eykmans J, Delforge ML. Evaluation of the Bio-Rad Geenius HIV-1/2 test as a confirmatory assay. J Clin Virol. 2014;60(4):399-401. doi:10.1016/j.jcv.2014.04.025
22. van Binsbergen J, Siebelink A, Jacobs A, et al. Improved performance of seroconversion with a 4th generation HIV antigen/antibody assay. J Virol Methods. 1999;82(1):77-84. doi:10.1016/s0166-0934(99)00086-5
23. CLSI. User Protocol for Evaluation of Qualitative Test Performance: Approved Guideline. Second ed. EP12-A2. CLSI; 2008:1-46.
24. Agha Z, Lofgren RP, VanRuiswyk JV, Layde PM. Are patients at Veterans Affairs medical centers sicker? A comparative analysis of health status and medical resource use. Arch Intern Med. 2000;160(21):3252-3257. doi:10.1001/archinte.160.21.3252
25. Eibner C, Krull H, Brown KM, et al. Current and projected characteristics and unique health care needs of the patient population served by the Department of Veterans Affairs. Rand Health Q. 2016;5(4):13. Published 2016 May 9.
26. Morgan RO, Teal CR, Reddy SG, Ford ME, Ashton CM. Measurement in Veterans Affairs Health Services Research: veterans as a special population. Health Serv Res. 2005;40(5, pt 2):1573-1583. doi:10.1111/j.1475-6773.2005.00448.x
27. Nightingale VR, Sher TG, Hansen NB. The impact of receiving an HIV diagnosis and cognitive processing on psychological distress and posttraumatic growth. J Trauma Stress. 2010;23(4):452-460. doi:10.1002/jts.20554
28. Spelman JF, Hunt SC, Seal KH, Burgo-Black AL. Post deployment care for returning combat veterans. J Gen Intern Med. 2012;27(9):1200-1209. doi:10.1007/s11606-012-2061-1
29. Ndase P, Celum C, Kidoguchi L, et al. Frequency of false positive rapid HIV serologic tests in African men and women receiving PrEP for HIV prevention: implications for programmatic roll-out of biomedical interventions. PLoS One. 2015;10(4):e0123005. Published 2015 Apr 17. doi:10.1371/journal.pone.0123005
Risk Factors and Antipsychotic Usage Patterns Associated With Terminal Delirium in a Veteran Long-Term Care Hospice Population
Delirium is a condition commonly exhibited by hospitalized patients and by those who are approaching the end of life.1 Patients who experience a disturbance in attention that develops over a relatively short period and represents an acute change may have delirium.2 There is often an additional cognitive disturbance, such as disorientation; deficits in memory, language, or visuospatial ability; or perceptual disturbance. Terminal delirium is defined as delirium that occurs in the dying process and implies that reversal is less likely.3 When death is anticipated, diagnostic workups are not recommended, and treatment of the physiologic abnormalities that contribute to delirium is generally ineffective.4
Background
Delirium often goes underdiagnosed and undetected by clinicians; studies have reported that it is missed in 22% to 50% of cases.5 Factors that contribute to underdetection include preexisting dementia, older age, visual or hearing impairment, and a hypoactive presentation. Other possible reasons for nondetection are the fluctuating nature of delirium and the lack of formal cognitive assessment as part of routine screening across care settings.5 Another study found that 41% of health care providers (HCPs) felt that screening for delirium was burdensome.6
To date, no veteran-focused studies have investigated the prevalence of or risk factors for terminal delirium in US Department of Veterans Affairs (VA) long-term care hospice units. Most VA long-term care hospice units are in community living centers (CLCs) that follow regulatory guidelines for the use of antipsychotic medications. The Centers for Medicare and Medicaid Services require that, when antipsychotics are prescribed, documentation clearly show the indication for the medication, the multiple attempts to implement planned care, the nonpharmacologic approaches tried, and ongoing evaluation of the effectiveness of these interventions.7 The symptoms of terminal delirium cause significant distress to patients, families, caregivers, and nursing staff, and the literature suggests that delirium poses significant relational challenges for patients, families, and HCPs in end-of-life situations.8,9 We hypothesized that early identification of risk factors for terminal delirium in this population may lead to increased use of nonpharmacologic measures to prevent it, increase nursing vigilance for emerging symptoms, and reduce symptom burden should terminal delirium develop.
The reported prevalence of delirium in the long-term care setting ranges from 1.4% to 70.3%, with much higher rates in institutionalized populations than among patients living at home.10 In a study of the prevalence, severity, and natural history of neuropsychiatric syndromes in terminally ill veterans enrolled in community hospice, delirium was present in only 4.1% of patients at the initial visit but in 42.5% at the last visit, and more than half had at least 1 episode of delirium during the 90-day study period.11 In a study of terminal cancer patients admitted to hospice, 80% experienced delirium in their final days.12
Risk factors for the development of delirium that have been identified in actively dying patients include bowel or bladder obstruction, fluid and electrolyte imbalances, suboptimal pain management, medication adverse effects and toxicity (eg, benzodiazepines, opioids, anticholinergics, and steroids), the addition of ≥ 3 medications, infection, hepatic and renal failure, poor glycemic control, hypoxia, and hematologic disturbances.4,5,13 A high percentage of patients with a previous diagnosis of dementia were found to exhibit terminal delirium.14
There are 2 major subtypes of delirium: hyperactive and hypoactive.4 Patients with hypoactive delirium exhibit lethargy, reduced motor activity, lack of interest, and/or incoherent speech; there is currently little evidence to guide its treatment. By contrast, hyperactive delirium is associated with hallucinations, agitation, heightened arousal, and inappropriate behavior, and many studies suggest both nonpharmacologic and pharmacologic treatment modalities.4,13 Nonpharmacologic interventions may minimize the risk and severity of symptoms associated with delirium, and current guidelines recommend these interventions before pharmacologic treatment.4 Nonpharmacologic interventions include, but are not limited to: engaging the patient in mentally stimulating activities; surrounding the patient with familiar materials (eg, photos); ensuring that all individuals identify themselves when they encounter the patient; minimizing the intensity of stimulation; providing family or volunteer presence, soft lighting, and warm blankets; and ensuring the patient uses hearing aids and glasses if needed.4,14
Although no medications are US Food and Drug Administration approved to treat hyperactive delirium, first-generation antipsychotics (eg, haloperidol, chlorpromazine) are considered first-line treatment for patients exhibiting psychosis and psychomotor agitation.3,4,14-16 In terminally ill patients, there is limited evidence from clinical trials to support the efficacy of drug therapy,14 and one study showed a lack of efficacy with hydration and opioid rotation.17 Terminally ill patients experiencing hyperactive delirium are at significantly increased risk of muscle tension, myoclonic seizures, and distress to the patient, family, and caregiver.1 Benzodiazepines can be considered first-line treatment for dying patients with terminal delirium when the goals of treatment are to relieve muscle tension, ensure amnesia, reduce the risk of seizures, and decrease psychosis and agitation.18,19 In patients with a history of alcohol misuse who are experiencing terminal delirium, benzodiazepines also may be the preferred pharmacologic treatment.20 Caution must be exercised with benzodiazepines, however, because they can cause oversedation, increased confusion, and/or a paradoxical worsening of delirium.3,4,14
Methods
This was a retrospective case-control study of patients who died in the Edward Hines, Jr. Veterans Affairs Hospital CLC in Hines, Illinois, under the nursing home hospice treating specialty between October 1, 2013, and September 30, 2015. Because of the retrospective nature of the study, the use of antipsychotics within the last 2 weeks of life served as a surrogate marker for the development of terminal delirium. Cases were defined as patients who were treated with antipsychotics for terminal delirium within the last 2 weeks of their lives; controls were defined as patients who were not. Hospice patients who were still living and patients who were discharged from the CLC before death were excluded.
The goals of this study were to (1) determine risk factors in the VA CLC hospice veteran population for the development of terminal delirium; (2) evaluate documentation by the nursing staff of nonpharmacologic interventions and indications for antipsychotic use in the treatment of terminal delirium; and (3) examine the current usage patterns of antipsychotics for the treatment of terminal delirium.
Veterans’ medical records were reviewed from 2 weeks before death until the recorded death date. Factors that were assessed included age, war era of service, date of death, terminal diagnosis, time interval from cancer diagnosis to death, comorbid conditions, prescribed antipsychotic medications, and other medications potentially contributing to delirium. Nursing documentation was reviewed for indications for administration of antipsychotic medications and nonpharmacologic interventions used to mitigate the symptoms of terminal delirium.
Statistical analyses were conducted in SAS Version 9.3. Cases were compared with controls using univariate and multivariable statistics as appropriate. Continuous variables (eg, age) were compared with Student t tests; categorical variables (eg, PTSD diagnosis) were compared using χ2 analysis or the Fisher exact test as appropriate. Variables with a P value < .1 in the univariate analysis were entered into logistic regression models, and independent variables were then removed from the models using a backward selection process. Interaction terms were tested based on significance and clinical relevance. A P value < .05 was considered statistically significant.
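As a rough illustration of this analysis plan, the sketch below implements the same steps in Python with SciPy and statsmodels; the study itself used SAS, and the file name, data frame, and column names here are hypothetical.

```python
# Illustrative sketch of the analysis plan described above (hypothetical data).
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("hospice_cohort.csv")  # hypothetical extract: 1 row per veteran,
                                        # binary columns coded 0/1
cases = df[df["terminal_delirium"] == 1]
controls = df[df["terminal_delirium"] == 0]

# Continuous variables (eg, age): Student t test
t_stat, p_age = stats.ttest_ind(cases["age"], controls["age"])

# Categorical variables (eg, PTSD diagnosis): chi-square or Fisher exact test
table = pd.crosstab(df["ptsd"], df["terminal_delirium"])
if (table.values < 5).any():  # small cell counts: use Fisher exact test
    _, p_ptsd = stats.fisher_exact(table)
else:
    chi2, p_ptsd, dof, expected = stats.chi2_contingency(table)

# Variables with univariate P < .1 enter a logistic regression model,
# then backward selection drops the weakest term until all remaining P < .05.
predictors = ["substance_abuse", "steroids", "opioids", "anticholinergics"]
while predictors:
    model = smf.logit(
        "terminal_delirium ~ " + " + ".join(predictors), data=df
    ).fit(disp=0)
    worst = model.pvalues.drop("Intercept").idxmax()
    if model.pvalues[worst] < 0.05:
        break
    predictors.remove(worst)
print(model.summary())
```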
Results
From October 1, 2013, to September 30, 2015, 307 patients were assessed for inclusion in this study; 31 were excluded, leaving 276 in the analysis. Of these, 186 received antipsychotic medications for the treatment of terminal delirium (cases), while 90 did not receive antipsychotics (controls). Of the 31 excluded patients, 13 were discharged to receive home hospice care, 11 were discharged to community nursing homes, 5 died in acute care units of Edward Hines, Jr. VA Hospital, and 2 died outside of the study period.
The mean age of the included patients was 75.5 years, and the most common terminal diagnosis was cancer, which occurred in 156 patients (56.5%) (Table 1). Baseline characteristics, including war era of service, terminal diagnosis, and comorbid conditions, were similar between cases and controls, and there was no statistically significant difference in terminal diagnoses between the groups. The mean time between cancer diagnosis and death was longer in the control group than in the case group (25 vs 16 months), although not notably so. Veterans in the control group spent more days (mean [SD]) in the hospice unit than did veterans who experienced terminal delirium (48.5 [168.4] vs 28.2 [46.9]; P = .01), and patients with suspected infections were more likely to be in the control group (P = .04; odds ratio [OR], 1.70; 95% CI, 1.02-2.82).
The most common antipsychotic administered in the last 14 days of life was haloperidol: 175 veterans (94%) in the case group received haloperidol at least once in the last 2 weeks of life. Four veterans (4.4%) in the control group received haloperidol for nausea/vomiting, not terminal delirium. Atypical antipsychotics (risperidone, olanzapine, quetiapine, and aripiprazole) were used infrequently.
A total of 186 veterans received at least 1 dose of an antipsychotic for terminal delirium: 97 (52.2%) required both scheduled and as-needed doses, 75 (40.3%) received only as-needed doses, and 14 (7.5%) required only scheduled doses. When as-needed and scheduled doses were combined, each veteran received a mean of 14.9 doses; veterans with antipsychotics ordered only as needed received a mean of 5.8 doses each. Administration of antipsychotic doses was split evenly among the 3 nursing shifts (day, evening, and night), with about 30% of doses administered on each shift.
Nurses were expected to document the nonpharmacologic interventions that preceded each antipsychotic dose. Of the 1,028 doses administered to the 186 veterans who received at least 1 dose of an antipsychotic for terminal delirium, nearly all (99.4%) had inadequate documentation based on current long-term care guidelines for prudent antipsychotic use.9
Several risk factors for terminal delirium were identified in this veteran population. Veterans with a history of drug or alcohol abuse were at significantly higher risk for terminal delirium (P = .04; OR, 1.87; 95% CI, 1.03-3.37). As noted in previous studies, steroid use (P = .01; OR, 2.57; 95% CI, 1.26-5.22), opioid use (P = .007; OR, 5.94; 95% CI, 1.54-22.99), and anticholinergic medication use (P = .01; OR, 2.06; 95% CI, 1.21-3.52) also increased the risk of delirium (Table 2).
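For readers unfamiliar with how an unadjusted odds ratio and its Wald 95% CI are derived from a 2×2 exposure-by-outcome table, the following sketch shows the arithmetic; the cell counts are hypothetical, not the study's actual data.

```python
import math

# Hypothetical 2x2 table (not the study's actual counts):
#                         delirium (cases)   no delirium (controls)
# substance abuse: yes          a = 50              b = 15
# substance abuse: no           c = 136             d = 75
a, b, c, d = 50, 15, 136, 75

odds_ratio = (a * d) / (b * c)                # cross-product ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of ln(OR)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f}-{upper:.2f}")
# If the interval excludes 1.0, the association is significant at P < .05.
```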
When risk factors were combined, significant interaction terms were identified (Table 3). Patients at higher risk of terminal delirium included Vietnam-era veterans with liver disease (P = .04; OR, 1.21; 95% CI, 1.01-1.45) and veterans with a history of drug or alcohol abuse plus comorbid liver disease (P = .03; OR, 1.26; 95% CI, 1.02-1.56). In a stratified analysis of veterans with a terminal diagnosis of cancer, those with a mental health condition (eg, PTSD, bipolar disorder, or schizophrenia) trended toward a higher risk of delirium, although the confidence interval crossed 1.0 and the finding was therefore not statistically significant (P = .048; OR, 2.73; 95% CI, 0.98-7.58). Within the cancer cohort, veterans with liver disease and a history of drug/alcohol abuse had an increased risk of delirium (P = .01; OR, 1.43; 95% CI, 1.07-1.91).
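How an interaction term such as Vietnam-era service × liver disease might be specified is sketched below, again with hypothetical column names and Python's statsmodels formula interface standing in for the SAS procedures actually used.

```python
import numpy as np
import statsmodels.formula.api as smf

# 'df' is the hypothetical cohort data frame from the earlier sketch.
# In the formula, 'a * b' expands to a + b + a:b (main effects plus interaction).
model = smf.logit(
    "terminal_delirium ~ vietnam_era * liver_disease", data=df
).fit(disp=0)

# The a:b coefficient tests whether the two risk factors jointly confer
# more (or less) risk than their separate effects would predict.
or_interaction = np.exp(model.params["vietnam_era:liver_disease"])
```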
Discussion
Terminal delirium is experienced by many individuals in their last days to weeks of life. Symptoms can present as hyperactive (eg, agitation, hallucinations, heightened arousal) or hypoactive (lethargy, reduced motor activity, incoherent speech). Hyperactive terminal delirium is particularly problematic because it causes increased distress to the patient, family, and caregivers. Delirium can lead to safety concerns, such as fall risk, due to patients’ decreased insight into functional decline.
Many studies suggest both nonpharmacologic and pharmacologic treatments for nonterminal delirium that may also apply to terminal delirium. Nonpharmacologic methods, such as providing a quiet and familiar environment, relieving urinary retention or constipation, and attending to sensory deficits, may help prevent or minimize delirium. Pharmacologic interventions, such as antipsychotics or benzodiazepines, may provide benefit when other modalities have failed to relieve distressing symptoms. Because hypoactive delirium is usually accompanied by somnolence and reduced motor activity, medication is most often administered to individuals with hyperactive delirium.
The VA provides long-term care hospice beds in its CLCs for veterans who are nearing the end of life and have inadequate caregiver support for comprehensive end-of-life care in the home (Case Presentation). Because of their military service and other factors common in their life histories, these veterans may have a unique set of characteristics that are predictive of developing terminal delirium. Awareness of this propensity allows for early identification of symptoms, timely initiation of nonpharmacologic interventions, and potentially a decreased need for antipsychotic medications.
In this study, as in previous reports, certain medications (eg, steroids, opioids, and anticholinergics) increased the risk of developing terminal delirium in this veteran population. Steroids and opioids are commonly used to manage neoplasm-related pain and are prescribed throughout the course of terminal illness; their utility often outweighs their potential adverse effects, but their use should be weighed when assessing a patient's risk of developing delirium. Anticholinergics (eg, glycopyrrolate or scopolamine) are often prescribed in the last days of life for terminal secretions despite a lack of evidence of patient benefit; nonetheless, they are used to reduce the family and caregiver distress caused by the bothersome sounds of terminal secretions, referred to as the death rattle.21
Veterans in the control group lived longer on the hospice unit. It is unclear whether severity of illness was related to the development of terminal delirium or whether terminal delirium itself contributed to a hastened death. Veterans with a suspected infection were identified by the use of antibiotics on admission to the hospice unit or by antibiotics prescribed during the last 2 weeks of life; treatment of the underlying infection may therefore have contributed to the lower rate of delirium in the control group.
More than half the veterans in this study received at least 1 dose of an antipsychotic in the last 2 weeks of life for the treatment of terminal delirium. The most commonly administered medication was haloperidol, given either orally or subcutaneously. Atypical antipsychotics were used less often; when symptoms persisted as the ability to swallow declined, these patients were sometimes transitioned to subcutaneous haloperidol.
In this veteran population, a history of drug or alcohol abuse (even if not recent) increased the risk of terminal delirium. Veterans with comorbid cancer and a history of mental health disease (eg, PTSD, schizophrenia, bipolar disorder) and Vietnam-era veterans with liver disease (primary cancer, metastases, or cirrhosis) also were more likely to develop terminal delirium.
As in community hospice settings, nurses are at the forefront of symptom management for veterans residing in VA CLCs under hospice care. Nonpharmacologic interventions are provided around the clock by the bedside team to comfort veterans, families, and caregivers throughout the dying process, and nurses' assessment skills and documentation inform the plan of care for the entire interdisciplinary hospice team. Because the treatment of terminal delirium often involves antipsychotic medications, documentation surrounding these medications receives particular scrutiny.7 This study suggests a need for a more rigorous and consistent method of documenting the assessment of, and interventions for, terminal delirium.
Limitations
Limitations of the current study include the possibility that hyperactive delirium was misinterpreted and treated as pain; probable underreporting of hypoactive delirium and its associated symptoms; the use of antipsychotics as a surrogate marker for the development of terminal delirium; and incomplete nursing documentation of the assessment of, and interventions for, terminal delirium. In addition, the total milligrams of antipsychotics administered per patient were not collected. Finally, other risk factors may not have been identified because of the low numbers of veterans with certain diagnoses (eg, dementia).
Conclusions
Based on the findings of this study, several steps have been implemented to enhance the care of veterans under hospice care in this CLC: (1) nurses providing direct patient care have been educated on the assessment of terminal delirium using the modified Richmond Agitation-Sedation Scale (mRASS) and on its treatment;22 (2) a hospice delirium note template has been created that details the symptoms of terminal delirium, nonpharmacologic interventions, the use of antipsychotic medications if indicated, and the outcomes of interventions (a sketch of its fields appears below); (3) providers (eg, physicians, advanced practice nurses) review each veteran's medical history for the risk factors noted above; and (4) any risk factor identified by this study leads to a nursing order for delirium precautions, which requires nurses to complete the delirium note template each shift.
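As an illustration only, the kinds of fields such a per-shift delirium note might capture are sketched below as a simple structure; the field names are hypothetical and do not reproduce the template actually deployed at this CLC.

```python
# Hypothetical sketch of per-shift delirium note fields described above;
# not the actual template used at this facility.
delirium_note = {
    "mrass_score": -1,                    # modified RASS, assessed each shift
    "delirium_symptoms": ["agitation"],   # observed symptoms, if any
    "nonpharmacologic_interventions": [   # tried before any antipsychotic dose
        "familiar objects at bedside",
        "soft lighting",
        "family presence",
    ],
    "antipsychotic_given": True,          # only if nonpharmacologic measures fail
    "antipsychotic_indication": "hyperactive delirium with distress",
    "outcome_of_interventions": "settled within 1 hour",
}
```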
The goal of this enhanced process is to identify veterans at risk for terminal delirium, observe changes that may indicate its onset, and intervene promptly to decrease symptom burden and improve quality of life and safety. Potentially, there will be a reduced need for antipsychotic medications to control the more severe symptoms of terminal delirium. A future study will evaluate the outcomes of this enhanced process for the assessment and treatment of terminal delirium in this veteran population.
Acknowledgment
We thank Martin J. Gorbien, MD, associate chief of staff of Geriatrics and Extended Care, for his continued support throughout this project.
1. Casarett DJ, Inouye SK. Diagnosis and management of delirium near the end of life. Ann Intern Med. 2001;135(1):32-40.
2. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 5th ed. American Psychiatric Association; 2013.
3. Grassi L, Caraceni A, Mitchell A, et al. Management of delirium in palliative care: a review. Curr Psychiatry Rep. 2015;17(13):1-9. doi:10.1007/s11920-015-0550-8
4. Bush S, Leonard M, Agar M, et al. End-of-life delirium: issues regarding the recognition, optimal management, and role of sedation in the dying phase. J Pain Symptom Manage. 2014;48(2):215-230. doi:10.1016/j.jpainsymman.2014.05.009
5. Moyer D. Terminal delirium in geriatric patients with cancer at end of life. Am J Hosp Palliat Med. 2010;28(1):44-51. doi:10.1177/1049909110376755
6. Lai X, Huang Z, Chen C, et al. Delirium screening in patients in a palliative care ward: a best practice implementation project. JBI Database System Rev Implement Rep. 2019;17(3):429-441. doi:10.11124/JBISRIR-2017-003646
7. Centers for Medicare and Medicaid Services. Medicare and Medicaid Programs; reform of requirements for long-term care facilities. Final rule. Fed Regist. 2016;81(192):68688-68872. Accessed April 17, 2021. https://pubmed.ncbi.nlm.nih.gov/27731960
8. Wright D, Brajtman S, Macdonald M. A relational ethical approach to end-of-life delirium. J Pain Symptom Manage. 2014;48(2):191-198. doi:10.1016/j.jpainsymman.2013.08.015
9. Brajtman S, Higuchi K, McPherson C. Caring for patients with terminal delirium: palliative care unit and home care nurses’ experience. Int J Palliat Nurs. 2006;12(4):150-156. doi:10.12968/ijpn.2006.12.4.21010
10. Lange E, Verhaak P, Meer K. Prevalence, presentation, and prognosis of delirium in older people in the population, at home and in long-term care: a review. Int J Geriatr Psychiatry. 2013;28(2):127-134. doi:10.1002/gps.3814
11. Goy E, Ganzini L. Prevalence and natural history of neuropsychiatric syndromes in veteran hospice patients. J Pain Symptom Manage. 2011;41(12):394-401. doi:10.1016/j.jpainsymman.2010.04.015
12. Bush S, Bruera E. The assessment and management of delirium in cancer patients. Oncologist. 2009;4(10):1039-1049. doi:10.1634/theoncologist.2009-0122
13. Clary P, Lawson P. Pharmacologic pearls for end-of-life care. Am Fam Physician. 2009;79(12):1059-1065.
14. Blinderman CD, Billings J. Comfort for patients dying in the hospital. N Engl J Med. 2015;373(26):2549-2561. doi:10.1056/NEJMra1411746
15. Irwin SA, Pirrello RD, Hirst JM, Buckholz GT, Ferris FD. Clarifying delirium management: practical, evidence-based, expert recommendations for clinical practice. J Palliat Med. 2013;16(4):423-435. doi:10.1089/jpm.2012.0319
16. Bobb B. Dyspnea and delirium at the end of life. Clin J Oncol Nurs. 2016;20(3):244-246. doi:10.1188/16.CJON.244-246
17. Morita T, Tei Y, Inoue S. Agitated terminal delirium and association with partial opioid substitution and hydration. J Palliat Med. 2003;6(4):557-563. doi:10.1089/109662103768253669
18. Attard A, Ranjith G, Taylor D. Delirium and its treatment. CNS Drugs. 2008;22(8):631-644. doi:10.2165/00023210-200822080-00002
19. Hui D. Benzodiazepines for agitation in patients with delirium: selecting the right patient, right time, and right indication. Curr Opin Support Palliat Care. 2018;12(4):489-494. doi:10.1097/SPC.0000000000000395
20. Irwin P, Murray S, Bilinski A, Chern B, Stafford B. Alcohol withdrawal as an underrated cause of agitated delirium and terminal restlessness in patients with advanced malignancy. J Pain Symptom Manage. 2005;29(1):104-108. doi:10.1016/j.jpainsymman.2004.04.010
21. Lokker ME, van Zuylen L, van der Rijt CCD, van der Heide A. Prevalence, impact, and treatment of death rattle: a systematic review. J Pain Symptom Manage. 2014;48:2-12. doi:10.1016/j.jpainsymman.2013.03.011
22. Sessler C, Gosnell M, Grap M, et al. The Richmond Agitation–Sedation Scale: validity and reliability in adult intensive care unit patients. Am J Respir Crit Care Med. 2002;166(10):1338-1344. doi:10.1164/rccm.2107138
Delirium is a condition commonly exhibited by hospitalized patients and by those who are approaching the end of life.1 Patients who experience a disturbance in attention that develops over a relatively short period and represents an acute change may have delirium.2 Furthermore, there is often an additional cognitive disturbance, such as disorientation, memory deficit, language deficits, visuospatial deficit, or perception. Terminal delirium is defined as delirium that occurs in the dying process and implies that reversal is less likely.3 When death is anticipated, diagnostic workups are not recommended, and treatment of the physiologic abnormalities that contribute to delirium is generally ineffective.4
Background
Delirium is often underdiagnosed and undetected by the clinician. Some studies have shown that delirium is not detected in 22 to 50% of cases.5 Factors that contribute to the underdetection of delirium include preexisting dementia, older age, presence of visual or hearing impairment, and hypoactive presentation of delirium. Other possible reasons for nondetection of delirium are its fluctuating nature and lack of formal cognitive assessment as part of a routine screening across care settings.5 Another study found that 41% of health care providers (HCPs) felt that screening for delirium was burdensome.6
To date, there are no veteran-focused studies that investigate prevalence or risk factors for terminal delirium in US Department of Veterans Affairs (VA) long-term care hospice units. Most long-term care hospice units in the VA are in community living centers (CLCs) that follow regulatory guidelines for using antipsychotic medications. The Centers for Medicare and Medicaid Services state that if antipsychotics are prescribed, documentation must clearly show the indication for the antipsychotic medication, the multiple attempts to implement planned care, nonpharmacologic approaches, and ongoing evaluation of the effectiveness of these interventions.7 The symptoms of terminal delirium cause significant distress to patients, family and caregivers, and nursing staff. Literature suggests that delirium poses significant relational challenges for patients, families, and HCPs in end-of-life situations.8,9 We hypothesize that the early identification of risk factors for the development of terminal delirium in this population may lead to increased use of nonpharmacologic measures to prevent terminal delirium, increase nursing vigilance for development of symptoms, and reduce symptom burden should terminal delirium develop.
Prevalence of delirium in the long-term care setting has ranged between 1.4 and 70.3%.10 The rate was found to be much higher in institutionalized populations compared with that of patients classified as at-home. In a study of the prevalence, severity, and natural history of neuropsychiatric syndromes in terminally ill veterans enrolled in community hospice, delirium was found to be present in only 4.1% on the initial visit and 42.5% during last visit. Also, more than half had at least 1 episode of delirium during the 90-day study period.11 In a study of the prevalence of delirium in terminal cancer patients admitted to hospice, 80% experienced delirium in their final days.12
Risk factors for the development of delirium that have been identified in actively dying patients include bowel or bladder obstruction, fluid and electrolyte imbalances, suboptimal pain management, medication adverse effects and toxicity (eg, benzodiazepines, opioids, anticholinergics, and steroids), the addition of ≥ 3 medications, infection, hepatic and renal failure, poor glycemic control, hypoxia, and hematologic disturbances.4,5,13 A high percentage of patients with a previous diagnosis of dementia were found to exhibit terminal delirium.14
There are 2 major subtypes of delirium: hyperactive and hypoactive.4 Patients with hypoactive delirium exhibit lethargy, reduced motor activity, lack of interest, and/or incoherent speech. There is currently little evidence to guide the treatment of hypoactive delirium. By contrast, hyperactive delirium is associated with hallucinations, agitation, heightened arousal, and inappropriate behavior. Many studies suggest both nonpharmacologic and pharmacologic treatment modalities for the treatment of hyperactive delirium.4,13 Nonpharmacologic interventions may minimize the risk and severity of symptoms associated with delirium. Current guidelines recommend these interventions before pharmacologic treatment.4 Nonpharmacologic interventions include but are not limited to the following: engaging the patient in mentally stimulating activities; surrounding the patient with familiar materials (eg, photos); ensuring that all individuals identify themselves when they encounter a patient; minimizing the intensity of stimulation, providing family or volunteer presence, soft lighting and warm blankets; and ensuring the patient uses hearing aids and glasses if needed.4,14
Although there are no US Food and Drug Administration-approved medications to treat hyperactive delirium, first-generation antipsychotics (eg, haloperidol, chlorpromazine) are considered the first-line treatment for patients exhibiting psychosis and psychomotor agitation.3,4,14-16 In terminally ill patients, there is limited evidence from clinical trials to support the efficacy of drug therapy.14 One study showed lack of efficacy with hydration and opioid rotation.17 In terminally ill patients experiencing hyperactive delirium, there is a significant increased risk of muscle tension, myoclonic seizures, and distress to the patient, family, and caregiver.1 Benzodiazepines can be considered first-line treatment for dying patients with terminal delirium in which the goals of treatment are to relieve muscle tension, ensure amnesia, reduce the risk of seizures, and decrease psychosis and agitation.18,19 Furthermore, in patients with history of alcohol misuse who are experiencing terminal delirium, benzodiazepines also may be the preferred pharmacologic treatment.20 Caution must be exercised with the use of benzodiazepines because they can also cause oversedation, increased confusion, and/or a paradoxical worsening of delirium.3,4,14
Methods
This was a retrospective case-control study of patients who died in the Edward Hines Jr. Veterans Affairs Hospital CLC in Hines, Illinois, under the treating specialty nursing home hospice from October 1, 2013 to September 30, 2015. Due to the retrospective nature of this trial, the use of antipsychotics within the last 2 weeks of life was a surrogate marker for development of terminal delirium. Cases were defined as patients who were treated with antipsychotics for terminal delirium within the last 2 weeks of their lives. Controls were defined as patients who were not treated with antipsychotics for terminal delirium within the last 2 weeks of their lives. Living hospice patients and patients who were discharged from the CLC before death were excluded.
The goals of this study were to (1) determine risk factors in the VA CLC hospice veteran population for the development of terminal delirium; (2) evaluate documentation by the nursing staff of nonpharmacologic interventions and indications for antipsychotic use in the treatment of terminal delirium; and (3) examine the current usage patterns of antipsychotics for the treatment of terminal delirium.
Veterans’ medical records were reviewed from 2 weeks before death until the recorded death date. Factors that were assessed included age, war era of service, date of death, terminal diagnosis, time interval from cancer diagnosis to death, comorbid conditions, prescribed antipsychotic medications, and other medications potentially contributing to delirium. Nursing documentation was reviewed for indications for administration of antipsychotic medications and nonpharmacologic interventions used to mitigate the symptoms of terminal delirium.
Statistical analysis was conducted in SAS Version 9.3. Cases were compared with controls using univariate and multivariate statistics as appropriate. Comparisons for continuous variables (eg, age) were conducted with Student t tests. Categorical variables (eg, PTSD diagnosis) were compared using χ2 analysis or Fisher exact test as appropriate. Variables with a P value < .1 in the univariate analysis were included in logistic regression models. Independent variables were removed from the models, using a backward selection process. Interaction terms were tested based on significance and clinical relevance. A P value < .05 was considered statistically significant.
Results
From October 1, 2013 to September 30, 2015, 307 patients were analyzed for inclusion in this study. Within this population, 186 received antipsychotic medications for the treatment of terminal delirium (cases), while 90 did not receive antipsychotics (controls). Of the 31 excluded patients, 13 were discharged to receive home hospice care, 11 were discharged to community nursing homes, 5 died in acute care units of Edward Hines, Jr. VA Hospital, and 2 died outside of the study period.
The mean age of all included patients was 75.5 years, and the most common terminal diagnosis was cancer, which occurred in 156 patients (56.5%) (Table 1). The baseline characteristics were similar between the cases and controls, including war era of veteran, terminal diagnosis, and comorbid conditions. The mean time between cancer diagnosis and death was not notably longer in the control group compared with that of the case group (25 vs 16 mo, respectively). There was no statistically significant difference in terminal diagnoses between cases and controls. Veterans in the control group spent more days (mean [SD]) in the hospice unit compared with veterans who experienced terminal delirium (48.5 [168.4] vs 28.2 [46.9]; P = .01). Patients with suspected infections were more likely found in the control group (P = .04; odds ratio [OR] = 1.70; 95% CI, 1.02-2.82).
The most common antipsychotic administered in the last 14 days of life was haloperidol. In the case group, 175 (94%) received haloperidol at least once in the last 2 weeks of life. Four (4.4%) veterans in the control group received haloperidol for the indication of nausea/vomiting; not terminal delirium. Atypical antipsychotics were infrequently used and included risperidone, olanzapine, quetiapine, and aripiprazole.
A total of 186 veterans received at least 1 dose of an antipsychotic for terminal delirium: 97 (52.2% ) veterans requiring antipsychotics for the treatment of terminal delirium required both scheduled and as-needed doses; 75 (40.3%) received only as-needed doses, and 14 (7.5%) required only scheduled doses. When the number of as-needed and scheduled doses were combined, each veteran received a mean 14.9 doses. However, for those veterans with antipsychotics ordered only as needed, a mean 5.8 doses were received per patient. Administration of antipsychotic doses was split evenly among the 3 nursing shifts (day-evening-night) with about 30% of doses administered on each shift.
Nurses were expected to document nonpharmacologic interventions that preceded the administration of each antipsychotic dose. Of the 1,028 doses administered to the 186 veterans who received at least 1 dose of an antipsychotic for terminal delirium, most of the doses (99.4%) had inadequate documentation based on current long-term care guidelines for prudent antipsychotic use.9
Several risk factors for terminal delirium were identified in this veteran population. Veterans with a history of drug or alcohol abuse were found to be at a significantly higher risk for terminal delirium (P = .04; OR, 1.87; 95% CI, 1.03-3.37). As noted in previous studies, steroid use (P = .01; OR, 2.57; 95% CI, 1.26-5.22); opioids (P = .007; OR, 5.94; 95% CI, 1.54-22.99), and anticholinergic medications (P = .01; OR, 2.06; 95% CI, 1.21-3.52) also increased the risk of delirium (Table 2).
When risk factors were combined, interaction terms were identified (Table 3). Those patients found to be at a higher risk of terminal delirium included Vietnam-era veterans with liver disease (P = .04; OR, 1.21; 95% CI, 1.01-1.45) and veterans with a history of drug or alcohol abuse plus comorbid liver disease (P = .03; OR, 1.26; 95% CI, 1.02-1.56). In a stratified analysis in veterans with a terminal diagnosis of cancer, those with a mental health condition (eg, PTSD, bipolar disorder, or schizophrenia) (P = .048; OR, 2.73; 95% CI, 0.98-7.58) also had higher risk of delirium, though not statistically significant. Within the cancer cohort, veterans with liver disease and a history of drug/alcohol abuse had increased risk of delirium (P = .01; OR, 1.43; 95% CI, 1.07-1.91).
Discussion
Terminal delirium is experienced by many individuals in their last days to weeks of life. Symptoms can present as hyperactive (eg, agitation, hallucinations, heightened arousal) or hypoactive (lethargy, reduced motor activity, incoherent speech). Hyperactive terminal delirium is particularly problematic because it causes increased distress to the patient, family, and caregivers. Delirium can lead to safety concerns, such as fall risk, due to patients’ decreased insight into functional decline.
Many studies suggest both nonpharmacologic and pharmacologic treatments for nonterminal delirium that may also apply to terminal delirium. Nonpharmacologic methods, such as providing a quiet and familiar environment, relieving urinary retention or constipation, and attending to sensory deficits may help prevent or minimize delirium. Pharmacologic interventions, such as antipsychotics or benzodiazepines, may benefit when other modalities have failed to assuage distressing symptoms of delirium. Because hypoactive delirium is usually accompanied by somnolence and reduced motor activity, medication is most often administered to individuals with hyperactive delirium.
The VA provides long-term care hospice beds in their CLCs for veterans who are nearing end of life and have inadequate caregiver support for comprehensive end-of-life care in the home (Case Presentation). Because of their military service and other factors common in their life histories, they may have a unique set of characteristics that are predictive of developing terminal delirium. Awareness of the propensity for terminal delirium will allow for early identification of symptoms, timely initiation of nonpharmacologic interventions, and potentially a decreased need for use of antipsychotic medications.
In this study, as noted in previous studies, certain medications (eg, steroids, opioids, and anticholinergics) increased the risk of developing terminal delirium in this veteran population. Steroids and opioids are commonly used in management of neoplasm-related pain and are prescribed throughout the course of terminal illness. The utility of these medications often outweighs potential adverse effects but should be considered when assessing the risk for development of delirium. Anticholinergics (eg, glycopyrrolate or scopolamine) are often prescribed in the last days of life for terminal secretions despite lack of evidence of patient benefit. Nonetheless, anticholinergics are used to reduce family and caregiver distress resulting from bothersome sounds from terminal secretions, referred to as the death rattle.21
It was found that veterans in the control group lived longer on the hospice unit. It is unclear whether the severity of illness was related to the development of terminal delirium or whether the development of terminal delirium contributed to a hastened death. Veterans with a suspected infection were identified by the use of antibiotics on admission to the hospice unit or when antibiotics were prescribed during the last 2 weeks of life. Thus, treatment of the underlying infection may have contributed to the finding of less delirium in the control group.
More than half the veterans in this study received at least 1 dose of an antipsychotic in the last 2 weeks of life for the treatment of terminal delirium. The most commonly administered medication was haloperidol, given either orally or subcutaneously. Atypical antipsychotics were used less often and were sometimes transitioned to subcutaneous haloperidol as the ability to swallow declined if symptoms persisted.
In this veteran population, having a history of drug or alcohol abuse (even if not recent) increased the risk of terminal delirium. Comorbid cancer and history of mental health disease (eg, PTSD, schizophrenia, bipolar disorder) and Vietnam-era veterans with liver disease (primary cancer, metastases, or cirrhosis) also were more likely to develop terminal delirium.
Just as hospice care is being provided in community settings, nurses are at the forefront of symptom management for veterans residing in VA CLCs under hospice care. Nonpharmacologic interventions are provided by the around-the-clock bedside team to provide comfort for veterans, families, and caregivers throughout the dying process. Nurses’ assessment skills and documentation inform the plan of care for the entire interdisciplinary hospice team. Because the treatment of terminal delirium often involves the administration of antipsychotic medications, scrutiny is applied to documentation surrounding these medications.7 This study suggested that there is a need for a more rigorous and consistent method of documenting the assessment of, and interventions for, terminal delirium.
Limitations
Limitations to the current study include hyperactive delirium that was misinterpreted and treated as pain; the probable underreporting of hypoactive delirium and associated symptoms; the use of antipsychotics as a surrogate marker for the development of terminal delirium; and lack of nursing documentation of assessment and interventions of terminal delirium. In addition, the total milligrams of antipsychotics administered per patient were not collected. Finally, there was the potential that other risk factors were not identified due to low numbers of veterans with certain diagnoses (eg, dementia).
Conclusions
Based on the findings in this study, several steps have been implemented to enhance the care of veterans under hospice care in this CLC: (1) Nurses providing direct patient care have been educated on the assessment by use of the mRASS and treatment of terminal delirium;22 (2) A hospice delirium note template has been created that details symptoms of terminal delirium, nonpharmacologic interventions, the use of antipsychotic medications if indicated, and the outcome of interventions; (3) Providers (eg, physician, advanced practice nurses) review each veteran’s medical history for the risk factors noted above; (4) Any risk factor(s) identified by this study will lead to a nursing order for delirium precautions, which requires completion of the delirium note template by nurses each shift.
The goal for this enhanced process is to identify veterans at risk for terminal delirium, observe changes that may indicate the onset of delirium, and intervene promptly to decrease symptom burden and improve quality of life and safety. Potentially, there will be less requirement for the use of antipsychotic medications to control the more severe symptoms of terminal delirium. A future study will evaluate the outcome of this enhanced process for the assessment and treatment of terminal delirium in this veteran population.
Acknowledgment
We thank Martin J. Gorbien, MD, associate chief of staff of Geriatrics and Extended Care, for his continued support throughout this project.
Delirium is a condition commonly exhibited by hospitalized patients and by those who are approaching the end of life.1 Patients who experience a disturbance in attention that develops over a relatively short period and represents an acute change may have delirium.2 Furthermore, there is often an additional cognitive disturbance, such as disorientation, memory deficit, language deficits, visuospatial deficit, or perception. Terminal delirium is defined as delirium that occurs in the dying process and implies that reversal is less likely.3 When death is anticipated, diagnostic workups are not recommended, and treatment of the physiologic abnormalities that contribute to delirium is generally ineffective.4
Background
Delirium is often underdiagnosed and undetected by the clinician. Some studies have shown that delirium is not detected in 22 to 50% of cases.5 Factors that contribute to the underdetection of delirium include preexisting dementia, older age, presence of visual or hearing impairment, and hypoactive presentation of delirium. Other possible reasons for nondetection of delirium are its fluctuating nature and lack of formal cognitive assessment as part of a routine screening across care settings.5 Another study found that 41% of health care providers (HCPs) felt that screening for delirium was burdensome.6
To date, there are no veteran-focused studies that investigate prevalence or risk factors for terminal delirium in US Department of Veterans Affairs (VA) long-term care hospice units. Most long-term care hospice units in the VA are in community living centers (CLCs) that follow regulatory guidelines for using antipsychotic medications. The Centers for Medicare and Medicaid Services state that if antipsychotics are prescribed, documentation must clearly show the indication for the antipsychotic medication, the multiple attempts to implement planned care, nonpharmacologic approaches, and ongoing evaluation of the effectiveness of these interventions.7 The symptoms of terminal delirium cause significant distress to patients, family and caregivers, and nursing staff. Literature suggests that delirium poses significant relational challenges for patients, families, and HCPs in end-of-life situations.8,9 We hypothesize that the early identification of risk factors for the development of terminal delirium in this population may lead to increased use of nonpharmacologic measures to prevent terminal delirium, increase nursing vigilance for development of symptoms, and reduce symptom burden should terminal delirium develop.
Prevalence of delirium in the long-term care setting has ranged between 1.4 and 70.3%.10 The rate was found to be much higher in institutionalized populations compared with that of patients classified as at-home. In a study of the prevalence, severity, and natural history of neuropsychiatric syndromes in terminally ill veterans enrolled in community hospice, delirium was found to be present in only 4.1% on the initial visit and 42.5% during last visit. Also, more than half had at least 1 episode of delirium during the 90-day study period.11 In a study of the prevalence of delirium in terminal cancer patients admitted to hospice, 80% experienced delirium in their final days.12
Risk factors for the development of delirium that have been identified in actively dying patients include bowel or bladder obstruction, fluid and electrolyte imbalances, suboptimal pain management, medication adverse effects and toxicity (eg, benzodiazepines, opioids, anticholinergics, and steroids), the addition of ≥ 3 medications, infection, hepatic and renal failure, poor glycemic control, hypoxia, and hematologic disturbances.4,5,13 A high percentage of patients with a previous diagnosis of dementia were found to exhibit terminal delirium.14
There are 2 major subtypes of delirium: hyperactive and hypoactive.4 Patients with hypoactive delirium exhibit lethargy, reduced motor activity, lack of interest, and/or incoherent speech. There is currently little evidence to guide the treatment of hypoactive delirium. By contrast, hyperactive delirium is associated with hallucinations, agitation, heightened arousal, and inappropriate behavior. Many studies suggest both nonpharmacologic and pharmacologic treatment modalities for the treatment of hyperactive delirium.4,13 Nonpharmacologic interventions may minimize the risk and severity of symptoms associated with delirium. Current guidelines recommend these interventions before pharmacologic treatment.4 Nonpharmacologic interventions include but are not limited to the following: engaging the patient in mentally stimulating activities; surrounding the patient with familiar materials (eg, photos); ensuring that all individuals identify themselves when they encounter a patient; minimizing the intensity of stimulation, providing family or volunteer presence, soft lighting and warm blankets; and ensuring the patient uses hearing aids and glasses if needed.4,14
Although there are no US Food and Drug Administration-approved medications to treat hyperactive delirium, first-generation antipsychotics (eg, haloperidol, chlorpromazine) are considered the first-line treatment for patients exhibiting psychosis and psychomotor agitation.3,4,14-16 In terminally ill patients, there is limited evidence from clinical trials to support the efficacy of drug therapy.14 One study showed lack of efficacy with hydration and opioid rotation.17 In terminally ill patients experiencing hyperactive delirium, there is a significant increased risk of muscle tension, myoclonic seizures, and distress to the patient, family, and caregiver.1 Benzodiazepines can be considered first-line treatment for dying patients with terminal delirium in which the goals of treatment are to relieve muscle tension, ensure amnesia, reduce the risk of seizures, and decrease psychosis and agitation.18,19 Furthermore, in patients with history of alcohol misuse who are experiencing terminal delirium, benzodiazepines also may be the preferred pharmacologic treatment.20 Caution must be exercised with the use of benzodiazepines because they can also cause oversedation, increased confusion, and/or a paradoxical worsening of delirium.3,4,14
Methods
This was a retrospective case-control study of patients who died in the Edward Hines Jr. Veterans Affairs Hospital CLC in Hines, Illinois, under the treating specialty nursing home hospice from October 1, 2013 to September 30, 2015. Due to the retrospective nature of this trial, the use of antipsychotics within the last 2 weeks of life was a surrogate marker for development of terminal delirium. Cases were defined as patients who were treated with antipsychotics for terminal delirium within the last 2 weeks of their lives. Controls were defined as patients who were not treated with antipsychotics for terminal delirium within the last 2 weeks of their lives. Living hospice patients and patients who were discharged from the CLC before death were excluded.
The goals of this study were to (1) determine risk factors in the VA CLC hospice veteran population for the development of terminal delirium; (2) evaluate documentation by the nursing staff of nonpharmacologic interventions and indications for antipsychotic use in the treatment of terminal delirium; and (3) examine the current usage patterns of antipsychotics for the treatment of terminal delirium.
Veterans’ medical records were reviewed from 2 weeks before death until the recorded death date. Factors that were assessed included age, war era of service, date of death, terminal diagnosis, time interval from cancer diagnosis to death, comorbid conditions, prescribed antipsychotic medications, and other medications potentially contributing to delirium. Nursing documentation was reviewed for indications for administration of antipsychotic medications and nonpharmacologic interventions used to mitigate the symptoms of terminal delirium.
Statistical analysis was conducted in SAS Version 9.3. Cases were compared with controls using univariate and multivariate statistics as appropriate. Comparisons for continuous variables (eg, age) were conducted with Student t tests. Categorical variables (eg, PTSD diagnosis) were compared using χ2 analysis or Fisher exact test as appropriate. Variables with a P value < .1 in the univariate analysis were included in logistic regression models. Independent variables were removed from the models, using a backward selection process. Interaction terms were tested based on significance and clinical relevance. A P value < .05 was considered statistically significant.
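For readers who want to trace this workflow outside of SAS, the following is a minimal sketch in Python of the screening-then-regression approach described above. The data, variable names, and effect sizes are hypothetical; this illustrates the method, not the study's actual code or dataset.

```python
# Sketch of the analysis pipeline: univariate screening (t test for continuous
# variables, chi-square for categorical), then logistic regression on the
# variables that pass the P < .1 screen. All data below are simulated.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_cases, n_controls = 186, 90  # group sizes as reported in this study
n = n_cases + n_controls
df = pd.DataFrame({
    "case": np.r_[np.ones(n_cases, int), np.zeros(n_controls, int)],
    "age": rng.normal(75.5, 9.0, n),          # hypothetical distribution
    "opioid_use": rng.integers(0, 2, n),      # hypothetical 0/1 exposure
    "steroid_use": rng.integers(0, 2, n),     # hypothetical 0/1 exposure
})

# Univariate screening
_, p_age = stats.ttest_ind(df.loc[df.case == 1, "age"],
                           df.loc[df.case == 0, "age"])
screened = {"age": p_age}
for var in ["opioid_use", "steroid_use"]:
    _, p, _, _ = stats.chi2_contingency(pd.crosstab(df["case"], df[var]))
    screened[var] = p

# Variables with P < .1 enter the logistic model (backward selection omitted)
candidates = [v for v, p in screened.items() if p < 0.1]
if candidates:
    X = sm.add_constant(df[candidates].astype(float))
    fit = sm.Logit(df["case"], X).fit(disp=False)
    print(np.exp(fit.params))      # odds ratios
    print(np.exp(fit.conf_int()))  # 95% CIs
else:
    print("No variable passed the P < .1 screen in this random draw.")
```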
Results
From October 1, 2013 to September 30, 2015, 307 patients were assessed for inclusion in this study. Within this population, 186 received antipsychotic medications for the treatment of terminal delirium (cases), while 90 did not receive antipsychotics (controls). Of the 31 excluded patients, 13 were discharged to receive home hospice care, 11 were discharged to community nursing homes, 5 died in acute care units of Edward Hines Jr. VA Hospital, and 2 died outside of the study period.
The mean age of all included patients was 75.5 years, and the most common terminal diagnosis was cancer, which occurred in 156 patients (56.5%) (Table 1). The baseline characteristics were similar between the cases and controls, including war era of veteran, terminal diagnosis, and comorbid conditions. The mean time from cancer diagnosis to death was longer in the control group than in the case group (25 vs 16 months, respectively), although this difference was not statistically significant. There was no statistically significant difference in terminal diagnoses between cases and controls. Veterans in the control group spent more days (mean [SD]) in the hospice unit compared with veterans who experienced terminal delirium (48.5 [168.4] vs 28.2 [46.9]; P = .01). Patients with suspected infections were more likely to be in the control group (P = .04; odds ratio [OR], 1.70; 95% CI, 1.02-2.82).
The most common antipsychotic administered in the last 14 days of life was haloperidol. In the case group, 175 (94%) veterans received haloperidol at least once in the last 2 weeks of life. Four (4.4%) veterans in the control group received haloperidol for the indication of nausea/vomiting, not terminal delirium. Atypical antipsychotics were infrequently used and included risperidone, olanzapine, quetiapine, and aripiprazole.
A total of 186 veterans received at least 1 dose of an antipsychotic for terminal delirium: 97 (52.2%) required both scheduled and as-needed doses, 75 (40.3%) received only as-needed doses, and 14 (7.5%) required only scheduled doses. When as-needed and scheduled doses were combined, each veteran received a mean of 14.9 doses; veterans with antipsychotics ordered only as needed received a mean of 5.8 doses. Administration of antipsychotic doses was split evenly among the 3 nursing shifts (day, evening, night), with about 30% of doses administered on each shift.
Nurses were expected to document nonpharmacologic interventions that preceded the administration of each antipsychotic dose. Of the 1,028 doses administered to the 186 veterans who received at least 1 dose of an antipsychotic for terminal delirium, nearly all (99.4%) had inadequate documentation based on current long-term care guidelines for prudent antipsychotic use.9
Several risk factors for terminal delirium were identified in this veteran population. Veterans with a history of drug or alcohol abuse were at a significantly higher risk for terminal delirium (P = .04; OR, 1.87; 95% CI, 1.03-3.37). As noted in previous studies, steroid use (P = .01; OR, 2.57; 95% CI, 1.26-5.22), opioid use (P = .007; OR, 5.94; 95% CI, 1.54-22.99), and anticholinergic medication use (P = .01; OR, 2.06; 95% CI, 1.21-3.52) also increased the risk of delirium (Table 2).
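As a worked illustration of how each OR and 95% CI pair above is derived, the following sketch applies the standard log-odds (Woolf) confidence interval to a 2 × 2 exposure table. The counts are invented for illustration and are not taken from this study.

```python
import math

# Hypothetical 2 x 2 table: exposure status among cases and controls
a, b = 60, 126   # cases: exposed, unexposed   (invented counts)
c, d = 18, 72    # controls: exposed, unexposed (invented counts)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf standard error of ln(OR)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f}-{upper:.2f}")
```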
When risk factors were combined, interaction terms were identified (Table 3). Patients at higher risk of terminal delirium included Vietnam-era veterans with liver disease (P = .04; OR, 1.21; 95% CI, 1.01-1.45) and veterans with a history of drug or alcohol abuse plus comorbid liver disease (P = .03; OR, 1.26; 95% CI, 1.02-1.56). In a stratified analysis of veterans with a terminal diagnosis of cancer, those with a mental health condition (eg, PTSD, bipolar disorder, or schizophrenia) also had a higher risk of delirium (P = .048; OR, 2.73; 95% CI, 0.98-7.58), although the confidence interval crossed 1. Within the cancer cohort, veterans with liver disease and a history of drug/alcohol abuse had an increased risk of delirium (P = .01; OR, 1.43; 95% CI, 1.07-1.91).
Discussion
Terminal delirium is experienced by many individuals in their last days to weeks of life. Symptoms can present as hyperactive (eg, agitation, hallucinations, heightened arousal) or hypoactive (eg, lethargy, reduced motor activity, incoherent speech). Hyperactive terminal delirium is particularly problematic because it causes increased distress to the patient, family, and caregivers. Delirium also can create safety concerns, such as fall risk, due to patients’ decreased insight into their functional decline.
Many studies suggest both nonpharmacologic and pharmacologic treatments for nonterminal delirium that may also apply to terminal delirium. Nonpharmacologic methods, such as providing a quiet and familiar environment, relieving urinary retention or constipation, and attending to sensory deficits, may help prevent or minimize delirium. Pharmacologic interventions, such as antipsychotics or benzodiazepines, may be beneficial when other modalities have failed to assuage distressing symptoms of delirium. Because hypoactive delirium is usually accompanied by somnolence and reduced motor activity, medication is most often administered to individuals with hyperactive delirium.
The VA provides long-term care hospice beds in its CLCs for veterans who are nearing the end of life and lack adequate caregiver support for comprehensive end-of-life care in the home (Case Presentation). Because of their military service and other factors common in their life histories, veterans may have a unique set of characteristics that are predictive of developing terminal delirium. Awareness of this propensity allows for early identification of symptoms, timely initiation of nonpharmacologic interventions, and potentially a decreased need for antipsychotic medications.
In this study, as in previous studies, certain medications (eg, steroids, opioids, and anticholinergics) increased the risk of developing terminal delirium. Steroids and opioids are commonly used in the management of neoplasm-related pain and are prescribed throughout the course of terminal illness. The utility of these medications often outweighs their potential adverse effects, but they should be considered when assessing the risk for development of delirium. Anticholinergics (eg, glycopyrrolate or scopolamine) are often prescribed in the last days of life for terminal secretions despite a lack of evidence of patient benefit. Nonetheless, anticholinergics are used to reduce the family and caregiver distress caused by the bothersome sounds of terminal secretions, referred to as the death rattle.21
Veterans in the control group lived longer on the hospice unit. It is unclear whether the severity of illness was related to the development of terminal delirium or whether the development of terminal delirium contributed to a hastened death. Veterans with a suspected infection were identified by the use of antibiotics on admission to the hospice unit or by antibiotics prescribed during the last 2 weeks of life. Thus, treatment of the underlying infection may have contributed to the finding of less delirium in the control group.
More than half the veterans in this study received at least 1 dose of an antipsychotic in the last 2 weeks of life for the treatment of terminal delirium. The most commonly administered medication was haloperidol, given either orally or subcutaneously. Atypical antipsychotics were used less often; when symptoms persisted as the ability to swallow declined, patients receiving them were sometimes transitioned to subcutaneous haloperidol.
In this veteran population, a history of drug or alcohol abuse (even if not recent) increased the risk of terminal delirium. Veterans with comorbid cancer and a history of mental health disease (eg, PTSD, schizophrenia, bipolar disorder), as well as Vietnam-era veterans with liver disease (primary cancer, metastases, or cirrhosis), also were more likely to develop terminal delirium.
As in community hospice settings, nurses are at the forefront of symptom management for veterans residing in VA CLCs under hospice care. The around-the-clock bedside team provides nonpharmacologic interventions to comfort veterans, families, and caregivers throughout the dying process. Nurses’ assessment skills and documentation inform the plan of care for the entire interdisciplinary hospice team. Because the treatment of terminal delirium often involves the administration of antipsychotic medications, scrutiny is applied to the documentation surrounding these medications.7 This study suggests that a more rigorous and consistent method of documenting the assessment of, and interventions for, terminal delirium is needed.
Limitations
Limitations of the current study include the possibility that hyperactive delirium was misinterpreted and treated as pain; the probable underreporting of hypoactive delirium and associated symptoms; the use of antipsychotics as a surrogate marker for the development of terminal delirium; and the lack of nursing documentation of assessment of and interventions for terminal delirium. In addition, the total milligrams of antipsychotics administered per patient were not collected. Finally, other risk factors may not have been identified because of the low numbers of veterans with certain diagnoses (eg, dementia).
Conclusions
Based on the findings of this study, several steps have been implemented to enhance the care of veterans under hospice care in this CLC: (1) Nurses providing direct patient care have been educated on the assessment of terminal delirium using the modified Richmond Agitation-Sedation Scale (mRASS) and on its treatment;22 (2) A hospice delirium note template has been created that details symptoms of terminal delirium, nonpharmacologic interventions, the use of antipsychotic medications if indicated, and the outcome of interventions; (3) Providers (eg, physicians, advanced practice nurses) review each veteran’s medical history for the risk factors noted above; and (4) Any risk factor identified by this study leads to a nursing order for delirium precautions, which requires completion of the delirium note template by nurses each shift.
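As an illustration of what the note template in step (2) might capture each shift, here is a minimal sketch; the field names and structure are assumptions for illustration, not the template actually deployed in this CLC.

```python
# Hypothetical per-shift delirium note record (field names are assumptions).
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DeliriumShiftNote:
    shift: str                              # "day" | "evening" | "night"
    mrass_score: int                        # modified Richmond Agitation-Sedation Scale
    symptoms: List[str] = field(default_factory=list)            # eg, "agitation"
    nonpharm_interventions: List[str] = field(default_factory=list)
    antipsychotic_given: Optional[str] = None  # drug and dose, if indicated
    outcome: str = ""                          # response to interventions

note = DeliriumShiftNote(
    shift="evening", mrass_score=2,
    symptoms=["agitation", "hallucinations"],
    nonpharm_interventions=["family presence", "soft lighting"],
    antipsychotic_given="haloperidol 0.5 mg SC",
    outcome="settled within 1 hour",
)
```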
The goal of this enhanced process is to identify veterans at risk for terminal delirium, observe changes that may indicate the onset of delirium, and intervene promptly to decrease symptom burden and improve quality of life and safety. Potentially, there will be less need for antipsychotic medications to control the more severe symptoms of terminal delirium. A future study will evaluate the outcome of this enhanced process for the assessment and treatment of terminal delirium in this veteran population.
Acknowledgment
We thank Martin J. Gorbien, MD, associate chief of staff of Geriatrics and Extended Care, for his continued support throughout this project.
1. Casarett DJ, Inouye SK. Diagnosis and management of delirium near the end of life. Ann Intern Med. 2001;135(1):32-40.
2. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 5th ed. American Psychiatric Association; 2013.
3. Grassi L, Caraceni A, Mitchell A, et al. Management of delirium in palliative care: a review. Curr Psychiatry Rep. 2015;17(13):1-9. doi:10.1007/s11920-015-0550-8
4. Bush S, Leonard M, Agar M, et al. End-of-life delirium: issues regarding the recognition, optimal management, and role of sedation in the dying phase. J Pain Symptom Manage. 2014;48(2):215-230. doi:10.1016/j.jpainsymman.2014.05.009
5. Moyer D. Terminal delirium in geriatric patients with cancer at end of life. Am J Hosp Palliat Med. 2010;28(1):44-51. doi:10.1177/1049909110376755
6. Lai X, Huang Z, Chen C, et al. Delirium screening in patients in a palliative care ward: a best practice implementation project. JBI Database System Rev Implement Rep. 2019;17(3):429-441. doi:10.11124/JBISRIR-2017-003646
7. Centers for Medicare and Medicaid Services. Medicare and Medicaid Programs; reform of requirements for long-term care facilities. Final rule. Fed Regist. 2016;81(192):68688-68872. Accessed April 17, 2021. https://pubmed.ncbi.nlm.nih.gov/27731960
8. Wright D, Brajtman S, Macdonald M. A relational ethical approach to end-of-life delirium. J Pain Symptom Manage. 2014;48(2):191-198. doi:10.1016/j.jpainsymman.2013.08.015
9. Brajtman S, Higuchi K, McPherson C. Caring for patients with terminal delirium: palliative care unit and home care nurses’ experience. Int J Palliat Nurs. 2006;12(4):150-156. doi:10.12968/ijpn.2006.12.4.21010
10. Lange E, Verhaak P, Meer K. Prevalence, presentation, and prognosis of delirium in older people in the population, at home and in long-term care: a review. Int J Geriatr Psychiatry. 2013;28(2):127-134. doi:10.1002/gps.3814
11. Goy E, Ganzini L. Prevalence and natural history of neuropsychiatric syndromes in veteran hospice patients. J Pain Symptom Manage. 2011;41(12):394-401. doi:10.1016/j.jpainsymman.2010.04.015
12. Bush S, Bruera E. The assessment and management of delirium in cancer patients. Oncologist. 2009;14(10):1039-1049. doi:10.1634/theoncologist.2009-0122
13. Clary P, Lawson P. Pharmacologic pearls for end-of-life care. Am Fam Physician. 2009;79(12):1059-1065.
14. Blinderman CD, Billings J. Comfort for patients dying in the hospital. N Engl J Med. 2015;373(26):2549-2561. doi:10.1056/NEJMra1411746
15. Irwin SA, Pirrello RD, Hirst JM, Buckholz GT, Ferris FD. Clarifying delirium management: practical, evidence-based, expert recommendations for clinical practice. J Palliat Med. 2013;16(4):423-435. doi:10.1089/jpm.2012.0319
16. Bobb B. Dyspnea and delirium at the end of life. Clin J Oncol Nurs. 2016;20(3):244-246. doi:10.1188/16.CJON.244-246
17. Morita T, Tei Y, Inoue S. Agitated terminal delirium and association with partial opioid substitution and hydration. J Palliat Med. 2003;6(4):557-563. doi:10.1089/109662103768253669
18. Attard A, Ranjith G, Taylor D. Delirium and its treatment. CNS Drugs. 2008;22(8):631-644. doi:10.2165/00023210-200822080-00002
19. Hui D. Benzodiazepines for agitation in patients with delirium: selecting the right patient, right time, and right indication. Curr Opin Support Palliat Care. 2018;12(4):489-494. doi:10.1097/SPC.0000000000000395
20. Irwin P, Murray S, Bilinski A, Chern B, Stafford B. Alcohol withdrawal as an underrated cause of agitated delirium and terminal restlessness in patients with advanced malignancy. J Pain Symptom Manage. 2005;29(1):104-108. doi:10.1016/j.jpainsymman.2004.04.010
21. Lokker ME, van Zuylen L, van der Rijt CCD, van der Heide A. Prevalence, impact, and treatment of death rattle: a systematic review. J Pain Symptom Manage. 2014;48:2-12. doi:10.1016/j.jpainsymman.2013.03.011
22. Sessler C, Gosnell M, Grap M, et al. The Richmond Agitation-Sedation Scale: validity and reliability in adult intensive care unit patients. Am J Respir Crit Care Med. 2002;166(10):1338-1344. doi:10.1164/rccm.2107138
Photographic Confirmation of Biopsy Sites Saves Lives
Quality photographic documentation of lesions prior to biopsy can decrease the risk of wrong site surgery, improve patient care, and save lives.
Preventable errors by health care workers are widespread and cause significant morbidity and mortality. Wrong site surgery (WSS) is a preventable error that causes harm through both the direct insult of surgery and propagation of the untreated initial problem. WSS also can cause poor patient outcomes, low morale, malpractice claims, and increased costs to the health care system. The estimated median prevalence of WSS across all specialties is 9 events per 1,000,000 surgical procedures, and an institutional study of 112,500 surgical procedures reported 1 wrong-site event, which involved removing the incorrect skin lesion and not removing the intended lesion.1,2
Though the prevalence is low when examining all specialties together, dermatology is also susceptible to WSS.3 Watson and colleagues demonstrated that 31% of intervention errors were due to WSS and suggested that prebiopsy photography helps decrease errors.4 Thus, the American Academy of Dermatology has emphasized the importance of reducing WSS.5 A study by Nijhawan and colleagues found that 25% of patients receiving Mohs surgery at a single private cancer center could not identify their biopsy location because the interval between biopsy and surgery allowed biopsy sites to heal well, which made finding the lesion difficult.6
Risk factors for WSS include the involvement of multiple health care providers (HCPs) remote from the surgery location; being a traveling veteran; receiving care at multiple facilities inside and outside the US Department of Veterans Affairs (VA) system; mislabeled photographs or specimens; and photographs that were not taken at the time of biopsy or were taken too close, with no frame of reference to assist in finding the correct site. The VA electronic health record (EHR) is not integrated with outside facility EHRs, and the Office of Community Care (OCC) at the VA is responsible for obtaining copies of outside records. If unsuccessful, the HCP and/or patient must provide the records. Frequently, records are not received or require multiple attempts to obtain. This mostly affects veterans receiving care at multiple facilities inside and outside the VA system, as the absent or delayed receipt of past health records could increase the risk for WSS.
To combat WSS, some institutions have implemented standardized protocols requiring photographic documentation of lesions before biopsy so that the surgeon can properly identify the correct site prior to operating.7 Fortunately, recent advances in technology have made it easier to provide photographic documentation of skin lesions. Highsmith and colleagues highlighted the use of smartphones to avoid WSS in dermatology.7 Despite these advances, photographic documentation of lesions is not universal. A study by Rossy and colleagues found that less than half of patients referred for Mohs surgery had clear documentation of the biopsy site with photography, diagram, or measurements, and of those with documentation, only a small fraction included photographs.8
Photographic documentation is not currently required by the VA, increasing the risk of WSS. About 20% of the ~150 VA dermatology departments nationwide are associated with a dermatology residency program and have implemented photographic documentation of lesions before biopsy; the other 80% may not be using photographic documentation. The following 3 cases experienced by the authors highlight how quality photographic documentation of lesions prior to biopsy can improve patient care and save lives. We then propose a photographic documentation protocol for VA dermatology departments based on the photographic standards outlined by the American Society for Dermatologic Surgery.9
Case 1 Presentation
A 36-year-old traveling veteran who relocates frequently and receives care at multiple VA medical centers (VAMCs) presented for excision of a melanoma. The patient had been managed at another VAMC where the lesion was biopsied in September 2016. He presented to the Orlando, Florida, VAMC dermatology clinic 5 months later with the photographs of his biopsy sites along with the biopsy reports. The patient had 6 biopsies labeled A through F. Lesion A at the right mid back was positive for melanoma (Figure 1), whereas lesion C on the mid lower back was not cancerous (Figure 2). On examination of the patient’s back, he had numerous moles and scars. The initial receiving HCP circled and photographed the scar presumed to be the melanoma on the mid lower back (Figure 3).
On the day of surgery, the surgeon routinely checked the biopsy report as well as the photograph from the patient’s most recent HCP visit. The surgeon noted that biopsy A (right mid back) on the pathology report had been identified as the melanoma; however, biopsy C (mid lower back) was circled and presumed to be the melanoma in the recent photograph by the receiving HCP—a nurse practitioner. The surgeon compared the initial photos from the referring VAMC with those from the receiving HCP and subsequently matched biopsy A (melanoma) with the correct location on the right mid back.
This discrepancy was explained to the patient with photographic confirmation, allowing for agreement on the correct site before the surgery. The pathology results of the surgical excision confirmed melanoma in the specimen and clear margins. Thus, the correct site was operated on.
Case 2 Presentation
A veteran aged 86 years with a medical history of a double transplant and long-term immunosuppression leading to numerous skin cancers was referred for surgical excision of a confirmed squamous cell carcinoma (SCC) on the left upper back. On the day of surgery, the biopsy site could not be identified clearly due to numerous preexisting scars (Figure 4). No photograph of the original biopsy site was available. The referring HCP was called to the bedside to assist in identifying the biopsy site but also was unable to clearly identify it. This was explained to the patient. As 2-person confirmation was unsuccessful, conservative treatment was used with the patient’s consent. The patient has since had close follow-up to monitor for recurrence, as SCC in transplant patients can display aggressive growth and potential for metastasis.
Case 3 Presentation
A veteran was referred for surgical excision of a nonmelanoma skin cancer. The biopsy was completed well in advance of the anticipated surgery day. On the day of surgery, the site could not be detected because it had healed well after the biopsy. Although a clinical photograph was available, it was taken too close-up to provide a frame of reference for identifying the location of the biopsy site. The referring HCP was called to the bedside to assist in identification of the biopsy site, but 2-person confirmation was unsuccessful. This was explained to the patient, and with his consent, the HCPs agreed on conservative treatment and close follow-up.
Discussion
To prevent and minimize the poor outcomes associated with WSS, the health care team should routinely document the lesion location in detail before the biopsy. Many HCPs believe a preoperative photograph is the best method of documentation. As demonstrated in the third case presentation, photographs must be taken at a distance that includes nearby anatomic landmarks for reference. It is suggested that HCPs obtain 2 images: one far enough to include landmarks and one close enough to clearly differentiate the targeted lesion from others.10
Although high-resolution digital cameras are preferred, mobile phones also can be used if they provide quality images. As phones with built-in cameras are ubiquitous, they offer a quick and easy method of photographic documentation. St John and colleagues also presented the possibility of having patients keep pictures of the lesion on their phones, as this removes potential privacy concerns and facilitates easy transfer of information between HCPs.10 If it is discovered that a photograph was not taken at the time of biopsy, our practice contacts the patient and asks them to photograph and circle the biopsy site using their mobile phone or camera and bring the image to the surgery appointment. We propose a VA protocol for photographic documentation of biopsy sites (Table).
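As an illustration of the kind of structured record that could back the proposed protocol, the following is a minimal sketch pairing the distant and close-up images with the lesion label from the pathology report. The record layout, field names, and file paths are hypothetical, not an existing VA system.

```python
# Hypothetical record linking two labeled images to one biopsied lesion.
from dataclasses import dataclass
from datetime import date

@dataclass
class BiopsyPhotoRecord:
    patient_id: str          # EHR identifier (hypothetical format)
    lesion_label: str        # eg, "A", matching the pathology report
    anatomic_site: str       # eg, "right mid back"
    distant_image: str       # path/URI; frame includes anatomic landmarks
    closeup_image: str       # path/URI; distinguishes lesion from neighbors
    photographed_on: date
    photographed_by: str     # staff member who took and labeled the photos

record = BiopsyPhotoRecord(
    patient_id="EX-0001", lesion_label="A", anatomic_site="right mid back",
    distant_image="imgs/ex0001_A_far.jpg",
    closeup_image="imgs/ex0001_A_close.jpg",
    photographed_on=date(2016, 9, 1), photographed_by="RN J. Doe",
)
```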
HCPs who are not comfortable with technology may be hesitant to document lesions photographically with a smartphone or camera. Further, HCPs often face time constraints, and taking photographs and uploading them to the EHR could decrease patient contact time. Photographic documentation therefore presents an opportunity for a team approach to patient-centered care: nursing and other medical staff can assist with these duties and learn proper photographic documentation of biopsy sites. Phone or tablet applications that provide rapid photographic documentation and uploading to the EHR also would facilitate universal use of photographic documentation.
If an HCP is uncomfortable with or unable to use photography to document lesions, alternative strategies exist, including diagrams, anatomic landmarks, ultraviolet (UV) fluorescent tattoos, and patient identification of lesions.10 In the diagram method, an HCP marks the lesion location on a diagram of the body, preferably with a short description of the lesion’s location and/or characteristics; the diagram should be uploaded into the EHR.11 Other methods document lesion location relative to anatomic landmarks: triangulation involves documenting the distance between the lesion and 3 distinct anatomic locations.10 UV fluorescent tattooing involves placing UV tattoo dye in the biopsy site and locating the dye with a Wood lamp at the time of surgery; this technique has been described in a single case report of a patient with recurrent basal cell carcinoma.12 Patient identification of lesions (via phone applications that allow patients to track their lesions, a phone selfie of the biopsy site, or a direct account of the lesion) can be used to confirm the location determined by the other methods mentioned.10
Patients often adhere poorly to instructions aimed at reducing the risk of WSS. In a study that asked patients undergoing elective foot or ankle surgery to mark the foot not being operated on, 41% of patients were partially adherent or nonadherent with this request.13 Educating patients on the importance of lesion self-identification has the potential to improve identification of the biopsy location and prevent WSS. Nursing and medical staff can provide patient education while photographing the biopsy site, including taking a photograph with the patient’s cell phone for their own records.
Due to the morbidity and mortality that can result from WSS, photographic confirmation of biopsy sites is a step that surgeons can take to ensure identification of the correct site prior to surgery. Case 1 provides an example of how photographs taken prior to biopsy can prevent WSS. In a disease such as melanoma, photographs are particularly important, as insufficient treatment can lead to fatal metastases. To increase quality of care, all available photographs should be reviewed, especially in cases where the pathology report does not match the clinical presentation.
If WSS occurs, HCPs may be hesitant to disclose their mistakes due to potential lawsuits, the possibility that disclosure may inadvertently harm the patient, and their relative inexperience with and lack of training in disclosure.14 Surgeons who perform WSS may receive severe penalties from state licensing boards, including suspension of their medical license. Financially, many insurers will not compensate providers for WSS. Also, many incidents of WSS result in a malpractice claim, with about 80% of those cases resulting in a malpractice award.15 Nevertheless, it is important that HCPs are open with their patients regarding WSS.
As demonstrated in case presentations 2 and 3, obtaining 2-person confirmation and patient confirmation before surgery is important in preventing WSS for patients who have poor documentation of biopsy sites. In cases where agreement is not achieved, HCPs can consider several other options to help identify lesions, including dermabrasion and alcohol wipes.10 Dermabrasion uses friction to expose surgical sites that have healed, scarred, or been hidden by sun damage.10 Alcohol wipes remove surface scale and crust, creating a glisten under tangential lighting that highlights surface irregularities. Anesthesia injection prior to surgery creates a blister at the location of the cancer: skin cancer weakens the attachments between keratinocytes, and as a result, the hydrostatic pressure from the anesthetic preferentially blisters the site of the malignancy.10,16
Dermoscopy is another strategy shown to help identify scar margins.10,17 Under dermoscopy, a scar demonstrates a white-pink homogenous patch with underlying vessels, whereas basal cell carcinoma remnants include blue-gray ovoid nests and globules, telangiectasias, and spoke-wheel and leaflike structures.17 As a final option, HCPs can perform an additional biopsy of potential cancer locations to find the lesion again.10 If the lesion cannot be identified, HCPs should consider conservative measures or less invasive treatments with close and frequent follow-up.
Conclusions
The cases described here highlight how the lack of proper photographic documentation can prevent the use of curative surgical treatment. To reduce WSS and improve quality of care, HCPs must continue to take steps and create safeguards that minimize risk. Proper documentation of lesions prior to biopsy provides an effective route to reduce the incidence of WSS. If the biopsy site cannot be found, various strategies to properly identify the site can be employed. If WSS occurs, it is important that HCPs provide full disclosure to patients. With a growing emphasis on patient safety measures and advances in technology, HCPs are becoming increasingly cognizant of the most effective ways to optimize patient care, and it is anticipated that this will decrease morbidity and mortality.
1. Hempel S, Maggard-Gibbons M, Nguyen DK, et al. Wrong-site surgery, retained surgical items, and surgical fires: a systematic review of surgical never events. JAMA Surg. 2015;150(8):796-805. doi:10.1001/jamasurg.2015.0301
2. Knight N, Aucar J. Use of an anatomic marking form as an alternative to the Universal Protocol for Preventing Wrong Site, Wrong Procedure and Wrong Person Surgery. Am J Surg. 2010;200(6):803-809. doi:10.1016/j.amjsurg.2010.06.010
3. Elston DM, Stratman EJ, Miller SJ. Skin biopsy: biopsy issues in specific diseases [published correction appears in J Am Acad Dermatol. 2016 Oct;75(4):854]. J Am Acad Dermatol. 2016;74(1):1-18. doi:10.1016/j.jaad.2015.06.033
4. Watson AJ, Redbord K, Taylor JS, Shippy A, Kostecki J, Swerlick R. Medical error in dermatology practice: development of a classification system to drive priority setting in patient safety efforts. J Am Acad Dermatol. 2013;68(5):729-737. doi:10.1016/j.jaad.2012.10.058
5. Elston DM, Taylor JS, Coldiron B, et al. Patient safety: Part I. Patient safety and the dermatologist. J Am Acad Dermatol. 2009;61(2):179-191. doi:10.1016/j.jaad.2009.04.056
6. Nijhawan RI, Lee EH, Nehal KS. Biopsy site selfies--a quality improvement pilot study to assist with correct surgical site identification. Dermatol Surg. 2015;41(4):499-504. doi:10.1097/DSS.0000000000000305
7. Highsmith JT, Weinstein DA, Highsmith MJ, Etzkorn JR. BIOPSY 1-2-3 in dermatologic surgery: improving smartphone use to avoid wrong-site surgery. Technol Innov. 2016;18(2-3):203-206. doi:10.21300/18.2-3.2016.203
8. Rossy KM, Lawrence N. Difficulty with surgical site identification: what role does it play in dermatology? J Am Acad Dermatol. 2012;67(2):257-261. doi:10.1016/j.jaad.2012.02.034
9. American Society for Dermatologic Surgery. Photographic standards in dermatologic surgery poster. Accessed April 12, 2021. https://www.asds.net/medical-professionals/members-resources/product-details/productname/photographic-standards-poster
10. St John J, Walker J, Goldberg D, Maloney ME. Avoiding medical errors in cutaneous site identification: a best practices review. Dermatol Surg. 2016;42(4):477-484. doi:10.1097/DSS.0000000000000683
11. Alam M, Lee A, Ibrahimi OA, et al. A multistep approach to improving biopsy site identification in dermatology: physician, staff, and patient roles based on a Delphi consensus. JAMA Dermatol. 2014;150(5):550-558. doi:10.1001/jamadermatol.2013.9804
12. Chuang GS, Gilchrest BA. Ultraviolet-fluorescent tattoo location of cutaneous biopsy site. Dermatol Surg. 2012;38(3):479-483. doi:10.1111/j.1524-4725.2011.02238.x
13. DiGiovanni CW, Kang L, Manuel J. Patient compliance in avoiding wrong-site surgery. J Bone Joint Surg Am. 2003;85(5):815-819. doi:10.2106/00004623-200305000-00007
14. Gallagher TH. A 62-year-old woman with skin cancer who experienced wrong-site surgery: review of medical error. JAMA. 2009;302(6):669-677. doi:10.1001/jama.2009.1011
15. Mulloy DF, Hughes RG. Wrong-site surgery: a preventable medical error. In: Hughes RG, ed. Patient Safety and Quality: An Evidence-Based Handbook for Nurses. Agency for Healthcare Research and Quality (US); 2008:chap 36. Accessed April 23, 2021. https://www.ncbi.nlm.nih.gov/books/NBK2678
16. Zaiac M, Tongdee E, Porges L, Touloei K, Prodanovich S. Anesthetic blister induction to identify biopsy site prior to Mohs surgery. J Drugs Dermatol. 2015;14(5):446-447.
17. Jawed SI, Goldberg LH, Wang SQ. Dermoscopy to identify biopsy sites before Mohs surgery. Dermatol Surg. 2014;40(3):334-337. doi:10.1111/dsu.12422
Quality photographic documentation of lesions prior to biopsy can decrease the risk of wrong site surgery, improve patient care, and save lives.
Quality photographic documentation of lesions prior to biopsy can decrease the risk of wrong site surgery, improve patient care, and save lives.
Preventable errors by health care workers are widespread and cause significant morbidity and mortality. Wrong site surgery (WSS) is a preventable error that causes harm through both the direct insult of surgery and propagation of the untreated initial problem. WSS also can cause poor patient outcomes, low morale, malpractice claims, and increased costs to the health care system. The estimated median prevalence of WSS across all specialties is 9 events per 1,000,000 surgical procedures, and an institutional study of 112,500 surgical procedures reported 1 wrong-site event, which involved removing the incorrect skin lesion and not removing the intended lesion.1,2
Though the prevalence is low when examining all specialties together, dermatology is also susceptible to WSS.3 Watson and colleagues demonstrated that 31% of intervention errors were due to WSS and suggested that prebiopsy photography helps decrease errors.4 Thus, the American Academy of Dermatology has emphasized the importance of reducing WSS.5 A study by Nijhawan and colleagues found that 25% of patients receiving Mohs surgery at a private single cancer center could not identify their biopsy location because the duration between biopsy and surgery allowed biopsy sites to heal well, which made finding the lesion difficult.6
Risk factors for WSS include having multiple health care providers (HCPs) living remote from the surgery location involved in the case, being a traveling veteran, receiving care at multiple facilities inside and outside the US Department of Veterans Affairs (VA) system, mislabeling photographs or specimens, and photographs not taken at time of biopsy and too close with no frame of reference to assist in finding the correct site. The VA electronic health record (EHR) is not integrated with outside facility EHRs, and the Office of Community Care (OCC) at the VA is responsible for obtaining copies of outside records. If unsuccessful, the HCP and/or patient must provide the records. Frequently, records are not received or require multiple attempts to be obtained. This mostly affects veterans receiving care at multiple facilities inside and outside the VA system as the lack of or timely receipt of past health records could increase the risk for WSS.
To combat WSS, some institutions have implemented standardized protocols requiring photographic documentation of lesions before biopsy so that the surgeon can properly identify the correct site prior to operating.7 Fortunately, recent advances in technology have made it easier to provide photographic documentation of skin lesions. Highsmith and colleagues highlighted use of smartphones to avoid WSS in dermatology.7 Despite these advances, photographic documentation of lesions is not universal. A study by Rossy and colleagues found that less than half of patients referred for Mohs surgery had clear documentation of the biopsy site with photography, diagram, or measurements, and of those documented, only a small fraction used photographs.8
Photographic documentation is not currently required by the VA, increasing the risk of WSS. About 20% of the ~150 VA dermatology departments nationwide are associated with a dermatology residency program and have implemented photographic documentation of lesions before biopsy. The other 80% of departments may not be using photographic documentation. The following 3 cases experienced by the authors highlight instances of how quality photographic documentation of lesions prior to biopsy can improve patient care and save lives. Then, we propose a photographic documentation protocol for VA dermatology departments to follow based on the photographic standards outlined by the American Society for Dermatologic Surgery.9
Case 1 Presentation
A 36-year-old traveling veteran who relocates frequently and receives care at multiple VA medical centers (VAMCs) presented for excision of a melanoma. The patient had been managed at another VAMC where the lesion was biopsied in September 2016. He presented to the Orlando, Florida, VAMC dermatology clinic 5 months later with the photographs of his biopsy sites along with the biopsy reports. The patient had 6 biopsies labeled A through F. Lesion A at the right mid back was positive for melanoma (Figure 1), whereas lesion C on the mid lower back was not cancerous (Figure 2). On examination of the patient’s back, he had numerous moles and scars. The initial receiving HCP circled and photographed the scar presumed to be the melanoma on the mid lower back (Figure 3).
On the day of surgery, the surgeon routinely checked the biopsy report as well as the photograph from the patient’s most recent HCP visit. The surgeon noted that biopsy A (right mid back) on the pathology report had been identified as the melanoma; however, biopsy C (mid lower back) was circled and presumed to be the melanoma in the recent photograph by the receiving HCP—a nurse practitioner. The surgeon compared the initial photos from the referring VAMC with those from the receiving HCP and subsequently matched biopsy A (melanoma) with the correct location on the right mid back.
This discrepancy was explained to the patient with photographic confirmation, allowing for agreement on the correct site before the surgery. The pathology results of the surgical excision confirmed melanoma in the specimen and clear margins. Thus, the correct site was operated on.
Case 2 Presentation
A veteran aged 86 years with a medical history of a double transplant and long-term immunosuppression leading to numerous skin cancers was referred for surgical excision of a confirmed squamous cell carcinoma (SCC) on the left upper back. On the day of surgery, the biopsy site could not be identified clearly due to numerous preexisting scars (Figure 4). No photograph of the original biopsy site was available. The referring HCP was called to the bedside to assist in identifying the biopsy site but also was unable to clearly identify the site. This was explained to the patient. As 2-person confirmation was unsuccessful, conservative treatment was used with patient consent. The patient has since had subsequent close follow-up to monitor for recurrence, as SCC in transplant patients can display aggressive growth and potential for metastasis.
Case 3 Presentation
A veteran was referred for surgical excision of a nonmelanoma skin cancer. The biopsy was completed well in advance of the anticipated surgery day. On the day of surgery, the site could not be detected as it healed well after the biopsy. Although a clinical photograph was available, it was taken too close-up to find a frame of reference for identifying the location of the biopsy site. The referring HCP was called to the bedside to assist in identification of the biopsy site, but 2-person confirmation was unsuccessful. This was explained to the patient, and with his consent, the HCPs agreed on conservative treatment and close follow-up.
Discussion
To prevent and minimize poor outcomes associated with WSS, the health care team should routinely document the lesion location in detail before the biopsy. Many HCPs believe a preoperative photograph is the best method for documentation. As demonstrated in the third case presentation, photographs must be taken at a distance that includes nearby anatomic landmarks for reference. It is suggested that the providers obtain 2 images, one that is far enough to include landmarks, and one that is close enough to clearly differentiate the targeted lesion from others.10
Although high-resolution digital cameras are preferred, mobile phones also can be used if they provide quality images. As phones with built-in cameras are ubiquitous, they offer a quick and easy method of photographic documentation. St John and colleagues also presented the possibility of having patients keep pictures of the lesion on their phones, as this removes potential privacy concerns and facilitates easy transportation of information between HCPs.10 If it is discovered that a photograph was not taken at the time of biopsy, our practice contacts the patient and asks them to photograph and circle the biopsy site using their mobile phone or camera and bring it to the surgery appointment. We propose a VA protocol for photographic documentation of biopsy sites (Table).
HCPs who are not comfortable with technology may be hesitant to use photographic documentation using a smartphone or camera. Further, HCPs often face time constraints, and taking photographs and uploading them to the EHR could decrease patient contact time. Therefore, photographic documentation presents an opportunity for a team approach to patient-centered care: Nursing and other medical staff can assist with these duties and learn the proper photographic documentation of biopsy sites. Using phone or tablet applications that provide rapid photographic documentation and uploading to the EHR also would facilitate universal use of photographic documentation.
If a HCP is uncomfortable or unable to use photography to document lesions, alternative strategies for documenting lesions exist, including diagrams, anatomic landmarks, ultraviolet (UV) fluorescent tattoos, and patient identification of lesions.10 In the diagram method, a HCP marks the lesion location on a diagram of the body preferably with a short description of the lesion’s location and/or characteristics.11 The diagram should be uploaded into the EHR. There are other methods for documenting lesion location relative to anatomic landmarks. Triangulation involves documenting distance between the lesion and 3 distinct anatomic locations.10 UV fluorescent tattooing involves putting UV tattoo dye in the biopsy site and locating the dye using a Wood lamp at the time of surgery. The lamp was used in a single case report of a patient with recurrent basal cell carcinoma.12 Patient identification of lesions by phone applications that allow patients to track their lesion, a phone selfie of the biopsy site, or a direct account of a lesion can be used to confirm lesion location based on the other methods mentioned.10
Patients often are poorly adherent to instructions aimed at reducing the risk of WSS. In a study that asked patients undergoing elective foot or ankle surgery to mark the foot not being operated on, 41% of patients were either partially or nonadherent with this request.13 Educating patients on the importance of lesion self-identification has the potential to improve identification of biopsy location and prevent WSS. Nursing and medical staff can provide patient education while photographing the biopsy site including taking a photograph with the patient’s cell phone for their records.
Due to subsequent morbidity and mortality that can result from WSS, photographic confirmation of biopsy sites is a step that surgeons can take to ensure identification of the correct site prior to surgery. Case 1 provides an example of how photographs taken prior to biopsy can prevent WSS. In a disease such as melanoma, photographs are particularly important, as insufficient treatment can lead to fatal metastases. To increase quality of care, all available photographs should be reviewed, especially in cases where the pathology report does not match the clinical presentation.
If WSS occurs, HCPs may be hesitant to disclose their mistakes due to potential lawsuits, the possibility that disclosure may inadvertently harm the patient, and their relative inexperience in and training regarding disclosure skills.14 Surgeons who perform WSS may receive severe penalties from state licensing boards, including suspension of medical license. Financially, many insurers will not compensate providers for WSS. Also, many incidents of WSS result in a malpractice claim, with about 80% of those cases resulting in a malpractice award.15 However, it is important that HCPs are open with their patients regarding WSS.
As demonstrated in case presentations 2 and 3, having 2-person confirmation and patient confirmation before to surgery is important in preventing WSS for patients who have poor documentation of biopsy sites. In cases where agreement is not achieved, HCPs can consider several other options to help identify lesions. Dermabrasion and alcohol wipes are options.10 Dermabrasion uses friction to expose surgical sights that have healed, scarred, or been hidden by sun damage.10 Alcohol wipes remove surface scale and crust, creating a glisten with tangential lighting that highlights surface irregularities. Anesthesia injection prior to surgery creates a blister at the location of the cancer. This is because skin cancer weakens the attachments between keratinocytes, and as a result, the hydrostatic pressure from the anesthesia favorably blisters the malignancy location.10,16
Dermoscopy is another strategy shown to help identify scar margins.10,17 Under dermoscopy, a scar demonstrates a white-pink homogenous patch with underlying vessels, whereas basal cell carcinoma remnants include blue-gray ovoid nests and globules, telangiectasias, spoke wheel and leaflike structures.17 As a final option, HCPs can perform an additional biopsy of potential cancer locations to find the lesion again.10 If the lesions cannot be identified, HCPs should consider conservative measures or less invasive treatments with close and frequent follow-up.
Conclusions
The cases described here highlight how the lack of proper photographic documentation can prevent the use of curative surgical treatment. In order to reduce WSS and improve quality care, HCPs must continue to take steps and create safeguards to minimize risk. Proper documentation of lesions prior to biopsy provides an effective route to reduce incidence of WSS. If the biopsy site cannot be found, various strategies to properly identify the site can be employed. If WSS occurs, it is important that HCPs provide full disclosure to patients. With a growing emphasis on patient safety measures and advances in technology, HCPs are becoming increasingly cognizant about the most effective ways to optimize patient care, and it is anticipated that this will result in a decrease in morbidity and mortality.
Preventable errors by health care workers are widespread and cause significant morbidity and mortality. Wrong site surgery (WSS) is a preventable error that causes harm through both the direct insult of surgery and propagation of the untreated initial problem. WSS also can cause poor patient outcomes, low morale, malpractice claims, and increased costs to the health care system. The estimated median prevalence of WSS across all specialties is 9 events per 1,000,000 surgical procedures, and an institutional study of 112,500 surgical procedures reported 1 wrong-site event, which involved removing the incorrect skin lesion and not removing the intended lesion.1,2
Though the prevalence is low when examining all specialties together, dermatology is also susceptible to WSS.3 Watson and colleagues demonstrated that 31% of intervention errors were due to WSS and suggested that prebiopsy photography helps decrease errors.4 Thus, the American Academy of Dermatology has emphasized the importance of reducing WSS.5 A study by Nijhawan and colleagues found that 25% of patients receiving Mohs surgery at a private single cancer center could not identify their biopsy location because the duration between biopsy and surgery allowed biopsy sites to heal well, which made finding the lesion difficult.6
Risk factors for WSS include having multiple health care providers (HCPs) living remote from the surgery location involved in the case, being a traveling veteran, receiving care at multiple facilities inside and outside the US Department of Veterans Affairs (VA) system, mislabeling photographs or specimens, and photographs not taken at time of biopsy and too close with no frame of reference to assist in finding the correct site. The VA electronic health record (EHR) is not integrated with outside facility EHRs, and the Office of Community Care (OCC) at the VA is responsible for obtaining copies of outside records. If unsuccessful, the HCP and/or patient must provide the records. Frequently, records are not received or require multiple attempts to be obtained. This mostly affects veterans receiving care at multiple facilities inside and outside the VA system as the lack of or timely receipt of past health records could increase the risk for WSS.
To combat WSS, some institutions have implemented standardized protocols requiring photographic documentation of lesions before biopsy so that the surgeon can properly identify the correct site prior to operating.7 Fortunately, recent advances in technology have made it easier to provide photographic documentation of skin lesions. Highsmith and colleagues highlighted use of smartphones to avoid WSS in dermatology.7 Despite these advances, photographic documentation of lesions is not universal. A study by Rossy and colleagues found that less than half of patients referred for Mohs surgery had clear documentation of the biopsy site with photography, diagram, or measurements, and of those documented, only a small fraction used photographs.8
Photographic documentation is not currently required by the VA, increasing the risk of WSS. About 20% of the ~150 VA dermatology departments nationwide are associated with a dermatology residency program and have implemented photographic documentation of lesions before biopsy. The other 80% of departments may not be using photographic documentation. The following 3 cases experienced by the authors highlight instances of how quality photographic documentation of lesions prior to biopsy can improve patient care and save lives. Then, we propose a photographic documentation protocol for VA dermatology departments to follow based on the photographic standards outlined by the American Society for Dermatologic Surgery.9
Case 1 Presentation
A 36-year-old traveling veteran who relocates frequently and receives care at multiple VA medical centers (VAMCs) presented for excision of a melanoma. The patient had been managed at another VAMC where the lesion was biopsied in September 2016. He presented to the Orlando, Florida, VAMC dermatology clinic 5 months later with the photographs of his biopsy sites along with the biopsy reports. The patient had 6 biopsies labeled A through F. Lesion A at the right mid back was positive for melanoma (Figure 1), whereas lesion C on the mid lower back was not cancerous (Figure 2). On examination of the patient’s back, he had numerous moles and scars. The initial receiving HCP circled and photographed the scar presumed to be the melanoma on the mid lower back (Figure 3).
On the day of surgery, the surgeon routinely checked the biopsy report as well as the photograph from the patient’s most recent HCP visit. The surgeon noted that biopsy A (right mid back) on the pathology report had been identified as the melanoma; however, biopsy C (mid lower back) was circled and presumed to be the melanoma in the recent photograph by the receiving HCP—a nurse practitioner. The surgeon compared the initial photos from the referring VAMC with those from the receiving HCP and subsequently matched biopsy A (melanoma) with the correct location on the right mid back.
This discrepancy was explained to the patient with photographic confirmation, allowing for agreement on the correct site before the surgery. The pathology results of the surgical excision confirmed melanoma in the specimen and clear margins. Thus, the correct site was operated on.
Case 2 Presentation
A veteran aged 86 years with a medical history of a double transplant and long-term immunosuppression leading to numerous skin cancers was referred for surgical excision of a confirmed squamous cell carcinoma (SCC) on the left upper back. On the day of surgery, the biopsy site could not be identified clearly due to numerous preexisting scars (Figure 4). No photograph of the original biopsy site was available. The referring HCP was called to the bedside to assist in identifying the biopsy site but also was unable to clearly identify the site. This was explained to the patient. As 2-person confirmation was unsuccessful, conservative treatment was used with patient consent. The patient has since had subsequent close follow-up to monitor for recurrence, as SCC in transplant patients can display aggressive growth and potential for metastasis.
Case 3 Presentation
A veteran was referred for surgical excision of a nonmelanoma skin cancer. The biopsy had been completed well in advance of the anticipated surgery day. On the day of surgery, the site could not be identified because it had healed well after the biopsy. Although a clinical photograph was available, it had been taken too close to provide a frame of reference for locating the biopsy site. The referring HCP was called to the bedside to assist in identification of the biopsy site, but 2-person confirmation was unsuccessful. This was explained to the patient, and with his consent, the HCPs agreed on conservative treatment and close follow-up.
Discussion
To prevent and minimize poor outcomes associated with WSS, the health care team should routinely document the lesion location in detail before the biopsy. Many HCPs believe a preoperative photograph is the best method for documentation. As demonstrated in the third case presentation, photographs must be taken at a distance that includes nearby anatomic landmarks for reference. It is suggested that providers obtain 2 images: one far enough to include landmarks and one close enough to clearly differentiate the targeted lesion from others.10
Although high-resolution digital cameras are preferred, mobile phones also can be used if they provide quality images. As phones with built-in cameras are ubiquitous, they offer a quick and easy method of photographic documentation. St John and colleagues also presented the possibility of having patients keep pictures of the lesion on their phones, as this removes potential privacy concerns and facilitates easy transfer of information between HCPs.10 If a photograph was not taken at the time of biopsy, our practice contacts the patient and asks them to photograph and circle the biopsy site using their mobile phone or camera and bring the image to the surgery appointment. We propose a VA protocol for photographic documentation of biopsy sites (Table).
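The protocol itself is detailed in the Table; as a rough illustration of the presurgical check it implies, the following minimal Python sketch models biopsy-site documentation as a simple checklist. The field names and checklist items are our illustrative assumptions drawn from the two-image guidance above, not the published protocol.

```python
from dataclasses import dataclass

@dataclass
class BiopsyPhotoRecord:
    """Photographic documentation for one biopsy site (illustrative fields)."""
    site_label: str                # eg, "A - right mid back"
    distant_photo: bool = False    # landmark view including nearby anatomy
    closeup_photo: bool = False    # lesion view distinguishing the target
    taken_at_biopsy: bool = False  # captured on the day of biopsy
    uploaded_to_ehr: bool = False  # images attached to the EHR note

def protocol_gaps(record: BiopsyPhotoRecord) -> list[str]:
    """Return the documentation steps still missing before surgery is scheduled."""
    checks = {
        "distant (landmark) photograph": record.distant_photo,
        "close-up photograph": record.closeup_photo,
        "photographs taken at time of biopsy": record.taken_at_biopsy,
        "images uploaded to EHR": record.uploaded_to_ehr,
    }
    return [step for step, done in checks.items() if not done]

# A record missing the close-up view would prompt contacting the patient
# for a circled phone photograph, as described above.
gaps = protocol_gaps(BiopsyPhotoRecord("A - right mid back", distant_photo=True))
```

A check of this kind could be run when the surgical consult is scheduled, so that missing photographs are requested well before the day of surgery.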
HCPs who are not comfortable with technology may be hesitant to document lesions photographically with a smartphone or camera. Further, HCPs often face time constraints, and taking photographs and uploading them to the EHR could decrease patient contact time. Therefore, photographic documentation presents an opportunity for a team approach to patient-centered care: nursing and other medical staff can assist with these duties and learn the proper photographic documentation of biopsy sites. Phone or tablet applications that provide rapid photographic documentation and uploading to the EHR also would facilitate universal use of photographic documentation.
If an HCP is uncomfortable or unable to use photography to document lesions, alternative strategies exist, including diagrams, anatomic landmarks, ultraviolet (UV) fluorescent tattoos, and patient identification of lesions.10 In the diagram method, an HCP marks the lesion location on a diagram of the body, preferably with a short description of the lesion’s location and/or characteristics; the diagram should then be uploaded into the EHR.11 Lesion location also can be documented relative to anatomic landmarks: triangulation involves documenting the distance between the lesion and 3 distinct anatomic locations.10 UV fluorescent tattooing involves placing UV tattoo dye in the biopsy site and locating the dye with a Wood lamp at the time of surgery; this technique was reported in a single case of a patient with recurrent basal cell carcinoma.12 Finally, patient identification of lesions, whether through phone applications that track lesions, a phone selfie of the biopsy site, or a direct account of the lesion, can be used to corroborate the location suggested by the other methods.10
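Triangulation reduces to simple geometry: given the measured distances from a lesion to 3 noncollinear landmarks, the site is the common intersection of 3 circles. The sketch below is a conceptual aid only, with hypothetical landmark coordinates on a flattened 2-dimensional body map; it solves the linear system obtained by subtracting pairs of circle equations and is not part of any published protocol.

```python
def triangulate(landmarks, distances):
    """Recover a 2D site from distances to 3 landmarks (trilateration).

    Subtracting pairs of circle equations yields a 2x2 linear system in
    the unknown site coordinates (x, y), solved here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = landmarks
    r1, r2, r3 = distances
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x2), 2 * (y3 - y2)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-9:
        raise ValueError("landmarks are collinear; choose 3 noncollinear points")
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Hypothetical landmarks on a flattened back map (units: cm): C7 prominence
# at the origin and the two scapular tips. Distances consistent with a site
# near (2, -8) recover approximately that point.
site = triangulate([(0, 0), (-10, -15), (10, -15)], [8.25, 13.9, 10.6])
```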
Patients often are poorly adherent to instructions aimed at reducing the risk of WSS. In a study that asked patients undergoing elective foot or ankle surgery to mark the foot not being operated on, 41% of patients were partially adherent or nonadherent with this request.13 Educating patients on the importance of lesion self-identification has the potential to improve identification of the biopsy location and prevent WSS. Nursing and medical staff can provide patient education while photographing the biopsy site, including taking a photograph with the patient’s cell phone for their records.
Given the morbidity and mortality that can result from WSS, photographic confirmation of biopsy sites is a step that surgeons can take to ensure identification of the correct site prior to surgery. Case 1 provides an example of how photographs taken prior to biopsy can prevent WSS. In a disease such as melanoma, photographs are particularly important, as insufficient treatment can lead to fatal metastases. To increase quality of care, all available photographs should be reviewed, especially in cases where the pathology report does not match the clinical presentation.
If WSS occurs, HCPs may be hesitant to disclose their mistakes due to potential lawsuits, the possibility that disclosure may inadvertently harm the patient, and their relative inexperience and lack of training in disclosure skills.14 Surgeons involved in WSS may receive severe penalties from state licensing boards, including suspension of their medical license. Financially, many insurers will not compensate providers for WSS. In addition, many incidents of WSS result in a malpractice claim, with about 80% of those cases resulting in a malpractice award.15 Nevertheless, it is important that HCPs are open with their patients regarding WSS.
As demonstrated in case presentations 2 and 3, 2-person confirmation and patient confirmation before surgery are important in preventing WSS for patients with poor documentation of biopsy sites. In cases where agreement is not achieved, HCPs can consider several other options to help identify lesions, including dermabrasion and alcohol wipes.10 Dermabrasion uses friction to expose surgical sites that have healed, scarred, or been obscured by sun damage.10 Alcohol wipes remove surface scale and crust, creating a glisten under tangential lighting that highlights surface irregularities. Anesthesia injection prior to surgery can create a blister at the location of the cancer: because skin cancer weakens the attachments between keratinocytes, the hydrostatic pressure from the anesthetic preferentially raises a blister at the site of the malignancy.10,16
Dermoscopy is another strategy shown to help identify scar margins.10,17 Under dermoscopy, a scar demonstrates a white-pink homogeneous patch with underlying vessels, whereas basal cell carcinoma remnants include blue-gray ovoid nests and globules, telangiectasias, and spoke-wheel and leaflike structures.17 As a final option, HCPs can perform an additional biopsy of potential cancer locations to identify the lesion again.10 If the lesion cannot be identified, HCPs should consider conservative measures or less invasive treatments with close and frequent follow-up.
Conclusions
The cases described here highlight how the lack of proper photographic documentation can preclude curative surgical treatment. To reduce WSS and improve quality of care, HCPs must continue to take steps and create safeguards that minimize risk. Proper documentation of lesions prior to biopsy is an effective way to reduce the incidence of WSS. If the biopsy site cannot be found, various strategies to properly identify the site can be employed. If WSS occurs, it is important that HCPs provide full disclosure to patients. With a growing emphasis on patient safety measures and advances in technology, HCPs are becoming increasingly cognizant of the most effective ways to optimize patient care, which should reduce associated morbidity and mortality.
1. Hempel S, Maggard-Gibbons M, Nguyen DK, et al. Wrong-site surgery, retained surgical items, and surgical fires: a systematic review of surgical never events. JAMA Surg. 2015;150(8):796-805. doi:10.1001/jamasurg.2015.0301
2. Knight N, Aucar J. Use of an anatomic marking form as an alternative to the Universal Protocol for Preventing Wrong Site, Wrong Procedure and Wrong Person Surgery. Am J Surg. 2010;200(6):803-809. doi:10.1016/j.amjsurg.2010.06.010
3. Elston DM, Stratman EJ, Miller SJ. Skin biopsy: biopsy issues in specific diseases [published correction appears in J Am Acad Dermatol. 2016 Oct;75(4):854]. J Am Acad Dermatol. 2016;74(1):1-18. doi:10.1016/j.jaad.2015.06.033
4. Watson AJ, Redbord K, Taylor JS, Shippy A, Kostecki J, Swerlick R. Medical error in dermatology practice: development of a classification system to drive priority setting in patient safety efforts. J Am Acad Dermatol. 2013;68(5):729-737. doi:10.1016/j.jaad.2012.10.058
5. Elston DM, Taylor JS, Coldiron B, et al. Patient safety: Part I. Patient safety and the dermatologist. J Am Acad Dermatol. 2009;61(2):179-191. doi:10.1016/j.jaad.2009.04.056
6. Nijhawan RI, Lee EH, Nehal KS. Biopsy site selfies--a quality improvement pilot study to assist with correct surgical site identification. Dermatol Surg. 2015;41(4):499-504. doi:10.1097/DSS.0000000000000305
7. Highsmith JT, Weinstein DA, Highsmith MJ, Etzkorn JR. BIOPSY 1-2-3 in dermatologic surgery: improving smartphone use to avoid wrong-site surgery. Technol Innov. 2016;18(2-3):203-206. doi:10.21300/18.2-3.2016.203
8. Rossy KM, Lawrence N. Difficulty with surgical site identification: what role does it play in dermatology? J Am Acad Dermatol. 2012;67(2):257-261. doi:10.1016/j.jaad.2012.02.034
9. American Society for Dermatologic Surgery. Photographic standards in dermatologic surgery poster. Accessed April 12, 2021. https://www.asds.net/medical-professionals/members-resources/product-details/productname/photographic-standards-poster
10. St John J, Walker J, Goldberg D, Maloney ME. Avoiding medical errors in cutaneous site identification: a best practices review. Dermatol Surg. 2016;42(4):477-484. doi:10.1097/DSS.0000000000000683
11. Alam M, Lee A, Ibrahimi OA, et al. A multistep approach to improving biopsy site identification in dermatology: physician, staff, and patient roles based on a Delphi consensus. JAMA Dermatol. 2014;150(5):550-558. doi:10.1001/jamadermatol.2013.9804
12. Chuang GS, Gilchrest BA. Ultraviolet-fluorescent tattoo location of cutaneous biopsy site. Dermatol Surg. 2012;38(3):479-483. doi:10.1111/j.1524-4725.2011.02238.x
13. DiGiovanni CW, Kang L, Manuel J. Patient compliance in avoiding wrong-site surgery. J Bone Joint Surg Am. 2003;85(5):815-819. doi:10.2106/00004623-200305000-00007
14. Gallagher TH. A 62-year-old woman with skin cancer who experienced wrong-site surgery: review of medical error. JAMA. 2009;302(6):669-677. doi:10.1001/jama.2009.1011
15. Mulloy DF, Hughes RG. Wrong-site surgery: a preventable medical error. In: Hughes RG, ed. Patient Safety and Quality: An Evidence-Based Handbook for Nurses. Agency for Healthcare Research and Quality (US); 2008:chap 36. Accessed April 23, 2021. https://www.ncbi.nlm.nih.gov/books/NBK2678
16. Zaiac M, Tongdee E, Porges L, Touloei K, Prodanovich S. Anesthetic blister induction to identify biopsy site prior to Mohs surgery. J Drugs Dermatol. 2015;14(5):446-447.
17. Jawed SI, Goldberg LH, Wang SQ. Dermoscopy to identify biopsy sites before Mohs surgery. Dermatol Surg. 2014;40(3):334-337. doi:10.1111/dsu.12422
Use of Comprehensive Geriatric Assessment in Oncology Patients to Guide Treatment Decisions and Predict Chemotherapy Toxicity
Age is a well-recognized risk factor for cancer development. The population of older Americans is growing, and by 2030, 20% of the US population will be aged ≥ 65 years.1 While 25% of all new cancer cases are diagnosed in people aged 65 to 74 years, more than half of cancers occur in individuals aged ≥ 70 years, with even higher rates in those aged ≥ 75 years.2 Although cancer rates have declined slightly overall among people aged ≥ 65 years, this population still has an 11-fold increased incidence of cancer compared with that of younger individuals.3 With a rapidly growing older population, there will be increasing demand for cancer care.
Treatment of cancer in older individuals often is complicated by medical comorbidities, frailty, and poor functional status. Distinguishing patients who can tolerate aggressive therapy from those who require less intensive therapy can be challenging. Age-related physiologic changes predispose older adults to an increased risk of therapy-related toxicities, resulting in suboptimal therapeutic benefit and substantial morbidity. For example, cardiovascular changes can lead to reduction of the cardiac functional reserve, which can increase the risk of congestive heart failure. Similarly, decline in renal function leads to an increased potential for nephrotoxicity.4 Although patients may be of the same chronologic age, their performance, functional, and biologic status may be quite variable; thus, tolerance to aggressive treatment is not easily predicted. The comprehensive geriatric assessment (CGA) may be used as a global assessment tool to risk stratify older patients prior to oncologic treatment decisions.
Health care providers (HCPs), including physician assistants, nurse practitioners, clinical nurse specialists, nurses, and physicians, routinely participate in every aspect of cancer care by ordering and interpreting diagnostic tests, addressing comorbidities, managing symptoms, and discussing cancer treatment recommendations. HCPs in oncology will continue to play a vital role in the coordination and management of older patients with cancer. However, in general, CGA has not been a consistent part of oncology practices, and few HCPs are familiar with the benefits of CGA screening tools.
What Is Geriatric Assessment?
Geriatric assessment is a multidisciplinary, multidimensional process aimed at detecting medical, psychosocial, and functional issues of older adults that are not identified by traditional performance status measures alone. It provides guidance for management of identified problems and improvement in quality of life.6 CGA was developed by geriatricians and multidisciplinary care teams to evaluate the domains of functional, nutritional, cognitive, psychosocial, and economic status; comorbidities; geriatric syndromes; and mood, and it has been tested in both clinics and hospitals.7 Although such assessment requires additional time and resources, its goals are to identify areas of vulnerability, assist in clinical decision making for treatable health problems, and guide therapeutic interventions.6 In oncology practice, the assessment not only addresses these global issues, but also is critical in predicting toxicity and survival outcomes in older oncology patients.
Components of CGA
Advancing age brings many physiologic, psychosocial, and functional challenges, and a cancer diagnosis only adds to these issues. CGA provides a system of assessing older and/or frail patients with cancer through specific domains to identify issues that are not apparent on routine evaluation in a clinic setting before and during chemotherapy treatments. These domains include comorbidity, polypharmacy, functional status, cognition, psychological and social status, and nutrition.8
Comorbidity
The prevalence of multiple medical problems and comorbidities, including cancer, among people aged > 65 years is increasing.9 Studies have shown that two-thirds of patients with cancer had ≥ 2 medical conditions, and nearly one quarter had ≥ 4 medical conditions.10 In older adults, common comorbidities include cardiovascular disease, hypertension, diabetes mellitus, and dementia. These comorbidities can influence treatment decisions, increase the risk of disease- and treatment-related complications, and affect a patient’s life expectancy.11 Assessing comorbidities is essential to CGA and is done using the Charlson Comorbidity Index and/or the Cumulative Illness Rating Scale.12
The Charlson Comorbidity Index was originally designed to predict 1-year mortality on the basis of a weighted composite score for the following categories: cardiovascular, endocrine, pulmonary, neurologic, renal, hepatic, gastrointestinal, and neoplastic disease.13 It is now the most widely used comorbidity index and has been adapted and verified as applicable and valid for predicting the outcomes and risk of death from many comorbid diseases.14 The Cumulative Illness Rating Scale has been validated as a predictor for readmission for hospitalized older adults, hospitalization within 1 year in a residential setting, and long-term mortality when assessed in inpatient and residential settings.15
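To make the mechanics of a weighted comorbidity index concrete, the minimal Python sketch below tallies a Charlson-style score using the familiar 1/2/3/6-point weighting scheme. The condition list is deliberately abbreviated, and the age adjustment and newer recalibrated weights are omitted; it is an illustration of the approach, not a clinical implementation.

```python
# Illustrative, abbreviated Charlson-style comorbidity tally.
# The condition list is truncated for brevity and omits the age adjustment.
CHARLSON_WEIGHTS = {
    "myocardial_infarction": 1,
    "congestive_heart_failure": 1,
    "dementia": 1,
    "diabetes_uncomplicated": 1,
    "hemiplegia": 2,
    "moderate_severe_renal_disease": 2,
    "any_malignancy": 2,
    "moderate_severe_liver_disease": 3,
    "metastatic_solid_tumor": 6,
    "aids": 6,
}

def charlson_score(conditions: set[str]) -> int:
    """Sum the weights of the patient's documented comorbidities."""
    return sum(CHARLSON_WEIGHTS.get(c, 0) for c in conditions)

# Example: uncomplicated diabetes + CHF + metastatic disease -> 1 + 1 + 6 = 8
score = charlson_score({"diabetes_uncomplicated",
                        "congestive_heart_failure",
                        "metastatic_solid_tumor"})
```

Higher totals correspond to greater predicted mortality risk, which is why the index is useful when weighing aggressive versus less intensive cancer therapy.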
Polypharmacy
Polypharmacy, commonly defined as the use of ≥ 5 medications or, more broadly, as “the use of multiple drugs or more than are medically necessary,” is common in older patients regardless of cancer diagnosis.16 The use of multiple medications, including those not indicated for existing medical conditions (such as over-the-counter, herbal, and complementary/alternative medicines, which patients often fail to declare to their specialist, doctor, or pharmacist), adds to the potential negative effects of polypharmacy on older patients.17
Patients with cancer usually are prescribed an extensive number of medicines, both for the disease and for supportive care, which can increase the chance of drug-drug interactions and adverse reactions.18 While these issues certainly affect quality of life, they also may influence chemotherapy treatment and potentially impact survival. Studies have shown that the presence of polypharmacy has been associated with higher numbers of comorbidities, increased use of inappropriate medications, poor performance status, decline in functional status, and poor survival.18
Functional Status
Although the Eastern Cooperative Oncology Group (ECOG) performance status and Karnofsky Performance Status are commonly used by oncologists, these measures are limited in focus and do not reliably capture functional status in older patients. Functional status is determined by the ability to perform daily acts of self-care, which includes assessment of activities of daily living (ADLs) and instrumental activities of daily living (IADLs). ADLs refer to such tasks as bathing, dressing, eating, mobility, balance, and toileting.19 IADLs comprise the activities required to live within a community, including shopping, transportation, managing finances, medication management, cooking, and cleaning.11
Physical function also can be assessed with measures such as gait speed, grip strength, balance, and lower extremity strength, which are more sensitive and have been shown to be associated with worse clinical outcomes.20 Strength and balance can be assessed with tools such as the Timed Up and Go test or the Short Physical Performance Battery.12 Reductions in gait speed and/or grip strength are associated with adverse clinical outcomes and increased risk of mortality.21 Patients with cancer who have difficulty with ADLs are at increased risk for falls, which can limit their functional independence, compromise cancer therapy, and increase the risk of chemotherapy toxicities.11 Impaired hearing and poor vision are additional factors that can be barriers to cancer treatment.
Cognition
Cognitive impairment in patients with cancer is a growing issue for oncology HCPs, as both cancer and cognitive decline become more common with advancing age. Intact cognition is important for patients’ understanding of their diagnosis, prognosis, and treatment options, and for treatment adherence. Impaired cognition can affect decision making regarding treatment options and administration. Cognition can be assessed with validated screening tools such as the Mini-Mental State Examination and the Mini-Cog.11
Psychological and Social Status
A cancer diagnosis has a major impact on the mental and emotional state of patients and family members. Clinically significant anxiety has been reported in approximately 21% of older patients with cancer, and the incidence of depression ranges from 17% to 26%.22 In older patients with cancer, psychologic distress can impact treatment, resulting in less definitive therapy and poorer outcomes.23 All patients with cancer should be screened for psychologic distress using standardized methods, such as the Geriatric Depression Scale or the General Anxiety Disorder-7 scale.24 A positive screen should lead to additional assessments that evaluate the severity of depression and other comorbid psychological problems and medical conditions.
Social isolation and loneliness are factors that can affect both depression and anxiety. Older patients with cancer are at risk for decreased social activities and are already challenged with issues related to home care, comorbidities, functional status, and caregiver support.23 Therefore, it is important to assess the social interactions of an older and/or frail patient with cancer and use social work assistance to address needs for supportive services.
Nutrition
Nutrition is important in any patient with cancer undergoing chemotherapy treatment. However, it is of greater importance in older adults, as malnutrition and weight loss are negative prognostic factors that correlate with poor tolerance to chemotherapy treatment, decline in quality of life, and increased mortality.25 The Mini-Nutritional Assessment is a widely used validated tool to assess nutritional status and risk of malnutrition.11 This tool can help identify those older and/or frail patients with cancer with impaired nutritional status and aid in instituting corrective measures to treat or prevent malnutrition.
Effectiveness of CGA
Multiple randomized controlled clinical trials assessing the effectiveness of CGA have been conducted over the past 3 decades, with overall positive findings supporting its value.26 Benefits of CGA can include improved medical care overall, avoidance of hospitalization or nursing home placement, identification of cognitive impairment, and prevention of geriatric syndromes (a range of conditions representing multiple organ impairment in older adults).27
In oncology, CGA is particularly beneficial, as it can identify issues in nearly 70% of patients that may not be apparent through traditional oncology assessment.28 A systematic review of 36 studies assessing the prognostic value of CGA in elderly patients with cancer receiving chemotherapy concluded that impaired performance and functional status as well as a frail and vulnerable profile are important predictors of severe chemotherapy-related toxicity and are associated with a higher risk of mortality.29 Therefore, CGA should be an integral part of the evaluation of older and/or frail patients with cancer prior to chemotherapy consideration.
Several screening tools have been developed using information from CGA to assess the risk of severe toxicities. The most commonly used tools for predicting toxicity are the Cancer and Aging Research Group (CARG) chemotoxicity calculator and the Chemotherapy Risk Assessment Scale for High-Age Patients (CRASH).30,31 Although these tools are readily available to facilitate CGA, and despite their proven benefit and recommended use in national guidelines, implementation in routine oncology practice has been challenging and slow. Unless these recommended interventions are effectively implemented, the benefits of CGA cannot be realized; with the expected surge in the number of older patients with cancer, hopefully this will change.
Geriatric Assessment Screening Tools
A screening tool recommended for use in older and/or frail patients with cancer allows for a brief assessment to help clinicians identify patients in need of further evaluation by CGA and to provide information on treatment-related toxicities, functional decline, and survival.32 The predictive value and utility of geriatric assessment screening tools in identifying older and/or frail adults at risk for treatment-related toxicities have been repeatedly demonstrated.12 The CARG and CRASH tools are validated screening instruments for identifying patients at higher risk for chemotherapy toxicity, intended to guide the clinical oncology practitioner in risk stratification of chemotherapy toxicity in older patients with cancer.33
Both screening tools provide similar predictive performance for chemotherapy toxicity in older patients with cancer.34 However, the CARG tool has the advantage of relying largely on data already obtained during regular office visits and of being clear and easy to use clinically. The CRASH tool is slightly more involved, as it uses multiple geriatric instruments to determine the predictive risk of both hematologic and nonhematologic toxicities of chemotherapy.
CARG Chemotoxicity Calculator
Hurria and colleagues originally developed the CARG tool from data obtained through a prospective multicenter study involving 500 patients with cancer aged ≥ 65 years.35 They concluded that chemotherapy-related toxicity is common in older adults, with 53% of patients sustaining grade 3 or 4 treatment-related toxicities and 2% treatment-related mortality.12 This predictive model for chemotherapy-related toxicity used 11 variables, both objective (obtained during a regular clinical encounter: age, tumor type, chemotherapy dosing, number of drugs, creatinine, and hemoglobin) and subjective (completed by patient: number of falls, social support, the ability to take medications, hearing impairment, and physical performance), to determine at-risk patients (Table 1).31
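Because the calculator is a simple additive model, its structure is easy to illustrate: each of the 11 variables contributes points, and the total maps to a risk band for severe toxicity. The Python sketch below mirrors that structure only; the variable names paraphrase the CARG items listed above, while the point values and cutoffs are placeholders, not the published weights, which appear in Table 1 and in the free online calculator.

```python
# Structure-only sketch of an additive chemotoxicity risk score in the
# spirit of the CARG calculator. POINTS and the band cutoffs below are
# placeholders, NOT the published values (see Table 1 / the online tool).
POINTS = {
    "age_72_or_older": 2,                # placeholder weight
    "gi_or_gu_cancer": 2,                # placeholder weight
    "standard_dose_chemotherapy": 2,     # placeholder weight
    "polychemotherapy": 2,               # placeholder weight
    "low_hemoglobin": 3,                 # placeholder weight
    "reduced_creatinine_clearance": 3,   # placeholder weight
    "hearing_impairment": 2,             # placeholder weight
    "fall_in_recent_months": 3,          # placeholder weight
    "needs_help_taking_medications": 1,  # placeholder weight
    "limited_walking_one_block": 2,      # placeholder weight
    "decreased_social_activity": 1,      # placeholder weight
}

def carg_style_risk(findings: set[str]) -> tuple[int, str]:
    """Tally points for the variables present and map to a risk band."""
    total = sum(POINTS.get(f, 0) for f in findings)
    band = "low" if total <= 5 else "intermediate" if total <= 9 else "high"
    return total, band

# Example: an older patient on polychemotherapy with a recent fall
total, band = carg_style_risk({"age_72_or_older", "polychemotherapy",
                               "fall_in_recent_months"})
```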
Compared with standard performance status measures used in oncology practice, the CARG model was better able to predict chemotherapy-related toxicities. In 2016, Hurria and colleagues published an updated external validation study in a cohort of 250 older patients with cancer receiving chemotherapy that confirmed the CARG screening tool’s prediction of chemotherapy toxicity in this population.31 Appealing features of this tool are its free online availability and the speed with which screening can be conducted.
CRASH Score
The CRASH score was derived from the results of a prospective, multicenter study of 518 patients aged ≥ 70 years who were assessed on 24 parameters prior to starting chemotherapy.30 A total of 64% of patients experienced significant toxicities, including 32% with grade 4 hematologic toxicity and 56% with grade 3 or 4 nonhematologic toxicity. The hematologic and nonhematologic toxicity risks are the 2 categories that comprise the CRASH score. Both baseline patient variables and chemotherapy regimen are incorporated into an 8-item assessment profile that determines the risk categories (Table 2).30
Increased risk of hematologic toxicities was associated with increased diastolic blood pressure, increased lactate dehydrogenase, need for assistance with IADLs, and increased toxicity potential of the chemotherapy regimen. Nonhematologic toxicities were associated with ECOG performance score, Mini-Mental State Examination and Mini-Nutritional Assessment scores, and increased toxicity of the chemotherapy regimen.12 Patient scores are stratified into 4 risk categories: low, medium-low, medium-high, and high.30 Like the CARG tool, the CRASH screening tool is available as a free online resource and can be used in everyday clinical practice to assess older and/or frail adults with cancer.
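As with CARG, the scoring logic is a small additive model, here split into 2 tracks. The sketch below is a structure-only illustration assuming hypothetical item scores and band cutoffs; the validated item weights and category boundaries are those in Table 2 and the online CRASH calculator.

```python
def crash_style_category(heme_items: list[int], nonheme_items: list[int]) -> str:
    """Combine hematologic and nonhematologic item scores into a risk band.

    Item scores and cutoffs here are placeholders for illustration,
    not the published CRASH values.
    """
    combined = sum(heme_items) + sum(nonheme_items)
    for cutoff, label in [(3, "low"), (6, "medium-low"), (9, "medium-high")]:
        if combined <= cutoff:
            return label
    return "high"

# Example: hypothetical item scores totaling 7, which falls in the
# medium-high band under these placeholder cutoffs.
category = crash_style_category([2, 1], [3, 1])
```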
Conclusions
In older adults, cancer may significantly impact the natural course of concurrent comorbidities due to physiologic and functional changes. These vulnerabilities predispose older patients with cancer to an increased risk of adverse outcomes, including treatment-related toxicities.36 Given the rapidly aging population, it is critical for oncology clinical teams to be prepared to assess for, prevent, and manage issues for older adults that could impact outcomes, including complications and toxicities from chemotherapy.35 Studies have reported that 78% to 93% of older oncology patients have at least 1 geriatric impairment that could potentially impact oncology treatment plans.37,38 This supports the utility of CGA as a global assessment tool to risk stratify older and/or frail patients prior to deciding on subsequent oncologic treatment approaches.5 In fact, major cooperative groups sponsored by the National Cancer Institute, such as the Alliance for Clinical Trials in Oncology, are including CGA as part of some of their treatment trials. CGA was conducted as part of a multicenter cooperative group study in older patients with acute myeloid leukemia prior to inpatient intensive induction chemotherapy and was determined to be feasible and useful in clinical trials and practice.39
Despite the increasing evidence for benefits of CGA, it has not been a consistent part of oncology practices, and few HCPs are familiar with the benefits of CGA screening tools. Although oncology providers routinely participate in every aspect of cancer care and play a vital role in the coordination and management of older patients with cancer, CGA implementation into routine clinical practice has been slow, in part due to a lack of knowledge and training regarding the use of CGA tools.
Oncology providers can easily incorporate CGA screening tools into the history and physical examination process for older patients with cancer, which will add an important dimension to these patient evaluations. Oncology providers are not only well positioned to administer these screening tools, but also can lead the field in developing innovative ways for effective implementation in busy routine oncology clinics. However, to be successful, oncology providers must be knowledgeable about these tools and understand their utility in guiding treatment decisions and improving quality of care in older patients with cancer.
1. Sharpless NE. The challenging landscape of cancer and aging: charting a way forward. National Cancer Institute. Published January 24, 2018. Accessed April 16, 2021. https://www.cancer.gov/news-events/cancer-currents-blog/2018/sharpless-aging-cancer-research
2. National Cancer Institute. Age and cancer risk. Updated March 5, 2021. Accessed April 16, 2021. https://www.cancer.gov/about-cancer/causes-prevention/risk/age
3. Siegel RL, Miller KD, Jemal A. Cancer statistics, 2019. CA Cancer J Clin. 2019;69(1):7-34. doi:10.3322/caac.21551
4. Sawhney R, Sehl M, Naeim A. Physiologic aspects of aging: impact on cancer management and decision making, part I. Cancer J. 2005;11(6):449-460. doi:10.1097/00130404-200511000-00004
5. Kenis C, Bron D, Libert Y, et al. Relevance of a systematic geriatric screening and assessment in older patients with cancer: results of a prospective multicentric study. Ann Oncol. 2013;24(5):1306-1312. doi:10.1093/annonc/mds619
6. Loh KP, Soto-Perez-de-Celis E, Hsu T, et al. What every oncologist should know about geriatric assessment for older patients with cancer: Young International Society of Geriatric Oncology position paper. J Oncol Pract. 2018;14(2):85-94. doi:10.1200/JOP.2017.026435
7. Cohen HJ. Evolution of geriatric assessment in oncology. J Oncol Pract. 2018;14(2):95-96. doi:10.1200/JOP.18.00017
8. Wildiers H, Heeren P, Puts M, et al. International Society of Geriatric Oncology consensus on geriatric assessment in older patients with cancer. J Clin Oncol. 2014;32(24):2595-2603. doi:10.1200/JCO.2013.54.8347
9. American Cancer Society. Cancer facts & figures 2019. Accessed April 16, 2021. https://www.cancer.org/research/cancer-facts-statistics/all-cancer-facts-figures/cancer-facts-figures-2019.html
10. Williams GR, Mackenzie A, Magnuson A, et al. Comorbidity in older adults with cancer. J Geriatr Oncol. 2016;7(4):249-257. doi:10.1016/j.jgo.2015.12.002
11. Korc-Grodzicki B, Holmes HM, Shahrokni A. Geriatric assessment for oncologists. Cancer Biol Med. 2015;12(4):261-274. doi:10.7497/j.issn.2095-3941.2015.0082
12. Li D, Soto-Perez-de-Celis E, Hurria A. Geriatric assessment and tools for predicting treatment toxicity in older adults with cancer. Cancer J. 2017;23(4):206-210. doi:10.1097/PPO.0000000000000269
13. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40(5):373-383. doi:10.1016/0021-9681(87)90171-8
14. Huang Y, Gou R, Diao Y, et al. Charlson comorbidity index helps predict the risk of mortality for patients with type 2 diabetic nephropathy. J Zhejiang Univ Sci B. 2014;15(1):58-66. doi:10.1631/jzus.B1300109
15. Osborn KP IV, Nothelle S, Slaven JE, Montz K, Hui S, Torke AM. Cumulative Illness Rating Scale (CIRS) can be used to predict hospital outcomes in older adults. J Geriatr Med Gerontol. 2017;3(2). doi:10.23937/2469-5858/1510030
16. Maher RL, Hanlon J, Hajjar ER. Clinical consequences of polypharmacy in elderly. Expert Opin Drug Saf. 2014;13(1):57-65. doi:10.1517/14740338.2013.827660
17. Shrestha S, Shrestha S, Khanal S. Polypharmacy in elderly cancer patients: challenges and the way clinical pharmacists can contribute in resource-limited settings. Aging Med. 2019;2(1):42-49. doi:10.1002/agm2.12051
18. Sharma M, Loh KP, Nightingale G, Mohile SG, Holmes HM. Polypharmacy and potentially inappropriate medication use in geriatric oncology. J Geriatr Oncol. 2016;7(5):346-353. doi:10.1016/j.jgo.2016.07.010
19. Norburn JE, Bernard SL, Konrad TR, et al. Self-care and assistance from others in coping with functional status limitations among a national sample of older adults. J Gerontol B Psychol Sci Soc Sci. 1995;50(2):S101-S109. doi:10.1093/geronb/50b.2.s101
20. Fragala MS, Alley DE, Shardell MD, et al. Comparison of handgrip and leg extension strength in predicting slow gait speed in older adults. J Am Geriatr Soc. 2016;64(1):144-150. doi:10.1111/jgs.13871
21. Owusu C, Berger NA. Comprehensive geriatric assessment in the older cancer patient: coming of age in clinical cancer care. Clin Pract (Lond). 2014;11(6):749-762. doi:10.2217/cpr.14.72
22. Weiss Wiesel TR, Nelson CJ, Tew WP, et al. The relationship between age, anxiety, and depression in older adults with cancer. Psychooncology. 2015;24(6):712-717. doi:10.1002/pon.3638
23. Soto-Perez-de-Celis E, Li D, Yuan Y, Lau YM, Hurria A. Functional versus chronological age: geriatric assessments to guide decision making in older patients with cancer. Lancet Oncol. 2018;19(6):e305-e316. doi:10.1016/S1470-2045(18)30348-6
24. Andersen BL, DeRubeis RJ, Berman BS, et al. Screening, assessment, and care of anxiety and depressive symptoms in adults with cancer: an American Society of Clinical Oncology guideline adaptation. J Clin Oncol. 2014;32(15):1605-1619. doi:10.1200/JCO.2013.52.4611
25. Muscaritoli M, Lucia S, Farcomeni A, et al. Prevalence of malnutrition in patients at first medical oncology visit: the PreMiO study. Oncotarget. 2017;8(45):79884-79886. doi:10.18632/oncotarget.20168
26. Ekdahl AW, Axmon A, Sandberg M, Steen Carlsson K. Is care based on comprehensive geriatric assessment with mobile teams better than usual care? A study protocol of a randomised controlled trial (the GerMoT study). BMJ Open. 2018;8(10):e023969. doi:10.1136/bmjopen-2018-023969
27. Mohile SG, Dale W, Somerfield MR, et al. Practical assessment and management of vulnerabilities in older patients receiving chemotherapy: ASCO guideline for geriatric oncology. J Clin Oncol. 2018;36(22):2326-2347. doi:10.1200/JCO.2018.78.8687
28. Hernandez Torres C, Hsu T. Comprehensive geriatric assessment in the older adult with cancer: a review. Eur Urol Focus. 2017;3(4-5):330-339. doi:10.1016/j.euf.2017.10.010
29. Janssens K, Specenier P. The prognostic value of the comprehensive geriatric assessment (CGA) in elderly cancer patients (ECP) treated with chemotherapy (CT): a systematic review. Eur J Cancer. 2017;72(1):S164-S165. doi:10.1016/S0959-8049(17)30611-1
30. Extermann M, Boler I, Reich RR, et al. Predicting the risk of chemotherapy toxicity in older patients: The Chemotherapy Risk Assessment Scale for High‐Age Patients (CRASH) score. Cancer. 2012;118(13):3377-3386. doi:10.1002/cncr.26646
31. Hurria A, Mohile S, Gajra A, et al. Validation of a prediction tool for chemotherapy toxicity in older adults with cancer. J Clin Oncol. 2016;34(20):2366-2371. doi:10.1200/JCO.2015.65.4327
32. Decoster L, Van Puyvelde K, Mohile S, et al. Screening tools for multidimensional health problems warranting a geriatric assessment in older cancer patients: an update on SIOG recommendations. Ann Oncol. 2015;26(2):288-300. doi:10.1093/annonc/mdu210
33. Schiefen JK, Madsen LT, Dains JE. Instruments that predict oncology treatment risk in the senior population. J Adv Pract Oncol. 2017;8(5):528-533.
34. Ortland I, Mendel Ott M, Kowar M, et al. Comparing the performance of the CARG and the CRASH score for predicting toxicity in older patients with cancer. J Geriatr Oncol. 2020;11(6):997-1005. doi:10.1016/j.jgo.2019.12.016
35. Hurria A, Togawa K, Mohile SG, et al. Predicting chemotherapy toxicity in older adults with cancer: a prospective multicenter study. J Clin Oncol. 2011;29(25):3457-3465. doi:10.1200/JCO.2011.34.7625
36. Mohile SG, Velarde C, Hurria A, et al. Geriatric assessment-guided care processes for older adults: a Delphi consensus of geriatric oncology experts. J Natl Compr Canc Netw. 2015;13(9):1120-1130. doi:10.6004/jnccn.2015.0137
37. Schiphorst AHW, Ten Bokkel Huinink D, Breumelhof R, Burgmans JPJ, Pronk A, Hamaker ME. Geriatric consultation can aid in complex treatment decisions for elderly cancer patients. Eur J Cancer Care (Engl). 2016;25(3):365-370. doi:10.1111/ecc.12349
38. Schulkes KJG, Souwer ETD, Hamaker ME, et al. The effect of a geriatric assessment on treatment decisions for patients with lung cancer. Lung. 2017;195(2):225-231. doi:10.1007/s00408-017-9983-7
39. Klepin HD, Ritchie E, Major-Elechi B, et al. Geriatric assessment among older adults receiving intensive therapy for acute myeloid leukemia: report of CALGB 361006 (Alliance). J Geriatr Oncol. 2020;11(1):107-113. doi:10.1016/j.jgo.2019.10.002
Oncology providers can easily incorporate CGA screening tools into the history and physical examination process for older patients with cancer, which will add an important dimension to these patient evaluations. Oncology providers are not only well positioned to administer these screening tools, but also can lead the field in developing innovative ways for effective implementation in busy routine oncology clinics. However, to be successful, oncology providers must be knowledgeable about these tools and understand their utility in guiding treatment decisions and improving quality of care in older patients with cancer.
Age is a well-recognized risk factor for cancer development. The population of older Americans is growing, and by 2030, 20% of the US population will be aged ≥ 65 years.1 While 25% of all new cancer cases are diagnosed in people aged 65 to 74 years, more than half of cancers occur in individuals aged ≥ 70 years, with even higher rates in those aged ≥ 75 years.2 Although cancer rates have declined slightly overall among people aged ≥ 65 years, this population still has an 11-fold increased incidence of cancer compared with that of younger individuals.3 With a rapidly growing older population, there will be increasing demand for cancer care.
Treatment of cancer in older individuals often is complicated by medical comorbidities, frailty, and poor functional status. Distinguishing patients who can tolerate aggressive therapy from those who require less intensive therapy can be challenging. Age-related physiologic changes predispose older adults to an increased risk of therapy-related toxicities, resulting in suboptimal therapeutic benefit and substantial morbidity. For example, cardiovascular changes can lead to a reduction in cardiac functional reserve, which can increase the risk of congestive heart failure. Similarly, decline in renal function leads to an increased potential for nephrotoxicity.4 Although patients may be of the same chronologic age, their performance, functional, and biologic status may be quite variable; thus, tolerance to aggressive treatment is not easily predicted. The comprehensive geriatric assessment (CGA) may be used as a global assessment tool to risk stratify older patients prior to oncologic treatment decisions.
Health care providers (HCPs), including physician assistants, nurse practitioners, clinical nurse specialists, nurses, and physicians, routinely participate in every aspect of cancer care by ordering and interpreting diagnostic tests, addressing comorbidities, managing symptoms, and discussing cancer treatment recommendations. HCPs in oncology will continue to play a vital role in the coordination and management of older patients with cancer. However, in general, CGA has not been a consistent part of oncology practices, and few HCPs are familiar with the benefits of CGA screening tools.
What Is Geriatric Assessment?
Geriatric assessment is a multidisciplinary, multidimensional process aimed at detecting medical, psychosocial, and functional issues of older adults that are not identified by traditional performance status measures alone. It provides guidance for management of identified problems and improvement in quality of life.6 CGA was developed by geriatricians and multidisciplinary care teams to evaluate the domains of functional, nutritional, cognitive, psychosocial, and economic status; comorbidities; geriatric syndromes; and mood, and it has been tested in both clinics and hospitals.7 Although such assessment requires additional time and resources, its goals are to identify areas of vulnerability, assist in clinical decisions of treatable health problems, and guide therapeutic interventions.6 In oncology practice, the assessment not only addresses these global issues but also is critical for predicting toxicity and survival outcomes in older patients.
Components of CGA
Advancing age brings many physiologic, psychosocial, and functional challenges, and a cancer diagnosis only adds to these issues. CGA provides a system of assessing older and/or frail patients with cancer through specific domains to identify issues that are not apparent on routine evaluation in a clinic setting before and during chemotherapy treatments. These domains include comorbidity, polypharmacy, functional status, cognition, psychological and social status, and nutrition.8
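Before turning to each domain, the sketch below shows one way the findings of a CGA could be organized by domain and rolled up into screening flags. It is a minimal illustration in Python: the field names and flag logic are hypothetical simplifications for demonstration, not part of any validated instrument; the ≥ 5 medication cutoff for polypharmacy is the common definition discussed below.

```python
from dataclasses import dataclass, field


@dataclass
class CgaFindings:
    """One record of CGA findings, organized by domain (illustrative only)."""
    comorbidities: list = field(default_factory=list)  # e.g., ["hypertension", "diabetes"]
    medication_count: int = 0          # input to the polypharmacy screen
    adl_dependencies: int = 0          # basic self-care tasks needing assistance
    iadl_dependencies: int = 0         # community-living tasks needing assistance
    cognition_flag: bool = False       # positive Mini-Cog or MMSE screen
    distress_flag: bool = False        # positive depression or anxiety screen
    nutrition_flag: bool = False       # at-risk Mini-Nutritional Assessment

    def flagged_domains(self) -> list:
        """Return the domains that warrant further evaluation."""
        flags = []
        if len(self.comorbidities) >= 2:
            flags.append("comorbidity")
        if self.medication_count >= 5:  # common polypharmacy cutoff (see below)
            flags.append("polypharmacy")
        if self.adl_dependencies or self.iadl_dependencies:
            flags.append("functional status")
        if self.cognition_flag:
            flags.append("cognition")
        if self.distress_flag:
            flags.append("psychological and social status")
        if self.nutrition_flag:
            flags.append("nutrition")
        return flags
```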
Comorbidity
The prevalence of multiple medical problems and comorbidities, including cancer, among people aged > 65 years is increasing.9 Studies have shown that two-thirds of patients with cancer had ≥ 2 medical conditions, and nearly one-quarter had ≥ 4 medical conditions.10 In older adults, common comorbidities include cardiovascular disease, hypertension, diabetes mellitus, and dementia. These comorbidities can affect treatment decisions, increase the risk of disease- and treatment-related complications, and affect a patient’s life expectancy.11 Assessing comorbidities is essential to CGA and is done using the Charlson Comorbidity Index and/or the Cumulative Illness Rating Scale.12
The Charlson Comorbidity Index was originally designed to predict 1-year mortality on the basis of a weighted composite score for the following categories: cardiovascular, endocrine, pulmonary, neurologic, renal, hepatic, gastrointestinal, and neoplastic disease.13 It is now the most widely used comorbidity index and has been adapted and verified as applicable and valid for predicting the outcomes and risk of death from many comorbid diseases.14 The Cumulative Illness Rating Scale has been validated as a predictor for readmission for hospitalized older adults, hospitalization within 1 year in a residential setting, and long-term mortality when assessed in inpatient and residential settings.15
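As an illustration of how a weighted composite index of this kind is computed, the sketch below sums per-condition weights in the style of the Charlson Comorbidity Index. The weights shown are a small illustrative subset used as an assumption here; the validated categories and weights are those defined by Charlson and colleagues.13

```python
# Hypothetical subset of condition weights, for demonstration only; the
# validated categories and weights are in the original publication.
CONDITION_WEIGHTS = {
    "myocardial_infarction": 1,
    "congestive_heart_failure": 1,
    "diabetes": 1,
    "moderate_severe_renal_disease": 2,
    "metastatic_solid_tumor": 6,
}


def comorbidity_score(conditions):
    """Sum the weights of the patient's documented conditions."""
    return sum(CONDITION_WEIGHTS.get(c, 0) for c in conditions)


# Example: diabetes plus congestive heart failure scores 1 + 1 = 2.
print(comorbidity_score({"diabetes", "congestive_heart_failure"}))
```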
Polypharmacy
Polypharmacy, often defined as the use of ≥ 5 medications or, more broadly, as “the use of multiple drugs or more than are medically necessary,” is common in older patients regardless of cancer diagnosis.16 The use of multiple medications, including those not indicated for existing medical conditions (such as over‐the‐counter, herbal, and complementary/alternative medicines, which patients often fail to disclose to their specialist, physician, or pharmacist), adds to the potential harms of polypharmacy in older patients.17
Patients with cancer usually are prescribed an extensive number of medicines, both for the disease and for supportive care, which can increase the chance of drug-drug interactions and adverse reactions.18 While these issues certainly affect quality of life, they also may influence chemotherapy treatment and potentially impact survival. Studies have shown that the presence of polypharmacy has been associated with higher numbers of comorbidities, increased use of inappropriate medications, poor performance status, decline in functional status, and poor survival.18
Functional Status
Although Eastern Cooperative Oncology Group (ECOG) performance status and Karnofsky Performance Status are commonly used by oncologists, these measures are limited in focus and do not reliably capture functional status in older patients. Functional status is determined by the ability to perform daily acts of self-care, which includes assessment of activities of daily living (ADLs) and instrumental activities of daily living (IADLs). ADLs refer to such tasks as bathing, dressing, eating, mobility, balance, and toileting.19 IADLs include the ability to perform activities required to live within a community and include shopping, transportation, managing finances, medication management, cooking, and cleaning.11
Physical functionality also can be assessed by objective measures such as gait speed, grip strength, balance, and lower extremity strength. These measures are more sensitive than standard performance scales, and impairments on them are associated with worse clinical outcomes.20 Grip strength and gait speed, as assessed by instruments such as the Timed Up and Go test or the Short Physical Performance Battery, measure strength and balance.12 Reductions in gait speed and/or grip strength are associated with adverse clinical outcomes and increased risk of mortality.21 Patients with cancer who have difficulty with ADLs are at increased risk for falls, which can limit their functional independence, compromise cancer therapy, and increase the risk of chemotherapy toxicities.11 Impaired hearing and poor vision are additional factors that can be barriers to cancer treatment.
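A minimal sketch of how such objective measures might be turned into screening flags follows. The cutoffs used (gait speed below 0.8 m/s as slow, a Timed Up and Go time at or above 12 seconds as a fall-risk signal) are commonly cited values adopted here as assumptions; they are not thresholds drawn from this article, and local protocols may differ.

```python
# Commonly cited cutoffs, used as assumptions rather than values from this article.
SLOW_GAIT_M_PER_S = 0.8        # gait speed below this is often considered slow
TUG_FALL_RISK_SECONDS = 12.0   # Timed Up and Go at/above this often flags fall risk


def physical_performance_flags(gait_speed_m_s, tug_seconds):
    """Return screening flags from two objective performance measures."""
    flags = []
    if gait_speed_m_s < SLOW_GAIT_M_PER_S:
        flags.append("slow gait speed")
    if tug_seconds >= TUG_FALL_RISK_SECONDS:
        flags.append("fall risk on Timed Up and Go")
    return flags


print(physical_performance_flags(0.6, 14.0))  # -> both flags raised
```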
Cognition
Cognitive impairment in patients with cancer is a growing issue for oncology HCPs, as both cancer and cognitive decline become more common with advancing age. Intact cognition is important for patients to understand their diagnosis, prognosis, and treatment options and to adhere to therapy. Impaired cognition can affect decision making regarding treatment options and administration. Cognition can be assessed through validated screening tools such as the Mini-Mental State Examination and the Mini-Cog.11
Psychological and Social Status
A cancer diagnosis has a major impact on the mental and emotional state of patients and family members. Clinically significant anxiety has been reported in approximately 21% of older patients with cancer, and the incidence of depression ranges from 17% to 26%.22 In older patients with cancer, psychologic distress can impact cancer treatment, resulting in less definitive therapy and poorer outcomes.23 All patients with cancer should be screened for psychologic distress using standardized methods, such as the Geriatric Depression Scale or the General Anxiety Disorder-7 scale.24 A positive screen should lead to additional assessments that evaluate the severity of depression and other comorbid psychological problems and medical conditions.
Social isolation and loneliness are factors that can affect both depression and anxiety. Older patients with cancer are at risk for decreased social activities and are already challenged with issues related to home care, comorbidities, functional status, and caregiver support.23 Therefore, it is important to assess the social interactions of an older and/or frail patient with cancer and use social work assistance to address needs for supportive services.
Nutrition
Nutrition is important in any patient with cancer undergoing chemotherapy treatment. However, it is of greater importance in older adults, as malnutrition and weight loss are negative prognostic factors that correlate with poor tolerance to chemotherapy treatment, decline in quality of life, and increased mortality.25 The Mini-Nutritional Assessment is a widely used validated tool to assess nutritional status and risk of malnutrition.11 This tool can help identify those older and/or frail patients with cancer with impaired nutritional status and aid in instituting corrective measures to treat or prevent malnutrition.
Effectiveness of CGA
Multiple randomized controlled clinical trials assessing the effectiveness of CGA have been conducted over the past 3 decades, with overall positive outcomes related to its value.26 Benefits of CGA can include overall improved medical care, avoidance of hospitalization or nursing home placement, identification of cognitive impairment, and prevention of geriatric syndromes (conditions reflecting impairment across multiple organ systems in older adults).27
In oncology, CGA is particularly beneficial, as it can identify issues in nearly 70% of patients that may not be apparent through traditional oncology assessment.28 A systematic review of 36 studies assessing the prognostic value of CGA in elderly patients with cancer receiving chemotherapy concluded that impaired performance and functional status as well as a frail and vulnerable profile are important predictors of severe chemotherapy-related toxicity and are associated with a higher risk of mortality.29 Therefore, CGA should be an integral part of the evaluation of older and/or frail patients with cancer prior to chemotherapy consideration.
Several screening tools have been developed using information from CGA to assess the risk of severe toxicities. The most commonly used tools for predicting toxicity are the Cancer and Aging Research Group (CARG) chemotoxicity calculator and the Chemotherapy Risk Assessment Scale for High-Age Patients (CRASH).30,31 Although these tools are readily available, have proven benefits, and are recommended by national guidelines, their implementation in routine oncology practice has been challenging and slow. Unless these recommended interventions are effectively implemented, the benefits of CGA cannot be realized; with the expected surge in the number of older patients with cancer, this will hopefully change.
Geriatric Assessment Screening Tools
A screening tool recommended for use in older and/or frail patients with cancer allows for a brief assessment that helps clinicians identify patients in need of further evaluation by CGA and provides information on treatment-related toxicities, functional decline, and survival.32 Geriatric assessment screening tools have repeatedly been shown to identify older and/or frail adults at risk for treatment-related toxicities.12 The CARG and the CRASH are validated screening tools for identifying patients at higher risk for chemotherapy toxicity. These screening tools are intended to guide the clinical oncology practitioner in risk stratification of chemotherapy toxicity in older patients with cancer.33
Both of these screening tools provide similar predictive performance for chemotherapy toxicity in older patients with cancer.34 However, the CARG tool seems to have the advantage of relying largely on data already obtained during regular office visits, and it is clear and easy to use clinically. The CRASH tool is slightly more involved, as it uses multiple geriatric instruments to determine the predictive risk of both hematologic and nonhematologic toxicities of chemotherapy.
CARG Chemotoxicity Calculator
Hurria and colleagues originally developed the CARG tool from data obtained through a prospective multicenter study involving 500 patients with cancer aged ≥ 65 years.35 They concluded that chemotherapy-related toxicity is common in older adults: 53% of patients sustained grade 3 or 4 treatment-related toxicities, and 2% experienced treatment-related mortality.12 This predictive model for chemotherapy-related toxicity used 11 variables, both objective (obtained during a regular clinical encounter: age, tumor type, chemotherapy dosing, number of drugs, creatinine, and hemoglobin) and subjective (completed by the patient: number of falls, social support, ability to take medications, hearing impairment, and physical performance), to identify at-risk patients (Table 1).31
Compared with standard performance status measures in oncology practice, the CARG model was better able to predict chemotherapy-related toxicities. In 2016, Hurria and colleagues published the results of an updated external validation study with a cohort of 250 older patients with cancer receiving chemotherapy that confirmed the prediction of chemotherapy toxicity using the CARG screening tool in this population.31 An appealing feature of this tool is the free online accessibility and the expedited manner in which screening can be conducted.
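The general shape of such a calculator is straightforward to illustrate: each of the 11 variables contributes points, the points are summed, and the total maps to a risk band. In the sketch below, the item names, point values, and band cutoffs are hypothetical placeholders; the validated items and weights are those in Table 1 and the online CARG tool.

```python
# Hypothetical item names and point values; the validated scoring is in
# Table 1 and the online CARG calculator.
ITEM_POINTS = {
    "age_72_or_older": 2,
    "gi_or_gu_cancer": 2,
    "standard_dose_chemotherapy": 2,
    "polychemotherapy": 2,
    "low_hemoglobin": 3,
    "low_creatinine_clearance": 3,
    "hearing_impairment": 2,
    "fall_in_past_6_months": 3,
    "needs_help_taking_medications": 1,
    "limited_walking_one_block": 2,
    "decreased_social_activity": 1,
}


def carg_style_risk(responses):
    """Sum item points over the 11 variables and map the total to a band."""
    total = sum(pts for item, pts in ITEM_POINTS.items() if responses.get(item))
    if total <= 5:        # placeholder band edges
        return total, "low"
    if total <= 9:
        return total, "medium"
    return total, "high"


print(carg_style_risk({"age_72_or_older": True, "fall_in_past_6_months": True}))
```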
CRASH Score
The CRASH score was derived from the results of a prospective, multicenter study of 518 patients aged ≥ 70 years who were assessed on 24 parameters prior to starting chemotherapy.30 A total of 64% of patients experienced significant toxicities, including 32% with grade 4 hematologic toxicity and 56% with grade 3 or 4 nonhematologic toxicity. The CRASH score comprises 2 risk categories: hematologic and nonhematologic toxicity. Both baseline patient variables and the chemotherapy regimen are incorporated into an 8-item assessment profile that determines the risk categories (Table 2).30
Increased risk of hematologic toxicities was associated with increased diastolic blood pressure, increased lactate dehydrogenase, need for assistance with IADLs, and increased toxicity potential of the chemotherapy regimen. Nonhematologic toxicities were associated with ECOG performance score, Mini-Mental State Examination and Mini-Nutritional Assessment scores, and increased toxicity of the chemotherapy regimen.12 Patient scores are stratified into 4 risk categories: low, medium-low, medium-high, and high.30 Like the CARG tool, the CRASH screening tool is available as a free online resource and can be used in everyday clinical practice to assess older and/or frail adults with cancer.
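Schematically, the CRASH approach can be illustrated as two subscores combined and mapped to the four bands above. In this sketch, the point inputs and band edges are placeholders; the validated variables and cutoffs are those in Table 2.

```python
def crash_style_category(hematologic_points, nonhematologic_points):
    """Combine the two subscores and map to one of four risk bands.

    Placeholder band edges; the validated cutoffs are in Table 2.
    """
    combined = hematologic_points + nonhematologic_points
    for upper, label in [(3, "low"), (6, "medium-low"), (9, "medium-high")]:
        if combined <= upper:
            return label
    return "high"


print(crash_style_category(2, 5))  # -> "medium-high"
```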
Conclusions
In older adults, cancer may significantly impact the natural course of concurrent comorbidities due to physiologic and functional changes. These vulnerabilities predispose older patients with cancer to an increased risk of adverse outcomes, including treatment-related toxicities.36 Given the rapidly aging population, it is critical for oncology clinical teams to be prepared to assess for, prevent, and manage issues for older adults that could impact outcomes, including complications and toxicities from chemotherapy.35 Studies have reported that 78% to 93% of older patients with cancer have at least 1 geriatric impairment that could potentially impact oncology treatment plans.37,38 This supports the utility of CGA as a global assessment tool to risk stratify older and/or frail patients prior to deciding on subsequent oncologic treatment approaches.5 In fact, major cooperative groups sponsored by the National Cancer Institute, such as the Alliance for Clinical Trials in Oncology, are including CGA in some of their treatment trials. CGA was conducted as part of a multicenter cooperative group study in older patients with acute myeloid leukemia prior to inpatient intensive induction chemotherapy and was determined to be feasible and useful in clinical trials and practice.39
Despite the increasing evidence for the benefits of CGA, it has not been a consistent part of oncology practice, and few HCPs are familiar with the benefits of CGA screening tools. Although oncology providers routinely participate in every aspect of cancer care and play a vital role in the coordination and management of older patients with cancer, CGA implementation in routine clinical practice has been slow, in part due to a lack of knowledge and training regarding the use of geriatric assessment tools.
Oncology providers can easily incorporate CGA screening tools into the history and physical examination process for older patients with cancer, which will add an important dimension to these patient evaluations. Oncology providers are not only well positioned to administer these screening tools, but also can lead the field in developing innovative ways for effective implementation in busy routine oncology clinics. However, to be successful, oncology providers must be knowledgeable about these tools and understand their utility in guiding treatment decisions and improving quality of care in older patients with cancer.
1. Sharpless NE. The challenging landscape of cancer and aging: charting a way forward. Published January 24, 2018. Accessed April 16, 2021. https://www.cancer.gov/news-events/cancer-currents-blog/2018/sharpless-aging-cancer-research
2. National Cancer Institute. Age and cancer risk. Updated March 5, 2021. Accessed April 16, 2021. https://www.cancer.gov/about-cancer/causes-prevention/risk/age
3. Siegel RL, Miller KD, Jemal A. Cancer statistics, 2019. CA Cancer J Clin. 2019;69(1):7-34. doi:10.3322/caac.21551
4. Sawhney R, Sehl M, Naeim A. Physiologic aspects of aging: impact on cancer management and decision making, part I. Cancer J. 2005;11(6):449-460. doi:10.1097/00130404-200511000-00004
5. Kenis C, Bron D, Libert Y, et al. Relevance of a systematic geriatric screening and assessment in older patients with cancer: results of a prospective multicentric study. Ann Oncol. 2013;24(5):1306-1312. doi:10.1093/annonc/mds619
6. Loh KP, Soto-Perez-de-Celis E, Hsu T, et al. What every oncologist should know about geriatric assessment for older patients with cancer: Young International Society of Geriatric Oncology position paper. J Oncol Pract. 2018;14(2):85-94. doi:10.1200/JOP.2017.026435
7. Cohen HJ. Evolution of geriatric assessment in oncology. J Oncol Pract. 2018;14(2):95-96. doi:10.1200/JOP.18.00017
8. Wildiers H, Heeren P, Puts M, et al. International Society of Geriatric Oncology consensus on geriatric assessment in older patients with cancer. J Clin Oncol. 2014;32(24):2595-2603. doi:10.1200/JCO.2013.54.8347
9. American Cancer Society. Cancer facts & figures 2019. Accessed April 16, 2021. https://www.cancer.org/research/cancer-facts-statistics/all-cancer-facts-figures/cancer-facts-figures-2019.html
10. Williams GR, Mackenzie A, Magnuson A, et al. Comorbidity in older adults with cancer. J Geriatr Oncol. 2016;7(4):249-257. doi:10.1016/j.jgo.2015.12.002
11. Korc-Grodzicki B, Holmes HM, Shahrokni A. Geriatric assessment for oncologists. Cancer Biol Med. 2015;12(4):261-274. doi:10.7497/j.issn.2095-3941.2015.0082
12. Li D, Soto-Perez-de-Celis E, Hurria A. Geriatric assessment and tools for predicting treatment toxicity in older adults with cancer. Cancer J. 2017;23(4):206-210. doi:10.1097/PPO.0000000000000269
13. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40(5):373-383. doi:10.1016/0021-9681(87)90171-8
14. Huang Y, Gou R, Diao Y, et al. Charlson comorbidity index helps predict the risk of mortality for patients with type 2 diabetic nephropathy. J Zhejiang Univ Sci B. 2014;15(1):58-66. doi:10.1631/jzus.B1300109
15. Osborn KP IV, Nothelle S, Slaven JE, Montz K, Hui S, Torke AM. Cumulative Illness Rating Scale (CIRS) can be used to predict hospital outcomes in older adults. J Geriatric Med Gerontol. 2017;3(2). doi:10.23937/2469-5858/1510030
16. Maher RL, Hanlon J, Hajjar ER. Clinical consequences of polypharmacy in elderly. Expert Opin Drug Saf. 2014;13(1):57-65. doi:10.1517/14740338.2013.827660
17. Shrestha S, Shrestha S, Khanal S. Polypharmacy in elderly cancer patients: challenges and the way clinical pharmacists can contribute in resource-limited settings. Aging Med. 2019;2(1):42-49. doi:10.1002/agm2.12051
18. Sharma M, Loh KP, Nightingale G, Mohile SG, Holmes HM. Polypharmacy and potentially inappropriate medication use in geriatric oncology. J Geriatr Oncol. 2016;7(5):346-353. doi:10.1016/j.jgo.2016.07.010
19. Norburn JE, Bernard SL, Konrad TR, et al. Self-care and assistance from others in coping with functional status limitations among a national sample of older adults. J Gerontol B Psychol Sci Soc Sci. 1995;50(2):S101-S109. doi:10.1093/geronb/50b.2.s101
20. Fragala MS, Alley DE, Shardell MD, et al. Comparison of handgrip and leg extension strength in predicting slow gait speed in older adults. J Am Geriatr Soc. 2016;64(1):144-150. doi:10.1111/jgs.13871
21. Owusu C, Berger NA. Comprehensive geriatric assessment in the older cancer patient: coming of age in clinical cancer care. Clin Pract (Lond). 2014;11(6):749-762. doi:10.2217/cpr.14.72
22. Weiss Wiesel TR, Nelson CJ, Tew WP, et al. The relationship between age, anxiety, and depression in older adults with cancer. Psychooncology. 2015;24(6):712-717. doi:10.1002/pon.3638
23. Soto-Perez-de-Celis E, Li D, Yuan Y, Lau YM, Hurria A. Functional versus chronological age: geriatric assessments to guide decision making in older patients with cancer. Lancet Oncol. 2018;19(6):e305-e316. doi:10.1016/S1470-2045(18)30348-6
24. Andersen BL, DeRubeis RJ, Berman BS, et al. Screening, assessment, and care of anxiety and depressive symptoms in adults with cancer: an American Society of Clinical Oncology guideline adaptation. J Clin Oncol. 2014;32(15):1605-1619. doi:10.1200/JCO.2013.52.4611
25. Muscaritoli M, Lucia S, Farcomeni A, et al. Prevalence of malnutrition in patients at first medical oncology visit: the PreMiO study. Oncotarget. 2017;8(45):79884-79886. doi:10.18632/oncotarget.20168
26. Ekdahl AW, Axmon A, Sandberg M, Steen Carlsson K. Is care based on comprehensive geriatric assessment with mobile teams better than usual care? A study protocol of a randomised controlled trial (the GerMoT study). BMJ Open. 2018;8(10):e023969. doi:10.1136/bmjopen-2018-023969
27. Mohile SG, Dale W, Somerfield MR, et al. Practical assessment and management of vulnerabilities in older patients receiving chemotherapy: ASCO guideline for geriatric oncology. J Clin Oncol. 2018;36(22):2326-2347. doi:10.1200/JCO.2018.78.8687
28. Hernandez Torres C, Hsu T. Comprehensive geriatric assessment in the older adult with cancer: a review. Eur Urol Focus. 2017;3(4-5):330-339. doi:10.1016/j.euf.2017.10.010
29. Janssens K, Specenier P. The prognostic value of the comprehensive geriatric assessment (CGA) in elderly cancer patients (ECP) treated with chemotherapy (CT): a systematic review. Eur J Cancer. 2017;72(1):S164-S165. doi:10.1016/S0959-8049(17)30611-1
30. Extermann M, Boler I, Reich RR, et al. Predicting the risk of chemotherapy toxicity in older patients: The Chemotherapy Risk Assessment Scale for High‐Age Patients (CRASH) score. Cancer. 2012;118(13):3377-3386. doi:10.1002/cncr.26646
31. Hurria A, Mohile S, Gajra A, et al. Validation of a prediction tool for chemotherapy toxicity in older adults with cancer. J Clin Oncol. 2016;34(20):2366-2371. doi:10.1200/JCO.2015.65.4327
32. Decoster L, Van Puyvelde K, Mohile S, et al. Screening tools for multidimensional health problems warranting a geriatric assessment in older cancer patients: an update on SIOG recommendations. Ann Oncol. 2015;26(2):288-300. doi:10.1093/annonc/mdu210
33. Schiefen JK, Madsen LT, Dains JE. Instruments that predict oncology treatment risk in the senior population. J Adv Pract Oncol. 2017;8(5):528-533.
34. Ortland I, Mendel Ott M, Kowar M, et al. Comparing the performance of the CARG and the CRASH score for predicting toxicity in older patients with cancer. J Geriatr Oncol. 2020;11(6):997-1005. doi:10.1016/j.jgo.2019.12.016
35. Hurria A, Togawa K, Mohile SG, et al. Predicting chemotherapy toxicity in older adults with cancer: a prospective multicenter study. J Clin Oncol. 2011;29(25):3457-3465. doi:10.1200/JCO.2011.34.7625
36. Mohile SG, Velarde C, Hurria A, et al. Geriatric assessment-guided care processes for older adults: a Delphi consensus of geriatric oncology experts. J Natl Compr Canc Netw. 2015;13(9):1120-1130. doi:10.6004/jnccn.2015.0137
37. Schiphorst AHW, Ten Bokkel Huinink D, Breumelhof R, Burgmans JPJ, Pronk A, Hamaker ME. Geriatric consultation can aid in complex treatment decisions for elderly cancer patients. Eur J Cancer Care (Engl). 2016;25(3):365-370. doi:10.1111/ecc.12349
38. Schulkes KJG, Souwer ETD, Hamaker ME, et al. The effect of a geriatric assessment on treatment decisions for patients with lung cancer. Lung. 2017;195(2):225-231. doi:10.1007/s00408-017-9983-7
39. Klepin HD, Ritchie E, Major-Elechi B, et al. Geriatric assessment among older adults receiving intensive therapy for acute myeloid leukemia: report of CALGB 361006 (Alliance). J Geriatr Oncol. 2020;11(1):107-113. doi:10.1016/j.jgo.2019.10.002