‘First reliable estimate’ of breast cancer metastasis
The data come from a massive meta-analysis of more than 400 studies conducted around the world, involving tens of thousands of women.
It found that the overall risk of metastasis is between 6% and 22%, with younger women having a higher risk.
While women aged 50 years or older at diagnosis had a risk of developing metastasis ranging from 3.7% to 28.6%, women diagnosed with breast cancer before age 35 had a higher risk of 12.7% to 38%. The investigators speculate that this may be because younger women have a more aggressive form of breast cancer or because they are diagnosed at a later stage.
The risk of metastasis also varies by tumor type, with luminal B cancers having a 4.2% to 35.5% risk of metastasis versus a 2.3% to 11.8% risk with luminal A tumors.
“The quantification of recurrence and disease progression is important to assess the effectiveness of treatment, evaluate prognosis, and allocate resources,” commented lead investigator Eileen Morgan, PhD, of the International Agency for Research on Cancer.
Dr. Morgan and colleagues presented the new meta-analysis at the virtual Advanced Breast Cancer Sixth International Consensus Conference.
She added that this information has not been available until now “because cancer registries have not been routinely collecting this data.”
In fact, the U.S. National Cancer Institute began a project earlier this year to track this information, the first time it has done so in 48 years.
Reacting to the findings, Shani Paluch-Shimon, MBBS, director of the Breast Unit at Hadassah University Hospital, Jerusalem, commented that this work “provides the first reliable estimate of how many breast cancer patients go on to develop advanced disease in contemporary cohorts.”
“This information is, of course, important for patients who want to understand their prognosis,” she continued.
“But it’s also vital at a public health level for those of us working to treat and prevent advanced breast cancer, to help us understand the scale of the disease around the world,” she said. “It will help us identify at-risk groups across different populations and demonstrate how disease course is changing with contemporary treatments.”
“It will also help us understand what resources are needed and where, to ensure we can collect and analyze quality data in real-time as this is key for resource allocation and planning future studies.”
The work was funded by a grant from the Susan G. Komen Foundation.
A version of this article first appeared on Medscape.com.
Improving Unadjusted and Adjusted Mortality With an Early Warning Sepsis System in the Emergency Department and Inpatient Wards
In 1997, Elizabeth McGlynn wrote, “Measuring quality is no simple task.”1 We are reminded of this seminal Health Affairs article at a very pertinent point: as health care practice progresses, measuring the impact of performance improvement initiatives on clinical care delivery remains integral to monitoring the overall effectiveness of care. Mortality outcomes are a major focus of quality measurement.
Inpatient mortality within the Veterans Health Administration (VHA) was measured as the actual number of deaths (unadjusted mortality), and adjusted mortality was calculated using the standardized mortality ratio (SMR). The SMR was the actual number of deaths during hospitalization or within 1 day of hospital discharge divided by the predicted number of deaths from a risk-adjustment formula and was calculated separately for acute level of care (LOC) and the intensive care unit (ICU). With the risk-adjusted SMR, an observed/expected ratio > 1.0 indicates more inpatient deaths than expected; < 1.0 indicates fewer inpatient deaths than predicted; and 1.0 indicates that the observed number of inpatient deaths equaled the expected number.2
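Expressed as a formula, with O the observed and E the risk-adjusted expected number of deaths, SMR = O / E. As a worked reading of this ratio, the acute LOC figures reported in the Results for FY 2017, quarter 3 (48 observed deaths, SMR 1.20) imply roughly 48 / 1.20 ≈ 40 expected deaths, ie, about 20% more deaths than predicted; the expected count here is back-calculated from those reported figures rather than taken directly from Table 1.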
Mortality reduction is a complex area of performance improvement. Health care facilities often focus their efforts on the biggest mortality contributors. According to Dantes and Epstein, sepsis results in about 265,000 deaths annually in the United States.3 Reinhart and colleagues demonstrated that sepsis is a worldwide issue resulting in approximately 30 million cases and 6 million deaths annually.4 Furthermore, Kumar and colleagues have noted that when sepsis progresses to septic shock, survival decreases by almost 8% for each hour delay in sepsis identification and treatment.5
Improvements in sepsis management have been multifaceted. The Surviving Sepsis Campaign guidelines created sepsis treatment bundles to guide early diagnosis and treatment of sepsis.6 In addition to awareness efforts and sepsis care bundles, many informatics solutions within electronic health record (EHR) systems have demonstrated improved sepsis care.7-16 Various approaches to early diagnosis and management of sepsis have been referred to collectively as an early warning sepsis system (EWSS).
An EWSS typically contains automated decision support tools that are integrated in the EHR and meant to assist health care professionals with clinical workflow decision-making. Automated decision support tools within the EHR have a variety of functions, such as clinical care reminders and alerts.17
Sepsis screening tools function as a form of automated decision support and may be incorporated into the EHR to support the EWSS. Although sepsis screening tools vary, they frequently include a combination of data involving vital signs, laboratory values, and/or physical examination findings, such as mental status evaluation. The Modified Early Warning Signs (MEWS) + Sepsis Recognition Score (SRS) is one example of a sepsis screening tool.7,16
At Malcom Randall Veterans Affairs Medical Center (MRVAMC) in Gainesville, Florida, we identified a quality improvement opportunity to improve sepsis care in the emergency department (ED) and inpatient wards using the VHA EHR system, the Computerized Patient Record System (CPRS), which is supported by the Veterans Information Systems and Technology Architecture (VistA).18 A VistA/CPRS EWSS was developed using Lean Six Sigma DMAIC (define, measure, analyze, improve, and control) methodology.19 During the improve stage, informatics solutions were applied and included a combination of EHR interventions, such as template design, an order set, and clinical reminders. Clinical reminders have a wide variety of uses, such as reminders for clinical tasks and automated decision support within clinical workflows using Boolean logic.
To the best of our knowledge, there has been no published application of an EWSS within VistA/CPRS. In this study, we outline the strategic development of an EWSS in VistA/CPRS that assisted clinical staff with identification and treatment of sepsis; improved documentation of sepsis when present; and was associated with improvement in unadjusted and adjusted inpatient mortality.
Methods
According to the MRVAMC policy defining activities that constitute research, no institutional review board approval was required because this work met criteria for operational improvement activities exempt from ethics review.
The North Florida/South Georgia Veterans Health System (NF/SGVHS) includes MRVAMC, a large academic hospital with rotating residents/fellows and multiple specialty care services. MRVAMC comprised 144 beds on the medicine/surgery wards; 48 beds in the psychiatry unit; 18 intermediate LOC beds; and 27 ICU beds. The MRVAMC SMR was identified as an improvement opportunity during fiscal year (FY) 2017 (Table 1). Its adjusted mortality for acute LOC demonstrated an observed/expected ratio > 1.0, indicating that more inpatient deaths were observed than expected. The number of deaths (unadjusted mortality) on acute LOC at MRVAMC also rose during the first 3 quarters of FY 2017. A deeper examination of the data using Pyramid Analytics (www.pyramidanalytics.com) identified sepsis as the primary driver of inpatient mortality on acute LOC at MRVAMC. Our goal was to reduce inpatient sepsis-related mortality by developing an EWSS that leveraged VistA/CPRS to improve early identification and treatment of sepsis in the ED and inpatient wards.
Emergency Department
Given the importance of recognizing sepsis early, the sepsis team focused on improvement opportunities at the initial point of patient contact: ED triage. The goal was to incorporate automated VistA/CPRS decision support to assist clinicians with identifying sepsis in triage using MEWS, which was chosen to optimize immediate hospital-wide buy-in. Clinical staff were already familiar with MEWS, which was in use on the inpatient wards.
Flow through the ED and availability of resources differed from the wards, so the ward MEWS had to be modified to fit clinical workflow in the ED. Temperature, heart rate (HR), respiratory rate (RR), systolic blood pressure (SBP), mental status, and white blood cell count (WBC) factored into the MEWS + SRS score on the wards (Table 2). For the ED, MEWS included temperature, HR, RR, and SBP but excluded mental status and WBC. Mental status assessment was excluded because it was technically infeasible: while vital signs could be scored automatically in real time for a MEWS score, mental status changes could not. WBC was excluded from the ED score because laboratory test results would not be available in triage.
MEWS + SRS scores were calculated in VistA by using clinical reminders. Clinical reminder logic included a series of conditional statements based on various combinations of MEWS + SRS clinical data entered in the EHR. When ED triage vital signs data were entered in CPRS, clinical data were stored and processed according to clinical reminder logic in VistA and displayed to the user in CPRS. While MEWS of ≥ 5 triggered a sepsis alert on the wards, the ≥ 4 threshold was used in the ED given mental status and WBC were excluded from calculations in triage (eAppendix 1 available at doi:10.12788/fp.0194).
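Purely as a hedged illustration of the kind of conditional scoring and setting-specific thresholds described above, the sketch below is written in Python (which is not how VistA clinical reminders are authored); the cut points, point values, and function names are assumptions for illustration, not the actual MEWS + SRS definitions in Table 2.

# Illustrative sketch only: component cut points and point values are assumptions,
# not the actual MEWS + SRS definitions used in the VistA clinical reminders.

def mews_points(temp_c, hr, rr, sbp):
    """Score the four vital-sign components available at ED triage."""
    points = 0
    if temp_c < 35.0 or temp_c > 38.5:   # assumed temperature cut points
        points += 2
    if hr > 110 or hr < 40:              # assumed heart-rate cut points
        points += 2
    elif hr > 100:
        points += 1
    if rr > 24:                          # assumed respiratory-rate cut points
        points += 2
    elif rr > 20:
        points += 1
    if sbp < 90:                         # assumed systolic blood pressure cut points
        points += 3
    elif sbp < 100:
        points += 2
    return points

def sepsis_alert(setting, temp_c, hr, rr, sbp, altered_mental_status=False, wbc=None):
    """Apply the setting-specific thresholds described in the text:
    score >= 4 in ED triage (vital signs only), >= 5 on the wards (MEWS + SRS)."""
    score = mews_points(temp_c, hr, rr, sbp)
    if setting == "ward":
        if altered_mental_status:                    # SRS-style additions (assumed weights)
            score += 2
        if wbc is not None and (wbc > 12 or wbc < 4):
            score += 1
        return score >= 5
    return score >= 4

# Example: a hypotensive, tachycardic, tachypneic patient in ED triage triggers an alert.
print(sepsis_alert("ed", temp_c=38.9, hr=118, rr=26, sbp=88))  # True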
Once a sepsis alert was triggered in triage for MEWS ≥ 4, ED nursing staff prioritized bed location and expedited staffing with an ED attending physician for early assessment. The ED attending then performed an assessment to confirm whether sepsis was present and to direct early treatment. Although not every patient who triggered a sepsis alert in triage met clinical criteria for sepsis, patients with MEWS ≥ 4 were frequently ill and required timely intervention.
If an ED attending physician agreed with a sepsis diagnosis, the physician had access to a sepsis workup and treatment order set in CPRS (eAppendix 2 available at doi:10.12788/fp.0194). The sepsis order set incorporated recommendations from the Surviving Sepsis Campaign guidelines and included orders for 2 large-bore peripheral IV lines; aggressive fluid resuscitation (30 mL/kg) for patients with clinical findings of hypoperfusion; broad-spectrum antibiotics; and laboratory tests and imaging frequently ordered during the initial sepsis workup.6 Vancomycin and cefepime were selected, based on local antimicrobial stewardship and safety-efficacy profiles, as the routine broad-spectrum antibiotics in the order set when sepsis was suspected. For example, Luther and colleagues demonstrated that cefepime combined with vancomycin has lower rates of acute kidney injury than vancomycin plus piperacillin-tazobactam.20 If a β-lactam antibiotic could not be used because of a patient’s drug allergy history, aztreonam was available as an alternative.
The design of the order set also functioned as a communication interface with clinical pharmacists. Given the large volume of antibiotics ordered in the ED, it was difficult for pharmacists to prioritize antibiotic order verification. While stat orders convey high priority, they often lack specificity. When antibiotic orders were selected from the sepsis order set, comments were already included that stated: “STAT. First dose for sepsis protocol” (eAppendix 3 available at doi:10.12788/fp.0194). This standardized communication conveyed a sense of urgency and a collective understanding that patients with suspected sepsis required timely order verification and administration of antibiotics.
Hospital Ward
On the wards, mental status and WBC were included in MEWS + SRS to monitor for possible signs of sepsis, and nursing staff routinely recalculated the score every 4 to 8 hours. When MEWS + SRS was ≥ 5 points, ward nursing staff called a sepsis alert.7,16 Early response team (ERT) members received telephone notifications of the alert and proceeded with immediate evaluation and treatment at the bedside, along with determination of the most appropriate LOC. The ERT comprised an ICU physician and nurse; a respiratory therapist; and a nursing supervisor/bed flow coordinator. During bedside evaluation, if the ERT or primary team agreed with a sepsis diagnosis, they used the sepsis order set to ensure standardized procedures. Stat orders generated through the sepsis order set pathway conveyed a sense of urgency and the need for immediate order verification and administration of antibiotics.
In addition to clinical care process improvement, accurate documentation also was emphasized in the EWSS. When a sepsis alert was called, a clinician from the primary team was expected to complete a standardized progress note, which communicated clinical findings, a treatment plan, and captured severity of illness (eAppendix 4 available at doi:10.12788/fp.0194). It included sections for subjective, objective, assessment, and plan. In addition, data objects were created for vital signs and common laboratory findings that retrieved important clinical data from VistA and inserted it into the CPRS note.21
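As a hedged conceptual sketch only (VistA/CPRS data objects are configured within the EHR rather than written as application code, so the function, template, and field names below are hypothetical), the note-population step behaves like a simple template substitution:

# Hypothetical sketch: VistA/CPRS data objects are configured within the EHR,
# so the function, template, and field names below are illustrative assumptions.

def render_sepsis_note(template: str, patient_data: dict) -> str:
    """Replace placeholder tokens in a progress-note template with the most
    recent vitals and laboratory values retrieved for the patient."""
    note = template
    for token, value in patient_data.items():
        note = note.replace("{" + token + "}", str(value))
    return note

sepsis_note_template = (
    "SEPSIS ALERT NOTE\n"
    "Subjective: {subjective}\n"
    "Objective: T {temp_c} C, HR {hr}, RR {rr}, SBP {sbp}, WBC {wbc}\n"
    "Assessment/Plan: {assessment_plan}\n"
)

print(render_sepsis_note(sepsis_note_template, {
    "subjective": "Fever and confusion per nursing report",
    "temp_c": 38.9, "hr": 118, "rr": 26, "sbp": 88, "wbc": 15.2,
    "assessment_plan": "Sepsis suspected; sepsis order set initiated",
}))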
Nursing staff on the wards were expected to communicate results with the primary team for clinical decision making when a patient had a MEWS + SRS of 3 to 4. A sepsis alert may have been called at the discretion of clinical team members but was not required if the score was < 5. Additionally, vital signs were expected to be checked by the nursing staff on the wards at least every 4 hours for closer monitoring.
Sepsis Review Meetings
Weekly meetings were scheduled to review sepsis cases to assess diagnosis, treatment, and documentation entered in the patient record. The team conducting sepsis reviews comprised the chief of staff, chief of quality management, director of patient safety, physician utilization management advisor, chief resident in quality and patient safety (CRQS), and inpatient pharmacy supervisor. In addition, ad hoc physicians and nurses from different specialty areas, such as infectious diseases, hospitalist section, ICU, and the ED participated on request for subject matter expertise when needed. At the conclusion of weekly sepsis meetings, sepsis team members provided feedback to the clinical staff for continuous improvement purposes.
Results
Before implementation of an EWSS at NF/SGVHS, a plan was devised in late FY 2017 to increase awareness and educate staff about sepsis-related mortality. Awareness and education sessions were organized at physician, nursing, and pharmacy leadership clinical staff meetings. Posters about early warning signs of sepsis also were displayed on the nursing units for educational purposes and to convey the importance of early recognition/treatment of sepsis. In addition, the CRQS was the quality leader for house staff and led sepsis campaign change efforts for residents/fellows. An immediate improvement in unadjusted mortality at MRVAMC was noted with this initial sepsis awareness and education. From FY 2017, quarter 3 to FY 2018, quarter 1, the number of acute LOC inpatient deaths decreased from 48 to 28, a 42% reduction in unadjusted mortality at MRVAMC (Figure 1). Additionally, the acute LOC SMR improved from 1.20 during FY 2017, quarter 3 to as low as 0.71 during FY 2018, quarter 1 (Figure 2).
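As a quick check of that figure, (48 − 28) / 48 ≈ 0.417, or roughly a 42% reduction in unadjusted acute LOC deaths over that period.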
The number of MRVAMC inpatient deaths increased from 28 in FY 2018, quarter 1 to 45 in FY 2018, quarter 3. Although acute LOC showed improvement in unadjusted mortality after sepsis education/awareness, it was felt that continuous improvement could not be sustained with education alone. An EWSS was therefore designed and implemented within the EHR system in FY 2018. Following implementation of the EWSS and reeducation of staff on early recognition and treatment of sepsis, acute LOC inpatient deaths decreased from 45 in FY 2018, quarter 3, reaching a low of 27 in FY 2019, quarter 4. The MRVAMC acute LOC SMR was consistently < 1.0 from FY 2018, quarter 4 through FY 2019, quarter 4.
In addition to the observed decrease in acute LOC inpatient deaths and improved SMR, the number of ERT alerts and sepsis alerts on the inpatient wards was monitored from FY 2017 through FY 2019. The ERT alerts listed in Table 3 were nonspecific alerts initiated by ward nursing staff when a patient’s clinical status was identified as worsening, while sepsis alerts were ERT alerts called by ward nursing staff specifically because of concern for sepsis. The inpatient wards included inpatient medicine, surgery, and psychiatry acute care and the intermediate level of care unit; outpatient clinical treatment areas, intensive care units, stroke alerts, and STEMI alerts were excluded.
From FY 2017 to FY 2018, quarter 1, the number of nonspecific ERT alerts varied between 75 and 100. Sepsis alerts were not available until December 2017, while the EWSS was in development. Afterward, nonspecific ERT alerts and sepsis alerts were monitored each quarter; sepsis alerts ranged from 4 to 18. The combined total of nonspecific ERT alerts and sepsis alerts continued to increase from FY 2018, quarter 3 through FY 2019, quarter 4.
Discussion
Implementation of the EWSS was associated with improved unadjusted mortality and adjusted mortality for acute LOC at MRVAMC. Although variation exists with application of EWSS at other medical centers, there was similarity with improved sepsis outcomes reported at other health care systems after EWSS implementation.7-16
Improved unadjusted and adjusted mortality for acute LOC at MRVAMC was likely due to multiple contributing factors. First, during design and implementation of the EWSS, project work was interdisciplinary, with input from physicians, nurses, and pharmacists from multiple specialties (ie, ED, ICU, and the medicine service); quality management and data analysis specialists; and clinical informatics. Second, facility commitment to improving early recognition and treatment of sepsis, from the leadership level down to front-line staff, was evident. Weekly sepsis meetings with the NF/SGVHS chief of staff helped sustain EWSS efforts and identify additional improvement opportunities. Third, integrated informatics solutions within the EHR helped identify sepsis early, minimized human error, and assisted with coordination of sepsis care across services. Fourth, the focus was on both early identification and treatment of sepsis in the ED and hospital wards. Although causation cannot be deduced between reduced inpatient mortality and the increased number of nonspecific ERT alerts plus sepsis alerts on the inpatient wards after EWSS implementation, inpatient deaths decreased and the SMR improved. Finally, the EWSS emphasized both evidence-based clinical care of sepsis and standardized documentation to appropriately capture clinical severity of illness.
Limitations
This program has limitations. The EWSS was studied at a single VHA facility, and veteran demographics and local epidemiology may limit the generalizability of these outcomes beyond an individual VHA facility in a specific geographic region. Additional research is necessary to demonstrate reproducibility and to determine whether the approach is applicable to other VHA facilities and community care settings.
The SMR is a risk-adjusted measure developed by the VHA Inpatient Evaluation Center that incorporates numerous factors, such as diagnosis, comorbid conditions, age, marital status, procedures, source of admission, specific laboratory values, medical or surgical diagnosis-related group, ICU stays, immunosuppressive status, and a COVID-19 positive indicator (added after this study). Further research is needed to evaluate sepsis-related outcomes using the EWSS during the COVID-19 pandemic.
EWSS described in the literature have taken various approaches to early identification and treatment of sepsis and have used different sepsis screening tools.22 Evidence suggests that the MEWS + SRS sepsis screening tool may result in false-positive screenings.23-27 Additional research into the specificity of this screening tool is needed. Ward nursing staff were encouraged to initiate sepsis alerts when the automated MEWS + SRS was ≥ 5; however, this step still depended on human factors. Because sepsis alert systems are software specific and existing solutions were incompatible with the VHA EHR, it was necessary to design our own EWSS.
Despite the improvement in MRVAMC acute LOC unadjusted and adjusted mortality with our EWSS, we did not identify any improvement in antibiotic administration times once sepsis was recognized. While accurate documentation of the degree of sepsis improved, an MRVAMC clinical documentation improvement program was also expanded in FY 2018; therefore, it is difficult to attribute the improved sepsis documentation to the template changes alone. While sepsis alerts on the inpatient wards were variable after EWSS implementation, nonspecific ERT alerts increased. It is unclear whether some sepsis alerts were called as nonspecific ERT alerts, making it impossible to know the true number of sepsis alerts.
MRVAMC experienced an increase in nurse turnover during FY 2018 and, as a teaching hospital, had frequently rotating residents and fellows new to its processes/protocols. These factors may have contributed to variations in unadjusted mortality. Also, the decrease in inpatient mortality and improvement in SMR on acute LOC could have resulted from factors other than the EWSS, and the effect of education alone may have been at least as good as that of the EWSS intervention.
Conclusions
Education, along with implementation of an EWSS at NF/SGVHS, was associated with a decrease in the number of inpatient deaths on MRVAMC’s acute LOC wards from a high of 48 in FY 2017, quarter 3 to a low of 27 in FY 2019, quarter 4, an improvement of as much as 44% in unadjusted mortality from FY 2017 to FY 2019. In addition, MRVAMC’s acute LOC SMR improved from > 1.0 to < 1.0, demonstrating fewer inpatient deaths than predicted from FY 2017 to FY 2019.
This multifaceted intervention strategy may be effectively applied at other VHA health care facilities that use the same EHR system. Next steps may include determining the specificity of MEWS + SRS as a sepsis screening tool; studying outcomes of MRVAMC’s EWSS during the COVID-19 era; and conducting a multicenter study of this EWSS across VHA facilities.
1. McGlynn EA. Six challenges in measuring the quality of health care. Health Aff (Millwood). 1997;16(3):7-21. doi:10.1377/hlthaff.16.3.7
2. US Department of Veterans Affairs, Veterans Health Administration. Strategic Analytics for Improvement and Learning (SAIL) value model measure definitions. Updated May 15, 2019. Accessed October 11, 2021. https://www.va.gov/QUALITYOFCARE/measure-up/SAIL_definitions.asp
3. Dantes RB, Epstein L. Combatting sepsis: a public health perspective. Clin Infect Dis. 2018;67(8):1300-1302. doi:10.1093/cid/ciy342
4. Reinhart K, Daniels R, Kissoon N, Machado FR, Schachter RD, Finfer S. Recognizing sepsis as a global health priority - a WHO resolution. N Engl J Med. 2017;377(5):414-417. doi:10.1056/NEJMp1707170
5. Kumar A, Roberts D, Wood KE, et al. Duration of hypotension before initiation of effective antimicrobial therapy is the critical determinant of survival in human septic shock. Crit Care Med. 2006;34(6):1589-1596. doi:10.1097/01.CCM.0000217961.75225.E9
6. Rhodes A, Evans LE, Alhazzani W, et al. Surviving sepsis campaign: international guidelines for management of sepsis and septic shock: 2016. Crit Care Med. 2017;45(3):486-552. doi:10.1097/CCM.0000000000002255
7. Guirgis FW, Jones L, Esma R, et al. Managing sepsis: electronic recognition, rapid response teams, and standardized care save lives. J Crit Care. 2017;40:296-302. doi:10.1016/j.jcrc.2017.04.005
8. Whippy A, Skeath M, Crawford B, et al. Kaiser Permanente’s performance improvement system, part 3: multisite improvements in care for patients with sepsis. Jt Comm J Qual Patient Saf. 2011;37(11):483-493. doi:10.1016/s1553-7250(11)37061-4
9. Harrison AM, Thongprayoon C, Kashyap R, et al. Developing the surveillance algorithm for detection of failure to recognize and treat severe sepsis. Mayo Clin Proc. 2015;90(2):166-175. doi:10.1016/j.mayocp.2014.11.014
10. Rothman M, Levy M, Dellinger RP, et al. Sepsis as 2 problems: identifying sepsis at admission and predicting onset in the hospital using an electronic medical record-based acuity score. J Crit Care. 2017;38:237-244. doi:10.1016/j.jcrc.2016.11.037
11. Back JS, Jin Y, Jin T, Lee SM. Development and validation of an automated sepsis risk assessment system. Res Nurs Health. 2016;39(5):317-327. doi:10.1002/nur.21734
12. Khurana HS, Groves RH Jr, Simons MP, et al. Real-time automated sampling of electronic medical records predicts hospital mortality. Am J Med. 2016;129(7):688-698.e2. doi:10.1016/j.amjmed.2016.02.037
13. Umscheid CA, Betesh J, VanZandbergen C, et al. Development, implementation, and impact of an automated early warning and response system for sepsis. J Hosp Med. 2015;10(1):26-31. doi:10.1002/jhm.2259
14. Vogel L. EMR alert cuts sepsis deaths. CMAJ. 2014;186(2):E80. doi:10.1503/cmaj.109-4686
15. Jones SL, Ashton CM, Kiehne L, et al. Reductions in sepsis mortality and costs after design and implementation of a nurse-based early recognition and response program. Jt Comm J Qual Patient Saf. 2015;41(11):483-491. doi:10.1016/s1553-7250(15)41063-3
16. Croft CA, Moore FA, Efron PA, et al. Computer versus paper system for recognition and management of sepsis in surgical intensive care. J Trauma Acute Care Surg. 2014;76(2):311-319. doi:10.1097/TA.0000000000000121
17. Tcheng JE, Bakken S, Bates DW, et al, eds. Optimizing Strategies for Clinical Decision Support: Summary of a Meeting Series. National Academy of Medicine; 2017. Accessed October 11, 2021. https://nam.edu/wp-content/uploads/2017/11/Optimizing-Strategies-for-Clinical-Decision-Support.pdf
18. US Department of Veterans Affairs. History of IT at VA. Updated January 1, 2020. Accessed October 11, 2021. https://www.oit.va.gov/about/history.cfm
19. GoLeanSixSigma. DMAIC: The 5 Phases of Lean Six Sigma. Published 2012. Accessed October 11, 2021. https://goleansixsigma.com/wp-content/uploads/2012/02/DMAIC-The-5-Phases-of-Lean-Six-Sigma-www.GoLeanSixSigma.com_.pdf
20. Luther MK, Timbrook TT, Caffrey AR, Dosa D, Lodise TP, LaPlante KL. Vancomycin plus piperacillin-tazobactam and acute kidney injury in adults: a systematic review and meta-analysis. Crit Care Med. 2018;46(1):12-20. doi:10.1097/CCM.0000000000002769
21. International Business Machines Corp. Overview of data objects. Accessed October 11, 2021. https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.3.0/com.ibm.zos.v2r3.cbclx01/data_objects.htm
22. Churpek MM, Snyder A, Han X, et al. Quick sepsis-related organ failure assessment, systemic inflammatory response syndrome, and early warning scores for detecting clinical deterioration in infected patients outside the intensive care unit. Am J Respir Crit Care Med. 2017;195(7):906-911. doi:10.1164/rccm.201604-0854OC
23. Ghanem-Zoubi NO, Vardi M, Laor A, Weber G, Bitterman H. Assessment of disease-severity scoring systems for patients with sepsis in general internal medicine departments. Crit Care. 2011;15(2):R95. doi:10.1186/cc10102
24. Hamilton F, Arnold D, Baird A, Albur M, Whiting P. Early Warning scores do not accurately predict mortality in sepsis: a meta-analysis and systematic review of the literature. J Infect. 2018;76(3):241-248. doi:10.1016/j.jinf.2018.01.002
25. Martino IF, Figgiaconi V, Seminari E, Muzzi A, Corbella M, Perlini S. The role of qSOFA compared to other prognostic scores in septic patients upon admission to the emergency department. Eur J Intern Med. 2018;53:e11-e13. doi:10.1016/j.ejim.2018.05.022
26. Nannan Panday RS, Minderhoud TC, Alam N, Nanayakkara PWB. Prognostic value of early warning scores in the emergency department (ED) and acute medical unit (AMU): A narrative review. Eur J Intern Med. 2017;45:20-31. doi:10.1016/j.ejim.2017.09.027
27. Jayasundera R, Neilly M, Smith TO, Myint PK. Are early warning scores useful predictors for mortality and morbidity in hospitalised acutely unwell older patients? A systematic review. J Clin Med. 2018;7(10):309. Published 2018 Sep 28. doi:10.3390/jcm7100309
In 1997, Elizabeth McGlynn wrote, “Measuring quality is no simple task.”1 We are reminded of this seminal Health Affairs article at a very pertinent point—as health care practice progresses, measuring the impact of performance improvement initiatives on clinical care delivery remains integral to monitoring overall effectiveness of quality. Mortality outcomes are a major focus of quality.
Inpatient mortality within the Veterans Health Administration (VHA) was measured as actual number of deaths (unadjusted mortality), and adjusted mortality was calculated using the standardized mortality ratio (SMR). SMR included actual number of deaths during hospitalization or within 1 day of hospital discharge divided by predicted number of deaths using a risk-adjusted formula and was calculated separately for acute level of care (LOC) and the intensive care unit (ICU). Using risk-adjusted SMR, if an observed/expected ratio was > 1.0, there were more inpatient deaths than expected; if < 1.0, fewer inpatient deaths occurred than predicted; and if 1.0, observed number of inpatient deaths was equivalent to expected number of deaths.2
Mortality reduction is a complex area of performance improvement. Health care facilities often focus their efforts on the biggest mortality contributors. According to Dantes and Epstein, sepsis results in about 265,000 deaths annually in the United States.3 Reinhart and colleagues demonstrated that sepsis is a worldwide issue resulting in approximately 30 million cases and 6 million deaths annually.4 Furthermore, Kumar and colleagues have noted that when sepsis progresses to septic shock, survival decreases by almost 8% for each hour delay in sepsis identification and treatment.5
Improvements in sepsis management have been multifaceted. The Surviving Sepsis Campaign guidelines created sepsis treatment bundles to guide early diagnosis/treatment of sepsis.6 In addition to awareness and sepsis care bundles, a plethora of informatics solutions within electronic health record (EHR) systems have demonstrated improved sepsis care.7-16 Various approaches to early diagnosis and management of sepsis have been collectively referred to as an early warning sepsis system (EWSS).
An EWSS typically contains automated decision support tools that are integrated in the EHR and meant to assist health care professionals with clinical workflow decision-making. Automated decision support tools within the EHR have a variety of functions, such as clinical care reminders and alerts.17
Sepsis screening tools function as a form of automated decision support and may be incorporated into the EHR to support the EWSS. Although sepsis screening tools vary, they frequently include a combination of data involving vital signs, laboratory values and/or physical examination findings, such as mental status evaluation.The Modified Early Warning Signs (MEWS) + Sepsis Recognition Score (SRS) is one example of a sepsis screening tool.7,16
At Malcom Randall Veterans Affairs Medical Center (MRVAMC) in Gainesville, Florida, we identified a quality improvement project opportunity to improve sepsis care in the emergency department (ED) and inpatient wards using the VHA EHR system, the Computerized Patient Record System (CPRS), which is supported by the Veterans Information Systems and Technology Architecture (VistA).18 A VistA/CPRS EWSS was developed using Lean Six Sigma DMAIC (define, measure, analyze, improve, and control) methodology.19 During the improve stage, informatics solutions were applied and included a combination of EHR interventions, such as template design, an order set, and clinical reminders. Clinical reminders have a wide variety of use, such as reminders for clinical tasks and as automated decision support within clinical workflows using Boolean logic.
To the best of our knowledge, there has been no published application of an EWSS within VistA/CPRS. In this study, we outline the strategic development of an EWSS in VistA/CPRS that assisted clinical staff with identification and treatment of sepsis; improved documentation of sepsis when present; and associated with improvement in unadjusted and adjusted inpatient mortality.
Methods
According to policy activities that constitute research at MRVAMC, no institutional review board approval was required as this work met criteria for operational improvement activities exempt from ethics review.
The North Florida/South Georgia Veterans Health System (NF/SGVHS) includes MRVAMC, a large academic hospital with rotating residents/fellows and multiple specialty care services. MRVAMC comprised 144 beds on the medicine/surgery wards; 48 beds in the psychiatry unit; 18 intermediate LOC beds; and 27 ICU beds. The MRVAMC SMR was identified as an improvement opportunity during fiscal year (FY) 2017 (Table 1). Its adjusted mortality for acute LOC demonstrated an observed/expected ratio of > 1.0 suggesting more inpatient deaths were observed than expected. The number of deaths (unadjusted mortality) on acute LOC at MRVAMC was noted to be rising during the first 3 quarters of FY 2017. A deeper examination of data by Pyramid Analytics (www.pyramidanalytics.com) discovered that sepsis was the primary driver for inpatient mortality on acute LOC at MRVAMC. Our goal was to reduce inpatient sepsis-related mortality via development of an EWSS that leveraged VistA/CPRS to improve early identification and treatment of sepsis in the ED and inpatient wards.
Emergency Department
Given the importance of recognizing sepsis early, the sepsis team focused on improvement opportunities at the initial point of patient contact: ED triage. The goal was to incorporate automated VistA/CPRS decision support to assist clinicians with identifying sepsis in triage using MEWS, which was chosen to optimize immediate hospital-wide buy-in. Clinical staff were already familiar with MEWS, which was in use on the inpatient wards.
Flow through the ED and availability of resources differed from the wards. Hence, modification to MEWS on the wards was necessary to fit clinical workflow in the ED. Temperature, heart rate (HR), respiratory rate (RR), systolic blood pressure (SBP), mental status, and white blood cell count (WBC) factored into a MEWS + SRS score on the wards (Table 2). For the ED, MEWS included temperature, HR, RR and SBP, but excluded mental status and WBC. Mental status assessment was excluded due to technical infeasibility (while vital signs could be automatically calculated in real time for a MEWS score, that was not possible for mental status changes). WBC was excluded from the ED as laboratory test results would not be available in triage.
MEWS + SRS scores were calculated in VistA by using clinical reminders. Clinical reminder logic included a series of conditional statements based on various combinations of MEWS + SRS clinical data entered in the EHR. When ED triage vital signs data were entered in CPRS, clinical data were stored and processed according to clinical reminder logic in VistA and displayed to the user in CPRS. While MEWS of ≥ 5 triggered a sepsis alert on the wards, the ≥ 4 threshold was used in the ED given mental status and WBC were excluded from calculations in triage (eAppendix 1 available at doi:10.12788/fp.0194).
Once a sepsis alert was triggered in triage for MEWS ≥ 4, ED nursing staff prioritized bed location and expedited staffing with an ED attending physician for early assessment. The ED attending then performed an assessment to confirm whether sepsis was present and direct early treatment. Although every patient who triggered a sepsis alert in triage did not meet clinical findings of sepsis, patients with MEWS ≥ 4 were frequently ill and required timely intervention.
If an ED attending physician agreed with a sepsis diagnosis, the physician had access to a sepsis workup and treatment order set in CPRS (eAppendix 2 available at doi:10.12788/fp.0194). The sepsis order set incorporated recommendations from the Surviving Sepsis Campaign guidelines and included orders for 2 large-bore peripheral IV lines; aggressive fluid resuscitation (30 mL/kg) for patients with clinical findings of hypoperfusion; broad-spectrum antibiotics; and frequent ordering of laboratory tests and imaging during initial sepsis workup.6 Vancomycin and cefepime were selected as routine broad-spectrum antibiotics in the order set when sepsis was suspected based on local antimicrobial stewardship and safety-efficacy profiles. For example, Luther and colleagues demonstrated that cefepime has lower rates of acute kidney injury when combined with vancomycin vs vancomycin + piperacillin-tazobactam.20 If a β-lactam antibiotic could not be used due to a patient’s drug allergy history, aztreonam was available as an alternative option.
The design of the order set also functioned as a communication interface with clinical pharmacists. Given the large volume of antibiotics ordered in the ED, it was difficult for pharmacists to prioritize antibiotic order verification. While stat orders convey high priority, they often lack specificity. When antibiotic orders were selected from the sepsis order set, comments were already included that stated: “STAT. First dose for sepsis protocol” (eAppendix 3 available at doi:10.12788/fp.0194). This standardized communication conveyed a sense of urgency and a collective understanding that patients with suspected sepsis required timely order verification and administration of antibiotics.
Hospital Ward
Mental status and WBC were included on the wards to monitor for possible signs of sepsis, using MEWS + SRS, which was routinely monitored by nursing every 4 to 8 hours. When MEWS + SRS was ≥ 5 points, ward nursing staff called a sepsis alert.7,16 Early response team (ERT) members received telephone notifications of the alert. ERT staff proceeded with immediate evaluation and treatment at the bedside along with determination for most appropriate LOC. The ERT members included an ICU physician and nurse; respiratory therapist; and nursing supervisor/bed flow coordinator. During bedside evaluation, if the ERT or primary team agreed with a sepsis diagnosis, the ERT or primary team used the sepsis order set to ensure standardized procedures. Stat orders generated through the sepsis order set pathway conveyed a sense of urgency and need for immediate order verification and administration of antibiotics.
In addition to clinical care process improvement, accurate documentation also was emphasized in the EWSS. When a sepsis alert was called, a clinician from the primary team was expected to complete a standardized progress note, which communicated clinical findings, a treatment plan, and captured severity of illness (eAppendix 4 available at doi:10.12788/fp.0194). It included sections for subjective, objective, assessment, and plan. In addition, data objects were created for vital signs and common laboratory findings that retrieved important clinical data from VistA and inserted it into the CPRS note.21
Nursing staff on the wards were expected to communicate results with the primary team for clinical decision making when a patient had a MEWS + SRS of 3 to 4. A sepsis alert may have been called at the discretion of clinical team members but was not required if the score was < 5. Additionally, vital signs were expected to be checked by the nursing staff on the wards at least every 4 hours for closer monitoring.
Sepsis Review Meetings
Weekly meetings were scheduled to review sepsis cases to assess diagnosis, treatment, and documentation entered in the patient record. The team conducting sepsis reviews comprised the chief of staff, chief of quality management, director of patient safety, physician utilization management advisor, chief resident in quality and patient safety (CRQS), and inpatient pharmacy supervisor. In addition, ad hoc physicians and nurses from different specialty areas, such as infectious diseases, hospitalist section, ICU, and the ED participated on request for subject matter expertise when needed. At the conclusion of weekly sepsis meetings, sepsis team members provided feedback to the clinical staff for continuous improvement purposes.
Results
Before implementation of an EWSS at NF/SGVHS, a plan was devised to increase awareness and educate staff on sepsis-related mortality in late FY 2017. Awareness and education about sepsis-related mortality was organized at physician, nursing, and pharmacy leadership clinical staff meetings. Posters about early warning signs of sepsis also were displayed on the nursing units for educational purposes and to convey the importance of early recognition/treatment of sepsis. In addition, the CRQS was the quality leader for house staff and led sepsis campaign change efforts for residents/fellows. An immediate improvement in unadjusted mortality at MRVAMC was noted with initial sepsis awareness and education. From FY 2017, quarter 3 to FY 2018, quarter 1, the number of acute LOC inpatient deaths decreased from 48 to 28, a 42% reduction in unadjusted mortality at MRVAMC (Figure 1). Additionally, the acute LOC SMR improved from 1.20 during FY 2017, quarter 3 down to as low as 0.71 during FY 2018, quarter 1 (Figure 2).
The number of MRVAMC inpatient deaths increased from 28 in FY 2018, quarter 1 to 45 in FY 2018, quarter 3. While acute LOC showed improvement in unadjusted mortality after sepsis education/awareness, it was felt continuous improvement could not be sustained with education alone. An EWSS was designed and implemented within the EHR system in FY 2018. Following implementation of EWSS and reeducating staff on early recognition and treatment of sepsis, acute LOC inpatient deaths decreased from 45 in FY 2018, quarter 3 through FY 2019 where unadjusted mortality was as low as 27 during FY 2019, quarter 4. The MRVAMC acute LOC SMR was consistently < 1.0 from FY 2018, quarter 4 through FY 2019, quarter 4.
In addition to the observed decrease in acute LOC inpatient deaths and improved SMR, the number of ERT alerts and sepsis alerts on the inpatient wards were monitored from FY 2017 through FY 2019. ERT alerts listed in Table 3 were nonspecific and initiated by nursing staff on the wards where a patient’s clinical status was identified as worsening while sepsis alerts were specific ERT alerts called by the ward nursing staff due to concerns for sepsis. The inpatient wards included inpatient medicine, surgery, and psychiatry acute care and the intermediate level of care unit while outpatient clinical areas of treatment, intensive care units, stroke alerts, and STEMI alerts were excluded.
From FY 2017 to FY 2018, quarter 1, the number of nonspecific ERT alerts varied between 75 to 100. Sepsis alerts were not available until December 2017 while the EWSS was in development. Afterward, nonspecific ERT alerts and sepsis alerts were monitored each quarter. Sepsis alerts ranged from 4 to 18. Nonspecific ERT alerts + sepsis alerts continued to increase from FY 2018, quarter 3 through FY 2019, quarter 4.
Discussion
Implementation of the EWSS was associated with improved unadjusted mortality and adjusted mortality for acute LOC at MRVAMC. Although variation exists with application of EWSS at other medical centers, there was similarity with improved sepsis outcomes reported at other health care systems after EWSS implementation.7-16
Improved unadjusted mortality and adjusted mortality for acute LOC at MRVAMC was likely due to multiple contributing factors. First, during design and implementation of the EWSS, project work was interdisciplinary with input from physicians, nurses, and pharmacists from multiple specialties (ie, ED, ICU, and the medicine service); quality management and data analysis specialists; and clinical informatics. Second, facility commitment to improving early recognition and treatment of sepsis from leadership level down to front-line staff was evident. Weekly sepsis meetings with the NF/SGVHS chief of staff helped to sustain EWSS efforts and to identify additional improvement opportunities. Third, integrated informatics solutions within the EHR helped identify early sepsis and minimized human error as well as assisted with coordination of sepsis care across services. Fourth, the focus was on both early identification and treatment of sepsis in the ED and hospital wards. Although it cannot be deduced whether there was causation between reduced inpatient mortality and an increased number of nonspecific ERT alerts+ sepsis alerts on the inpatient wards after EWSS implementation, inpatient deaths decreased and SMR improved. Finally, the EWSS emphasized both the importance of evidence-based clinical care of sepsis and standardized documentation to appropriately capture clinical severity of illness.
Limitations
This program has limitations. The EWSS was studied at a single VHA facility. Veteran demographics and local epidemiology may limit conclusion of outcomes to an individual VHA facility located in a specific geographical region. Additional research is necessary to demonstrate reproducibility and determine whether applicable to other VHA facilities and community care settings.
SMR is a risk-adjusted formula developed by the VHA Inpatient Evaluation Center, which included numerous factors such as diagnosis, comorbid conditions, age, marital status, procedures, source of admission, specific laboratory values, medical or surgical diagnosis-related group, ICU stays, immunosuppressive status, and a COVID-19 positive indicator (added after this study). Further research is needed to evaluate sepsis-related outcomes using the EWSS during the COVID-19 pandemic.
EWSS in the literature have demonstrated various approaches to early identification and treatment of sepsis and have used different sepsis screening tools.22 Evidence suggests that the MEWS + SRS sepsis screening tool may result in false-positive screenings.23-27 Additional research into the specificity of this sepsis screening tool is needed. Ward nursing staff were encouraged to initiate automatic sepsis alerts when MEWS + SRS was ≥ 5; however, this still depended on human factors. Because sepsis alerts are software-specific and others were incompatible with the VHA EHR, it was necessary to design our own EWSS.
Despite improvement with MRVAMC acute LOC unadjusted and adjusted mortality with our EWSS, we did not identify any actual improvement in earlier antibiotic administration times once sepsis was recognized. While accurate documentation regarding degree of sepsis improved, a MRVAMC clinical documentation improvement program was expanded in FY 2018. Therefore, it is difficult to demonstrate causation related to improved sepsis documentation with template changes alone. While sepsis alerts on the inpatient wards were variable since EWSS implementation, nonspecific ERT alerts increased. It is unclear whether some sepsis alerts were called as nonspecific ERT alerts, making it impossible to know the true number of sepsis alerts.
MRVAMC experienced an increase in nurse turnover during FY 2018 and as a teaching hospital had frequent rotating residents and fellows new to processes/protocols. These factors may have contributed to variations in unadjusted mortality. Also the decrease in inpatient mortality and improvement in SMR on acute LOC could have been the result of factors other than the EWSS and the effect of education alone may have been at least as good as that of the EWSS intervention.
Conclusions
Education along with the possible implementation of an EWSS at NF/SGVHS was associated with a decrease in the number of inpatient deaths on MRVAMC’s acute LOC wards from as high as 48 in FY 2017, quarter 3 to as low as 27 in FY 2019, quarter 4 resulting in as large of an improvement as a 44% reduction in unadjusted mortality from FY 2017 to FY 2019. In addition, MRVAMC’s acute LOC SMR improved from > 1.0 to < 1.0, demonstrating fewer inpatient mortalities than predicted from FY 2017 to FY 2019.
This multifaceted interventional strategy may be effectively applied at other VHA health care facilities that use the same EHR system. Next steps may include determining the specificity of MEWS + SRS as a sepsis screening tool; studying outcomes of MRVAMC’s EWSS during the COVID-19 era; and conducting a multicentered study on this EWSS across multiple VHA facilities.
In 1997, Elizabeth McGlynn wrote, “Measuring quality is no simple task.”1 We are reminded of this seminal Health Affairs article at a very pertinent point—as health care practice progresses, measuring the impact of performance improvement initiatives on clinical care delivery remains integral to monitoring overall effectiveness of quality. Mortality outcomes are a major focus of quality.
Inpatient mortality within the Veterans Health Administration (VHA) was measured as actual number of deaths (unadjusted mortality), and adjusted mortality was calculated using the standardized mortality ratio (SMR). SMR included actual number of deaths during hospitalization or within 1 day of hospital discharge divided by predicted number of deaths using a risk-adjusted formula and was calculated separately for acute level of care (LOC) and the intensive care unit (ICU). Using risk-adjusted SMR, if an observed/expected ratio was > 1.0, there were more inpatient deaths than expected; if < 1.0, fewer inpatient deaths occurred than predicted; and if 1.0, observed number of inpatient deaths was equivalent to expected number of deaths.2
Mortality reduction is a complex area of performance improvement. Health care facilities often focus their efforts on the biggest mortality contributors. According to Dantes and Epstein, sepsis results in about 265,000 deaths annually in the United States.3 Reinhart and colleagues demonstrated that sepsis is a worldwide issue resulting in approximately 30 million cases and 6 million deaths annually.4 Furthermore, Kumar and colleagues have noted that when sepsis progresses to septic shock, survival decreases by almost 8% for each hour delay in sepsis identification and treatment.5
Improvements in sepsis management have been multifaceted. The Surviving Sepsis Campaign guidelines created sepsis treatment bundles to guide early diagnosis/treatment of sepsis.6 In addition to awareness and sepsis care bundles, a plethora of informatics solutions within electronic health record (EHR) systems have demonstrated improved sepsis care.7-16 Various approaches to early diagnosis and management of sepsis have been collectively referred to as an early warning sepsis system (EWSS).
An EWSS typically contains automated decision support tools that are integrated in the EHR and meant to assist health care professionals with clinical workflow decision-making. Automated decision support tools within the EHR have a variety of functions, such as clinical care reminders and alerts.17
Sepsis screening tools function as a form of automated decision support and may be incorporated into the EHR to support the EWSS. Although sepsis screening tools vary, they frequently include a combination of data involving vital signs, laboratory values, and/or physical examination findings, such as mental status evaluation. The Modified Early Warning Signs (MEWS) + Sepsis Recognition Score (SRS) is one example of a sepsis screening tool.7,16
At Malcom Randall Veterans Affairs Medical Center (MRVAMC) in Gainesville, Florida, we identified a quality improvement project opportunity to improve sepsis care in the emergency department (ED) and inpatient wards using the VHA EHR system, the Computerized Patient Record System (CPRS), which is supported by the Veterans Information Systems and Technology Architecture (VistA).18 A VistA/CPRS EWSS was developed using Lean Six Sigma DMAIC (define, measure, analyze, improve, and control) methodology.19 During the improve stage, informatics solutions were applied and included a combination of EHR interventions, such as template design, an order set, and clinical reminders. Clinical reminders have a wide variety of uses, such as reminders for clinical tasks and automated decision support within clinical workflows using Boolean logic.
To the best of our knowledge, there has been no published application of an EWSS within VistA/CPRS. In this study, we outline the strategic development of an EWSS in VistA/CPRS that assisted clinical staff with identification and treatment of sepsis, improved documentation of sepsis when present, and was associated with improvement in unadjusted and adjusted inpatient mortality.
Methods
According to the MRVAMC policy on activities that constitute research, institutional review board approval was not required because this work met criteria for operational improvement activities exempt from ethics review.
The North Florida/South Georgia Veterans Health System (NF/SGVHS) includes MRVAMC, a large academic hospital with rotating residents/fellows and multiple specialty care services. MRVAMC comprises 144 beds on the medicine/surgery wards; 48 beds in the psychiatry unit; 18 intermediate LOC beds; and 27 ICU beds. The MRVAMC SMR was identified as an improvement opportunity during fiscal year (FY) 2017 (Table 1): its adjusted mortality for acute LOC demonstrated an observed/expected ratio > 1.0, suggesting more inpatient deaths were observed than expected. The number of deaths (unadjusted mortality) on acute LOC at MRVAMC was also rising during the first 3 quarters of FY 2017. A deeper examination of the data using Pyramid Analytics (www.pyramidanalytics.com) revealed that sepsis was the primary driver of inpatient mortality on acute LOC at MRVAMC. Our goal was to reduce inpatient sepsis-related mortality by developing an EWSS that leveraged VistA/CPRS to improve early identification and treatment of sepsis in the ED and inpatient wards.
Emergency Department
Given the importance of recognizing sepsis early, the sepsis team focused on improvement opportunities at the initial point of patient contact: ED triage. The goal was to incorporate automated VistA/CPRS decision support to assist clinicians with identifying sepsis in triage using MEWS, which was chosen to optimize immediate hospital-wide buy-in. Clinical staff were already familiar with MEWS, which was in use on the inpatient wards.
Patient flow and resource availability in the ED differed from those on the wards. Hence, the ward MEWS required modification to fit the clinical workflow in the ED. Temperature, heart rate (HR), respiratory rate (RR), systolic blood pressure (SBP), mental status, and white blood cell count (WBC) factored into the MEWS + SRS score on the wards (Table 2). For the ED, MEWS included temperature, HR, RR, and SBP but excluded mental status and WBC. Mental status assessment was excluded due to technical infeasibility (while vital signs could be automatically calculated in real time for a MEWS score, that was not possible for mental status changes). WBC was excluded from the ED calculation because laboratory test results would not be available in triage.
MEWS + SRS scores were calculated in VistA by using clinical reminders. Clinical reminder logic included a series of conditional statements based on various combinations of MEWS + SRS clinical data entered in the EHR. When ED triage vital signs data were entered in CPRS, clinical data were stored and processed according to clinical reminder logic in VistA and displayed to the user in CPRS. While MEWS of ≥ 5 triggered a sepsis alert on the wards, the ≥ 4 threshold was used in the ED given mental status and WBC were excluded from calculations in triage (eAppendix 1 available at doi:10.12788/fp.0194).
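As a rough illustration of the threshold logic just described (not the actual VistA clinical reminder definitions, which are not written in Python), the sketch below applies the ≥ 5 ward and ≥ 4 ED cut points to an already-calculated total score; the per-parameter scoring from Table 2 is not reproduced here, and the function and variable names are assumptions for illustration.

WARD_THRESHOLD = 5  # MEWS + SRS, including mental status and WBC, on the wards
ED_THRESHOLD = 4    # MEWS from vital signs only, as used in ED triage

def sepsis_alert(total_score: int, location: str) -> bool:
    """Return True when the calculated score meets the alert threshold for the care setting."""
    if location == "ward":
        return total_score >= WARD_THRESHOLD
    if location == "ed":
        return total_score >= ED_THRESHOLD
    raise ValueError(f"unknown location: {location}")

print(sepsis_alert(4, "ed"))    # True: prioritize bed placement and early attending assessment
print(sepsis_alert(4, "ward"))  # False: a score of 3 to 4 is communicated to the primary team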
Once a sepsis alert was triggered in triage for MEWS ≥ 4, ED nursing staff prioritized bed location and expedited staffing with an ED attending physician for early assessment. The ED attending then performed an assessment to confirm whether sepsis was present and to direct early treatment. Although not every patient who triggered a sepsis alert in triage met clinical criteria for sepsis, patients with MEWS ≥ 4 were frequently ill and required timely intervention.
If an ED attending physician agreed with a sepsis diagnosis, the physician had access to a sepsis workup and treatment order set in CPRS (eAppendix 2 available at doi:10.12788/fp.0194). The sepsis order set incorporated recommendations from the Surviving Sepsis Campaign guidelines and included orders for 2 large-bore peripheral IV lines; aggressive fluid resuscitation (30 mL/kg) for patients with clinical findings of hypoperfusion; broad-spectrum antibiotics; and frequent ordering of laboratory tests and imaging during the initial sepsis workup.6 Based on local antimicrobial stewardship and safety-efficacy profiles, vancomycin and cefepime were selected as the routine broad-spectrum antibiotics in the order set when sepsis was suspected. For example, Luther and colleagues demonstrated that vancomycin combined with cefepime has lower rates of acute kidney injury than vancomycin plus piperacillin-tazobactam.20 If a β-lactam antibiotic could not be used due to a patient’s drug allergy history, aztreonam was available as an alternative option.
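A quick arithmetic sketch of the weight-based fluid order in the order set (30 mL/kg for clinical hypoperfusion); the 80-kg patient weight is hypothetical.

BOLUS_ML_PER_KG = 30  # crystalloid bolus for clinical findings of hypoperfusion

def crystalloid_bolus_ml(weight_kg: float) -> float:
    """Weight-based bolus volume in milliliters."""
    return BOLUS_ML_PER_KG * weight_kg

print(crystalloid_bolus_ml(80))  # 2400.0 mL for a hypothetical 80-kg patient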
The design of the order set also functioned as a communication interface with clinical pharmacists. Given the large volume of antibiotics ordered in the ED, it was difficult for pharmacists to prioritize antibiotic order verification. While stat orders convey high priority, they often lack specificity. When antibiotic orders were selected from the sepsis order set, comments were already included that stated: “STAT. First dose for sepsis protocol” (eAppendix 3 available at doi:10.12788/fp.0194). This standardized communication conveyed a sense of urgency and a collective understanding that patients with suspected sepsis required timely order verification and administration of antibiotics.
Hospital Ward
On the wards, mental status and WBC were included in MEWS + SRS to monitor for possible signs of sepsis, and nursing staff routinely calculated the score every 4 to 8 hours. When MEWS + SRS was ≥ 5 points, ward nursing staff called a sepsis alert.7,16 Early response team (ERT) members received telephone notification of the alert and proceeded with immediate evaluation and treatment at the bedside along with determination of the most appropriate LOC. The ERT included an ICU physician and nurse, a respiratory therapist, and a nursing supervisor/bed flow coordinator. During bedside evaluation, if the ERT or primary team agreed with a sepsis diagnosis, the ERT or primary team used the sepsis order set to ensure standardized procedures. Stat orders generated through the sepsis order set conveyed a sense of urgency and the need for immediate order verification and administration of antibiotics.
In addition to clinical care process improvement, accurate documentation also was emphasized in the EWSS. When a sepsis alert was called, a clinician from the primary team was expected to complete a standardized progress note that communicated clinical findings and the treatment plan and captured severity of illness (eAppendix 4 available at doi:10.12788/fp.0194). It included sections for subjective, objective, assessment, and plan. In addition, data objects were created for vital signs and common laboratory findings that retrieved important clinical data from VistA and inserted them into the CPRS note.21
Nursing staff on the wards were expected to communicate results to the primary team for clinical decision making when a patient had a MEWS + SRS of 3 to 4. A sepsis alert could be called at the discretion of clinical team members but was not required if the score was < 5. Additionally, ward nursing staff were expected to check vital signs at least every 4 hours for closer monitoring of these patients.
Sepsis Review Meetings
Weekly meetings were scheduled to review sepsis cases to assess diagnosis, treatment, and documentation entered in the patient record. The team conducting sepsis reviews comprised the chief of staff, chief of quality management, director of patient safety, physician utilization management advisor, chief resident in quality and patient safety (CRQS), and inpatient pharmacy supervisor. In addition, ad hoc physicians and nurses from different specialty areas, such as infectious diseases, hospitalist section, ICU, and the ED participated on request for subject matter expertise when needed. At the conclusion of weekly sepsis meetings, sepsis team members provided feedback to the clinical staff for continuous improvement purposes.
Results
Before implementation of an EWSS at NF/SGVHS, a plan was devised in late FY 2017 to increase awareness and educate staff on sepsis-related mortality. Awareness and education about sepsis-related mortality were organized at physician, nursing, and pharmacy leadership and clinical staff meetings. Posters about early warning signs of sepsis also were displayed on the nursing units for educational purposes and to convey the importance of early recognition and treatment of sepsis. In addition, the CRQS was the quality leader for house staff and led sepsis campaign change efforts for residents and fellows. An immediate improvement in unadjusted mortality at MRVAMC was noted with the initial sepsis awareness and education. From FY 2017, quarter 3 to FY 2018, quarter 1, the number of acute LOC inpatient deaths decreased from 48 to 28, a 42% reduction in unadjusted mortality at MRVAMC (Figure 1). Additionally, the acute LOC SMR improved from 1.20 during FY 2017, quarter 3 to as low as 0.71 during FY 2018, quarter 1 (Figure 2).
The number of MRVAMC inpatient deaths then increased from 28 in FY 2018, quarter 1 to 45 in FY 2018, quarter 3. Although acute LOC showed improvement in unadjusted mortality after sepsis education and awareness, it was felt that continuous improvement could not be sustained with education alone. An EWSS was therefore designed and implemented within the EHR system in FY 2018. Following implementation of the EWSS and reeducation of staff on early recognition and treatment of sepsis, acute LOC inpatient deaths decreased from 45 in FY 2018, quarter 3 to a low of 27 in FY 2019, quarter 4. The MRVAMC acute LOC SMR was consistently < 1.0 from FY 2018, quarter 4 through FY 2019, quarter 4.
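For reference, a minimal sketch of the mortality-reduction arithmetic behind the percentages reported in this article, using the quarterly acute LOC death counts given in the text.

def percent_reduction(before: int, after: int) -> float:
    """Percentage decrease from a baseline count to a later count."""
    return 100 * (before - after) / before

print(round(percent_reduction(48, 28)))  # 42 (FY 2017 Q3 to FY 2018 Q1, education phase)
print(round(percent_reduction(48, 27)))  # 44 (FY 2017 Q3 to FY 2019 Q4, after EWSS)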
In addition to the observed decrease in acute LOC inpatient deaths and improved SMR, the number of ERT alerts and sepsis alerts on the inpatient wards was monitored from FY 2017 through FY 2019. The ERT alerts listed in Table 3 were nonspecific alerts initiated by ward nursing staff when a patient’s clinical status was identified as worsening, whereas sepsis alerts were ERT alerts called by ward nursing staff specifically for concerns about sepsis. The inpatient wards included acute care medicine, surgery, and psychiatry as well as the intermediate LOC unit; outpatient treatment areas and the intensive care units were excluded, as were stroke and STEMI alerts.
From FY 2017 to FY 2018, quarter 1, the number of nonspecific ERT alerts varied between 75 and 100. Sepsis alerts were not available until December 2017, while the EWSS was in development. Afterward, nonspecific ERT alerts and sepsis alerts were monitored each quarter; sepsis alerts ranged from 4 to 18 per quarter. The combined number of nonspecific ERT alerts and sepsis alerts continued to increase from FY 2018, quarter 3 through FY 2019, quarter 4.
Discussion
Implementation of the EWSS was associated with improved unadjusted and adjusted mortality for acute LOC at MRVAMC. Although EWSS implementations vary among medical centers, our results are consistent with the improved sepsis outcomes reported by other health care systems after EWSS implementation.7-16
The improvements in unadjusted and adjusted mortality for acute LOC at MRVAMC were likely due to multiple contributing factors. First, during design and implementation of the EWSS, project work was interdisciplinary, with input from physicians, nurses, and pharmacists from multiple specialties (ie, ED, ICU, and the medicine service); quality management and data analysis specialists; and clinical informatics. Second, facility commitment to improving early recognition and treatment of sepsis, from the leadership level down to front-line staff, was evident. Weekly sepsis meetings with the NF/SGVHS chief of staff helped to sustain EWSS efforts and to identify additional improvement opportunities. Third, integrated informatics solutions within the EHR helped identify early sepsis, minimized human error, and assisted with coordination of sepsis care across services. Fourth, the focus was on both early identification and treatment of sepsis in the ED and on the hospital wards. Although causation cannot be deduced between reduced inpatient mortality and the increased number of nonspecific ERT alerts + sepsis alerts on the inpatient wards after EWSS implementation, inpatient deaths decreased and the SMR improved. Finally, the EWSS emphasized both evidence-based clinical care of sepsis and standardized documentation to appropriately capture clinical severity of illness.
Limitations
This program has limitations. The EWSS was studied at a single VHA facility, and veteran demographics and local epidemiology may limit the generalizability of outcomes beyond an individual VHA facility in a specific geographic region. Additional research is necessary to demonstrate reproducibility and to determine whether the approach is applicable to other VHA facilities and community care settings.
The SMR is a risk-adjusted formula developed by the VHA Inpatient Evaluation Center that includes numerous factors, such as diagnosis, comorbid conditions, age, marital status, procedures, source of admission, specific laboratory values, medical or surgical diagnosis-related group, ICU stays, immunosuppressive status, and a COVID-19-positive indicator (added after this study). Further research is needed to evaluate sepsis-related outcomes using the EWSS during the COVID-19 pandemic.
EWSSs described in the literature have demonstrated various approaches to early identification and treatment of sepsis and have used different sepsis screening tools.22 Evidence suggests that the MEWS + SRS sepsis screening tool may result in false-positive screenings.23-27 Additional research into the specificity of this screening tool is needed. Ward nursing staff were encouraged to initiate sepsis alerts when MEWS + SRS was ≥ 5; however, this still depended on human factors. Because sepsis alert tools are software specific and existing tools were incompatible with the VHA EHR, it was necessary to design our own EWSS.
Despite the improvement in MRVAMC acute LOC unadjusted and adjusted mortality with our EWSS, we did not identify any improvement in antibiotic administration times once sepsis was recognized. Although accurate documentation of the degree of sepsis improved, an MRVAMC clinical documentation improvement program was also expanded in FY 2018; therefore, it is difficult to attribute improved sepsis documentation to the template changes alone. Although sepsis alerts on the inpatient wards were variable after EWSS implementation, nonspecific ERT alerts increased. It is unclear whether some sepsis alerts were called as nonspecific ERT alerts, making it impossible to know the true number of sepsis alerts.
MRVAMC experienced an increase in nurse turnover during FY 2018 and, as a teaching hospital, had frequent rotating residents and fellows new to processes and protocols. These factors may have contributed to variations in unadjusted mortality. Also, the decrease in inpatient mortality and the improvement in SMR on acute LOC could have been the result of factors other than the EWSS, and the effect of education alone may have been at least as good as that of the EWSS intervention.
Conclusions
Education, followed by implementation of an EWSS at NF/SGVHS, was associated with a decrease in the number of inpatient deaths on MRVAMC’s acute LOC wards from a high of 48 in FY 2017, quarter 3 to a low of 27 in FY 2019, quarter 4, corresponding to a 44% reduction in unadjusted mortality from FY 2017 to FY 2019. In addition, MRVAMC’s acute LOC SMR improved from > 1.0 to < 1.0, indicating fewer inpatient deaths than predicted from FY 2017 to FY 2019.
This multifaceted interventional strategy may be effectively applied at other VHA health care facilities that use the same EHR system. Next steps may include determining the specificity of MEWS + SRS as a sepsis screening tool; studying outcomes of MRVAMC’s EWSS during the COVID-19 era; and conducting a multicentered study on this EWSS across multiple VHA facilities.
1. McGlynn EA. Six challenges in measuring the quality of health care. Health Aff (Millwood). 1997;16(3):7-21. doi:10.1377/hlthaff.16.3.7
2. US Department of Veterans Affairs, Veterans Health Administration. Strategic Analytics for Improvement and Learning (SAIL) value model measure definitions. Updated May 15, 2019. Accessed October 11, 2021. https://www.va.gov/QUALITYOFCARE/measure-up/SAIL_definitions.asp
3. Dantes RB, Epstein L. Combatting sepsis: a public health perspective. Clin Infect Dis. 2018;67(8):1300-1302. doi:10.1093/cid/ciy342
4. Reinhart K, Daniels R, Kissoon N, Machado FR, Schachter RD, Finfer S. Recognizing sepsis as a global health priority - a WHO resolution. N Engl J Med. 2017;377(5):414-417. doi:10.1056/NEJMp1707170
5. Kumar A, Roberts D, Wood KE, et al. Duration of hypotension before initiation of effective antimicrobial therapy is the critical determinant of survival in human septic shock. Crit Care Med. 2006;34(6):1589-1596. doi:10.1097/01.CCM.0000217961.75225.E9
6. Rhodes A, Evans LE, Alhazzani W, et al. Surviving sepsis campaign: international guidelines for management of sepsis and septic shock: 2016. Crit Care Med. 2017;45(3):486-552. doi:10.1097/CCM.0000000000002255
7. Guirgis FW, Jones L, Esma R, et al. Managing sepsis: electronic recognition, rapid response teams, and standardized care save lives. J Crit Care. 2017;40:296-302. doi:10.1016/j.jcrc.2017.04.005
8. Whippy A, Skeath M, Crawford B, et al. Kaiser Permanente’s performance improvement system, part 3: multisite improvements in care for patients with sepsis. Jt Comm J Qual Patient Saf. 2011;37(11):483-493. doi:10.1016/s1553-7250(11)37061-4
9. Harrison AM, Thongprayoon C, Kashyap R, et al. Developing the surveillance algorithm for detection of failure to recognize and treat severe sepsis. Mayo Clin Proc. 2015;90(2):166-175. doi:10.1016/j.mayocp.2014.11.014
10. Rothman M, Levy M, Dellinger RP, et al. Sepsis as 2 problems: identifying sepsis at admission and predicting onset in the hospital using an electronic medical record-based acuity score. J Crit Care. 2017;38:237-244. doi:10.1016/j.jcrc.2016.11.037
11. Back JS, Jin Y, Jin T, Lee SM. Development and validation of an automated sepsis risk assessment system. Res Nurs Health. 2016;39(5):317-327. doi:10.1002/nur.21734
12. Khurana HS, Groves RH Jr, Simons MP, et al. Real-time automated sampling of electronic medical records predicts hospital mortality. Am J Med. 2016;129(7):688-698.e2. doi:10.1016/j.amjmed.2016.02.037
13. Umscheid CA, Betesh J, VanZandbergen C, et al. Development, implementation, and impact of an automated early warning and response system for sepsis. J Hosp Med. 2015;10(1):26-31. doi:10.1002/jhm.2259
14. Vogel L. EMR alert cuts sepsis deaths. CMAJ. 2014;186(2):E80. doi:10.1503/cmaj.109-4686
15. Jones SL, Ashton CM, Kiehne L, et al. Reductions in sepsis mortality and costs after design and implementation of a nurse-based early recognition and response program. Jt Comm J Qual Patient Saf. 2015;41(11):483-491. doi:10.1016/s1553-7250(15)41063-3
16. Croft CA, Moore FA, Efron PA, et al. Computer versus paper system for recognition and management of sepsis in surgical intensive care. J Trauma Acute Care Surg. 2014;76(2):311-319. doi:10.1097/TA.0000000000000121
17. Tcheng JE, Bakken S, Bates DW, et al, eds. Optimizing Strategies for Clinical Decision Support: Summary of a Meeting Series. National Academy of Medicine; 2017. Accessed October 11, 2021. https://nam.edu/wp-content/uploads/2017/11/Optimizing-Strategies-for-Clinical-Decision-Support.pdf
18. US Department of Veterans Affairs. History of IT at VA. Updated January 1, 2020. Accessed October 11, 2021. https://www.oit.va.gov/about/history.cfm
19. GoLeanSixSigma. DMAIC: The 5 Phases of Lean Six Sigma. Published 2012. Accessed October 11, 2021. https://goleansixsigma.com/wp-content/uploads/2012/02/DMAIC-The-5-Phases-of-Lean-Six-Sigma-www.GoLeanSixSigma.com_.pdf
20. Luther MK, Timbrook TT, Caffrey AR, Dosa D, Lodise TP, LaPlante KL. Vancomycin plus piperacillin-tazobactam and acute kidney injury in adults: a systematic review and meta-analysis. Crit Care Med. 2018;46(1):12-20. doi:10.1097/CCM.0000000000002769
21. International Business Machines Corp. Overview of data objects. Accessed October 11, 2021. https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.3.0/com.ibm.zos.v2r3.cbclx01/data_objects.htm
22. Churpek MM, Snyder A, Han X, et al. Quick sepsis-related organ failure assessment, systemic inflammatory response syndrome, and early warning scores for detecting clinical deterioration in infected patients outside the intensive care unit. Am J Respir Crit Care Med. 2017;195(7):906-911. doi:10.1164/rccm.201604-0854OC
23. Ghanem-Zoubi NO, Vardi M, Laor A, Weber G, Bitterman H. Assessment of disease-severity scoring systems for patients with sepsis in general internal medicine departments. Crit Care. 2011;15(2):R95. doi:10.1186/cc10102
24. Hamilton F, Arnold D, Baird A, Albur M, Whiting P. Early Warning scores do not accurately predict mortality in sepsis: a meta-analysis and systematic review of the literature. J Infect. 2018;76(3):241-248. doi:10.1016/j.jinf.2018.01.002
25. Martino IF, Figgiaconi V, Seminari E, Muzzi A, Corbella M, Perlini S. The role of qSOFA compared to other prognostic scores in septic patients upon admission to the emergency department. Eur J Intern Med. 2018;53:e11-e13. doi:10.1016/j.ejim.2018.05.022
26. Nannan Panday RS, Minderhoud TC, Alam N, Nanayakkara PWB. Prognostic value of early warning scores in the emergency department (ED) and acute medical unit (AMU): A narrative review. Eur J Intern Med. 2017;45:20-31. doi:10.1016/j.ejim.2017.09.027
27. Jayasundera R, Neilly M, Smith TO, Myint PK. Are early warning scores useful predictors for mortality and morbidity in hospitalised acutely unwell older patients? A systematic review. J Clin Med. 2018;7(10):309. Published 2018 Sep 28. doi:10.3390/jcm7100309
COVID vaccines’ protection dropped sharply over 6 months: Study
The protection provided by COVID-19 vaccines dropped sharply over a 6-month period, a study of almost 800,000 veterans found.
The study, published in the journal Science, says the three vaccines offered about the same protection against the virus in March, when the Delta variant was first detected in the United States, but that changed 6 months later.
The Moderna two-dose vaccine went from being 89% effective in March to 58% effective in September, according to a story about the study in the Los Angeles Times.
Meanwhile, the Pfizer/BioNTech vaccine went from being 87% effective to 45% effective over the same time period.
The Johnson & Johnson vaccine showed the biggest drop -- from 86% effectiveness to 13% over those 6 months.
“In summary, although vaccination remains protective against SARS-CoV-2 infection, protection waned as the Delta variant emerged in the U.S., and this decline did not differ by age,” the study said.
The three vaccines also lost effectiveness in protecting against death in veterans 65 and over after only 3 months, the Los Angeles Times reported.
Compared to unvaccinated veterans in that age group, veterans who got the Moderna vaccine and had a breakthrough case were 76% less likely to die of COVID-19 by July.
The protection was 70% for Pfizer/BioNTech vaccine recipients and 52% for J&J vaccine recipients for the same age group, compared to unvaccinated veterans, according to the newspaper.
For veterans under 65, the protectiveness against a fatal case of COVID was 84% for Pfizer/BioNTech recipients, 82% for Moderna recipients, and 73% for J&J recipients, compared to unvaccinated veterans in that age group.
The study confirms the need for booster vaccines and protective measures such as vaccine passports, vaccine mandates, masking, hand-washing, and social distancing, the researchers said.
Of the veterans studied, about 500,000 were vaccinated and 300,000 were not. Researchers noted that the study population had 6 times as many men as women. About 48% of the study group was 65 or older, 29% was 50-64, and 24% was under 50.
Researchers from the Public Health Institute in Oakland, the Veterans Affairs Medical Center in San Francisco, and the University of Texas Health Science Center conducted the study.
A version of this article first appeared on WebMD.com.
FROM SCIENCE
Risk of severe COVID two times higher for cancer patients
A new systematic review and meta-analysis finds that unvaccinated cancer patients who contracted COVID-19 last year were more than two times more likely than people without cancer to develop a case of COVID-19 severe enough to require admission to an intensive care unit.
“Our study provides the most precise measure to date of the effect of COVID-19 in cancer patients,” wrote researchers who were led by Paolo Boffetta, MD, MPH, a specialist in population science with the Stony Brook Cancer Center in New York.
Dr. Boffetta and colleagues also found that patients with hematologic neoplasms had an elevated mortality rate from COVID-19, comparable to that of all cancers combined.
Cancer patients have long been considered to be at high risk of developing COVID-19 and, if they contract the disease, at high risk of poor outcomes. Other high-risk groups include the elderly and patients with hypertension, diabetes, chronic kidney disease, or COPD. But how high the risk of severe COVID-19 is for cancer patients has not yet been documented on a wide scale.
The study, which was made available as a preprint on medRxiv on Oct. 23, is based on an analysis of COVID-19 cases that were documented in 35 reviews, meta-analyses, case reports, and studies indexed in PubMed from authors in North America, Europe, and Asia.
In this study, the pooled odds ratio for mortality for all patients with any cancer was 2.32 (95% confidence interval [CI], 1.82-2.94; 24 studies). For ICU admission, the odds ratio was 2.39 (95% CI, 1.90-3.02; I2 = 0.0%; 5 studies). And, for disease severity or hospitalization, it was 2.08 (95% CI, 1.60-2.72; I2 = 92.1%; 15 studies). The pooled mortality odds ratio for hematologic neoplasms was 2.14 (95% CI, 1.87-2.44; I2 = 20.8%; 8 studies).
Their findings, which have not yet been peer reviewed, confirm the results of a similar analysis from China published as a preprint in May 2020. That analysis, which included 181,323 patients (23,736 cancer patients) from 26 studies, reported an odds ratio of 2.54 (95% CI, 1.47-4.42). “Cancer patients with COVID-19 have an increased likelihood of death compared to non-cancer COVID-19 patients,” Venkatesulu et al. wrote. And a systematic review and meta-analysis of five studies of 2,619 patients, published in October 2020 in Medicine, also found a significantly higher risk of death from COVID-19 among cancer patients (odds ratio, 2.63; 95% CI, 1.14-6.06; P = .023; I2 = 26.4%).
Fakih et al., writing in the journal Hematology/Oncology and Stem Cell Therapy, conducted a meta-analysis early last year that found a threefold increase in admission to the intensive care unit, an almost fourfold increase in severe SARS-CoV-2 infection, and a fivefold increase in intubation.
The three studies show that mortality rates were higher early in the pandemic “when diagnosis and treatment for SARS-CoV-2 might have been delayed, resulting in higher death rate,” Boffetta et al. wrote, adding that their analysis showed only a twofold increase, most likely because it was a year-long analysis.
“Future studies will be able to better analyze this association for the different subtypes of cancer. Furthermore, they will eventually be able to evaluate whether the difference among vaccinated population is reduced,” Boffetta et al. wrote.
The authors noted several limitations of the study, including that many of the studies included in the analysis did not report data on sex, age, comorbidities, or therapy. Nor were the authors able to analyze specific cancers other than hematologic neoplasms.
The authors declared no conflicts of interest.
FROM MEDRXIV
New transmission information should motivate hospitals to reexamine aerosol procedures, researchers say
Two studies published in Thorax have found that the use of continuous positive airways pressure (CPAP) or high-flow nasal oxygen (HFNO) to treat moderate to severe COVID-19 is not linked to a heightened risk of infection, contrary to current thinking. Researchers say hospitals should use this information to re-examine aerosol procedures with regard to the risk of transmission of SARS-CoV-2.
CPAP and HFNO have been thought to generate virus particles capable of contaminating the air and surfaces, necessitating additional infection control precautions such as segregating patients. However, this research demonstrates that both methods produced little measurable air or surface viral contamination. The amount of contamination was no more than with the use of supplemental oxygen and less than that produced when breathing, speaking, or coughing.
In one study, led by a team from the North Bristol NHS Trust, 25 healthy volunteers and eight hospitalized patients with COVID-19 were recruited and asked to breathe, speak, and cough in ultra-clean, laminar flow theaters followed by use of CPAP and HFNO. Aerosol emission was measured via two methodologies, simultaneously. Hospitalized patients with COVID-19 had cough recorded via the same methodology on the infectious diseases ward.
CPAP (with exhalation port filter) was found to produce less aerosol than breathing, speaking, and coughing, even with large (> 50 L/min) face mask leaks. Coughing was associated with the highest aerosol emissions of any recorded activity.
HFNO was associated with aerosol emission from the machine. Generated particles were small (< 1 mcm), passed from the machine through the patient to the detector without coalescing with respiratory aerosol, and consequently would be unlikely to carry viral particles.
More aerosol was generated in cough from patients with COVID-19 (n = 8) than from volunteers.
In the second study, 30 hospitalized patients with COVID-19 requiring supplemental oxygen were prospectively enrolled. In this observational environmental sampling study, participants received either supplemental oxygen, CPAP, or HFNO (n = 10 in each group). A nasopharyngeal swab, three air, and three surface samples were collected from each participant and the clinical environment.
Overall, 21 of the 30 participants tested positive for SARS-CoV-2 RNA in the nasopharynx. In contrast, 4 of 90 air samples and 6 of 90 surface samples tested positive for viral RNA, although there were an additional 10 suspected-positive samples in both the air and surface samples.
Neither CPAP, HFNO, nor coughing was associated with significantly more environmental contamination than supplemental oxygen use. Of the total positive or suspected-positive samples by viral PCR detection, only one nasopharyngeal sample, from an HFNO patient, was biologically viable in cell culture assay.
“Our findings show that the noninvasive breathing support methods do not pose a higher risk of transmitting infection, which has significant implications for the management of the patients,” said coauthor Danny McAuley, MD.
“If there isn’t a higher risk of infection transmission, current practices may be overcautious measures for certain settings, for example preventing relatives visiting the sickest patients, whilst underestimating the risk in other settings, such as coughing patients with early infection on general wards.”
Although both studies are small, the results do suggest that there is a need for an evidence-based reassessment of infection prevention and control measures for noninvasive respiratory support treatments that are currently considered aerosol generating procedures.
A version of this article first appeared on Univadis.com.
Two studies published in Thorax have found that the use of continuous positive airways pressure (CPAP) or high-flow nasal oxygen (HFNO) to treat moderate to severe COVID-19 is not linked to a heightened risk of infection, as currently thought. Researchers say hospitals should use this information to re-examine aerosol procedures in regard to risk of transmission of SARS-CoV-2.
CPAP and HFNO have been thought to generate virus particles capable of contaminating the air and surfaces, necessitating additional infection control precautions such as segregating patients. However, this research demonstrates that both methods produced little measurable air or surface viral contamination. The amount of contamination was no more than with the use of supplemental oxygen and less than that produced when breathing, speaking, or coughing.
In one study, led by a team from the North Bristol NHS Trust, 25 healthy volunteers and eight hospitalized patients with COVID-19 were recruited and asked to breathe, speak, and cough in ultra-clean, laminar flow theaters followed by use of CPAP and HFNO. Aerosol emission was measured via two methodologies, simultaneously. Hospitalized patients with COVID-19 had cough recorded via the same methodology on the infectious diseases ward.
CPAP (with exhalation port filter) was found to produce less aerosol than breathing, speaking, and coughing, even with large > 50 L/min face mask leaks. Coughing was associated with the highest aerosol emissions of any recorded activity.
HFNO was associated with aerosol emission from the machine. Generated particles were small (< 1 mcm), passing from the machine through the patient and to the detector without coalescence with respiratory aerosol, and, consequently, would be unlikely to carry viral particles.
More aerosol was generated in cough from patients with COVID-19 (n = 8) than from volunteers.
In the second study, 30 hospitalized patients with COVID-19 requiring supplemental oxygen were prospectively enrolled. In this observational environmental sampling study, participants received either supplemental oxygen, CPAP, or HFNO (n = 10 in each group). A nasopharyngeal swab, three air, and three surface samples were collected from each participant and the clinical environment.
Overall, 21 of the 30 participants tested positive for SARS-CoV-2 RNA in the nasopharynx. In contrast, 4 of 90 air samples and 6 of 90 surface samples tested positive for viral RNA, although there were an additional 10 suspected-positive samples in both the air and the surface samples.
Neither the use of CPAP nor the use of HFNO nor coughing was associated with significantly more environmental contamination than the use of supplemental oxygen. Of all samples that were positive or suspected positive on viral PCR, only one, a nasopharyngeal sample from an HFNO patient, yielded biologically viable virus in cell culture.
“Our findings show that the noninvasive breathing support methods do not pose a higher risk of transmitting infection, which has significant implications for the management of the patients,” said coauthor Danny McAuley, MD.
“If there isn’t a higher risk of infection transmission, current practices may be overcautious measures for certain settings, for example preventing relatives visiting the sickest patients, whilst underestimating the risk in other settings, such as coughing patients with early infection on general wards.”
Although both studies are small, the results suggest a need for an evidence-based reassessment of infection prevention and control measures for noninvasive respiratory support treatments that are currently considered aerosol-generating procedures.
A version of this article first appeared on Univadis.com.
FROM THORAX
Decades spent searching for genes linked to rare blood cancer
Mary Lou McMaster, MD, has spent her entire career at the National Cancer Institute (NCI) searching for the genetic underpinnings that give rise to Waldenstrom's macroglobulinemia (WM).
After searching for decades, she has yet to uncover a "smoking gun," though a few tantalizing clues have emerged along the way.
"Our questions are pretty basic: Why are some people more susceptible to developing WM, and why does WM sometimes cluster in families?" she explained. It turns out that the answers are not at all simple.
Dr. McMaster described some of the clues that her team at the Clinical Genetics Branch of the NCI has unearthed in a presentation at the recent International Waldenstrom's Macroglobulinemia Foundation (IWMF) 2021 Virtual Educational Forum.
Commenting after the presentation, Steven Treon, MD, PhD, professor of medicine, Harvard Medical School, Boston, who is collaborating with Dr. McMaster on this work, said: "From these familial studies, we can learn how familial genomics may give us insights into disease prevention and treatment."
Identifying affected families
Work began in 2001 to identify families in which two or more family members had been diagnosed with WM or in which there was one patient with WM and at least one other relative with a related B-cell cancer, such as chronic lymphocytic leukemia.
For a frame of reference, they enrolled some families with only one member with WM and in which there was no known family history of the disease.
"Overall, we have learned that familial WM is a rare disease but not nearly as rare as we first thought," Dr. McMaster said.
For example, in a referral hospital setting, 5% of WM patients will report having a family member with the same disorder, and up to 20% of WM patients report having a family member with a related but different B-cell cancer, she noted.
NCI researchers also discovered that environmental factors contribute to the development of WM. Notable chemical and occupational exposures include pesticides, herbicides, and fertilizers. Infections and autoimmune disease are additional factors.
"This was not a surprise," Dr. McMaster commented regarding the role of occupational exposures. The research community has known for decades that a "lymphoma belt" cuts through the Midwest farming states.
Focusing on genetic susceptibility, Dr. McMaster and colleagues first tried to identify a rare germline variant that can be passed down to offspring and that might confer high risk for the disease.
"We used our high-risk families to study these types of changes, although they may be modified by other genes and environmental factors," Dr. McMaster explained.
Much to their collective disappointment, the research team has been unable to identify any rare germline variant that could account for WM in many families. What they did find were many small changes in genes that are known to be important in B-cell development and function, but all of those would lead to only a small increase in WM risk.
"What is holding us back is that, so far, we are not seeing the same gene affected in more than one family, so this suggests to us either that this is not the mechanism behind the development of WM in families, or we have an unfortunate situation where each family is going to have a genetic change that is private to that family and which is not found in other families," Dr. McMaster acknowledged.
Sheer difficulty
Given the difficulty of determining whether these small genetic changes had any detrimental functional effect in every affected family, Dr. McMaster and colleagues have now turned their attention to genes that exert only a small effect on disease risk.
"Here, we focused on specific genes that we knew were important in the function of the immune system," she explained. "We did find a few genes that may contribute to risk, but those have not yet been confirmed by us or others, and we cannot say they are causative without that confirmation," she said.
The team has gone on to scan the "highway" of our genetic material to isolate genetic "mile markers." They then examine the area around a particular marker that they suspect contains genes involved in WM.
One study they conducted involved a cohort of 217 patients with WM from families in which numerous members had WM, and so was enriched for susceptibility genes. A second cohort comprised 312 patients with WM who had few family members with WM. Both cohorts were compared with a group of healthy controls.
From these genome studies, "we found there are at least two regions of the genome that can contribute to WM susceptibility, the largest effect being on the short arm of chromosome 6, and the other on the long arm of chromosome 14," Dr. McMaster reported. Dr. McMaster feels that there are probably more regions of the genome that also contribute to WM, although the researchers do not yet understand how these regions confer susceptibility.
"It's more evidence that WM likely results from a combination of events rather than one single gene variant," she observed. Dr. McMaster and colleagues are now collaborating with a large consortium of WM researchers to confirm and extend their findings. Plans are underway to analyze data from approximately 1,350 WM patients and more than 20,000 control persons within the next year.
"Our hope is that we will confirm our original findings and, because we now have a much larger sample, we will be able to discover additional regions of the genome that are contributing to susceptibility," Dr. McMaster said.
"A single gene is not likely to account for all WM, as we've looked carefully and others have looked too," she commented.
"So the risk for WM depends on a combination of genes and environmental exposures and possibly lifestyle factors as well, although we still estimate that approximately 25% of the heritability of WM can be attributed to these kinds of genetic changes," Dr. McMaster predicted.
Dr. McMaster has disclosed no relevant financial relationships. Dr. Treon has served as a director, officer, partner, employee, advisor, consultant, or trustee for Janssen, Pfizer, PCYC, and BioGene.
A version of this article first appeared on Medscape.com.
Erratum (Cutis. 2021;108:181-184, 202)
Kowtoniuk RA, Liu YE, Jeter JP. Cutaneous cold weather injuries in the US Military. Cutis. 2021;108:181-184, 202. doi:10.12788/cutis.0363
In the article above from the October 2021 issue, an author’s name was spelled incorrectly. The correct byline appears below. The article has been corrected online at www.mdedge.com/dermatology. We apologize for the error.
Robert A. Kowtoniuk, DO; Yizhen E. Liu, MD; Jonathan P. Jeter, MD
Rituximab improves systemic sclerosis skin, lung symptoms
Rituximab effectively reduced skin sclerosis and appeared to have a beneficial effect on interstitial lung disease (ILD) in patients with systemic sclerosis (SSc) in a randomized clinical trial.
At 24 weeks’ follow-up, there was significant improvement in total skin thickness scores among patients who received four once-weekly rituximab infusions, compared with patients who received placebo infusions. Among patients who received rituximab, there were also small but significant improvements in percentage of forced vital capacity (FVC). Among patients who received placebo, FVC worsened, reported Ayumi Yoshizaki, MD, of the University of Tokyo and colleagues.
“Systemic sclerosis is considered to have high unmet medical needs because of its poor prognosis and the lack of satisfactory and effective treatments,” he said at the virtual annual meeting of the American College of Rheumatology.
“Several clinical studies have suggested that B-cell depletion therapy with rituximab anti-CD20 antibody is effective in treating skin and lung fibrosis of SSc. However, no randomized, placebo-controlled trial has been able to confirm the efficacy of rituximab in SSc,” Dr. Yoshizaki said.
A rheumatologist who is currently conducting an investigator-initiated trial in which patients with SSc are undergoing treatment with rituximab followed by belimumab (Benlysta) said in an interview that he found the data to be “super interesting.”
“There are a lot of reasons to think that B cells might be important in systemic sclerosis, and actually that’s why our group had previously done an investigator-initiated trial with belimumab years ago,” said Robert Spiera, MD, director of the Scleroderma, Vasculitis, and Myositis Center at the Hospital for Special Surgery in New York.
Randomized trial
Dr. Yoshizaki and colleagues conducted the randomized, placebo-controlled DESIRES trial in four hospitals in Japan to evaluate the safety and efficacy of rituximab for the treatment of SSc.
In the investigator-initiated trial, patients aged 20-79 years who fulfilled ACR and European Alliance of Associations for Rheumatology classification criteria for systemic sclerosis and who had a modified Rodnan Skin Score (mRSS) of 10 or more and a life expectancy of at least 6 months were randomly assigned to receive infusions with either rituximab 375 mg/m2 or placebo once weekly for 4 weeks. Patients and clinicians were masked to treatment allocation.
The trial included 56 patients (51 women, 5 men). Of all patients enrolled, 27 of 28 who were allocated to receive rituximab and 22 of 28 who were allocated to receive placebo underwent at least one infusion and completed 24 weeks of follow-up.
The absolute change in mRSS at 24 weeks after the start of therapy, the primary endpoint, was –6.30 in the rituximab group, compared with +2.14 in the placebo group, a difference of –8.44 (P < .0001).
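For readers who want to check the arithmetic, the reported between-group difference follows directly from the two arm-level changes above. The brief sketch below simply reproduces that subtraction; the variable names are ours and are illustrative only.

```python
# Illustrative arithmetic only, based on the arm-level changes reported above.
rituximab_change = -6.30  # mean absolute change in mRSS at 24 weeks, rituximab arm
placebo_change = 2.14     # mean absolute change in mRSS at 24 weeks, placebo arm

between_group_difference = rituximab_change - placebo_change
print(round(between_group_difference, 2))  # -8.44, matching the reported difference
```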
In a subgroup analysis, rituximab was superior to placebo regardless of disease duration, disease type (diffuse cutaneous or limited cutaneous SSc), prior receipt of systemic corticosteroids or immunosuppressants, or having C-reactive protein levels less than 0.3 mg/dL or at least 0.3 mg/dL.
However, there was no significant benefit with rituximab for patients with baseline mRSS of at least 20 or for those without ILD at baseline.
There was also evidence that rituximab reduced lung fibrosis. For patients assigned to the active drug, the absolute change in FVC at 24 weeks was +0.09% of the predicted value, compared with –3.56% for patients who received placebo (P = .044).
The researchers also observed radiographic evidence of lung improvement. The absolute change in the percentage of lung field occupied with interstitial shadows was –0.32% in the rituximab arm versus +2.39% in the placebo arm (P = .034). There was no significant between-group difference in the absolute change in diffusing capacity of lung for carbon monoxide, however.
Adverse events that occurred more frequently with rituximab included oral mucositis, diarrhea, and decreased neutrophil and white blood cell counts.
Convincing results
“What I thought the Japanese study did was to give a much more convincing proof of concept than has been out there,” Dr. Spiera said in an interview.
“There have been some preliminary experiences that have been encouraging with rituximab in scleroderma, most of which has been open label,” he said.
He also referred to a retrospective study by EUSTAR, the European Scleroderma Trials and Research group, which indicated that patients who had previously received rituximab seemed to have had better outcomes than patients who had been treated with other therapies.
Dr. Spiera added that, although he was glad to see the data from a randomized, placebo-controlled trial in this population, he was uncomfortable with the idea of leaving patients untreated for 6 months.
“From the standpoint of somebody wanting to know what strategies might be promising, this is great for us, but I would not have designed the trial that way,” he said.
The study results were previously published in the Lancet Rheumatology.
The study was supported by grants from the Japan Agency for Medical Research and Development and Zenyaku Kogyo. Dr. Yoshizaki disclosed no relevant financial relationships. Dr. Spiera has received grant/research support from and has consulted for Roche/Genentech, maker of rituximab, and has received compensation from other companies.
A version of this article first appeared on Medscape.com.
Rituximab effectively reduced skin sclerosis and appeared to have a beneficial effect on interstitial lung disease (ILD) for patients with systemic sclerosis (SSc) in a randomized, clinical trial.
At 24 weeks’ follow-up, there was significant improvement in total skin thickness scores among patients who received four once-weekly rituximab infusions, compared with patients who received placebo infusions. Among patients who received rituximab, there were also small but significant improvements in percentage of forced vital capacity (FVC). Among patients who received placebo, FVC worsened, reported Ayumi Yoshizaki, MD, of the University of Tokyo and colleagues.
“Systemic sclerosis is considered to have high unmet medical needs because of its poor prognosis and the lack of satisfactory and effective treatments,” he said at the virtual annual meeting of the American College of Rheumatology.
“Several clinical studies have suggested that B-cell depletion therapy with rituximab anti-CD20 antibody is effective in treating skin and lung fibrosis of SSc. However, no randomized, placebo-controlled trial has been able to confirm the efficacy of rituximab in SSc,” Dr. Yoshizaki said.
A rheumatologist who is currently conducting an investigator-initiated trial in which patients with SSC are undergoing treatment with rituximab followed by belimumab (Benlysta) said in an interview that he found the data to be “super interesting.”
“There are a lot of reasons to think that B cells might be important in systemic sclerosis, and actually that’s why our group had previously done an investigator-initiated trial with belimumab years ago,” said Robert Spiera, MD, director of the Scleroderma, Vasculitis, and Myositis Center at the Hospital for Special Surgery in New York.
Randomized trial
Dr. Yoshizaki and colleagues conducted the randomized, placebo-controlled DESIRES trial in four hospitals in Japan to evaluate the safety and efficacy of rituximab for the treatment of SSc.
In the investigator-initiated trial, patients aged 20-79 years who fulfilled ACR and European Alliance of Associations for Rheumatology classification criteria for systemic sclerosis and who had a modified Rodnan Skin Score (mRSS) of 10 or more and a life expectancy of at least 6 months were randomly assigned to receive infusions with either rituximab 375 mg/m2 or placebo once weekly for 4 weeks. Patients and clinicians were masked to treatment allocation.
The trial included 56 patients (51 women, 5 men). Of all patients enrolled, 27 of 28 who were allocated to receive rituximab and 22 of 28 who were allocated to receive placebo underwent at least one infusion and completed 24 weeks of follow-up.
The absolute change in mRSS at 24 weeks after the start of therapy, the primary endpoint, was –6.30 in the rituximab group, compared with +2.14 in the placebo group, a difference of –8.44 (P < .0001).
In a subgroup analysis, rituximab was superior to placebo regardless of disease duration, disease type (diffuse cutaneous or limited cutaneous SSc), prior receipt of systemic corticosteroids or immunosuppressants, or having C-reactive protein levels less than 0.3 mg/dL or at least 0.3 mg/dL.
However, there was no significant benefit with rituximab for patients with baseline mRSS of at least 20 or for those without ILD at baseline.
There was also evidence that rituximab reduced lung fibrosis. For patients assigned to the active drug, the absolute change in FVC at 24 weeks was +0.09% of the predicted value, compared with –3.56% for patients who received placebo (P = .044).
The researchers also observed radiographic evidence of lung improvement. The absolute change in the percentage of lung field occupied with interstitial shadows was –0.32% in the rituximab arm versus +2.39% in the placebo arm (P = .034). There was no significant between-group difference in the absolute change in diffusing capacity of lung for carbon monoxide, however.
Adverse events that occurred more frequently with rituximab included oral mucositis, diarrhea, and decreased neutrophil and white blood cell counts.
Convincing results
“What I thought the Japanese study did was to give a much more convincing proof of concept than has been out there,” Dr. Spiera said in an interview.
“There have been some preliminary experiences that have been encouraging with rituximab in scleroderma, most of which has been open label,” he said.
He also referred to a retrospective study by EUSTAR, the European Scleroderma Trials and Research group, which indicated that patients who had previously received rituximab seemed to have had better outcomes than patients who had been treated with other therapies.
Dr. Spiera added that, although he was glad to see the data from a randomized, placebo-controlled trial in this population, he was uncomfortable with the idea of leaving patients untreated for 6 months.
“From the standpoint of somebody wanting to know what strategies might be promising, this is great for us, but I would not have designed the trial that way,” he said.
The study results were previously published in the Lancet Rheumatology.
The study was supported by grants from the Japan Agency for Medical Research and Development and Zenyaku Kogyo. Dr. Yoshizaki disclosed no relevant financial relationships. Dr. Spiera has received grant/research support from and has consulted for Roche/Genentech, maker of rituximab, and has received compensation from other companies.
A version of this article first appeared on Medscape.com.
Rituximab effectively reduced skin sclerosis and appeared to have a beneficial effect on interstitial lung disease (ILD) for patients with systemic sclerosis (SSc) in a randomized, clinical trial.
At 24 weeks’ follow-up, there was significant improvement in total skin thickness scores among patients who received four once-weekly rituximab infusions, compared with patients who received placebo infusions. Among patients who received rituximab, there were also small but significant improvements in percentage of forced vital capacity (FVC). Among patients who received placebo, FVC worsened, reported Ayumi Yoshizaki, MD, of the University of Tokyo and colleagues.
“Systemic sclerosis is considered to have high unmet medical needs because of its poor prognosis and the lack of satisfactory and effective treatments,” he said at the virtual annual meeting of the American College of Rheumatology.
“Several clinical studies have suggested that B-cell depletion therapy with rituximab anti-CD20 antibody is effective in treating skin and lung fibrosis of SSc. However, no randomized, placebo-controlled trial has been able to confirm the efficacy of rituximab in SSc,” Dr. Yoshizaki said.
A rheumatologist who is currently conducting an investigator-initiated trial in which patients with SSC are undergoing treatment with rituximab followed by belimumab (Benlysta) said in an interview that he found the data to be “super interesting.”
“There are a lot of reasons to think that B cells might be important in systemic sclerosis, and actually that’s why our group had previously done an investigator-initiated trial with belimumab years ago,” said Robert Spiera, MD, director of the Scleroderma, Vasculitis, and Myositis Center at the Hospital for Special Surgery in New York.
Randomized trial
Dr. Yoshizaki and colleagues conducted the randomized, placebo-controlled DESIRES trial in four hospitals in Japan to evaluate the safety and efficacy of rituximab for the treatment of SSc.
In the investigator-initiated trial, patients aged 20-79 years who fulfilled ACR and European Alliance of Associations for Rheumatology classification criteria for systemic sclerosis and who had a modified Rodnan Skin Score (mRSS) of 10 or more and a life expectancy of at least 6 months were randomly assigned to receive infusions with either rituximab 375 mg/m2 or placebo once weekly for 4 weeks. Patients and clinicians were masked to treatment allocation.
The trial included 56 patients (51 women, 5 men). Of all patients enrolled, 27 of 28 who were allocated to receive rituximab and 22 of 28 who were allocated to receive placebo underwent at least one infusion and completed 24 weeks of follow-up.
The absolute change in mRSS at 24 weeks after the start of therapy, the primary endpoint, was –6.30 in the rituximab group, compared with +2.14 in the placebo group, a difference of –8.44 (P < .0001).
In a subgroup analysis, rituximab was superior to placebo regardless of disease duration, disease type (diffuse cutaneous or limited cutaneous SSc), prior receipt of systemic corticosteroids or immunosuppressants, or having C-reactive protein levels less than 0.3 mg/dL or at least 0.3 mg/dL.
However, there was no significant benefit with rituximab for patients with baseline mRSS of at least 20 or for those without ILD at baseline.
There was also evidence that rituximab reduced lung fibrosis. For patients assigned to the active drug, the absolute change in FVC at 24 weeks was +0.09% of the predicted value, compared with –3.56% for patients who received placebo (P = .044).
The researchers also observed radiographic evidence of lung improvement. The absolute change in the percentage of lung field occupied with interstitial shadows was –0.32% in the rituximab arm versus +2.39% in the placebo arm (P = .034). There was no significant between-group difference in the absolute change in diffusing capacity of lung for carbon monoxide, however.
Adverse events that occurred more frequently with rituximab included oral mucositis, diarrhea, and decreased neutrophil and white blood cell counts.
Convincing results
“What I thought the Japanese study did was to give a much more convincing proof of concept than has been out there,” Dr. Spiera said in an interview.
“There have been some preliminary experiences that have been encouraging with rituximab in scleroderma, most of which has been open label,” he said.
He also referred to a retrospective study by EUSTAR, the European Scleroderma Trials and Research group, which indicated that patients who had previously received rituximab seemed to have had better outcomes than patients who had been treated with other therapies.
Dr. Spiera added that, although he was glad to see the data from a randomized, placebo-controlled trial in this population, he was uncomfortable with the idea of leaving patients untreated for 6 months.
“From the standpoint of somebody wanting to know what strategies might be promising, this is great for us, but I would not have designed the trial that way,” he said.
The study results were previously published in the Lancet Rheumatology.
The study was supported by grants from the Japan Agency for Medical Research and Development and Zenyaku Kogyo. Dr. Yoshizaki disclosed no relevant financial relationships. Dr. Spiera has received grant/research support from and has consulted for Roche/Genentech, maker of rituximab, and has received compensation from other companies.
A version of this article first appeared on Medscape.com.
FROM ACR 2021
Seborrheic Dermatitis
THE COMPARISON
A Seborrheic dermatitis in a woman with brown-gray greasy scale as well as petaloid papules and plaques that are especially prominent in the nasolabial folds.
B Seborrheic dermatitis in a man with erythema, scale, and mild postinflammatory hypopigmentation that are especially prominent in the nasolabial folds.
C Seborrheic dermatitis in a man with erythema, faint scale, and postinflammatory hypopigmentation that are especially prominent in the nasolabial folds.
D Seborrheic dermatitis in a man with erythema and scale of the eyebrows and glabellar region.
Seborrheic dermatitis (SD) is an inflammatory condition that is thought to be part of a response to Malassezia yeast. The scalp and face are most commonly affected, particularly the nasolabial folds, eyebrows, ears, postauricular areas, and beard area. Men also may have SD on the mid upper chest in association with chest hair. In infants, the scalp and body skin folds often are affected.
Epidemiology
Seborrheic dermatitis affects patients of all ages: infants, adolescents, and adults. It is among the most common dermatologic diagnoses reported in Black patients in the United States.1
Key clinical features in darker skin tones
- In those with darker skin tones, arcuate, polycyclic, or petaloid (flower petal–like) plaques may be present (Figure A). Also, hypopigmented patches and plaques may be prominent (Figures B and C). The classic description includes thin pink patches and plaques with white greasy scale on the face (Figure D).
- The scalp may have diffuse scale or isolated scaly plaques.
Worth noting
- In those with tightly coiled hair, there is a predisposition for dry hair and increased risk for breakage.
- Treatment plans for patients with SD often include frequent hair washing. However, in those with tightly coiled hair, the treatment plan may need to be modified due to hair texture, tendency for dryness, and washing frequency preferences. Washing the scalp at least every 1 to 2 weeks may be a preferred approach for those with tightly coiled hair at increased risk for dryness/breakage vs washing daily.2 In a sample of 201 caregivers of Black girls, Rucker Wright et al3 found that washing the hair more than once per week was not correlated with a lower prevalence of SD.
- If tightly coiled hair is temporarily straightened with heat (eg, blow-dryer, flat iron), adding a liquid-based treatment such as clobetasol solution or fluocinonide solution will cause the hair to revert to its normal curl pattern.
- It is appropriate to ask patients for their vehicle preference for medications.2 For example, if clobetasol is the treatment selected for the patient, the vehicle can reflect patient preference for a liquid, foam, cream, or ointment.
- Some antifungal/antiyeast shampoos may cause further hair dryness and breakage.
- Treatment may be delayed because patients often use various topical pomades and ointments to cover up the scale and help with pruritus.
- Diffuse scale of tinea capitis in school-aged children can be mistaken for SD, which leads to delayed diagnosis and treatment.
- Clinicians should become comfortable with scalp examinations in patients with tightly coiled hair. Patients with chief concerns related to their hair and scalp expect their clinicians to touch these areas. Avoid leaning in to examine the patient without touching the patient’s hair and scalp.2,4
Health disparity highlight
Seborrheic dermatitis is among the most common cutaneous disorders diagnosed in patients with skin of color.1,5 Delay in recognition of SD in those with darker skin tones leads to delayed treatment. Seborrheic dermatitis of the face can cause notable postinflammatory pigmentation alteration. Pigmentation changes in the skin further impact quality of life.
- Alexis AF, Sergay AB, Taylor SC. Common dermatologic disorders in skin of color: a comparative practice survey. Cutis. 2007;80:387-394.
- Grayson C, Heath C. Tips for addressing common conditions affecting pediatric and adolescent patients with skin of color [published online March 2, 2021]. Pediatr Dermatol. 2021. doi:10.1111/pde.14525
- Rucker Wright D, Gathers R, Kapke A, et al. Hair care practices and their association with scalp and hair disorders in African American girls. J Am Acad Dermatol. 2011;64:253-262. doi:10.1016/j.jaad.2010.05.037
- Grayson C, Heath C. An approach to examining tightly coiled hair among patients with hair loss in race-discordant patient-physician interactions. JAMA Dermatol. 2021;157:505-506. doi:10.1001/jamadermatol.2021.0338
- Gaulding JV, Gutierrez D, Bhatia BK, et al. Epidemiology of skin diseases in a diverse patient population. J Drugs Dermatol. 2018;17:1032-1036.
Psoriatic arthritis and axial spondyloarthritis patients succeed with reduced TNF inhibitor dosing
Reducing the dose of tumor necrosis factor inhibitors by approximately one-third did not increase disease activity in adults with psoriatic arthritis (PsA) or axial spondyloarthritis (axSpA) in a stable low–disease activity state, according to findings from two parallel controlled retrospective cohort studies.
Disease activity–guided dose optimization (DAGDO) can reduce drug exposure in patients with PsA or axSpA who have low disease activity, but whether it leads to increased disease activity, compared with full-dose continuation, has not been well studied, Celia A.J. Michielsens, MD, of Sint Maartenskliniek, Nijmegen, the Netherlands, and colleagues wrote.
“DAGDO or discontinuation of bDMARDs [biologic disease-modifying antirheumatic drugs] as a standard of care in adults with stable axSpA is currently discouraged by” the American College of Rheumatology, the researchers said. However, guidelines from the European Alliance of Associations for Rheumatology allow for the slow tapering of bDMARDs in patients with sustained remission.
In a controlled, retrospective cohort study published in Rheumatology, the researchers analyzed data from their outpatient clinic, which initiated a specific TNF inhibitor DAGDO protocol in 2010 for patients with RA, PsA, and axSpA. Disease activity was measured using the Disease Activity Score in 28 joints with C-reactive protein (DAS28-CRP) for patients with PsA and the Bath Ankylosing Spondylitis Disease Activity Index (BASDAI) for patients with axSpA.
The study population included 153 patients with PsA and 171 with axSpA, with a similar mean number of disease activity measurements per patient (6.5 DAS28-CRP measurements and 6.4 BASDAI measurements, respectively). Median follow-up time was several months short of 4 years in each group. Treatment was divided into three periods: continuation of the full TNF inhibitor dose, TNF inhibitor DAGDO, and a period with a stable TNF inhibitor dose after DAGDO.
Overall, no significant differences appeared in mean DAS28-CRP and BASDAI over the course of the study between the period of the full TNF inhibitor dose continuation and both the TNF inhibitor DAGDO period and the stable TNF inhibitor dose period. Among PsA patients, the mean DAS28-CRP was 1.94 for the full-dose period, 2.0 in the TNF inhibitor DAGDO period, and 1.97 in the stable TNF inhibitor dose after DAGDO period. For axSpA patients, the mean BASDAI was 3.44, 3.47, and 3.48, respectively, for the three periods. Older age, longer disease duration, and longer follow-up were significantly associated with higher DAS28-CRP scores in patients with PsA, and older age and female gender were significantly associated with higher BASDAI scores in patients with axSpA.
The mean percentage of daily defined dose (%DDD) for patients with PsA was 108% during the full-dose period, 62% in the TNF inhibitor DAGDO period, and 78% with a stable TNF inhibitor dose after DAGDO, and nearly the same for patients with axSpA at 108%, 62%, and 72%, respectively.
The %DDD represents “a modest degree of tapering,” compared with studies in RA patients, the researchers noted. “Explanations for this difference could be that the full dose-reduction potential was not met due to suboptimal execution of the local protocol, whereas in prospective intervention trials, protocol adherence is likely higher.”
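To see how the "approximately one-third" reduction cited at the top of this article relates to these %DDD figures, the short sketch below works through the relative reduction from the full-dose period to the stable dose after DAGDO. It is illustrative arithmetic only, using the values reported above; the variable names are ours.

```python
# Illustrative arithmetic only, using the %DDD values reported above.
full_dose_pct = 108  # %DDD during full-dose continuation (both groups)
stable_after_dagdo_pct = {"PsA": 78, "axSpA": 72}  # %DDD with stable dose after DAGDO

for group, tapered_pct in stable_after_dagdo_pct.items():
    relative_reduction = (full_dose_pct - tapered_pct) / full_dose_pct
    print(f"{group}: {relative_reduction:.0%} lower than the full dose")
# PsA: 28% lower than the full dose
# axSpA: 33% lower than the full dose
```

This works out to roughly a 28%-33% relative reduction, consistent with the approximately one-third reduction described above.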
The study findings were limited by several factors including the open-label design and potential for nocebo effects, possible incorrect attribution, and information bias, as well as the use of DAS28-CRP and BASDAI rather than more modern measurement tools, the researchers noted.
However, the results were strengthened by the large sample size and real-world clinical setting, frequent assessment of disease activity, long-term follow-up, and the performance of DAGDO by rheumatologists familiar with the measuring tools, they said. The results suggest that DAGDO is safe and effective for patients with low disease activity in either condition, but randomized, prospective studies can provide more definitive evidence.
The study received no outside funding. One author disclosed relationships with multiple pharmaceutical companies.
FROM RHEUMATOLOGY