Barriers to Implementation of Telehealth Pre-anesthesia Evaluation Visits in the Department of Veterans Affairs
Days or weeks before a scheduled surgical or invasive procedure involving anesthesia, evaluations are conducted to assess a patient’s condition and risk, optimize their status, and prepare them for their procedure. A comprehensive pre-anesthesia evaluation visit includes a history of present illness, the evaluation of comorbidities and medication use, the assessment of health habits such as alcohol and tobacco use, functional capacity and nutritional assessments, and the identification of social support deficiencies that may influence recovery. It also includes a focused physical examination and laboratory and other ancillary testing as needed and may include optimization interventions such as anemia management or prehabilitation. Conducting pre-anesthesia evaluations before surgery has been shown to reduce delays and cancellations, unnecessary preprocedure testing, hospital length of stay, and in-hospital mortality.1-4
The pre-anesthesia evaluation is usually conducted in person, although other modalities have been in use for several years and have accelerated since the advent of the COVID-19 pandemic. Specifically, audio-only telephone visits are used in many settings to conduct abbreviated forms of a pre-anesthesia evaluation, typically for less-invasive procedures. When patients are evaluated over the telephone, the physical examination and testing are deferred until the day of the procedure. Another modality is the use of synchronous video telehealth. Emerging evidence for the use of video-based care in anesthesiology provides encouraging results. Several institutions have proven the technological feasibility of performing preoperative evaluations via video.5,6 Compared with in-person evaluations, these visits seem to have similar surgery cancellation rates, improved patient satisfaction, and reduced wait times and costs.7-9
As part of a quality improvement project, we studied the use of telehealth for pre-anesthesia evaluations within the US Department of Veterans Affairs (VA). An internal review found overall low utilization of these modalities before the COVID-19 pandemic, followed by an acceleration toward telehealth during the pandemic, with the largest uptake in telephone visits. Given the increasing adoption of telehealth for pre-anesthesia evaluations and the marked preference for telephone over video modalities among VA practitioners during the COVID-19 pandemic, we sought to understand the barriers and facilitators to the adoption of telephone- and video-based pre-anesthesia evaluation visits within the VA.
Methods
Our objective was to assess health care practitioners’ (HCPs) preferences regarding pre-anesthesia evaluation modalities (in-person, telephone, or video), and the perceived advantages and barriers to adoption for each modality. We followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guideline and the Checklist for statistical Assessment of Medical Papers (CHAMP) statement.10,11 The survey was deemed a quality improvement activity that was exempt from institutional review board oversight by the VA National Anesthesia Program Office and the VA Office of Connected Care.
A survey was distributed to all VA anesthesiology service chiefs via email between April 27, 2022, and May 3, 2022. Three emails were sent to each participant (initial invitation and 2 reminders). The respondents were asked to identify themselves by facility and role and to indicate whether their anesthesiology service performed any pre-anesthesia evaluations, including any telephone- or video-based evaluations, and whether their service had a dedicated pre-anesthesia evaluation clinic.
A second set of questions referred to the use of telephone- and video-based preprocedure evaluations. The questions used branch logic and depended on the respondent’s answers concerning their use of telephone- and video-based evaluations. Questions included statements about perceived barriers to the adoption of these pre-anesthesia evaluation modalities. Each item was rated on a 5-point Likert scale (completely disagree [1] to completely agree [5]). A third section measured the acceptability and feasibility of video using the validated Acceptability of Intervention Measure (AIM) and Feasibility of Intervention Measure (FIM) questionnaires.12 These instruments are 4-item measures of implementation outcomes that are often considered indicators of implementation success.13 Acceptability is the perception among implementation stakeholders that a given treatment, service, practice, or innovation is agreeable, palatable, or satisfactory. Feasibility is defined as the extent to which a new treatment or innovation can be successfully used or carried out within a given agency or setting.13 The criterion for acceptability is personal, meaning that different HCPs may have differing needs, preferences, and expectations regarding the same intervention. The criterion for feasibility is practical: an intervention may be considered feasible if the required tasks can be performed easily or conveniently. Finally, 2 open-ended questions allowed respondents to identify the most important factor that allowed the implementation of telehealth for pre-anesthesia evaluations in their service and to provide comments about the use of telehealth for pre-anesthesia evaluations at the VA. All questions were developed by the authors except for the 2 implementation measure instruments.
The survey was administered using an electronic survey platform (Qualtrics, version April 2022) and sent by email alongside a brief introductory video. Participation was voluntary and anonymous, as no personal information was collected. Responses were attributed to each facility using the self-declared affiliation; when an affiliation was not provided, we deduced it using the latitude/longitude of the respondent, a feature included in the survey software. No incentives were provided. Data were stored and maintained on a secure VA server. All completed surveys were included, including cases where a facility provided > 1 complete response. For facilities whose multiple responses were discordant, we clarified the discrepancy with the facility service chief. Incomplete responses were excluded from the analysis.
Statistics
For this analysis, the 2 positive sentiment responses (agree and completely agree) and the 2 negative sentiment responses (disagree and completely disagree) on the Likert scale were collapsed into single categories (good and poor, respectively). The neither agree nor disagree responses were coded as neutral. Our analysis began with a visual exploration of all variables to evaluate the frequency, percentage, and near-zero variance for categorical variables.14 Near-zero variance occurs when a categorical variable has a low frequency of unique values relative to the sample size (ie, the variable is almost constant); we addressed it by combining different variable categorizations. We handled missing values through imputation algorithms followed by sensitivity analyses to verify whether our results were stable with and without imputation. We performed comparisons for the exploratory analysis using P values from one-way analysis of variance tests for numeric variables and χ2 tests for categorical variables. We considered P values < .05 to be statistically significant. We also used correlation matrices and plots as exploratory tools to better understand correlations among items. We used Pearson, polychoric, and polyserial correlation tests as appropriate for numeric, ordinal, and logical items.
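The collapsing step described above can be sketched as follows. This is an illustrative example only (not the authors' code), with fabricated ratings; the analysis itself was performed in R.

```python
# Illustrative sketch of collapsing 5-point Likert responses into the
# three categories used in the analysis. Ratings below are fabricated.
from collections import Counter

# 1-2 = negative sentiment -> "poor", 3 = "neutral", 4-5 = positive -> "good"
COLLAPSE = {1: "poor", 2: "poor", 3: "neutral", 4: "good", 5: "good"}

def collapse_likert(responses):
    """Map raw 1-5 Likert codes to the collapsed sentiment categories."""
    return [COLLAPSE[r] for r in responses]

# Fabricated example ratings for one survey item
telephone_ratings = [5, 4, 4, 3, 2, 5, 4, 1, 4, 5]
counts = Counter(collapse_likert(telephone_ratings))
# counts now holds the frequency of each collapsed category
```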
Our modeling strategy involved a series of generalized linear models (GLMs) with a Gaussian family (ie, multiple linear regression models) to assess the associations between (1) facilities’ preferences regarding pre-anesthesia evaluation modalities, (2) perceived advantages of each modality, and (3) barriers to the adoption of telehealth and the ability to perform different pre-anesthesia evaluation-related tasks. We used backward deletion to reach the most parsimonious model, based on a series of likelihood-ratio tests comparing nested models. Results are reported as predicted means with 95% confidence intervals and were interpreted as significant when any 2 predicted means did not overlap between estimates, along with P for trend < .001. We performed all analyses using the R language.15
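The nested-model comparison underlying backward deletion can be illustrated with a minimal sketch, assuming the simplest case of one candidate predictor: for Gaussian linear models, the likelihood-ratio statistic reduces to n·ln(RSS_reduced/RSS_full). All data and function names below are fabricated for illustration; this is not the authors' R code.

```python
# Hypothetical sketch: likelihood-ratio comparison of two nested Gaussian
# linear models (intercept-only vs intercept + one predictor), as used in
# backward deletion. Data are fabricated.
import math

def ols_rss(x, y, with_slope=True):
    """Residual sum of squares for y ~ 1 (+ x), fit by least squares."""
    n = len(y)
    ybar = sum(y) / n
    if not with_slope:
        return sum((yi - ybar) ** 2 for yi in y)
    xbar = sum(x) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b = sxy / sxx          # slope estimate
    a = ybar - b * xbar    # intercept estimate
    return sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))

def lr_statistic(x, y):
    """For Gaussian models, -2*log(LR) = n * ln(RSS_reduced / RSS_full)."""
    n = len(y)
    rss0 = ols_rss(x, y, with_slope=False)  # reduced (intercept-only) model
    rss1 = ols_rss(x, y, with_slope=True)   # full model
    return n * math.log(rss0 / rss1)

# Fabricated data with a clear linear trend: the predictor should be kept
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
stat = lr_statistic(x, y)  # a large value favors retaining the predictor
```

In practice the statistic is referred to a χ2 distribution with degrees of freedom equal to the number of deleted parameters; R's `anova()` on nested `glm` fits performs the equivalent comparison.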
Results
Of 109 surveyed facilities, 50 (46%) responded. We received 67 responses, of which 55 were included in the analysis; 12 were excluded as either incomplete or test responses. Three facilities had > 1 complete response (2 facilities had 2 responses and 1 facility had 4 responses), and these were all included in the analysis.
Thirty-six locations were complex inpatient facilities, and 32 (89%) had pre-anesthesia evaluation clinics (Table 1).
The ability to obtain a history of present illness was rated good/very good by 34 respondents (92%) for telephone and 25 respondents (86%) for video. Assessing comorbidities and health habits was rated good/very good via telephone by 32 respondents (89%) and 31 respondents (86%), respectively, and via video by 24 respondents (83%) and 23 respondents (79%), respectively (Figure 1).
To compare differences between the 2 remote pre-anesthesia evaluation modalities, we created GLMs evaluating the association between each modality and the perceived ability to perform the tasks. For the GLMs, we transformed the categories into numerical values (ie, 1, poor; 2, neutral; 3, good). Compared with telephone, video was rated more favorably regarding the assessment of nutritional status (mean, 2.1; 95% CI, 1.8-2.3 vs mean, 2.4; 95% CI, 2.2-2.7; P = .04) (eAppendix 1, available at doi:10.12788/fp.0387). There were no other significant differences in ratings between the 2 remote pre-anesthesia evaluation modalities.
The most significant barriers (cited as significant or very significant in the survey) included the inability to perform a physical examination, which was noted by 13 respondents (72%) and 15 respondents (60%) for telephone and video, respectively. The inability to obtain vital signs was rated as a significant barrier for telephone by 12 respondents (67%) and for video by 15 respondents (60%) (Figure 2).
The average FIM score was 3.7, with the highest score among respondents who used both telephone and video (Table 2). The average AIM score was 3.4, with the highest score among respondents who used both telehealth modalities. The internal consistency of the implementation measures was excellent (Cronbach’s α, 0.95 and 0.975 for FIM and AIM, respectively).
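For readers unfamiliar with the internal-consistency statistic reported above, Cronbach's alpha for a k-item instrument such as AIM or FIM can be computed as α = k/(k−1) × (1 − Σ item variances / variance of total scores). The sketch below uses fabricated item scores; it is an illustration, not the study data.

```python
# Hypothetical sketch: Cronbach's alpha for a 4-item instrument
# (eg, AIM or FIM), computed from per-respondent item scores.
from statistics import pvariance

def cronbach_alpha(items):
    """items: list of k lists, each one item's scores across respondents."""
    k = len(items)
    item_var = sum(pvariance(it) for it in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Four fabricated items rated 1-5 by six respondents
items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 5],
    [5, 5, 3, 4, 1, 4],
    [4, 5, 2, 4, 2, 5],
]
alpha = cronbach_alpha(items)  # values near 1 indicate high consistency
```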
Discussion
We surveyed 109 anesthesiology services across the VA regarding barriers to implementing telephone- and video-based pre-anesthesia evaluation visits. We found that 12 (23%) of the 50 anesthesiology services responding to this survey still conduct the totality of their pre-anesthesia evaluations in person. This represents an opportunity to further disseminate the appropriate use of telehealth and potentially reduce travel time, costs, and low-value testing, as it is well established that remote pre-anesthesia evaluations for low-risk procedures are safe and effective.6
We also found no difference between telephone and video regarding users’ perceived ability to perform any of the basic pre-anesthesia evaluation tasks except for assessing patients’ nutritional status, which was rated as easier using video than telephone. According to those not using telephone and/or video, the biggest barriers to implementation of telehealth visits were the inability to obtain vital signs and to perform a physical examination. This finding was unexpected, as facilities that conduct remote evaluations typically defer these tasks to the day of surgery, a practice that has been well established and shown to be safe and efficient. Respondents also identified patient-level factors (eg, patient preference, lack of telephone or computer) as significant barriers. Finally, feasibility ratings were higher than acceptability ratings with regard to the implementation of telehealth.
In 2004, the first use of telehealth for pre-anesthesia evaluations was reported by Wong and colleagues.16 Since then, several case series and a literature review have documented the efficacy, safety, and patient and HCP satisfaction with the use of telehealth for pre-anesthesia evaluations. A study by Mullen-Fortino and colleagues showed reduced visit times when telehealth was used for pre-anesthesia evaluation.8 Another study at VA hospitals showed that 88% of veterans reported that telemedicine saved them time and money.17 A report of 35 patients in rural Australia reported 98% satisfaction with the video quality of the visit, 95% perceived efficacy, and 87% preference for telehealth compared with driving to be seen in person.18 These reports conflict with the perceptions of our survey respondents, who identified patient preference as an important barrier to adoption of telehealth. Given these findings, research is needed on veterans’ perceptions of telehealth modalities for pre-anesthesia evaluations; if their perceptions are similarly favorable, it will be important to communicate this information to HCPs and leadership, which may help increase subsequent telehealth adoption.
Despite the reported safety, efficacy, and high satisfaction with video visits among anesthesiology teams conducting pre-anesthesia evaluations, their use remains low at the VA. We found that most facilities in the VA system chose telephone platforms during the COVID-19 pandemic. One possibility is that the adoption of video modalities among pre-anesthesia evaluation clinics in the VA system is resource intensive or difficult from the HCP’s perspective. Combined with the lack of perceived advantages over telephone that we found in our survey, most practitioners resort to the technologically less demanding and more familiar telephone platform. The results from the FIM and AIM support this: while both telephone and video have high feasibility scores, acceptability scores are lower for video, even among those currently using this technology. Our findings do not rule out the utility of video-based care in perioperative medicine. Rather than a yes/no proposition, future studies need to establish the precise indications for video in pre-anesthesia evaluations; that is, situations where video visits offer an advantage over telephone. For example, video could be used to deliver preoperative optimization therapies, such as supervised exercise or mental health interventions, or to guide the achievement of certain milestones before surgery in patients with chronic conditions, such as target glucose values or the treatment of anemia. Future studies should explore the perceived benefits of video over telephone among centers offering these more advanced optimization interventions.
Limitations
We received responses from a subset of VA anesthesiology services; therefore, they may not be representative of the entire VA system. Facilities designated by the VA as inpatient complex were overrepresented (72% of our sample vs 50% of the total facilities nationally), and ambulatory centers (those designated by the VA as ambulatory procedural centers with basic or advanced capabilities) were underrepresented (2% of our sample vs 22% nationally). Despite this, the response rate was high, and no geographic area appeared to be underrepresented. In addition, we surveyed pre-anesthesia evaluation facilities led by anesthesiologists, and the results may not be representative of the preferences of HCPs working in nonanesthesiology-led pre-anesthesia evaluation clinics. Finally, just 11 facilities used both telephone and video; therefore, a true direct comparison between these 2 platforms was limited. The VA serves a unique patient population, and the findings may not be completely applicable to the non-VA population.
Conclusions
We found no significant perceived advantages of video over telephone in the ability to conduct routine pre-anesthesia evaluations among a sample of anesthesiology HCPs in the VA except for the perceived ability to assess nutritional status. HCPs with no telehealth experience cited the inability to perform a physical examination and obtain vital signs as the most significant barriers to implementation. Respondents not using telephone cited concerns about safety. Video visits in this clinical setting had additional perceived barriers to implementation, such as lack of information technology and staff support and patient-level barriers. Video had lower acceptability by HCPs. Given findings that pre-anesthesia evaluations can be conducted effectively via telehealth and have high levels of patient satisfaction, future work should focus on increasing uptake of these remote modalities. Research on the most appropriate uses of video visits within perioperative care is also needed.
1. Starsnic MA, Guarnieri DM, Norris MC. Efficacy and financial benefit of an anesthesiologist-directed university preadmission evaluation center. J Clin Anesth. 1997;9(4):299-305. doi:10.1016/s0952-8180(97)00007-x
2. Kristoffersen EW, Opsal A, Tveit TO, Berg RC, Fossum M. Effectiveness of pre-anaesthetic assessment clinic: a systematic review of randomised and non-randomised prospective controlled studies. BMJ Open. 2022;12(5):e054206. doi:10.1136/bmjopen-2021-054206
3. Ferschl MB, Tung A, Sweitzer B, Huo D, Glick DB. Preoperative clinic visits reduce operating room cancellations and delays. Anesthesiology. 2005;103(4):855-859. doi:10.1097/00000542-200510000-00025
4. Blitz JD, Kendale SM, Jain SK, Cuff GE, Kim JT, Rosenberg AD. Preoperative evaluation clinic visit is associated with decreased risk of in-hospital postoperative mortality. Anesthesiology. 2016;125(2):280-294. doi:10.1097/ALN.0000000000001193
5. Dilisio RP, Dilisio AJ, Weiner MM. Preoperative virtual screening examination of the airway. J Clin Anesth. 2014;26(4):315-317. doi:10.1016/j.jclinane.2013.12.010
6. Kamdar NV, Huverserian A, Jalilian L, et al. Development, implementation, and evaluation of a telemedicine preoperative evaluation initiative at a major academic medical center. Anesth Analg. 2020;131(6):1647-1656. doi:10.1213/ANE.0000000000005208
7. Azizad O, Joshi GP. Telemedicine for preanesthesia evaluation: review of current literature and recommendations for future implementation. Curr Opin Anaesthesiol. 2021;34(6):672-677. doi:10.1097/ACO.0000000000001064
8. Mullen-Fortino M, Rising KL, Duckworth J, Gwynn V, Sites FD, Hollander JE. Presurgical assessment using telemedicine technology: impact on efficiency, effectiveness, and patient experience of care. Telemed J E Health. 2019;25(2):137-142. doi:10.1089/tmj.2017.0133
9. Zhang K, Rashid-Kolvear M, Waseem R, Englesakis M, Chung F. Virtual preoperative assessment in surgical patients: a systematic review and meta-analysis. J Clin Anesth. 2021;75:110540. doi:10.1016/j.jclinane.2021.110540
10. Mansournia MA, Collins GS, Nielsen RO, et al. A CHecklist for statistical Assessment of Medical Papers (the CHAMP statement): explanation and elaboration. Br J Sports Med. 2021;55(18):1009-1017. doi:10.1136/bjsports-2020-103652
11. von Elm E, Altman DG, Egger M, et al. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement: guidelines for reporting observational studies. Int J Surg. 2014;12(12):1495-1499. doi:10.1016/j.ijsu.2014.07.013
12. Weiner BJ, Lewis CC, Stanick C, et al. Psychometric assessment of three newly developed implementation outcome measures. Implement Sci. 2017;12(1):108. doi:10.1186/s13012-017-0635-3
13. Proctor E, Silmere H, Raghavan R, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38(2):65-76. doi:10.1007/s10488-010-0319-7
14. Kuhn M, Johnson K. Applied Predictive Modeling. Springer; 2013.
15. R Core Team. R: A language and environment for statistical computing. 2018. Accessed December 16, 2022. https://www.R-project.org
16. Wong DT, Kamming D, Salenieks ME, Go K, Kohm C, Chung F. Preadmission anesthesia consultation using telemedicine technology: a pilot study. Anesthesiology. 2004;100(6):1605-1607. doi:10.1097/00000542-200406000-00038
17. Zetterman CV, Sweitzer BJ, Webb B, Barak-Bernhagen MA, Boedeker BH. Validation of a virtual preoperative evaluation clinic: a pilot study. Stud Health Technol Inform. 2011;163:737-739. doi:10.3233/978-1-60750-706-2-737
18. Roberts S, Spain B, Hicks C, London J, Tay S. Telemedicine in the Northern Territory: an assessment of patient perceptions in the preoperative anaesthetic clinic. Aust J Rural Health. 2015;23(3):136-141. doi:10.1111/ajr.12140
Despite the reported safety, efficacy, and high satisfaction of video visits among anesthesiology teams conducting pre-anesthesia evaluations, its use remains low at VA. We have found that most facilities in the VA system chose telephone platforms during the COVID-19 pandemic. One possibility is that the adoption of video modalities among pre-anesthesia evaluation clinics in the VA system is resource intensive or difficult from the HCP’s perspective. When combined with the lack of perceived advantages over telephone as we found in our survey, most practitioners resort to the technologically less demanding and more familiar telephone platform. The results from FIM and AIM support this. While both telephone and video have high feasibility scores, acceptability scores are lower for video, even among those currently using this technology. Our findings do not rule out the utility of video-based care in perioperative medicine. Rather than a yes/no proposition, future studies need to establish the precise indications for video for pre-anesthesia evaluations; that is, situations where video visits offer an advantage over telephone. For example, video could be used to deliver preoperative optimization therapies, such as supervised exercise or mental health interventions or to guide the achievement of certain milestones before surgery in patients with chronic conditions, such as target glucose values or the treatment of anemia. Future studies should explore the perceived benefits of video over telephone among centers offering these more advanced optimization interventions.
Limitations
We received responses from a subset of VA anesthesiology services; therefore, they may not be representative of the entire VA system. Facilities designated by the VA as inpatient complex were overrepresented (72% of our sample vs 50% of the total facilities nationally), and ambulatory centers (those designed by the VA as ambulatory procedural center with basic or advanced capabilities) were underrepresented (2% of our sample vs 22% nationally). Despite this, the response rate was high, and no geographic area appeared to be underrepresented. In addition, we surveyed pre-anesthesia evaluation facilities led by anesthesiologists, and the results may not be representative of the preferences of HCPs working in nonanesthesiology led pre-anesthesia evaluation clinics. Finally, just 11 facilities used both telephone and video; therefore, a true direct comparison between these 2 platforms was limited. The VA serves a unique patient population, and the findings may not be completely applicable to the non-VA population.
Conclusions
We found no significant perceived advantages of video over telephone in the ability to conduct routine pre-anesthesia evaluations among a sample of anesthesiology HCPs in the VA except for the perceived ability to assess nutritional status. HCPs with no telehealth experience cited the inability to perform a physical examination and obtain vital signs as the most significant barriers to implementation. Respondents not using telephone cited concerns about safety. Video visits in this clinical setting had additional perceived barriers to implementation, such as lack of information technology and staff support and patient-level barriers. Video had lower acceptability by HCPs. Given findings that pre-anesthesia evaluations can be conducted effectively via telehealth and have high levels of patient satisfaction, future work should focus on increasing uptake of these remote modalities. Additionally, research on the most appropriate uses of video visits within perioperative care is also needed.
Methods
Our objective was to assess health care practitioners’ (HCPs) preferences regarding pre-anesthesia evaluation modalities (in-person, telephone, or video), and the perceived advantages and barriers to adoption for each modality. We followed the Strengthening the Reporting of Observational studies in Epidemiology (STROBE) guideline and Checklist for statistical Assessment of Medical Papers (CHAMP) statement.10,11 The survey was deemed a quality improvement activity that was exempt from institutional review board oversight by the VA National Anesthesia Program Office and the VA Office of Connected Care.
A survey was distributed to all VA anesthesiology service chiefs via email between April 27, 2022, and May 3, 2022. Three emails were sent to each participant (initial invitation and 2 reminders). The respondents were asked to identify themselves by facility and role, to indicate whether their anesthesiology service performed any pre-anesthesia evaluations, including any telephone- or video-based evaluations, and to indicate whether their service had a dedicated pre-anesthesia evaluation clinic.
A second set of questions referred to the use of telephone- and video-based preprocedure evaluations. The questions used branch logic based on the respondent’s answers concerning their use of telephone- and video-based evaluations. Questions included statements about perceived barriers to the adoption of these pre-anesthesia evaluation modalities, each rated on a 5-point Likert scale (completely disagree [1] to completely agree [5]). A third section measured the acceptability and feasibility of video using the validated Acceptability of Intervention Measure (AIM) and Feasibility of Intervention Measure (FIM) questionnaires.12 These instruments are 4-item measures of implementation outcomes that are often considered indicators of implementation success.13 Acceptability is the perception among implementation stakeholders that a given treatment, service, practice, or innovation is agreeable, palatable, or satisfactory. Feasibility is defined as the extent to which a new treatment or innovation can be successfully used or carried out within a given agency or setting.13 The criterion for acceptability is personal: different HCPs may have differing needs, preferences, and expectations regarding the same intervention. The criterion for feasibility is practical: an intervention may be considered feasible if the required tasks can be performed easily or conveniently. Finally, 2 open-ended questions asked respondents to identify the most important factor that enabled the implementation of telehealth for pre-anesthesia evaluations in their service and to provide comments about the use of telehealth for pre-anesthesia evaluations at the VA. All questions were developed by the authors except for the 2 implementation measure instruments.
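As a rough illustration of how 4-item instruments such as the AIM and FIM are typically scored, the items can be averaged into a single score per respondent. This is a minimal sketch under an assumed mean-based scoring convention, not the authors' scoring code; the function name is hypothetical.

```python
from statistics import mean

def score_measure(item_responses):
    """Average a respondent's four item ratings (1 = completely
    disagree ... 5 = completely agree) into one instrument score.
    Assumes the common mean-based scoring convention."""
    if len(item_responses) != 4:
        raise ValueError("AIM and FIM each contain exactly 4 items")
    return mean(item_responses)

# Example: one respondent's hypothetical AIM ratings
aim_score = score_measure([4, 4, 3, 5])  # averages to 4
print(aim_score)
```

Averaging (rather than summing) keeps scores on the original 1-to-5 scale, which makes values such as the 3.7 and 3.4 reported below directly interpretable against the response anchors.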
The survey was administered using an electronic survey platform (Qualtrics, version April 2022) and sent by email alongside a brief introductory video. Participation was voluntary and anonymous, as no personal information was collected. Responses were attributed to each facility, using the self-declared affiliation. When an affiliation was not provided, we deduced it using the latitude/longitude of the respondent, a feature included in the survey software. No incentives were provided. Data were stored and maintained in a secure VA server. All completed surveys were included. Some facilities had > 1 complete response, and all were included. Facilities that provided > 1 response and where responses were discordant, we clarified with the facility service chief. Incomplete responses were excluded from the analysis.
Statistics
For this analysis, the 2 positive sentiment responses (agree and completely agree) and the 2 negative sentiment responses (disagree and completely disagree) on the Likert scale were collapsed into single categories (good and poor, respectively). The neither agree nor disagree responses were coded as neutral. Our analysis began with a visual exploration of all variables to evaluate the frequency, percentage, and near-zero variance for categorical variables.14 Near-zero variance occurs when a categorical variable has a low frequency of unique values relative to the sample size (ie, the variable is almost constant); we addressed it by combining variable categories. We handled missing values through imputation algorithms, followed by sensitivity analyses to verify that our results were stable with and without imputation. For the exploratory analysis, we performed comparisons using one-way analysis of variance tests for numeric variables and χ2 tests for categorical variables. We considered P values < .05 statistically significant. We also used correlation matrices and plots as exploratory tools to better understand the correlations among items, applying Pearson, polychoric, and polyserial correlation tests as appropriate for numeric, ordinal, and logical items.
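The Likert collapsing and near-zero variance screen described above can be sketched as follows. The mapping, helper names, and cutoffs (the latter borrowed from the convention popularized by Kuhn and Johnson, reference 14) are illustrative assumptions, not the authors' code.

```python
from collections import Counter

# Assumed mapping from 5-point Likert responses to analysis categories
COLLAPSE = {
    "completely disagree": "poor",
    "disagree": "poor",
    "neither agree nor disagree": "neutral",
    "agree": "good",
    "completely agree": "good",
}

def near_zero_variance(values, freq_ratio_cutoff=19.0, unique_cutoff=0.10):
    """Flag a categorical variable that is almost constant: the ratio of
    the most to second-most frequent value is large AND the proportion
    of unique values over the sample size is small (hypothetical cutoffs)."""
    counts = Counter(values).most_common()
    if len(counts) < 2:
        return True  # a constant variable carries no information
    freq_ratio = counts[0][1] / counts[1][1]
    unique_pct = len(counts) / len(values)
    return freq_ratio > freq_ratio_cutoff and unique_pct < unique_cutoff

responses = ["agree", "completely agree", "disagree",
             "agree", "neither agree nor disagree"]
collapsed = [COLLAPSE[r] for r in responses]
print(collapsed)  # ['good', 'good', 'poor', 'good', 'neutral']
```

A variable flagged as near-zero variance would then have categories combined (for example, merging neutral into an adjacent category) before modeling, as the text describes.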
Our modeling strategy involved a series of generalized linear models (GLMs) with a Gaussian family (ie, multiple linear regression models) to assess the association between (1) facilities’ preferences regarding pre-anesthesia evaluation modalities; (2) advantages between modalities; and (3) barriers to the adoption of telehealth and the ability to perform different pre-anesthesia evaluation-related tasks. We used backward deletion to reach the most parsimonious model, based on a series of likelihood-ratio tests comparing nested models. Results are reported as predicted means with 95% confidence intervals and were interpreted as significant when the intervals of any 2 predicted means did not overlap and the P for trend was < .001. We performed all analyses using the R language.15
Results
Of 109 surveyed facilities, 50 (46%) responded. We received 67 responses; 12 were excluded as incomplete or test responses, leaving 55 for analysis. Three facilities had > 1 complete response (2 facilities had 2 responses, and 1 facility had 4 responses), and all of these were included in the analysis.
Thirty-six locations were complex inpatient facilities, and 32 (89%) had pre-anesthesia evaluation clinics (Table 1).
The ability to obtain a history of present illness was rated good/very good by 34 respondents (92%) for telephone and 25 respondents (86%) for video. Assessing comorbidities and health habits was rated good/very good via telephone by 32 respondents (89%) and 31 respondents (86%), respectively, and via video by 24 respondents (83%) and 23 respondents (79%), respectively (Figure 1).
To compare differences between the 2 remote pre-anesthesia evaluation modalities, we created GLMs evaluating the association between each modality and the perceived ability to perform the tasks. For the GLMs, we transformed the category values into numeric scores (ie, 1, poor; 2, neutral; 3, good). Compared with telephone, video was rated more favorably regarding the assessment of nutritional status (mean, 2.1; 95% CI, 1.8-2.3 vs mean, 2.4; 95% CI, 2.2-2.7; P = .04) (eAppendix 1, available at doi:10.12788/fp.0387). No other significant differences in ratings existed between the 2 remote pre-anesthesia evaluation modalities.
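With a single binary predictor (telephone vs video), a Gaussian GLM's predicted means reduce to group means, so the recoding and comparison just described can be illustrated with group means and normal-approximation 95% CIs. The ratings below are hypothetical, and this is a simplified sketch, not the authors' model code.

```python
from math import sqrt
from statistics import mean, stdev

# Numeric recoding used in the analysis: 1 = poor, 2 = neutral, 3 = good
CODE = {"poor": 1, "neutral": 2, "good": 3}

def mean_ci(ratings, z=1.96):
    """Group mean with a normal-approximation 95% CI.
    With one binary predictor, this matches a Gaussian GLM's
    predicted mean for that group."""
    xs = [CODE[r] for r in ratings]
    m = mean(xs)
    half = z * stdev(xs) / sqrt(len(xs))
    return m, m - half, m + half

# Hypothetical ratings of nutritional-status assessment by modality
telephone = ["neutral", "good", "poor", "neutral", "good", "neutral"]
video = ["good", "good", "neutral", "good", "good", "neutral"]
print(mean_ci(telephone))
print(mean_ci(video))
```

Non-overlapping intervals between the two groups would correspond to the significance criterion described in the Methods; with real data the full GLM additionally supports covariates and likelihood-ratio model comparison.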
The most significant barriers (those cited as significant or very significant in the survey) included the inability to perform a physical examination, which was noted by 13 respondents (72%) and 15 respondents (60%) for telephone and video, respectively. The inability to obtain vital signs was rated as a significant barrier for telephone by 12 respondents (67%) and for video by 15 respondents (60%) (Figure 2).
The average FIM score was 3.7, with the highest score among respondents who used both telephone and video (Table 2). The average AIM score was 3.4, again with the highest score among respondents who used both telehealth modalities. The internal consistency of the implementation measures was excellent (Cronbach’s α = 0.95 and 0.975 for the FIM and AIM, respectively).
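Cronbach's α for a multi-item measure such as the FIM or AIM can be computed from the item variances and the variance of respondents' total scores. The ratings below are hypothetical, and this sketch is illustrative rather than the authors' code.

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(totals)).
    item_scores is a list of per-item lists of ratings, all items
    answered by the same respondents in the same order."""
    k = len(item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]  # per-respondent sums
    item_var = sum(variance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical ratings for a 4-item measure from 5 respondents
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 3, 4, 1],
    [4, 5, 2, 4, 2],
]
print(round(cronbach_alpha(items), 2))  # 0.95
```

Values near the 0.95 and 0.975 reported here indicate that the four items of each instrument vary together almost perfectly across respondents.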
Discussion
We surveyed 109 anesthesiology services across the VA regarding barriers to implementing telephone- and video-based pre-anesthesia evaluation visits. We found that 12 (23%) of the 50 anesthesiology services responding to this survey still conduct the totality of their pre-anesthesia evaluations in person. This represents an opportunity to further disseminate the appropriate use of telehealth and potentially reduce travel time, costs, and low-value testing, as it is well established that remote pre-anesthesia evaluations for low-risk procedures are safe and effective.6
We also found no difference between telephone and video regarding users’ perceived ability to perform any of the basic pre-anesthesia evaluation tasks except for assessing patients’ nutritional status, which was rated as easier using video than telephone. According to those not using telephone and/or video, the biggest barriers to implementation of telehealth visits were the inability to obtain vital signs and to perform a physical examination. This finding was unexpected, as facilities that conduct remote evaluations typically defer these tasks to the day of surgery, a practice that has been well established and shown to be safe and efficient. Respondents also identified patient-level factors (eg, patient preference, lack of telephone or computer) as significant barriers. Finally, feasibility ratings were higher than acceptability ratings with regard to the implementation of telehealth.
In 2004, the first use of telehealth for pre-anesthesia evaluations was reported by Wong and colleagues.16 Since then, several case series and a literature review have documented the efficacy, safety, and patient and HCP satisfaction with the use of telehealth for pre-anesthesia evaluations. A study by Mullen-Fortino and colleagues showed reduced visit times when telehealth was used for pre-anesthesia evaluation.8 Another study at VA hospitals showed that 88% of veterans reported that telemedicine saved them time and money.17 A report of 35 patients in rural Australia reported 98% satisfaction with the video quality of the visit, 95% perceived efficacy, and 87% preference for telehealth compared with driving to be seen in person.18 These reports conflict with the perceptions of the respondents of our survey, who identified patient preference as an important barrier to adoption of telehealth. Given these findings, research is needed on veterans’ perceptions of the use of telehealth modalities for pre-anesthesia evaluations; if their perceptions are similarly favorable, it will be important to communicate this information to HCPs and leadership, which may help increase subsequent telehealth adoption.
Despite the reported safety, efficacy, and high satisfaction of video visits among anesthesiology teams conducting pre-anesthesia evaluations, their use remains low in the VA. We found that most facilities in the VA system chose telephone platforms during the COVID-19 pandemic. One possibility is that the adoption of video modalities among pre-anesthesia evaluation clinics in the VA system is resource intensive or difficult from the HCP’s perspective. When combined with the lack of perceived advantages over telephone that we found in our survey, most practitioners resort to the technologically less demanding and more familiar telephone platform. The results from the FIM and AIM support this. While both telephone and video have high feasibility scores, acceptability scores are lower for video, even among those currently using this technology. Our findings do not rule out the utility of video-based care in perioperative medicine. Rather than treating video as a yes/no proposition, future studies should establish the precise indications for video-based pre-anesthesia evaluations, that is, situations where video visits offer an advantage over telephone. For example, video could be used to deliver preoperative optimization therapies, such as supervised exercise or mental health interventions, or to guide the achievement of certain milestones before surgery in patients with chronic conditions, such as target glucose values or the treatment of anemia. Future studies should explore the perceived benefits of video over telephone among centers offering these more advanced optimization interventions.
Limitations
We received responses from a subset of VA anesthesiology services; therefore, they may not be representative of the entire VA system. Facilities designated by the VA as inpatient complex were overrepresented (72% of our sample vs 50% of the total facilities nationally), and ambulatory centers (those designated by the VA as ambulatory procedural centers with basic or advanced capabilities) were underrepresented (2% of our sample vs 22% nationally). Despite this, the response rate was high, and no geographic area appeared to be underrepresented. In addition, we surveyed pre-anesthesia evaluation facilities led by anesthesiologists, and the results may not be representative of the preferences of HCPs working in nonanesthesiologist-led pre-anesthesia evaluation clinics. Finally, just 11 facilities used both telephone and video; therefore, a true direct comparison between these 2 platforms was limited. The VA serves a unique patient population, and the findings may not be completely applicable to the non-VA population.
Conclusions
We found no significant perceived advantages of video over telephone in the ability to conduct routine pre-anesthesia evaluations among a sample of anesthesiology HCPs in the VA, except for the perceived ability to assess nutritional status. HCPs with no telehealth experience cited the inability to perform a physical examination and obtain vital signs as the most significant barriers to implementation. Respondents not using telephone cited concerns about safety. Video visits in this clinical setting had additional perceived barriers to implementation, such as lack of information technology and staff support and patient-level barriers, and video had lower acceptability among HCPs. Given findings that pre-anesthesia evaluations can be conducted effectively via telehealth and have high levels of patient satisfaction, future work should focus on increasing uptake of these remote modalities. Research is also needed on the most appropriate uses of video visits within perioperative care.
1. Starsnic MA, Guarnieri DM, Norris MC. Efficacy and financial benefit of an anesthesiologist-directed university preadmission evaluation center. J Clin Anesth. 1997;9(4):299-305. doi:10.1016/s0952-8180(97)00007-x
2. Kristoffersen EW, Opsal A, Tveit TO, Berg RC, Fossum M. Effectiveness of pre-anaesthetic assessment clinic: a systematic review of randomised and non-randomised prospective controlled studies. BMJ Open. 2022;12(5):e054206. doi:10.1136/bmjopen-2021-054206
3. Ferschl MB, Tung A, Sweitzer B, Huo D, Glick DB. Preoperative clinic visits reduce operating room cancellations and delays. Anesthesiology. 2005;103(4):855-859. doi:10.1097/00000542-200510000-00025
4. Blitz JD, Kendale SM, Jain SK, Cuff GE, Kim JT, Rosenberg AD. Preoperative evaluation clinic visit is associated with decreased risk of in-hospital postoperative mortality. Anesthesiology. 2016;125(2):280-294. doi:10.1097/ALN.0000000000001193
5. Dilisio RP, Dilisio AJ, Weiner MM. Preoperative virtual screening examination of the airway. J Clin Anesth. 2014;26(4):315-317. doi:10.1016/j.jclinane.2013.12.010
6. Kamdar NV, Huverserian A, Jalilian L, et al. Development, implementation, and evaluation of a telemedicine preoperative evaluation initiative at a major academic medical center. Anesth Analg. 2020;131(6):1647-1656. doi:10.1213/ANE.0000000000005208
7. Azizad O, Joshi GP. Telemedicine for preanesthesia evaluation: review of current literature and recommendations for future implementation. Curr Opin Anaesthesiol. 2021;34(6):672-677. doi:10.1097/ACO.0000000000001064
8. Mullen-Fortino M, Rising KL, Duckworth J, Gwynn V, Sites FD, Hollander JE. Presurgical assessment using telemedicine technology: impact on efficiency, effectiveness, and patient experience of care. Telemed J E Health. 2019;25(2):137-142. doi:10.1089/tmj.2017.0133
9. Zhang K, Rashid-Kolvear M, Waseem R, Englesakis M, Chung F. Virtual preoperative assessment in surgical patients: a systematic review and meta-analysis. J Clin Anesth. 2021;75:110540. doi:10.1016/j.jclinane.2021.110540
10. Mansournia MA, Collins GS, Nielsen RO, et al. A CHecklist for statistical Assessment of Medical Papers (the CHAMP statement): explanation and elaboration. Br J Sports Med. 2021;55(18):1009-1017. doi:10.1136/bjsports-2020-103652
11. von Elm E, Altman DG, Egger M, et al. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement: guidelines for reporting observational studies. Int J Surg. 2014;12(12):1495-1499. doi:10.1016/j.ijsu.2014.07.013
12. Weiner BJ, Lewis CC, Stanick C, et al. Psychometric assessment of three newly developed implementation outcome measures. Implement Sci. 2017;12(1):108. doi:10.1186/s13012-017-0635-3
13. Proctor E, Silmere H, Raghavan R, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38(2):65-76. doi:10.1007/s10488-010-0319-7
14. Kuhn M, Johnson K. Applied Predictive Modeling. Springer; 2013.
15. R Core Team. R: a language and environment for statistical computing. 2018. Accessed December 16, 2022. https://www.R-project.org
16. Wong DT, Kamming D, Salenieks ME, Go K, Kohm C, Chung F. Preadmission anesthesia consultation using telemedicine technology: a pilot study. Anesthesiology. 2004;100(6):1605-1607. doi:10.1097/00000542-200406000-00038
17. Zetterman CV, Sweitzer BJ, Webb B, Barak-Bernhagen MA, Boedeker BH. Validation of a virtual preoperative evaluation clinic: a pilot study. Stud Health Technol Inform. 2011;163:737-739. doi:10.3233/978-1-60750-706-2-737
18. Roberts S, Spain B, Hicks C, London J, Tay S. Telemedicine in the Northern Territory: an assessment of patient perceptions in the preoperative anaesthetic clinic. Aust J Rural Health. 2015;23(3):136-141. doi:10.1111/ajr.12140
Fireworks, Veterans, and PTSD: The Ironies of the Fourth of July
My first wish is to see this plague to Mankind, war, banished from the Earth; & the Sons and daughters of this World employed in more pleasing & innocent amusements than in preparing implements, & exercising them for the destruction of the human race.
General George Washington1
When I was a child, every Fourth of July holiday my father would take me to the military fireworks display at Fort Sam Houston in Texas. We would take our place in the long cascade of cars parked at the huge parade ground in front of Brooke Army Medical Center. It was the most spectacular display of the year not to be found anywhere else in the city. Army fire engines and medics were always on site in case anything went wrong, which rarely occurred thanks to the pyrotechnic experts who ran the display.
Later, when I began my psychiatric residency at the US Department of Veterans Affairs (VA) New Mexico Healthcare System, I quickly learned a darker truth about fireworks. What seemed to me and many other civilians in General Washington’s words, a “pleasing and innocent amusement,” instead was a distressing and terrifying revisiting of trauma for many service members and veterans, likely including my father, who was a World War II combat veteran.
Fireworks are so closely linked to the birth of our young nation that we often forget they were invented in China a millennium ago. Fireworks were first associated with the fledgling nation in the middle of the War of Independence. On July 4, 1776, representatives of the 13 colonies signed the Declaration of Independence. In one of several ironies of history, what was used at the initial commemorations was not fireworks but the very “implements of destruction,” to use Washington’s phrase: guns and cannons. The demonstrations of firepower were meant to be morale boosters. After the war, the dangers of the detonations were recognized, and firearms were replaced with the fireworks we still launch today.2
The country celebrates the holiday with cookouts, parades, brass band concerts, and of course fireworks. Added to the organized shows are the millions of citizens who demonstrate private patriotism by shooting off fireworks in their neighborhoods. In 2021, Americans spent $1.5 billion on fireworks, and 33% said they planned to attend a public display.3
However, people are increasingly recognizing the negative side of fireworks for wild and companion animals and the environment. Most of us who have dogs, and I am sure those with cats, horses, and other animals, dread the impending darkness of the Fourth, as it signals the coming loud noise and the cringing, pacing animals who want to run yet have nowhere to go to be safe from the sound.4
Sitting in the clinic with veterans, I realized it was not only pets and wildlife that feared the ultimate American holiday but also the very individuals who fought to preserve the freedom those fireworks celebrate. The VA’s National Center for Posttraumatic Stress Disorder (PTSD) estimates that about 7% of veterans will meet the diagnostic criteria for PTSD in their lifetimes. The prevalence of PTSD differs depending on the methodology used, era and type of service, and demographics. Some studies have found higher rates of PTSD in women, young veterans, and those who served in Vietnam. Among the veterans who receive health care at the VA, like those I saw in the clinic, 23 in 1000 may have PTSD.5
We, after all, are remarkably similar in physiology to other mammals, and not surprisingly, veterans with PTSD exhibit many of the same reactions to fireworks. The sights, sounds, and odor of fireworks, as well as the vocal responses of the crowd at large displays, evoke memories that trigger fear and anxiety. Many veterans experience flashbacks in which they relive combat and training accidents and have nightmares of those events, interrupting sleep. The instinct of many veterans is to avoid the holiday altogether: Many patients I knew sought refuge in remote mountain campsites, only to find that even there they were not safe from revelers.
Avoidance being a cardinal symptom and coping mechanism of PTSD, therapists advise other methods of managing the Fourth of July, such as distractions that are calming and people who are reassuring. Therapists often rehearse self-talk scripts and teach breathing exercises targeted to break the behavioral conditioning that links present innocuous sensory overstimulation with a past life-threatening danger. The heat of summer worsens the stress; cooling down, literally and figuratively, can help.6
Many VA medical centers send announcements to the media or have their experts do interviews to educate the public about the potentially traumatizing effects of fireworks. They also encourage veterans who are apprehensive about the holiday to seek additional mental health help, including the Veterans Crisis Line. With my patients, we started early and developed a preventive plan to manage the anticipatory apprehension and arrange a means of enduring the ordeal. I do not have data to prove it, but anecdotally I know from my years on call that visits to VA emergency departments and admissions to our inpatient psychiatry unit always increased around Independence Day, in part because some veterans used drugs and/or alcohol to dampen their stress response.
VA experts also have advice for the families and friends of veterans who want to reduce the impact of fireworks and other holiday activities on them. Many veterans will feel intensely present to the disturbing aspects, like fireworks and crowds, while at the same time feeling distant and separated from the more positive parts of celebrations, like being with loved ones in the outdoors. We can simply ask the veterans in our lives and neighborhoods how the festivities affect them and how we can help them get through the long hot night.7 Yet it would not be America without some controversy, and opinions are divided even among veterans about whether yard signs that say, “Combat Veteran Lives Here Please Be Courteous With Fireworks” enhance or impede the effort to increase awareness of the connection between fireworks, veterans, and PTSD.8
This editorial began with my own story of enjoying fireworks to emphasize that my aim is not to ruin the fun but to ask us to think before we shoot and to consider the veterans near us for whom our recreation may cause unnecessary distress. The Fourth of July would not have been possible without the soldiers who fought and died in the American Revolution and all the conflicts since. We owe it to all who have worn the uniform of the United States of America to remember the extraordinary toll it has taken on their ability to live ordinary lives. Like General Washington, we should vow to end the wars that wounded them so that future generations will be able to join in celebrating Independence Day.
1. From George Washington to David Humphreys, 25 July 1785. Accessed June 19, 2023. https://founders.archives.gov/documents/Washington/04-03-02-0142
2. Waxman OB. How fireworks became a Fourth of July tradition. TIME. Accessed June 19, 2023. https://time.com/4828701/first-fireworks-history-july-4th
3. Velasquez F. Here’s how much Americans are spending on food, alcohol, and fireworks this Fourth of July. Accessed June 19, 2023. https://www.cnbc.com/2021/07/04/how-much-americans-are-spending-on-fourth-of-july.html
4. Fireworks: growing evidence they distress animals builds case to restrict use. The Conversation. Accessed June 19, 2023. https://theconversation.com/fireworks-growing-evidence-they-distress-animals-builds-case-to-restrict-use-191472
5. US Department of Veterans Affairs. Epidemiology and impact of PTSD. Accessed June 17, 2023. https://www.ptsd.va.gov/professional/treat/essentials/epidemiology.asp#two
6. US Department of Veterans Affairs. Independence Day celebrations can trigger PTSD in veterans. Press release. Accessed June 19, 2023. https://www.va.gov/new-jersey-health-care/news-releases/independence-day-celebrations-can-trigger-ptsd-in-veterans
7. Tips for veterans celebrating Independence Day. VA News. https://news.va.gov/62393/some-helpful-tips-to-remember-for-this-4th-of-july
8. Faith S. Veterans, July 4, and fireworks: don’t be courteous, just be American. Military.com. Accessed June 19, 2023. https://www.military.com/july-4th/veterans-july-4-and-fireworks-dont-be-courteous-just-be-american.html
Commentary: New treatments for mantle cell lymphoma and B-cell lymphoma, July 2023
Mantle cell lymphoma (MCL) is a rare and often heterogeneous subtype of non-Hodgkin lymphoma (NHL). Though patients can experience prolonged remissions after frontline therapy, most ultimately relapse. Treatment of relapsed/refractory disease can be challenging, but a growing number of therapeutic options have recently emerged in this setting. Covalent Bruton tyrosine kinase (BTK) inhibitors, for example, have demonstrated activity in patients with MCL and are approved by the US Food and Drug Administration (FDA) for relapsed/refractory disease. Chimeric antigen receptor (CAR) T-cell therapy is also an effective option for relapsed/refractory disease, though it is typically available only at select centers and is associated with toxicities, such as cytokine release syndrome and neurologic toxicity.
Recently, a novel BTK inhibitor, pirtobrutinib, has also been studied across NHL, including MCL (Wang et al). Pirtobrutinib is a selective, noncovalent BTK inhibitor with the ability to bind both the C481S-mutant and wild-type BTK. The multicenter, phase 1/2 BRUIN study included 90 patients with MCL who were previously treated with a covalent BTK inhibitor. Patients in the phase 1 portion of the study were treated with oral pirtobrutinib at a dose of 25-300 mg once daily, and patients in the phase 2 study were treated at the recommended dose of 200 mg once daily. The overall response rate was 57.8% (95% CI 46.9%-68.1%), and the complete response rate was 20.0%. At a median follow-up of 12 months, the median duration of response was 21.6 (95% CI 7.5 to not reached) months. Treatment-related adverse events of grade 3 or higher were infrequent, with neutropenia (8.5%) being the most common. Of note, grade 3 or higher hemorrhage and atrial fibrillation, which can be seen with BTK inhibitors, were rare, occurring in 3.7% and 1.2% of patients, respectively. Based on the results of this study, pirtobrutinib has been approved by the FDA for patients with relapsed/refractory MCL after at least two prior lines of therapy, including a BTK inhibitor. This is an appealing oral option for patients with relapsed disease.
Options for patients with relapsed/refractory large B-cell lymphoma (LBCL) have also significantly increased in recent years. One of the most important advances in this disease has been the use of anti-CD19 CAR T-cell therapy. There are currently three FDA-approved options for patients with relapsed/refractory LBCL who have received at least two prior lines of therapy.1-3 More recently, axicabtagene ciloleucel (axi-cel) and lisocabtagene maraleucel (liso-cel) have also been approved for the second line based on the results of the ZUMA-7 and TRANSFORM studies, respectively.4,5
Longer follow-up of the ZUMA-7 trial continues to confirm the advantage of axi-cel over standard-care therapy for patients with primary refractory or early relapse of disease, now with evidence of an overall survival advantage (Westin et al). The ZUMA-7 trial included 359 adults with LBCL (refractory to or relapsed within 12 months of first-line treatment) who were randomly assigned to receive axi-cel (n = 180) or standard care (n = 179). At a median follow-up of 47.2 months, patients receiving axi-cel vs standard care had a significantly longer median overall survival (not reached vs 31.1 months; hazard ratio [HR] 0.73; P = .03) and an absolute improvement in overall survival (8.6 percentage points at 4 years). No new treatment-related deaths were reported since the primary event-free survival analysis. These data confirm that early use of axi-cel is preferred over standard-care therapy with high-dose chemotherapy and autologous stem cell transplantation.
Another important study that was recently published examined the impact of mental health on outcomes in patients with diffuse large B-cell lymphoma (DLBCL) (Kuczmarski et al). Though it is known that mental health disorders can decrease the quality of life of patients with cancer, there is limited information on the survival implications of these issues. A recent retrospective cohort study analyzed the data of 13,244 patients aged 67 years or older with DLBCL from the Surveillance, Epidemiology, and End Results (SEER)–Medicare registry, of whom 2094 had depression, anxiety, or both at the time of their DLBCL diagnosis. At a median follow-up of 2.0 years, patients with depression, anxiety, or both vs without any mental disorder had significantly lower 5-year overall survival rates (27.0% vs 37.4%; HR 1.37; 95% CI 1.29-1.44). They also found that patients with preexisting depression vs without any mental disorder had the worst survival (23.4% vs 38.0%; HR 1.37; P < .0001). Though the mechanism accounting for decreased survival is not clear, the authors postulate that mental health disorders may lead to delays or interruptions in lymphoma-directed therapy. They also note the potential for increased barriers to care in patients with mental health disorders, which may result in nonadherence in this patient population. Regardless, these results highlight the importance of mental health screening and interventions in patients with DLBCL.
Additional References
- Neelapu SS, Locke FL, Bartlett NL, et al. Axicabtagene ciloleucel CAR T-Cell therapy in refractory large B-cell lymphoma. N Engl J Med. 2017;377:2531-2544. doi: 10.1056/NEJMoa1707447
- Schuster SJ, Bishop MR, Tam CS, et al; JULIET Investigators. Tisagenlecleucel in adult relapsed or refractory diffuse large B-cell lymphoma. N Engl J Med. 2019;380:45-56. doi: 10.1056/NEJMoa1804980
- Abramson JS, Palomba ML, Gordon LI, et al. Lisocabtagene maraleucel for patients with relapsed or refractory large B-cell lymphomas (TRANSCEND NHL 001): A multicentre seamless design study. Lancet. 2020;396:839-852. doi: 10.1016/S0140-6736(20)31366-0
- Locke FL, Miklos DB, Jacobson CA, et al; All ZUMA-7 Investigators and Contributing Kite Members. Axicabtagene ciloleucel as second-line therapy for large B-cell lymphoma. N Engl J Med. 2022;386:640-654. doi: 10.1056/NEJMoa2116133
- Kamdar M, Solomon SR, Arnason J, et al; TRANSFORM Investigators. Lisocabtagene maraleucel versus standard of care with salvage chemotherapy followed by autologous stem cell transplantation as second-line treatment in patients with relapsed or refractory large B-cell lymphoma (TRANSFORM): Results from an interim analysis of an open-label, randomised, phase 3 trial. Lancet. 2022;399:2294-2308. doi: 10.1016/S0140-6736(22)00662-6
Mantle cell lymphoma (MCL) is a rare and often heterogenous subtype of non-Hodgkin lymphoma (NHL). Though patients can experience prolonged remissions after frontline therapy, most patients ultimately relapse. Treatment of relapsed/refractory disease can be challenging, but there have recently been a growing number of therapeutic options in this setting. Covalent Bruton tyrosine kinase (BTK) inhibitors, for example, have demonstrated activity in patients with MCL and are approved by the US Food and Drug Administration (FDA) for relapsed/refractory disease. Chimeric antigen receptor (CAR) T-cell therapy is also an effective option for relapsed/refractory disease, though this is typically available only at select centers and is associated with toxicities, such as cytokine release syndrome and neurologic toxicity.
Recently, a novel BTK inhibitor, pirtobrutinib, has also been studied across NHL, including MCL (Wang et al) Pirtobrutinib is a selective, noncovalent BTK inhibitor with the ability to bind both the C481S-mutant and wild-type BTK. The multicenter, phase 1/2 BRUIN study included 90 patients with MCL who were previously treated with a covalent BTK inhibitor. Patients in the phase 1 portion of the study were treated with oral pirtobrutinib at a dose of 25-300 mg once daily, and patients in the phase 2 study were treated at the recommended dose of 200 mg once daily. The overall response rate was 57.8% (95% CI 46.9%-68.1%), with the complete response rate being 20.0%. At a median follow-up of 12 months, the median duration of response was 21.6 (95% CI 7.5 to not reached) months. Treatment-related adverse events that were grade 3 or higher were not frequent, with neutropenia (8.5%) being the most common. Of note, grade 3 or higher hemorrhage and atrial fibrillation, which can be seen with BTK inhibitors, were rare, occurring in 3.7% and 1.2% of patients, respectively. Based on the results of this study, pirtobrutinib has been approved by the FDA for patients with relapsed/refractory MCL after at least two prior lines of therapy, including a BTK inhibitor. This is an appealing oral option for patients with relapsed disease.
Options for patients with relapsed/refractory large B-cell lymphoma (LBCL) have also significantly increased in recent years. One of the most important advances in this disease has been the use of anti-CD19 CAR T-cell therapy. There are currently three FDA-approved options for patients with relapsed/refractory LBCL who have received at least two prior lines of therapy.1-3 More recently, axicabtagene ciloleucel (axi-cel) and lisocabtagene maraleucel (liso-cel) have also been approved for the second line based on the results of the ZUMA-7 and TRANSFORM studies, respectively.4,5
Longer follow-up of the ZUMA-7 trial continues to confirm the advantage of axi-cel over standard-care therapy for patients with primary refractory or early relapse of disease, now with evidence of an overall survival advantage (Westin et al). The ZUMA-7 trial included 359 adults with LBCL (refractory to or relapsed within 12 months of first-line treatment) who were randomly assigned to receive axi-cel (n = 180) or standard care (n = 179). At a median follow-up of 47.2 mo, patients receiving axi-cel vs standard care had a significantly longer median overall survival (not reached vs 31.1 mo; hazard ratio [HR] 0.73; P = .03) and an absolute improvement in overall survival (8.6 percentage points at 4 years). No new treatment-related deaths were reported since the primary event-free survival analysis. These data confirm that early use of axi-cel is preferred over standard-care therapy with high-dose chemotherapy and autologous stem cell transplantation.
Another important study that was recently published examined the impact of mental health on outcomes in patients with diffuse large B-cell lymphoma (DLBCL) (Kuczmarski et al). Though it is known that mental health disorders can decrease the quality of life of patients with cancer, there is limited information on the survival implications of these issues. A recent retrospective cohort study analyzed the data of 13,244 patients aged 67 years or older with DLBCL from the Surveillance, Epidemiology, and End Results (SEER)–Medicare registry, of whom 2094 patients had depression, anxiety, or both at the time of their DLBCL diagnosis. At a median follow-up of 2.0 years, patients with depression, anxiety, or both vs without any mental disorder had significantly lower 5-year overall survival rates (27.0% vs 37.4%; HR 1.37; 95% CI 1.29-1.44). The authors also found that patients with preexisting depression vs without any mental disorder had the worst survival (23.4% vs 38.0%; HR 1.37; P < .0001). Though the mechanism accounting for decreased survival is not clear, the authors postulate that mental health disorders may lead to delays or interruptions in lymphoma-directed therapy. They also note the potential for increased barriers to care in patients with mental health disorders, which may result in nonadherence in this patient population. Regardless, these results highlight the importance of mental health screening and interventions in patients with DLBCL.
Additional References
- Neelapu SS, Locke FL, Bartlett NL, et al. Axicabtagene ciloleucel CAR T-Cell therapy in refractory large B-cell lymphoma. N Engl J Med. 2017;377:2531-2544. doi: 10.1056/NEJMoa1707447
- Schuster SJ, Bishop MR, Tam CS, et al; JULIET Investigators. Tisagenlecleucel in adult relapsed or refractory diffuse large B-cell lymphoma. N Engl J Med. 2019;380:45-56. doi: 10.1056/NEJMoa1804980
- Abramson JS, Palomba ML, Gordon LI, et al. Lisocabtagene maraleucel for patients with relapsed or refractory large B-cell lymphomas (TRANSCEND NHL 001): A multicentre seamless design study. Lancet. 2020;396:839-852. doi: 10.1016/S0140-6736(20)31366-0
- Locke FL, Miklos DB, Jacobson CA, et al; All ZUMA-7 Investigators and Contributing Kite Members. Axicabtagene ciloleucel as second-line therapy for large B-cell lymphoma. N Engl J Med. 2022;386:640-654. doi: 10.1056/NEJMoa2116133
- Kamdar M, Solomon SR, Arnason J, et al; TRANSFORM Investigators. Lisocabtagene maraleucel versus standard of care with salvage chemotherapy followed by autologous stem cell transplantation as second-line treatment in patients with relapsed or refractory large B-cell lymphoma (TRANSFORM): Results from an interim analysis of an open-label, randomised, phase 3 trial. Lancet. 2022;399:2294-2308. doi: 10.1016/S0140-6736(22)00662-6
Commentary: DMARD and HCQ in RA, July 2023
Despite multiple existing conventional synthetic disease-modifying antirheumatic drug (csDMARD) and biologic DMARD (bDMARD) options, many patients with rheumatoid arthritis (RA) do not respond adequately to treatment. In an exciting development, a recent phase 2 study by Tuttle and colleagues examined a novel treatment approach in RA: stimulation of the programmed cell death protein 1 (PD-1) inhibitor pathway. PD-1 is a checkpoint inhibitor receptor whose activation reflects T-cell activation and may play a role in synovitis and extra-articular inflammation. Blocking PD-1 in cancer therapy has been associated with an increase in inflammatory arthritis. In this 12-week study, RA disease activity was analyzed in patients randomly assigned to two different monthly intravenous doses of peresolimab or placebo. Of note, a large majority of participants were seropositive for rheumatoid factor (RF) or cyclic citrullinated peptide (CCP). Patients receiving the 700-mg dose of peresolimab had a better American College of Rheumatology (ACR) 20 response than did those receiving placebo (71% vs 42%), but not a better ACR50 or ACR70 response; the 300-mg dose was not better than placebo. Although reported adverse events were similar in all three groups, with a short timeframe it would be difficult to address concerns about cancer risk. Though this novel treatment is exciting, a larger and longer-term trial is necessary to address this concern as well as potentially tease out risk factors (including age or other immunosuppression) in this susceptible group.
Two other studies examined use of a much older csDMARD therapy, hydroxychloroquine (HCQ), in patients with RA. Bredemeier and colleagues looked at the effects of HCQ on adverse events as well as the persistence of bDMARD/targeted synthetic DMARD (tsDMARD) therapy in over 1300 Brazilian patients with RA. Using the BiobadaBrasil registry of patients starting their first bDMARD or Janus kinase (JAK) inhibitor, they examined the effects of combination therapy with HCQ during the treatment course of up to six bDMARD or JAK inhibitors. At baseline, patients prescribed antimalarial therapy had shorter RA duration and began treatment earlier, perhaps owing to patient or physician preferences for starting "milder" antimalarial medication earlier or to use of "triple therapy" with methotrexate and sulfasalazine. Of interest, patients receiving antimalarial therapy had a lower incidence of adverse events, especially serious infections, but no effect on cardiovascular events was seen despite HCQ's perceived beneficial effects on thrombotic risk and cholesterol profile. Patients receiving HCQ were also more likely to persist in their course of bDMARD or JAK inhibitor therapy, though the effect size seems relatively small. As the focus of this study was on adverse effects, the authors' analysis of the effects of antimalarials on the persistence of therapy was not detailed.
Lin and colleagues also looked at the effects of HCQ on mortality risk in patients with older-onset RA. Using data from the electronic health records of a hospital in Taiwan, mortality-associated risk factors were evaluated in 980 patients with RA diagnosed at age >60 years. Male sex, current smoking status, and cancer status were all associated with mortality, whereas HCQ use was associated with reduced mortality (hazard ratio 0.30). In contrast to the registry study mentioned above, patients receiving HCQ had a lower risk for cardiovascular events, hyperlipidemia, diabetes, and chronic kidney disease. The interaction with cancer was less clear owing to the smaller number of patients. Of interest, use of cyclosporine, leflunomide, or a bDMARD was associated with higher mortality risk. The source and true relevance of the potential risk reduction in this study are not clear because of the lack of prospective data, but combined with the registry findings above, this study suggests that the benefits of HCQ use should not be discounted in patients with RA.
Commentary: CDK4/6 Inhibitors, Breast Irradiation, and Aromatase Inhibitors in Breast Cancer Treatment, July 2023
In a randomized trial comparing the CDK4/6 inhibitor dalpiciclib plus endocrine therapy with placebo plus endocrine therapy, after a median follow-up of 21.6 mo, the dalpiciclib group demonstrated a significantly longer median progression-free survival (PFS) compared with the placebo group (30.6 mo vs 18.2 mo; stratified hazard ratio [HR] 0.51; 95% CI 0.38-0.69; P < .0001). Overall, the dalpiciclib group demonstrated a manageable safety profile, although a higher percentage of grade 3/4 adverse events was noted with dalpiciclib than with placebo (90% vs 12%), as expected. Overall survival data for this CDK4/6 inhibitor are yet to come. These results suggest that dalpiciclib in combination with endocrine therapy is an alternative treatment for this group of patients, especially in countries where the traditionally approved CDK4/6 inhibitors (palbociclib, ribociclib, and abemaciclib) are not available.
The optimal sequencing of endocrine therapy (ET) after progression on CDK4/6 inhibitor–based therapy remains a challenge. In the phase 2 MAINTAIN trial, 119 patients (all of whom had HR+/HER2- metastatic breast cancer and who progressed on ET and CDK4/6 inhibitors) were randomly assigned to receive a different ET (fulvestrant or exemestane) from the previous ET they had received plus either the CDK4/6 inhibitor ribociclib or placebo. In the study by Kalinsky and colleagues, at a median follow-up of 18.2 mo, a significant improvement in PFS was observed in the switched ET-plus-ribociclib group compared with the switched ET-plus-placebo group (HR 0.57; P = .006). The phase 2 MAINTAIN trial is the first randomized trial to show the benefit of a CDK4/6 inhibitor after progression on another CDK4/6 inhibitor. It is important to note that the majority of patients in the MAINTAIN study previously received palbociclib in the first-line setting, which in recent studies has been demonstrated to be inferior to other CDK4/6 inhibitors. Therefore, it is important to confirm whether this benefit will hold true upon progression from ribociclib or abemaciclib in the first-line setting. In addition, more data are needed to compare this approach with other treatment options after progression on ET, such as phosphoinositide 3-kinase inhibitors and oral selective estrogen receptor degraders.
There are several options for adjuvant radiation therapy for early-stage breast cancer. A meta-analysis of 14 randomized controlled trials and six comparative observational studies assessed the efficacy of whole breast irradiation (WBI) compared with partial breast irradiation (PBI) in 17,234 adults with early-stage breast cancer. Results of this meta-analysis showed that PBI was not significantly different from WBI, with similar rates of ipsilateral breast recurrence at 5 years (relative risk [RR] 1.34; 95% CI 0.83-2.18) and 10 years (RR 1.29; 95% CI 0.87-1.91), although patients undergoing PBI vs WBI reported fewer acute adverse events (incidence rate ratio [IRR] 0.53; 95% CI 0.31-0.92) and fewer acute grade ≥2 adverse events (IRR 0.21; 95% CI 0.07-0.62). These findings support using PBI as the adjuvant radiotherapy modality for select patients with favorable-risk early-stage breast cancer.
Another meta-analysis assessed the survival benefit of adding CDK4/6 inhibitors to standard ET in older patients with advanced breast cancer. The study included 10 trials with 1985 older patients with advanced ER+ breast cancer who received ET with or without CDK4/6 inhibitors. The findings showed that adding CDK4/6 inhibitors to ET (letrozole or fulvestrant) significantly reduced the mortality risk by 21% (HR 0.79; 95% CI 0.69-0.91) and the progression risk by 41% (HR 0.59; 95% CI 0.51-0.69) in older patients (age ≥65 years) with advanced breast cancer. Rates of grade 3-4 neutropenia and diarrhea in older patients were similar to those reported overall. This study supports the use of CDK4/6 inhibitors as a reasonable treatment modality for older patients. More studies dedicated to the geriatric population are needed to further elaborate on the efficacy and tolerability of such agents in this population.
The phase 3 National Surgical Adjuvant Breast and Bowel Project B-42 (NSABP B-42) trial evaluated the role of extended letrozole therapy in postmenopausal breast cancer patients who were disease-free after 5 years of aromatase inhibitor–based therapy. The study included 3966 postmenopausal women with stage I-IIIA HR+ breast cancer who were randomly assigned to receive letrozole or placebo for 5 more years. After a median follow-up of 10.3 years, letrozole significantly improved disease-free survival (10-year absolute benefit 3.4%; HR 0.85; P = .01) compared with placebo, although there were no differences noted in overall survival between the groups (HR 0.97; P = .74). Furthermore, letrozole significantly improved the breast cancer–free interval (HR 0.75; P = .003) and reduced distant recurrence (HR 0.72; P = .01). There were no notable differences in toxicity, particularly rates of osteoporotic fractures and arterial thrombotic events, between the groups. Extended therapy with aromatase inhibitors beyond 5 years can be considered for select patients with early-stage breast cancer. Careful consideration of risks and benefits is needed to make these recommendations.
After a median follow-up of 21.6 mo, the dalpiciclib group demonstrated a significantly longer median progression-free survival (PFS) compared with the placebo group (30.6 mo vs 18.2 mo; stratified hazard ratio [HR] 0.51; 95% CI 0.38-0.69; P < .0001). Overall, the dalpiciclib group demonstrated a manageable safety profile, although a higher percentage of grade 3/4 adverse events was noted with dalpiciclib than with placebo (90% vs 12%), as expected. Overall survival data for this CDK4/6 inhibitor are yet to come. These results suggest that dalpiciclib in combination with endocrine therapy is an alternative treatment for this group of patients, especially in countries where the traditionally approved CDK4/6 inhibitors (palbociclib, ribociclib, and abemaciclib) are not available.
The optimal sequencing of endocrine therapy (ET) after progression on CDK4/6 inhibitor–based therapy remains a challenge. In the phase 2 MAINTAIN trial, 119 patients (all of whom had HR+/HER2- metastatic breast cancer and who progressed on ET and CDK4/6 inhibitors) were randomly assigned to receive a different ET (fulvestrant or exemestane) from the previous ET they had received plus either the CDK4/6 inhibitor ribociclib or placebo. In the study by Kalinksky and colleagues, at a median follow-up of 18.2 mo, a significant improvement in PFS was observed in the switched ET-plus-ribociclib group compared with the switched ET-plus-placebo group (HR 0.57; P = .006). The phase 2 MAINTAIN trial is the first randomized trial to show the benefit of a CDK4/6 inhibitor after progression on another CDK4/6 inhibitor. It is important to note that the majority of patients in the MAINTAIN study previously received palbociclib in the first-line setting, which in recent studies has been demonstrated to be inferior to other CDK4/6 inhibitors. Therefore, it is important to confirm whether this will hold true upon progression from ribociclib or abemaciclib in the first-line setting. In addition, more data are needed to compare this approach with other ET treatment options, such as phosphoinositide 3-kinases inhibitors and oral selective estrogen receptor degraders.
There are several options for adjuvant radiation therapy for early-stage breast cancer. A meta-analysis of 14 randomized controlled trials and six comparative observational studies assessed the efficacy of whole breast irradiation (WBI) compared with partial breast irradiation (PBI) in 17,234 adults with early-stage breast cancer. Results of this meta-analysis showed that PBI was not significantly different from WBI, with similar rates of ipsilateral breast recurrence at 5 years (relative risk [RR] 1.34; 95% CI 0.83-2.18) and 10 years (RR 1.29; 95% CI 0.87-1.91), although patients undergoing PBI reported fewer acute adverse events compared with patients undergoing WBI (incidence rate ratio [IRR] 0.53; 95% CI 0.31-0.92) and acute grade ≥2 adverse events (IRR 0.21; 95% CI 0.07-0.62). These findings support using PBI as the adjuvant radiotherapy modality for select patients with favorable-risk early-stage breast cancer.
Another meta-analysis looked at assessing the survival benefit of adding CDK4/6 inhibitors to standard ET in older patients with advanced breast cancer. The study included 10 trials with 1985 older patients with advanced ER+ breast cancer who received ET with or without CDK4/6 inhibitors. The findings showed that adding CDK4/6 inhibitors to ET (letrozole or fulvestrant) significantly reduced the mortality risk by 21% (HR 0.79; 95% CI 0.69-0.91) and progression risk by 41% (HR 0.59; 95% CI 0.51-0.69) in older patients (age ≥ 65 years) with advanced breast cancer. Grade 3-4 neutropenia and diarrhea were similar in older patients. This study supports the use of CDK4/6 inhibitors as a reasonable treatment modality for older patients. More studies dedicated to the geriatric population are needed to help elaborate on the efficacy and tolerability of such agents in this population.
The phase 3 National Surgical Adjuvant Breast and Bowel Project B-42 (NSABP B-42) trial evaluated the role of extended letrozole therapy in postmenopausal breast cancer patients who were disease-free after 5 years of aromatase inhibitor–based therapy. The study included 3966 postmenopausal women with stage I-IIIA HR+ breast cancer who were randomly assigned to receive letrozole or placebo for 5 more years. After a median follow-up of 10.3 years, letrozole significantly improved disease-free survival (10-year absolute benefit 3.4%; HR 0.85; P = .01) compared with placebo, although there were no differences noted in overall survival between the groups (HR 0.97, P = .74). Furthermore, letrozole significantly reduced the breast cancer–free interval (HR 0.75, ,P = .003) and distant recurrence (HR 0.72, P = .01). There were no notable differences in toxicity, particularly rates of osteoporotic fractures and arterial thrombotic events, between the groups. Extended therapy with aromatase inhibitors beyond 5 years can be considered for select patients with early-stage breast cancer. Careful consideration of risks and benefits is needed to make these recommendations.
After a median follow-up of 21.6 mo, the dalpiciclib group demonstrated a significantly longer median progression-free survival (PFS) compared with the placebo group (30.6 mo vs 18.2 mo; stratified hazard ratio [HR] 0.51; 95% CI 0.38-0.69; P < .0001). Overall, the dalpiciclib group demonstrated a manageable safety profile, although a higher percentage of grade 3/4 adverse events was noted with dalpiciclib than with placebo (90% vs 12%), as expected. Overall survival data for this CDK4/6 inhibitor are yet to come. These results suggest that dalpiciclib in combination with endocrine therapy is an alternative treatment for this group of patients, especially in countries where the traditionally approved CDK4/6 inhibitors (palbociclib, ribociclib, and abemaciclib) are not available.
The optimal sequencing of endocrine therapy (ET) after progression on CDK4/6 inhibitor–based therapy remains a challenge. In the phase 2 MAINTAIN trial, 119 patients (all of whom had HR+/HER2- metastatic breast cancer and who progressed on ET and CDK4/6 inhibitors) were randomly assigned to receive a different ET (fulvestrant or exemestane) from the previous ET they had received plus either the CDK4/6 inhibitor ribociclib or placebo. In the study by Kalinsky and colleagues, at a median follow-up of 18.2 mo, a significant improvement in PFS was observed in the switched ET-plus-ribociclib group compared with the switched ET-plus-placebo group (HR 0.57; P = .006). The phase 2 MAINTAIN trial is the first randomized trial to show the benefit of a CDK4/6 inhibitor after progression on another CDK4/6 inhibitor. It is important to note that the majority of patients in the MAINTAIN study previously received palbociclib in the first-line setting, which in recent studies has been demonstrated to be inferior to other CDK4/6 inhibitors. Therefore, it is important to confirm whether this benefit will hold true upon progression from ribociclib or abemaciclib in the first-line setting. In addition, more data are needed to compare this approach with other treatment options, such as phosphoinositide 3-kinase inhibitors and oral selective estrogen receptor degraders.
There are several options for adjuvant radiation therapy for early-stage breast cancer. A meta-analysis of 14 randomized controlled trials and 6 comparative observational studies assessed the efficacy of whole breast irradiation (WBI) compared with partial breast irradiation (PBI) in 17,234 adults with early-stage breast cancer. Results of this meta-analysis showed that PBI was not significantly different from WBI, with similar rates of ipsilateral breast recurrence at 5 years (relative risk [RR] 1.34; 95% CI 0.83-2.18) and 10 years (RR 1.29; 95% CI 0.87-1.91), although patients undergoing PBI reported fewer acute adverse events compared with patients undergoing WBI (incidence rate ratio [IRR] 0.53; 95% CI 0.31-0.92) and fewer acute grade ≥2 adverse events (IRR 0.21; 95% CI 0.07-0.62). These findings support using PBI as an adjuvant radiotherapy modality for select patients with favorable-risk early-stage breast cancer.
AI model interprets EEGs with near-perfect accuracy
An automated artificial intelligence (AI) model trained to read electroencephalograms (EEGs) in patients with suspected epilepsy is just as accurate as trained neurologists, new data suggest.
Known as SCORE-AI, the technology distinguishes between abnormal and normal EEG recordings and classifies irregular recordings into specific categories crucial for patient decision-making.
“SCORE-AI can be used in place of experts in underprivileged areas, where expertise is missing, or to help physicians to preselect or prescore recordings in areas where the workload is high – we can all benefit from AI,” study investigator Sándor Beniczky, MD, PhD, said in a JAMA Neurology podcast.
Dr. Beniczky is professor of clinical neurophysiology at Aarhus University in Denmark.
The findings were published online in JAMA Neurology.
Gaining a foothold
Increasingly, AI is gaining a foothold in medicine by credibly addressing patient queries and aiding radiologists.
To bring AI to EEG interpretation, the researchers developed and validated an AI model that was able to assess routine, clinical EEGs in patients with suspected epilepsy.
Beyond using AI to distinguish abnormal from normal EEG recordings, the researchers wanted to train the new system to classify abnormal recordings into the major categories that are most relevant for clinical decision-making in patients who may have epilepsy. The categories included epileptiform-focal, epileptiform-generalized, nonepileptiform-focal, and nonepileptiform-diffuse abnormalities.
The researchers trained the learning model using Standardized Computer-based Organized Reporting of EEG (SCORE) software.
In the development phase, the model was trained using more than 30,490 anonymized and highly annotated EEG recordings from 14,100 men (median age, 25 years) from a single center. The recordings had an average duration of 31 minutes and were interpreted by 17 neurologists using standardized criteria. If an EEG recording was abnormal, the physicians had to specify which abnormal features were present.
SCORE-AI then performed an analysis of the recordings based on input from the experts.
To validate the findings, investigators used two independent test datasets. The first dataset consisted of 100 representative routine EEGs from 61 men (median age, 26 years), evaluated by 11 neurologists from different centers.
The consensus of these evaluations served as the reference standard. The second dataset comprised nearly 10,000 EEGs from a single center (5,170 men; median age, 35 years), independently assessed by 14 neurologists.
Near-perfect accuracy
When compared with the experts, SCORE-AI had near-perfect accuracy with an area under the receiver operating characteristic (AUROC) curve for differentiating normal from abnormal EEG recordings of 0.95.
SCORE-AI also performed well at identifying generalized epileptiform abnormalities (AUROC, 0.96), focal epileptiform abnormalities (AUROC, 0.91), focal nonepileptiform abnormalities (AUROC, 0.89), and diffuse nonepileptiform abnormalities (AUROC, 0.93).
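The AUROC figures above summarize ranking performance: an AUROC of 0.95 means the model assigns a higher score to a randomly chosen abnormal EEG than to a randomly chosen normal one about 95% of the time. A minimal illustration of the calculation, using hypothetical scores rather than the study's data:

```python
def auroc(scores_pos, scores_neg):
    """Probability a positive case outranks a negative one (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical model scores: abnormal EEGs should score higher than normal ones
abnormal = [0.9, 0.8, 0.75, 0.6]
normal = [0.7, 0.4, 0.3, 0.2]
print(auroc(abnormal, normal))  # 0.9375
```

This pairwise-comparison definition is equivalent to the area under the ROC curve and makes clear why 0.5 corresponds to chance-level ranking and 1.0 to perfect separation.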
In addition, SCORE-AI had excellent agreement with clinicians – and sometimes agreed with individual experts more than the experts agreed with one another.
When Dr. Beniczky and team tested SCORE-AI against three previously published AI models, SCORE-AI demonstrated greater specificity than those models (90% vs. 3%-63%) but was not as sensitive (86.7%) as two of the models (96.7% and 100%).
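The specificity-versus-sensitivity trade-off reported above comes straight from each model's confusion matrix. A quick sketch of the two definitions, with illustrative counts that are not the study's data:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts: a model that catches most abnormal EEGs (high
# sensitivity) but mislabels many normal recordings (low specificity)
sens, spec = sens_spec(tp=97, fn=3, tn=30, fp=70)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")
```

A comparison model with 100% sensitivity but 3% specificity, for instance, flags nearly every recording as abnormal, so its raw detection rate says little without the paired specificity figure.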
One of the study’s limitations was the fact that SCORE-AI was developed and validated on routine EEGs that excluded neonates and critically ill patients.
In the future, Dr. Beniczky said on the podcast, the team would like to train SCORE-AI to read EEGs with more granularity, and eventually to record EEGs with only a single channel. At present, SCORE-AI is being integrated with Natus Neuro, a widely used EEG equipment system, the investigators note.
In an accompanying editorial, Jonathan Kleen, MD, PhD, and Elan Guterman, MD, said, “The overall approach taken ... in developing and validating SCORE-AI sets a standard for this work going forward.”
Dr. Kleen and Dr. Guterman note that the technological gains brought about by SCORE-AI technology “could offer an exciting prospect to improve EEG availability and clinical care for the 50 million people with epilepsy worldwide.”
A version of this article originally appeared on Medscape.com.
Tirzepatide: Therapeutic titan or costly cure?
As a general practitioner with a specialist interest in diabetes, I am increasingly diagnosing younger people living with type 2 diabetes and obesity. Sadly, my youngest patient living with type 2 diabetes and obesity is only in her early 20s.
In fact, in England, there are now more people under the age of 40 years living with type 2 diabetes than type 1 diabetes. These younger individuals tend to present with very high hemoglobin A1c levels; I am routinely seeing double-digit A1c percentage levels in my practice. Indeed, the patient mentioned above presented with an A1c of more than 13%.
The lifetime cardiometabolic risk of individuals like her is considerable and very worrying: Younger adults with type 2 diabetes often have adverse cardiometabolic risk profiles at diagnosis, with higher body mass indices, marked dyslipidemia, hypertension, and abnormal liver profiles suggesting nonalcoholic fatty liver disease. The cumulative impact of this risk profile is a significant impact on quality and quantity of life. Evidence tells us that a younger age of diagnosis with type 2 diabetes is associated with an increased risk for premature death, especially from cardiovascular disease.
Early treatment intensification is warranted in younger individuals living with type 2 diabetes and obesity. My patient above is now on triple therapy with metformin, a sodium-glucose cotransporter 2 (SGLT2) inhibitor, and a glucagonlike peptide–1 (GLP-1) receptor agonist. I gave her an urgent referral to my local weight management service for weight, nutritional, and psychological support. I have also issued her a real-time continuous glucose monitoring (rt-CGM) device: Whilst she does not meet any current U.K. criteria for using rt-CGM, I feel that the role of CGM as an educational tool for her is invaluable and equally important to her pharmacologic therapies. We are in desperate need of effective pharmacologic and lifestyle interventions to tackle this epidemic of cardiometabolic disease in the young.
I attended the recent ADA 2023 congress in San Diego, including the presentation of the SURMOUNT-2 trial data. SURMOUNT-2 explored the efficacy and safety of the dual GIP/GLP-1 receptor agonist tirzepatide for weight management in patients with obesity and type 2 diabetes. Tirzepatide was associated with significant reductions in weight (average weight loss, 14-16 kg after 72 weeks) and glycemia (2.1% reduction in A1c after 72 weeks), as well as reductions in clinically meaningful cardiometabolic risk factors, including systolic blood pressure, liver enzymes, and fasting non–HDL cholesterol levels. The overall safety profile of tirzepatide was also reassuring and consistent with the GLP-1 class. Most adverse effects were gastrointestinal and of mild to moderate severity. These adverse effects decreased over time.
These results perfectly position tirzepatide for my younger patients like the young woman mentioned above. The significant improvements in weight, glycemia, and cardiometabolic risk factors will not only help mitigate her future cardiometabolic risk but also help the sustainability of the U.K.’s National Health Service (NHS). The cost of diabetes to the NHS in the United Kingdom is more than 10% of the entire NHS budget for England and Wales. More than 80% of this cost, however, is related not to the medications and devices we prescribe for diabetes but to the downstream complications of diabetes, such as hospital admissions for cardiovascular events and amputations, as well as regular hospital attendance for dialysis for end-stage kidney disease.
There is no doubt, however, that modern obesity medications such as semaglutide and tirzepatide are expensive, and demand has been astronomical. This demand has been driven by private weight-management services and celebrity influencers, and has resulted in major U.K.-wide GLP-1 shortages.
This situation is tragically widening health inequalities, as many of my patients who have been on GLP-1 receptor agonists for many years are unable to obtain them. I am having to consider switching therapies, often to less efficacious options without the compelling cardiorenal benefits. Furthermore, the GLP-1 shortages have prevented GLP-1 initiation for my other high-risk younger patients, potentially increasing future cardiometabolic risk.
There remain unanswered questions for tirzepatide: What is the durability of effect of tirzepatide after 72 weeks (that is, the trial duration of SURMOUNT-2)? Crucially, what is the effect of withdrawal of tirzepatide on weight loss maintenance? Previous evidence has suggested weight regain after discontinuation of a GLP-1 receptor agonist for obesity. This, of course, has further financial and sustainability implications for health care systems such as the NHS.
Finally, we are increasingly seeing younger women of childbearing age with or at risk for cardiometabolic disease. Again, my patient above is one example. Many of the therapies we use for cardiometabolic disease management, including GLP-1 receptor agonists and tirzepatide, have not been studied, and hence have not been licensed in pregnant women. Therefore, frank discussions are required with patients about future family plans and the importance of contraception. Often, the significant weight loss seen with GLP-1 receptor agonists can improve hormonal profiles and fertility in women and result in unexpected pregnancies if robust contraception is not in place.
Tirzepatide has yet to be made commercially available in the United Kingdom, and its price has also yet to be set. But I already envision a clear role for tirzepatide in my treatment armamentarium. I will be positioning tirzepatide as my first injectable of choice after oral treatment escalation with metformin and an SGLT2 inhibitor in all my patients who require treatment intensification – not just my younger, higher-risk individuals. This may remain an aspirational goal until supply chains and cost are defined. There is no doubt, however, that the compelling weight and glycemic benefits of tirzepatide alongside individualized lifestyle interventions can help improve the quality and quantity of life of my patients living with type 2 diabetes and obesity.
Dr. Fernando is a general practitioner near Edinburgh. He reported receiving speaker fees from Eli Lilly and Novo Nordisk.
A version of this article first appeared on Medscape.com.
As a general practitioner with a specialist interest in diabetes, I am increasingly diagnosing younger people living with type 2 diabetes and obesity. Sadly, my youngest patient living with type 2 diabetes and obesity is only in her early 20s.
In fact, in England, there are now more people under the age of 40 years living with type 2 diabetes than type 1 diabetes. These younger individuals tend to present with very high hemoglobin A1c levels; I am routinely seeing double-digit A1c percentage levels in my practice. Indeed, the patient mentioned above presented with an A1c of more than 13%.
The lifetime cardiometabolic risk of individuals like her is considerable and very worrying: Younger adults with type 2 diabetes often have adverse cardiometabolic risk profiles at diagnosis, with higher body mass indices, marked dyslipidemia, hypertension, and abnormal liver profiles suggesting nonalcoholic fatty liver disease. The cumulative impact of this risk profile is a significant impact on quality and quantity of life. Evidence tells us that a younger age of diagnosis with type 2 diabetes is associated with an increased risk for premature death, especially from cardiovascular disease.
Early treatment intensification is warranted in younger individuals living with type 2 diabetes and obesity. My patient above is now on triple therapy with metformin, a sodium-glucose cotransporter 2 (SGLT2) inhibitor, and a glucagonlike peptide–1 (GLP-1) receptor agonist. I gave her an urgent referral to my local weight management service for weight, nutritional, and psychological support. I have also issued her a real-time continuous glucose monitoring (rt-CGM) device: Whilst she does not meet any current U.K. criteria for using rt-CGM, I feel that the role of CGM as an educational tool for her is invaluable and equally important to her pharmacologic therapies. We are in desperate need of effective pharmacologic and lifestyle interventions to tackle this epidemic of cardiometabolic disease in the young.
I attended the recent ADA 2023 congress in San Diego, including the presentation of the SURMOUNT-2 trial data. SURMOUNT-2 explored the efficacy and safety of the dual GLP-GIP agonist tirzepatide for weight management in patients with obesity and type 2 diabetes. Tirzepatide was associated with significant reductions in weight (average weight loss, 14-16 kg after 72 weeks) and glycemia (2.1% reduction in A1c after 72 weeks), as well as reductions in clinically meaningful cardiometabolic risk factors, including systolic blood pressure, liver enzymes, and fasting non–HDL cholesterol levels. The overall safety profile of tirzepatide was also reassuring and consistent with the GLP-1 class. Most adverse effects were gastrointestinal and of mild to moderate severity. These adverse effects decreased over time.
These results perfectly position tirzepatide for my younger patients like the young woman mentioned above. The significant improvements in weight, glycemia, and cardiometabolic risk factors will not only help mitigate her future cardiometabolic risk but also help the sustainability of the U.K.’s National Health System. The cost of diabetes to the NHS in the United Kingdom is more than 10% of the entire NHS budget for England and Wales. More than 80% of this cost, however, is related not to the medications and devices we prescribe for diabetes but to the downstream complications of diabetes, such as hospital admissions for cardiovascular events and amputations, as well as regular hospital attendance for dialysis for end-stage kidney disease.
There is no doubt, however, that modern obesity medications such as semaglutide and tirzepatide are expensive, and demand has been astronomical. This demand has been driven by private weight-management services and celebrity influencers, and has resulted in major U.K.-wide GLP-1 shortages.
This situation is tragically widening health inequalities, as many of my patients who have been on GLP-1 receptor agonists for many years are unable to obtain them. I am having to consider switching therapies, often to less efficacious options without the compelling cardiorenal benefits. Furthermore, the GLP-1 shortages have prevented GLP-1 initiation for my other high-risk younger patients, potentially increasing future cardiometabolic risk.
There remain unanswered questions for tirzepatide: What is the durability of effect of tirzepatide after 72 weeks (that is, the trial duration of SURMOUNT-2)? Crucially, what is the effect of withdrawal of tirzepatide on weight loss maintenance? Previous evidence has suggested weight regain after discontinuation of a GLP-1 receptor agonist for obesity. This, of course, has further financial and sustainability implications for health care systems such as the NHS.
Finally, we are increasingly seeing younger women of childbearing age with or at risk for cardiometabolic disease. Again, my patient above is one example. Many of the therapies we use for cardiometabolic disease management, including GLP-1 receptor agonists and tirzepatide, have not been studied, and hence have not been licensed in pregnant women. Therefore, frank discussions are required with patients about future family plans and the importance of contraception. Often, the significant weight loss seen with GLP-1 receptor agonists can improve hormonal profiles and fertility in women and result in unexpected pregnancies if robust contraception is not in place.
Tirzepatide has yet to be made commercially available in the United Kingdom, and its price has also yet to be set. But I already envision a clear role for tirzepatide in my treatment armamentarium. I will be positioning tirzepatide as my first injectable of choice after oral treatment escalation with metformin and an SGLT2 inhibitor in all my patients who require treatment intensification – not just my younger, higher-risk individuals. This may remain an aspirational goal until supply chains and cost are defined. There is no doubt, however, that the compelling weight and glycemic benefits of tirzepatide alongside individualized lifestyle interventions can help improve the quality and quantity of life of my patients living with type 2 diabetes and obesity.
Dr. Fernando is a general practitioner near Edinburgh. He reported receiving speaker fees from Eli Lilly and Novo Nordisk..
A version of this article first appeared on Medscape.com.
As a general practitioner with a specialist interest in diabetes, I am increasingly diagnosing younger people living with type 2 diabetes and obesity. Sadly, my youngest patient living with type 2 diabetes and obesity is only in her early 20s.
In fact, in England, there are now more people under the age of 40 years living with type 2 diabetes than type 1 diabetes. These younger individuals tend to present with very high hemoglobin A1c levels; I am routinely seeing double-digit A1c percentage levels in my practice. Indeed, the patient mentioned above presented with an A1c of more than 13%.
The lifetime cardiometabolic risk of individuals like her is considerable and very worrying: Younger adults with type 2 diabetes often have adverse cardiometabolic risk profiles at diagnosis, with higher body mass indices, marked dyslipidemia, hypertension, and abnormal liver profiles suggesting nonalcoholic fatty liver disease. The cumulative impact of this risk profile is a significant impact on quality and quantity of life. Evidence tells us that a younger age of diagnosis with type 2 diabetes is associated with an increased risk for premature death, especially from cardiovascular disease.
Early treatment intensification is warranted in younger individuals living with type 2 diabetes and obesity. My patient above is now on triple therapy with metformin, a sodium-glucose cotransporter 2 (SGLT2) inhibitor, and a glucagonlike peptide–1 (GLP-1) receptor agonist. I gave her an urgent referral to my local weight management service for weight, nutritional, and psychological support. I have also issued her a real-time continuous glucose monitoring (rt-CGM) device: Whilst she does not meet any current U.K. criteria for using rt-CGM, I feel that the role of CGM as an educational tool for her is invaluable and equally important to her pharmacologic therapies. We are in desperate need of effective pharmacologic and lifestyle interventions to tackle this epidemic of cardiometabolic disease in the young.
I attended the recent ADA 2023 congress in San Diego, including the presentation of the SURMOUNT-2 trial data. SURMOUNT-2 explored the efficacy and safety of the dual GIP/GLP-1 receptor agonist tirzepatide for weight management in patients with obesity and type 2 diabetes. Tirzepatide was associated with significant reductions in weight (average weight loss, 14-16 kg after 72 weeks) and glycemia (2.1% reduction in A1c after 72 weeks), as well as reductions in clinically meaningful cardiometabolic risk factors, including systolic blood pressure, liver enzymes, and fasting non–HDL cholesterol levels. The overall safety profile of tirzepatide was also reassuring and consistent with the GLP-1 class. Most adverse effects were gastrointestinal and of mild to moderate severity. These adverse effects decreased over time.
These results perfectly position tirzepatide for my younger patients like the young woman mentioned above. The significant improvements in weight, glycemia, and cardiometabolic risk factors will not only help mitigate her future cardiometabolic risk but also help the sustainability of the U.K.’s National Health System. The cost of diabetes to the NHS in the United Kingdom is more than 10% of the entire NHS budget for England and Wales. More than 80% of this cost, however, is related not to the medications and devices we prescribe for diabetes but to the downstream complications of diabetes, such as hospital admissions for cardiovascular events and amputations, as well as regular hospital attendance for dialysis for end-stage kidney disease.
There is no doubt, however, that modern obesity medications such as semaglutide and tirzepatide are expensive, and demand has been astronomical. This demand has been driven by private weight-management services and celebrity influencers, and has resulted in major U.K.-wide GLP-1 shortages.
This situation is tragically widening health inequalities, as many of my patients who have been on GLP-1 receptor agonists for many years are unable to obtain them. I am having to consider switching therapies, often to less efficacious options without the compelling cardiorenal benefits. Furthermore, the GLP-1 shortages have prevented GLP-1 initiation for my other high-risk younger patients, potentially increasing future cardiometabolic risk.
There remain unanswered questions for tirzepatide: What is the durability of effect of tirzepatide after 72 weeks (that is, the trial duration of SURMOUNT-2)? Crucially, what is the effect of withdrawal of tirzepatide on weight loss maintenance? Previous evidence has suggested weight regain after discontinuation of a GLP-1 receptor agonist for obesity. This, of course, has further financial and sustainability implications for health care systems such as the NHS.
Finally, we are increasingly seeing younger women of childbearing age with or at risk for cardiometabolic disease. Again, my patient above is one example. Many of the therapies we use for cardiometabolic disease management, including GLP-1 receptor agonists and tirzepatide, have not been studied, and hence have not been licensed in pregnant women. Therefore, frank discussions are required with patients about future family plans and the importance of contraception. Often, the significant weight loss seen with GLP-1 receptor agonists can improve hormonal profiles and fertility in women and result in unexpected pregnancies if robust contraception is not in place.
Tirzepatide has yet to be made commercially available in the United Kingdom, and its price has also yet to be set. But I already envision a clear role for tirzepatide in my treatment armamentarium. I will be positioning tirzepatide as my first injectable of choice after oral treatment escalation with metformin and an SGLT2 inhibitor in all my patients who require treatment intensification – not just my younger, higher-risk individuals. This may remain an aspirational goal until supply chains and cost are defined. There is no doubt, however, that the compelling weight and glycemic benefits of tirzepatide alongside individualized lifestyle interventions can help improve the quality and quantity of life of my patients living with type 2 diabetes and obesity.
Dr. Fernando is a general practitioner near Edinburgh. He reported receiving speaker fees from Eli Lilly and Novo Nordisk.
A version of this article first appeared on Medscape.com.
AHA statement addresses equity in cardio-oncology care
A new scientific statement from the American Heart Association focuses on equity in cardio-oncology care and research.
A “growing body of evidence” suggests that women and people from underrepresented patient groups experience disproportionately higher cardiovascular effects from new and emerging anticancer therapies, the writing group, led by Daniel Addison, MD, with the Ohio State University, Columbus, pointed out.
For example, women appear to be at higher risk of immune checkpoint inhibitor–related toxicities, whereas Black patients with cancer face up to a threefold higher risk of cardiotoxicity with anticancer therapies.
Because of reduced screening and delayed preventive care, Hispanic patients often present with more complex heart disease and later-stage cancers, and they receive more cardiotoxic regimens because they are less often eligible for novel treatments. Ultimately, this contributes to a higher incidence of treatment complications, cardiac dysfunction, and adverse outcomes for this patient group, they write.
Although no studies have specifically addressed cardio-oncology disparities in the LGBTQIA+ population, such disparities can be inferred from known cardiovascular disease and oncology disparities, the writing group noted.
These disparities are supported by “disparately high” risk of death after a cancer diagnosis among women and individuals from underrepresented groups, even after accounting for socioeconomic and behavioral patterns, they pointed out.
The scientific statement was published online in Circulation.
Evidence gaps and the path forward
“Despite advances in strategies to limit the risks of cardiovascular events among cancer survivors, relatively limited guidance is available to address the rapidly growing problem of disparate cardiotoxic risks among women and underrepresented patient populations,” the writing group said.
Decentralized and sporadic evaluations have led to a lack of consensus on the definitions, investigations, and potential optimal strategies to address disparate cardiotoxicity with contemporary cancer immunotherapy, as well as biologic and cytotoxic therapies, they noted.
They said caution is needed when interpreting clinical trial data about cardiotoxicity and in generalizing the results because people from diverse racial and ethnic groups have not been well represented in many trials.
The writing group outlined key evidence gaps and future research directions for addressing cardio-oncology disparities, as well as strategies to improve equity in cardio-oncology care and research.
These include the following:
- Identifying specific predictive factors of long-term cardiotoxic risk with targeted and immune-based cancer therapies in women and underrepresented populations.
- Investigating biological mechanisms that may underlie differences in cardiotoxicities between different patient groups.
- Developing personalized cardioprotection strategies that integrate biological, genetic, and social determinant markers.
- Intentionally diversifying clinical trials and identifying optimal strategies to improve representation in cancer clinical trials.
- Determining the role of technology, such as artificial intelligence, in improving cardiotoxicity disparities.
“Conscientiously leveraging technology and designing trials with outcomes related to these issues in practice (considering feasibility and cost) will critically accelerate the field of cardio-oncology in the 21st century. With tangible goals, we can improve health inequities in cardio-oncology,” the writing group said.
The research had no commercial funding. No conflicts of interest were reported.
A version of this article originally appeared on Medscape.com.
FROM CIRCULATION
CGM alarm fatigue in youth?
Teenagers with diabetes who use a continuous glucose monitor (CGM) employ a wide variety of alarm settings to alert them when their blood sugar may be too high or too low. But sometimes those thresholds generate too many alarms – which in turn might lead patients to ignore the devices, according to a study presented at the 2023 annual meeting of the Endocrine Society.
“These alarms alert people with diabetes and their caregivers of pending glycemic changes. However, little work has been done studying CGM alarm settings in pediatric clinical populations,” said Victoria Ochs, BS, a medical student at the Indiana University, Indianapolis, who helped conduct the study.
Ms. Ochs and colleagues analyzed 2 weeks of real-time CGM alarm settings from 150 children with diabetes treated at Indiana. Their average age was 14 years; 47% were female, 89% were White, 9.5% were Black, and 1.5% were Asian. Approximately half the patients used insulin pumps (51%) in addition to the monitoring devices.
For both alarms that indicated blood sugar was too low or too high, settings among the children often varied widely from thresholds recommended by the University of Colorado’s Barbara Davis Center for Diabetes, Aurora. Those recommended thresholds are 70 mg/dL for low glucose and 180 mg/dL for high glucose. At Indiana, the median alert level for low was set to 74 mg/dL (range: 60-100), while the median for high was 242 mg/dL (range: 120-400).
“If we have it set at 100, what exactly is the purpose of that? Is it just to make you more anxious that you’re going to drop low at some point?” asked Cari Berget, MPH, RN, CDE, who specializes in pediatric diabetes at the University of Colorado, speaking of the low blood sugar alarm. Setting this alarm at 70 mg/dL instead could lead to concrete action when it does go off – such as consuming carbohydrates to boost blood sugar, she said.
“Alarms should result in action most of the time,” said Ms. Berget, associate director of Colorado’s PANTHER program, which established the alarm thresholds used in the Indiana study. Alarm setting is not one-size-fits-all, Ms. Berget noted: Some people might want 70 mg/dL to warn of low blood sugar, whereas others prefer 75 or 80 mg/dL.
As for alerts about hyperglycemia, Ms. Berget said patients often exceed the high range of 180 mg/dL immediately after a meal. Ideally these sugars will subside on their own within 3 hours, a process aided by insulin shots or pumps. Setting a threshold for high blood sugar too low, such as 120 mg/dL, could result in ceaseless alarms even if the person is not at risk for harm.
“If you receive an alarm and there’s no action for you to take, then we need to change how we’re setting these alarms,” Ms. Berget said. She advised parents and children to be thoughtful about setting their CGM alarm thresholds to be most useful to them.
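The threshold logic described above is simple to state precisely. The sketch below is a hypothetical illustration (the function name and defaults are mine, not from the study or any CGM vendor): a reading triggers a low alert below the low threshold and a high alert above the high threshold, with the PANTHER-recommended 70 and 180 mg/dL as defaults. Because both thresholds are user-configurable, over-tight settings like those seen in the Indiana cohort can flag readings that require no action.

```python
def classify_reading(glucose_mg_dl, low_threshold=70, high_threshold=180):
    """Return the alarm state for a single CGM glucose reading (mg/dL)."""
    if glucose_mg_dl < low_threshold:
        return "low"        # actionable: e.g., consume carbohydrates
    if glucose_mg_dl > high_threshold:
        return "high"       # post-meal highs often resolve within ~3 hours
    return "in_range"       # no alarm

# The same reading can be silent or alarming depending on settings:
print(classify_reading(85))                     # in-range at default settings
print(classify_reading(85, low_threshold=100))  # alarms at an over-tight setting
```

This makes Ms. Berget’s point concrete: raising the low threshold from 70 to 100 mg/dL turns a safe reading of 85 mg/dL into an alarm with no action attached.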
Ms. Ochs said in some cases families have CGM devices shipped directly to their homes and never consult with anyone about optimal alarm settings.
“It would be useful to talk to families about what baseline information they had,” Ms. Ochs told this news organization. “It would be nice to talk to diabetes educators, and I think it would be nice to talk to physicians.”
Ms. Ochs reports no relevant financial relationships. Ms. Berget has consulted for Dexcom and Insulet.
A version of this article originally appeared on Medscape.com.
Will the doctor see you now? The health system’s changing landscape
Lucia Agajanian, a 25-year-old freelance film producer in Chicago, doesn’t have a specific primary care doctor, preferring the convenience of visiting a local clinic for flu shots or going online for video visits. “You say what you need, and there’s a 15-minute wait time,” she said, explaining how her appointments usually work. “I really liked that.”
But Olga Lucia Torres, a 52-year-old who teaches narrative medicine classes at Columbia University in New York, misses her longtime primary care doctor, who kept tabs for two decades on her conditions, including lupus and rheumatoid arthritis, and made sure she was up to date on vaccines and screening tests. Two years ago, Torres received a letter informing her that he was changing to a “boutique practice” and would charge a retainer fee of $10,000 for her to stay on as a patient.
“I felt really sad and abandoned,” Ms. Torres said. “This was my PCP. I was like, ‘Dude, I thought we were in this together!’ ”
The two women reflect an ongoing reality: The primary care landscape is changing in ways that could shape patients’ access and quality of care now and for decades to come. A solid and enduring relationship with a primary care doctor – who knows a patient’s history and can monitor new problems – has long been regarded as the bedrock of a quality health care system. But investment in primary care in the U.S. lags behind that of other high-income countries, and America has a smaller share of primary care physicians than most of its European counterparts.
An estimated one-third of all physicians in the U.S. are primary care doctors – who include family medicine physicians, general internists, and pediatricians – according to the Robert Graham Center, a research and analysis organization that studies primary care. Other researchers say the numbers are lower, with the Peterson-KFF Health System Tracker reporting only 12% of U.S. doctors are generalists, compared with 23% in Germany and as many as 45% in the Netherlands.
That means it’s often hard to find a doctor and make an appointment that’s not weeks or months away.
“This is a problem that has been simmering and now beginning to erupt in some communities at a boil. It’s hard to find that front door of the health system,” said Ann Greiner, president and CEO of the Primary Care Collaborative, a nonprofit membership organization.
Today, a smaller percentage of physicians are entering the field than are practicing, suggesting that shortages will worsen over time.
Interest has waned partly because, in the U.S., primary care yields lower salaries than other medical and surgical specialties.
Some doctors now in practice also say they are burned out, facing cumbersome electronic health record systems and limits on appointment times, making it harder to get to know a patient and establish a relationship.
Others are retiring or selling their practices. Hospitals, insurers like Aetna-CVS Health, and other corporate entities like Amazon are on a buying spree, snapping up primary care practices, furthering a move away from the “Marcus Welby, M.D.”-style neighborhood doctor. About 48% of primary care physicians currently work in practices they do not own. Two-thirds of those doctors don’t work for other physicians but are employed by private equity investors or other corporate entities, according to data in the “Primary Care Chartbook,” which is collected and published by the Graham Center.
Patients who seek care at these offices may not be seen by the same doctor at every visit. Indeed, they may not be seen by a doctor at all but by a paraprofessional – a nurse practitioner or a physician assistant, for instance – who works under the doctor’s license. That trend has been accelerated by new state laws – as well as changes in Medicare policy – that loosen the requirements for physician supervisors and billing. And these jobs are expected to be among the decade’s fastest-growing in the health sector.
Overall, demand for primary care is up, spurred partly by record enrollment in Affordable Care Act plans. All those new patients, combined with the low supply of doctors, are contributing to a years-long downward trend in the number of people reporting they have a usual source of care, be it an individual doctor or a specific clinic or practice.
Researchers say that raises questions, including whether people can’t find a primary care doctor, can’t afford one, or simply no longer want an established relationship.
“Is it poor access or problems with the supply of providers? Does it reflect a societal disconnection, a go-it-alone phenomenon?” asked Christopher F. Koller, president of the Milbank Memorial Fund, a foundation whose nonpartisan analyses focus on state health policy.
For patients, frustrating wait times are one result. A recent survey by a physician staffing firm found it now takes an average of 21 days just to get in to see a doctor of family medicine – one subgroup of primary care, which also includes general internists and pediatricians. Those physicians are many patients’ first stop for health care. That runs counter to the trend in other countries, where patients complain of months- or years-long waits for elective procedures like hip replacements but generally experience short waits for primary care visits.
Another complication: All these factors are adding urgency to ongoing concerns about attracting new primary care physicians to the specialty.
When she was in medical school, Natalie A. Cameron, MD, specifically chose primary care because she enjoyed forming relationships with patients and because “I’m specifically interested in prevention and women’s health, and you do a lot of that in primary care.” The 33-year-old is currently an instructor of medicine at Northwestern University, Chicago, where she also sees patients at a primary care practice.
Still, she understands why many of her colleagues chose something else. For some, it’s the pay differential. For others, it’s because of primary care’s reputation for involving “a lot of care and paperwork and coordinating a lot of issues that may not just be medical,” Dr. Cameron said.
The million-dollar question, then, is how much does having a usual source of care influence medical outcomes and cost? And for which kinds of patients is having a close relationship with a doctor important? While studies show that many young people value the convenience of visiting urgent care – especially when it takes so long to see a primary care doctor – will their long-term health suffer because of that strategy?
Many patients – particularly the young and generally healthy ones – shrug at the new normal, embracing alternatives that require less waiting. These options are particularly attractive to millennials, who tell focus groups that the convenience of a one-off video call or visit to a big-box store clinic trumps a long-standing relationship with a doctor, especially if they have to wait days, weeks, or longer for a traditional appointment.
“The doctor I have is a family friend, but definitely I would take access and ease over a relationship,” said Matt Degn, 24, who says it can take two to three months to book a routine appointment in Salt Lake City, where he lives.
Patients are increasingly turning to what are dubbed “retail clinics,” such as CVS’ Minute Clinics, which tout “in-person and virtual care 7 days a week.” CVS Health’s more than 1,000 clinics inside stores across the U.S. treated more than 5 million people last year, Creagh Milford, a physician and the company’s senior vice president of retail health, said in a written statement. He cited a recent study by a data products firm showing the use of retail clinics has grown 200% over the past five years.
Health policy experts say increased access to alternatives can be good, but forgoing an ongoing relationship to a regular provider is not, especially as people get older and are more likely to develop chronic conditions or other medical problems.
“There’s a lot of data that show communities with a lot of primary care have better health,” said Mr. Koller.
People with a regular primary care doctor or practice are more likely to get preventive care, such as cancer screenings or flu shots, studies show, and are less likely to die if they do suffer a heart attack.
Physicians who see patients regularly are better able to spot patterns of seemingly minor concerns that could add up to a serious health issue.
“What happens when you go to four different providers on four platforms for urinary tract infections because, well, they are just UTIs,” posed Yalda Jabbarpour, MD, a family physician practicing in Washington, and the director of the Robert Graham Center for Policy Studies. “But actually, you have a large kidney stone that’s causing your UTI or have some sort of immune deficiency like diabetes that’s causing frequent UTIs. But no one tested you.”
Most experts agree that figuring out how to coordinate care amid this changing landscape and make it more accessible without undermining quality – even when different doctors, locations, health systems, and electronic health records are involved – will be as complex as the pressures causing long waits and less interest in today’s primary care market.
And experiences sometimes lead patients to change their minds.
There’s something to be said for establishing a relationship, said Ms. Agajanian, in Chicago. She’s rethinking her decision to cobble together care, rather than have a specific primary care doctor or clinic, following an injury at work last year that led to shoulder surgery.
“As I’m getting older, even though I’m still young,” she said, “I have all these problems with my body, and it would be nice to have a consistent person who knows all my problems to talk with.”
KFF Health News’ Colleen DeGuzman contributed to this report.
KFF Health News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF – an independent source of health policy research, polling, and journalism. Learn more about KFF.
Lucia Agajanian, a 25-year-old freelance film producer in Chicago, doesn’t have a specific primary care doctor, preferring the convenience of visiting a local clinic for flu shots or going online for video visits. “You say what you need, and there’s a 15-minute wait time,” she said, explaining how her appointments usually work. “I really liked that.”
But Olga Lucia Torres, a 52-year-old who teaches narrative medicine classes at Columbia University in New York, misses her longtime primary care doctor, who kept tabs for two decades on her conditions, including lupus and rheumatoid arthritis, and made sure she was up to date on vaccines and screening tests. Two years ago, Torres received a letter informing her that he was changing to a “boutique practice” and would charge a retainer fee of $10,000 for her to stay on as a patient.
“I felt really sad and abandoned,” Ms. Torres said. “This was my PCP. I was like, ‘Dude, I thought we were in this together!’ ”
The two women reflect an ongoing reality: The primary care landscape is changing in ways that could shape patients’ access and quality of care now and for decades to come. A solid and enduring relationship with a primary care doctor – who knows a patient’s history and can monitor new problems – has long been regarded as the bedrock of a quality health care system. But investment in primary care in the U.S. lags behind that of other high-income countries, and America has a smaller share of primary care physicians than most of its European counterparts.
An estimated one-third of all physicians in the U.S. are primary care doctors – who include family medicine physicians, general internists, and pediatricians – according to the Robert Graham Center, a research and analysis organization that studies primary care. Other researchers say the numbers are lower, with the Peterson-KFF Health System Tracker reporting only 12% of U.S. doctors are generalists, compared with 23% in Germany and as many as 45% in the Netherlands.
That means it’s often hard to find a doctor and make an appointment that’s not weeks or months away.
“This is a problem that has been simmering and now beginning to erupt in some communities at a boil. It’s hard to find that front door of the health system,” said Ann Greiner, president and CEO of the Primary Care Collaborative, a nonprofit membership organization.
Today, the share of new physicians entering primary care is smaller than the share already practicing in it, suggesting that shortages will worsen over time.
Interest has waned partly because, in the U.S., primary care yields lower salaries than other medical and surgical specialties.
Some doctors now in practice also say they are burned out, facing cumbersome electronic health record systems and limits on appointment times, making it harder to get to know a patient and establish a relationship.
Others are retiring or selling their practices. Hospitals, insurers like Aetna-CVS Health, and other corporate entities like Amazon are on a buying spree, snapping up primary care practices, furthering a move away from the “Marcus Welby, M.D.”-style neighborhood doctor. About 48% of primary care physicians currently work in practices they do not own. Two-thirds of those doctors don’t work for other physicians but are employed by private equity investors or other corporate entities, according to data in the “Primary Care Chartbook,” which is collected and published by the Graham Center.
Patients who seek care at these offices may not be seen by the same doctor at every visit. Indeed, they may not be seen by a doctor at all but by a paraprofessional – a nurse practitioner or a physician assistant, for instance – who works under the doctor’s license. That trend has been accelerated by new state laws – as well as changes in Medicare policy – that loosen the requirements for physician supervisors and billing. And these jobs are expected to be among the decade’s fastest-growing in the health sector.
Overall, demand for primary care is up, spurred partly by record enrollment in Affordable Care Act plans. All those new patients, combined with the low supply of doctors, are contributing to a years-long downward trend in the number of people reporting they have a usual source of care, be it an individual doctor or a specific clinic or practice.
Researchers say that raises questions, including whether people can’t find a primary care doctor, can’t afford one, or simply no longer want an established relationship.
“Is it poor access or problems with the supply of providers? Does it reflect a societal disconnection, a go-it-alone phenomenon?” asked Christopher F. Koller, president of the Milbank Memorial Fund, a foundation whose nonpartisan analyses focus on state health policy.
For patients, frustrating wait times are one result. A recent survey by a physician staffing firm found it now takes an average of 21 days just to get in to see a doctor of family medicine, one subgroup of primary care, which also includes general internists and pediatricians. Those physicians are many patients’ first stop for health care. That runs counter to the trend in other countries, where patients complain of months- or years-long waits for elective procedures like hip replacements but generally experience short waits for primary care visits.
Another complication: All these factors are adding urgency to ongoing concerns about attracting new primary care physicians to the specialty.
When she was in medical school, Natalie A. Cameron, MD, specifically chose primary care because she enjoyed forming relationships with patients and because “I’m specifically interested in prevention and women’s health, and you do a lot of that in primary care.” The 33-year-old is currently an instructor of medicine at Northwestern University, Chicago, where she also sees patients at a primary care practice.
Still, she understands why many of her colleagues chose something else. For some, it’s the pay differential. For others, it’s because of primary care’s reputation for involving “a lot of care and paperwork and coordinating a lot of issues that may not just be medical,” Dr. Cameron said.
The million-dollar question, then: How much does having a usual source of care influence medical outcomes and costs? And for which kinds of patients is having a close relationship with a doctor important? While studies show that many young people value the convenience of visiting urgent care – especially when it takes so long to see a primary care doctor – will their long-term health suffer because of that strategy?
Many patients – particularly the young and generally healthy ones – shrug at the new normal, embracing alternatives that require less waiting. These options are particularly attractive to millennials, who tell focus groups that the convenience of a one-off video call or visit to a big-box store clinic trumps a long-standing relationship with a doctor, especially if they have to wait days, weeks, or longer for a traditional appointment.
“The doctor I have is a family friend, but definitely I would take access and ease over a relationship,” said Matt Degn, 24, who says it can take two to three months to book a routine appointment in Salt Lake City, where he lives.
Patients are increasingly turning to what are dubbed “retail clinics,” such as CVS’ Minute Clinics, which tout “in-person and virtual care 7 days a week.” CVS Health’s more than 1,000 clinics inside stores across the U.S. treated more than 5 million people last year, Creagh Milford, a physician and the company’s senior vice president of retail health, said in a written statement. He cited a recent study by a data products firm showing the use of retail clinics has grown 200% over the past five years.
Health policy experts say increased access to alternatives can be good, but forgoing an ongoing relationship with a regular provider is not, especially as people get older and are more likely to develop chronic conditions or other medical problems.
“There’s a lot of data that show communities with a lot of primary care have better health,” said Mr. Koller.
People with a regular primary care doctor or practice are more likely to get preventive care, such as cancer screenings or flu shots, studies show, and are less likely to die if they do suffer a heart attack.
Physicians who see patients regularly are better able to spot patterns of seemingly minor concerns that could add up to a serious health issue.
“What happens when you go to four different providers on four platforms for urinary tract infections because, well, they are just UTIs,” posed Yalda Jabbarpour, MD, a family physician practicing in Washington, and the director of the Robert Graham Center for Policy Studies. “But actually, you have a large kidney stone that’s causing your UTI or have some sort of immune deficiency like diabetes that’s causing frequent UTIs. But no one tested you.”
Most experts agree that figuring out how to coordinate care amid this changing landscape and make it more accessible without undermining quality – even when different doctors, locations, health systems, and electronic health records are involved – will be as complex as the pressures causing long waits and less interest in today’s primary care market.
And experiences sometimes lead patients to change their minds.
There’s something to be said for establishing a relationship, said Ms. Agajanian, in Chicago. She’s rethinking her decision to cobble together care, rather than have a specific primary care doctor or clinic, following an injury at work last year that led to shoulder surgery.
“As I’m getting older, even though I’m still young,” she said, “I have all these problems with my body, and it would be nice to have a consistent person who knows all my problems to talk with.”
KFF Health News’ Colleen DeGuzman contributed to this report.
KFF Health News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF – an independent source of health policy research, polling, and journalism. Learn more about KFF.