Pandemic and sleep: Increased stress, lack of exercise and insomnia

While working as a registered nurse on an inpatient stroke and general rehabilitation unit, Xiang (Stella) Zeng pursued a degree in adult-gerontology primary care. She currently practices in sleep medicine at UW Medicine/Harborview Medical Center, treating a variety of sleep disorders, and strives to provide quality, safe care to her patients.

1. According to the American Academy of Sleep Medicine, even in normal times, 30% to 35% of the US population contends with acute, or short-term, insomnia. As a board-certified nurse practitioner focused on treating sleep disorders among older adults, can you discuss whether that percentage has increased during the coronavirus disease 2019 (COVID-19) pandemic, and if so, what would you say are the underlying reasons or causes?

As a sleep medicine nurse practitioner at UW (University of Washington) Medicine, I have seen quite a few patients with sleep disorders, including acute and chronic insomnia. Since the start of the COVID-19 pandemic, there has been a noticeable increase in poor-sleep complaints; the data indicate a 37% increase in the rate of clinical insomnia since the pandemic began.

Stress can worsen insomnia, and the pandemic has negatively affected most, if not all, of our lives. It has changed lifestyles through social distancing, mask mandates, and stay-at-home orders. Many have been forced to balance working from home with household duties, and parents are supervising their children’s schooling. This disruption in the workday environment and workload can be hard to manage. The uncertainty of the pandemic has increased worries about both health and finances, and ready access to media can add to the stress.

Moreover, the lack of structure in a person’s day can cause many problems. Working from home, quarantining, living a more sedentary lifestyle, losing a job, and losing socialization, including attending events, can all disrupt a person’s daily routine and push bedtimes and wake-up times later. This disruption of the body’s biological, or circadian, rhythm can reduce sleep quality and increase phase-delay insomnia. The pandemic has also been especially hard on people’s mental health: one CDC study showed that 40% of adults were struggling with adverse mental health or substance-use issues related to COVID-19, and 13.3% of adults reported that they had started or increased their use of substances. As the pandemic continues, acute insomnia will likely turn into chronic insomnia.

2. How can increased stress and lack of exercise cause insomnia? What risk factors contribute to lack of sleep and impact our overall health?

The incidence of anxiety disorder and depressive disorder has increased significantly compared with pre-pandemic rates. Psychological stress, especially at bedtime, increases psychophysiological arousal. The hypothalamic-pituitary-adrenal (HPA) axis responds to stress by releasing cortisol. HPA activation is associated with poorer sleep quality: it increases sleep latency and the frequency of awakenings, decreases slow-wave sleep, and degrades overall sleep efficiency. The resulting poor-quality, fragmented sleep can further activate the HPA axis, creating a positive feedback loop.

Physical activity is a strong defense against poor sleep. It greatly improves sleep by increasing sleep efficiency, decreasing light sleep, increasing REM sleep, and regulating circadian rhythm. Lack of physical activity has been associated with increased sleep problems such as daytime sleepiness, insufficient sleep, snoring, sleep apnea symptoms, and restless sleep; poor sleep in turn further reduces physical activity, perpetuating the problem. The pandemic’s effect on physical activity is significant: it has caused people to stay home more often, decreasing exercise and increasing sedentary time. More than half of the adults in this country do not meet federal guidelines for aerobic physical activity.

Sleep deprivation can be dangerous, as sleepiness increases the likelihood of major occupational and road traffic accidents. Being awake for at least 18 hours impairs performance comparably to a blood alcohol content of 0.05%, and being awake for 24 hours comparably to 0.10%. Chronic sleep deprivation, defined as getting, on average, fewer than 7 hours of sleep per night, negatively affects every system of the body. Sleep deprivation therefore reduces quality of life and can reduce life expectancy.

Cardiovascular – Sleep deprivation can increase excess heart age and reduce heart rate recovery after exercise. It is also linked to increases in heart rate and blood pressure and to death from cardiovascular causes.

Respiratory – Even one night of sleep deprivation can increase respiratory load. Studies have shown an association between sleep apnea and sleep deprivation. Sleep deprivation and respiratory disorders can perpetuate each other.

Neurologic – Sleep is crucial to brain development. Lack of sleep is associated with low-grade neuroinflammation, decline in memory and cognitive function, and acceleration of Alzheimer’s disease. Sleep deprivation can increase pain sensitivity, stroke risk, aggressive behavior, cognitive instability, hyperactivity, and socialization problems.

Endocrine – Sleep deprivation stimulates appetite, causing excessive food intake and weight gain. It can also impair metabolism, leading to obesity and insulin resistance.

Reproductive – Studies of sleep deprivation and the human reproductive system are limited. A study in male rats showed a relationship between less sleep and poorer overall reproductive health, including alteration of spermatic function, “decreased sexual behavior, lower testosterone level, and lower sperm viability level.” Studies also show renal dysfunction and high blood pressure in the offspring of rats deprived of sleep during the last week of pregnancy.

3. Please discuss coronasomnia and its symptoms. Also, please share your thoughts on the diagnosis and provide examples of the types of stressors associated with coronasomnia.

Coronasomnia is the term used to describe the increase in sleep problems associated with the COVID-19 pandemic. It is associated with sleep-onset and sleep-maintenance insomnia, delayed sleep schedules, nocturnal awakenings, sleep deprivation, and worsening of pre-existing sleep issues. The worst insomnia and psychological symptoms occur among those at the center of the pandemic, such as frontline workers and people living in areas more affected by COVID-19.

During the pandemic, anxiety, depression, stress, and poor sleep have increased significantly. Anxiety and depression can be accompanied by intrusive thoughts, which interfere with falling asleep. Patients with depression have a twofold risk of sleep disruption. Lack of a daily routine may also be associated with poorer dental hygiene, such as lower rates of flossing and brushing, as well as increased snacking (and weight gain) and avoidance of dental visits.

More time at home means more time spent watching TV or on social media. Increased screen time and media use at night, especially close to bedtime, are linked to poorer sleep. Blue light emitted by electronic devices can suppress the release of melatonin, making it more difficult to fall asleep. In addition, viewing or listening to content that is distressing or exciting right before bedtime negatively affects sleep quality. Following pandemic news for more than 3 hours a day has been associated with increased anxiety.

Health care providers are especially susceptible to coronasomnia. Those who work directly with COVID-19 patients are twice as likely to report disrupted sleep, anxiety, and depression. Increased work and patient loads and shortages of both fellow providers and supplies all contribute to increased anxiety and disrupted sleep. Poor sleep, especially coupled with longer work hours and shift work, is associated with weakened immune function and poor work performance.

4. In looking at the overall challenges pertaining to pandemic-induced sleep problems, what are your guideline recommendations to help ensure we sleep well during this outbreak?

Poor sleep can be detrimental to physical and mental health, and poor sleep hygiene practices can significantly impact sleep quality. Below are some general sleep-hygiene recommendations.

Caffeine – Caffeine consumed close to bedtime can disrupt sleep and should be avoided for 6 hours before bedtime. Everyone’s tolerance to caffeine is different, so timing and dosage may need to be individually tailored.

Alcohol – Alcohol consumed close to bedtime can decrease sleep latency, but it increases arousal during the second half of the night. It can also worsen snoring and sleep apnea. These effects can be dose dependent.

Exercise – Regular exercise, as already discussed, is linked to better sleep quality. Exercising earlier in the day is typically recommended, although research on nighttime exercise has shown conflicting results. One study of patients with insomnia found that moderate-intensity aerobic exercise at night improved sleep latency and total sleep time on both polysomnography and patient report.

Routine – An irregular sleep schedule is associated with poor sleep and daytime sleepiness. Following a consistent sleep schedule promotes a stable circadian rhythm, and a familiar, relaxing routine should be established before bedtime.

Stress – To lower stress, patients should be advised to schedule brief meditation sessions in which to reflect on stressful situations, and to limit their exposure to pandemic news. Writing about and talking through stressors, along with relaxation and mindfulness techniques, may also reduce stress. However, stress and anxiety differ significantly from case to case, and intervention by a health care provider may be needed.

Time in bed – Reserve the bed for sleep and sex only. Limit the use of electronics before bed and avoid using them in bed; turning off devices or silencing notifications can help reduce sleep disruption.

Cognitive behavioral therapy for insomnia (CBT-I) should be considered for patients with chronic insomnia. This therapy often includes sleep hygiene education, sleep restriction therapy, and relaxation training. The benefits of CBT-I are long lasting and reduce the need for additional pharmacologic therapy.

While many patients are experiencing insomnia these days, other underlying sleep disorders should also be considered. Patients should be evaluated to determine whether a sleep specialist is needed to diagnose and treat their sleep disorders.

References

Sleep Foundation. Sleep guidelines and help during the COVID-19 pandemic. April 7, 2021.

Morin CM, Carrier J. The acute effects of the COVID-19 pandemic on insomnia and psychological symptoms. Sleep Med. 2021;77:346-347. doi:10.1016/j.sleep.2020.06.005

Pengpid S, Peltzer K. Sedentary behaviour and 12 sleep problem indicators among middle-aged and elderly adults in South Africa. Int J Environ Res Public Health. 2019;16(8):1422.

Czeisler MÉ, Lane RI, Petrosky E, et al. Mental health, substance use, and suicidal ideation during the COVID-19 pandemic — United States, June 24–30, 2020. MMWR Morb Mortal Wkly Rep. 2020;69(32):1049-1057.

van Dalfsen JH, Markus CR. The influence of sleep on human hypothalamic-pituitary-adrenal (HPA) axis reactivity: a systematic review. Sleep Med Rev. 2018;39:187-194. doi:10.1016/j.smrv.2017.10.002

Nicolaides NC, et al. HPA axis and sleep. In: Feingold KR, et al, eds. Endotext [Internet]. South Dartmouth, MA: MDText.com, Inc.; 2000-. https://www.ncbi.nlm.nih.gov/books/NBK278943/

Issa FG, Sullivan CE. Alcohol, snoring and sleep apnea. J Neurol Neurosurg Psychiatry. 1982;45(4):353-359.

Liew SC, Aung T. Sleep deprivation and its association with diseases: a review. Sleep Med. 2021;77:192-204.

Sleep Foundation. Coronasomnia: definition, symptoms, and solutions. April 14, 2021. https://www.sleepfoundation.org/covid-19-and-sleep/coronasomnia

American Association of Endodontists. Survey reveals COVID-19 is a major factor in Americans’ failing dental health. March 4, 2021.

Altena E, Baglioni C, Espie CA, et al. Dealing with sleep problems during home confinement due to the COVID-19 outbreak: practical recommendations from a task force of the European CBT-I Academy. J Sleep Res. 2020;29(4):e13052. doi:10.1111/jsr.13052

CDC. Drowsy driving: sleep and sleep disorders. March 17, 2017. https://www.cdc.gov/sleep/about_sleep/drowsy_driving.html

Dolezal BA, Neufeld EV, Boland DM, et al. Interrelationship between sleep and exercise: a systematic review. Adv Prev Med. 2017;2017:1364387. doi:10.1155/2017/1364387

Irish LA, Kline CE, Gunn HE, et al. The role of sleep hygiene in promoting public health: a review of empirical evidence. Sleep Med Rev. 2015;22:23-36. doi:10.1016/j.smrv.2014.10.001

Edinger JD, Arnedt JT, Bertisch SM, et al. Behavioral and psychological treatments for chronic insomnia disorder in adults: an American Academy of Sleep Medicine clinical practice guideline. J Clin Sleep Med. 2021;17(2):255-262.

Author and Disclosure Information

Xiang (Stella) Zeng is a board-certified nurse practitioner. She obtained both her bachelor’s and master of science degrees in nursing from the University of Alabama at Birmingham.

Ms. Zeng has no disclosures.

Treating endometriosis: maximizing all options for medical management, from hormones to new medical therapies

Stephanie J. Estes, MD, is a board-certified obstetrician-gynecologist and Professor of Reproductive Endocrinology and Infertility at the Penn State Hershey Medical Center in Hershey, Pennsylvania. As a subspecialist, she focuses on endometriosis and fibroid research and advances the care of women through advanced reproductive surgery techniques and robotic surgery.

Q: At what point do you consider medical therapies in your approach to a patient with endometriosis?

Dr. Estes: I consider medical therapies for every patient that I see. There are 3 categories that I think about:

  • Are they a candidate for medical therapy?
  • Are they a candidate for surgical therapy?
  • Lastly, are they a candidate for fertility treatment?

The main difference between the plans is whether the patient desires fertility. Next, I consider the advantages and disadvantages of medical therapy. The pros of medical therapy include avoiding the risks of surgery, with its known and unknown complications and adhesions; the cons are the side effects of medical therapy. Also, medical therapy does not address hydrosalpinges, endometriomas, or other deep infiltrating nodules, and it clearly also precludes fertility treatment. So, my overall process is looking at the patient’s goal of symptom management and how best to limit their number of surgical procedures. My approach spans many options, and I look at all of them to make an appropriate decision for each patient.

Q: How do you determine what options would be best suited for the patient?

Dr. Estes: The first-line symptom treatment best suited for the patient is always a nonsteroidal anti-inflammatory drug (NSAID). NSAIDs are an appropriate first step in managing the dysmenorrhea and pelvic pain that can be associated with endometriosis. If there are no contraindications, the next most common way to manage endometriosis symptoms is combined hormonal contraception: pills, transdermal patches, or rings. A key principle, especially when selecting a pill, is to look at the type of estrogen and progestin you’re choosing. Some practitioners may see these pills as equals, but there are differences. I always select a 20-µg ethinyl estradiol pill for my endometriosis patients. Then, I select a progestin, such as norethindrone or levonorgestrel, which provides good suppressive treatment.

The preferred hormonal therapy for endometriosis symptoms not only should be easy for the patient to use but also accomplish management of the symptoms that they are coming in for. Hormonal therapies have a low side-effect profile, which allows the patient to feel well for a long time. We know that endometriosis is a chronic disease, so this is something that the patient is going to need to manage for a long time. I really like to help my patients because they have other things to do in life. They want to take care of their kids, or they’re in school, or have other goals. Patients want to feel well while doing all these activities of life, and so an individualized approach is important. Some of my patients love taking pills, and they are perfectly happy to do so. Other people would prefer a more long-acting treatment so that they do not have to deal with remembering to take a medication every day.

Q: Explain how you would apply the use of an intrauterine device (IUD) to manage endometriosis. 

Dr. Estes: I often use progestin-containing IUDs because evidence has shown them to be effective in managing endometriosis symptoms, specifically by decreasing dysmenorrhea. I will usually insert them during a patient’s surgery in the operating room. If I identify endometriosis and can place the IUD at the time of surgery, this see-and-treat philosophy maximizes the efficiency of patient care, and the patient avoids the discomfort of an in-office insertion. During the patient’s postoperative check, I evaluate how they feel and how the IUD is working for them. I think many patients are candidates for progestin IUDs, especially those who cannot take an estrogen-containing compound.

Q: Where do newly available therapies for endometriosis fit into your overall management approach?

Dr. Estes: Norethindrone 5 mg is an effective treatment for endometriosis, and it is US Food and Drug Administration (FDA)-approved for that indication. Penn State Hershey was also part of the clinical trials for gonadotropin-releasing hormone (GnRH) antagonists such as elagolix, and I saw the improvement in symptoms not only in the trial but also in continued follow-up of patients. GnRH antagonists are a group of medications with a very good side-effect profile. Patients typically do not have significant side effects; hot flashes can be common, however, and they can be ameliorated with some add-back hormone therapy. The other drugs in the pipeline are relugolix and linzagolix; relugolix is FDA approved for prostate cancer treatment. Endometriosis trials are expensive, long, and hard to do because of the pain factor: people do not want to stop their other medications to participate. However, these medications will continue to be studied, and I look forward to continuing to fine-tune treatment options.

Q: Is cost to the patient a consideration during management counseling, and should it be? 

Dr. Estes: Absolutely. It comes up in every conversation, because endometriosis is a long-term disease process that needs to be managed throughout a woman’s life cycle; a treatment cannot be so expensive that it will not be continued for the years it must be taken. Because cost is critical, I use Lupron Depot as well as letrozole, and goserelin implants are also approved for endometriosis treatment. I also occasionally use danazol, which has a very different mechanism of action, in select patients, so multiple options are available. We have streamlined our pre-approval process for the GnRH antagonists to make it fairly easy.

It used to be a little bit harder, but now, if a patient has found that other medications did not offer relief for her endometriosis, then GnRH antagonists are much easier to obtain.

Q: Is there anything else you’d like to add?

Dr. Estes: Patient involvement in clinical trials is so valuable for endometriosis research. So, anyone out there living with endometriosis who would like to help medical science, wherever you live, should get involved, because this can help not only you but also the next person who comes after you. We are currently participating in a study of quinagolide, delivered by a vaginal ring, which is not hormonal.

Again, we want to keep all options open and see what works and what does not. Science is so fantastic, and a key summary point is that we always need more information about endometriosis. We really aspire to develop and apply treatment options that are effective throughout the lifespan of those affected by endometriosis.

Author and Disclosure Information

Stephanie Estes, MD, Penn State Hershey Obstetrics & Gynecology

Dr. Estes reports receiving research grants from AbbVie, Ferring, and ObsEva, and consulting for AbbVie.

Publications
Topics
Sections
Author and Disclosure Information

Stephanie Estes, MD, Penn State Hershey Obstetrics & Gynecology

Dr. Estes reports receiving Research grants from AbbVie, Ferring and ObsEva and Consulting with AbbVie.

Author and Disclosure Information

Stephanie Estes, MD, Penn State Hershey Obstetrics & Gynecology

Dr. Estes reports receiving Research grants from AbbVie, Ferring and ObsEva and Consulting with AbbVie.

 Stephanie J. Estes, MD is a board certified Obstetrician/Gynecologist and Professor of Reproductive Endocrinology and Infertility at the Penn State Hershey Medical center in Hershey, Pennsylvania.  As a subspecialist, she has a focus on endometriosis and fibroid research and also advances the care of women through advanced reproductive surgery techniques and robotic surgery. 

 

Q: At what point do you consider medical therapies in your approach to a patient with endometriosis?

Dr. Estes: I consider medical therapies for every patient that I see. There are 3 categories that I think about:

 

  • Are they a candidate for medical therapy?
  • Are they a candidate for surgical therapy?
  • Lastly, are they a candidate for fertility treatment?

 

The difference between the plans is if the patient desires fertility. Next, I consider the advantages and disadvantages of medical therapy. The pros of medical therapy are avoiding the risks of surgery, with known and unknown complications or adhesions. And the cons are side effects of medical therapy. Also, medical therapy does not address treating hydrosalpinges, endometriomas, or other deep infiltrating nodules and, clearly, it also inhibits fertility treatment. So, my overall process is looking at the patient’s goal of symptom management and how best to limit their number of surgical procedures. My approach spans many options, and I look at all of those to make an appropriate decision for each patient.

 

Q: How do you determine what options would be best suited for the patient?

 

Dr. Estes: The first line of symptom treatment best suited for the patient is always NSAIDs, which are nonsteroidal anti-inflammatory drugs. These are an appropriate first start in managing the dysmenorrhea and pelvic pain that can be associated with endometriosis. If there are no contraindications, the next most common way to manage endometriosis symptoms is combined oral contraceptive pills, transdermal patches, or rings. A key principle, especially when selecting a pill, is to look at the type of estrogen and progestin you’re choosing. Some practitioners may see these pills as equals, but there are differences. I always select a 20-µm ethinyl estradiol pill for my endometriosis patients. Then, I select a progestin, such as norethindrone or levonorgestrel, which provide good suppressive treatment.

 

The preferred hormonal therapy for endometriosis symptoms not only should be easy for the patient to use but also accomplish management of the symptoms that they are coming in for. Hormonal therapies have a low side-effect profile, which allows the patient to feel well for a long time. We know that endometriosis is a chronic disease, so this is something that the patient is going to need to manage for a long time. I really like to help my patients because they have other things to do in life. They want to take care of their kids, or they’re in school, or have other goals. Patients want to feel well while doing all these activities of life, and so an individualized approach is important. Some of my patients love taking pills, and they are perfectly happy to do so. Other people would prefer a more long-acting treatment so that they do not have to deal with remembering to take a medication every day.

 

Q: Explain how you would apply the use of an intrauterine device (IUD) to manage endometriosis. 

 

Treating endometriosis: maximizing all options for medical management, from hormones to new medical therapies

Stephanie J. Estes, MD, is a board-certified obstetrician/gynecologist and Professor of Reproductive Endocrinology and Infertility at the Penn State Hershey Medical Center in Hershey, Pennsylvania. As a subspecialist, she focuses on endometriosis and fibroid research and advances the care of women through advanced reproductive surgery techniques, including robotic surgery.

 

Q: At what point do you consider medical therapies in your approach to a patient with endometriosis?

Dr. Estes: I consider medical therapies for every patient that I see. There are 3 categories that I think about:

 

  • Are they a candidate for medical therapy?
  • Are they a candidate for surgical therapy?
  • Lastly, are they a candidate for fertility treatment?

 

The difference between the plans is whether the patient desires fertility. Next, I consider the advantages and disadvantages of medical therapy. The pros of medical therapy are avoiding the risks of surgery, with its known and unknown complications and adhesions. The cons are the side effects of the medical therapy itself. Also, medical therapy does not address hydrosalpinges, endometriomas, or other deep infiltrating nodules, and, clearly, it also precludes fertility treatment. So, my overall process is looking at the patient’s goal of symptom management and how best to limit their number of surgical procedures. My approach spans many options, and I look at all of those to make an appropriate decision for each patient.

 

Q: How do you determine what options would be best suited for the patient?

 

Dr. Estes: The first-line symptom treatment best suited for the patient is always NSAIDs (nonsteroidal anti-inflammatory drugs). These are an appropriate first step in managing the dysmenorrhea and pelvic pain that can be associated with endometriosis. If there are no contraindications, the next most common way to manage endometriosis symptoms is combined oral contraceptive pills, transdermal patches, or vaginal rings. A key principle, especially when selecting a pill, is to look at the type of estrogen and progestin you’re choosing. Some practitioners may see these pills as equals, but there are differences. I always select a 20-µg ethinyl estradiol pill for my endometriosis patients. Then, I select a progestin, such as norethindrone or levonorgestrel, which provides good suppressive treatment.

 

The preferred hormonal therapy for endometriosis symptoms should not only be easy for the patient to use but also accomplish management of the symptoms they are coming in for. Hormonal therapies have a low side-effect profile, which allows the patient to feel well for a long time. We know that endometriosis is a chronic disease, so this is something the patient is going to need to manage for a long time. I really like to help my patients because they have other things to do in life. They want to take care of their kids, or they’re in school, or they have other goals. Patients want to feel well while doing all these activities of life, and so an individualized approach is important. Some of my patients love taking pills, and they are perfectly happy to do so. Other people would prefer a longer-acting treatment so that they do not have to deal with remembering to take a medication every day.

 

Q: Explain how you would apply the use of an intrauterine device (IUD) to manage endometriosis. 

 

Dr. Estes: I often use progestin IUDs because evidence has shown them to be effective in managing endometriosis symptoms, specifically by decreasing dysmenorrhea. I will usually insert them during a patient’s surgery in the operating room. If I identify endometriosis and can place the IUD at the same time as the surgery, this see-and-treat philosophy maximizes the efficiency of patient care—and the patient avoids the discomfort of an in-office insertion. During the patient’s post-operative check, I evaluate how they feel and how the IUD is working for them. I think many patients are candidates for progestin IUDs, especially those who cannot take an estrogen-containing compound.

 

 

Q: Where do newly available therapies for endometriosis fit into your overall management approach?

 

Dr. Estes: Norethindrone 5 mg is an effective treatment for endometriosis, and it is US Food and Drug Administration (FDA)–approved for that indication. Penn State Hershey was also part of the clinical trials for the gonadotropin-releasing hormone (GnRH) antagonists, such as elagolix, and I saw the improvement in symptoms not only in the trial but also in continued follow-up of patients. GnRH antagonists are a group of medications with a very good side-effect profile. Patients typically do not have significant side effects; however, hot flashes can be common, and these can be ameliorated with some add-back hormone therapy. The other drugs in the pipeline are relugolix and linzagolix; relugolix is already FDA approved for prostate cancer treatment. Endometriosis trials are expensive, and they are long and hard to do because of the pain factor—people do not want to stop their other medications to join a trial. However, the use of these medications will continue to be studied, and I look forward to continuing to fine-tune treatment options.

 

 

Q: Is cost to the patient a consideration during management counseling, and should it be? 

 

Dr. Estes: Absolutely. It comes up in every conversation because endometriosis is a long-term disease process that needs to be managed throughout a woman’s life cycle; a treatment that is too expensive to be taken for years and years is a treatment that will not be continued. Because cost is critical, I use Lupron Depot as well as letrozole, and goserelin implants are also approved for endometriosis treatment. I also occasionally use danazol, which has a very different mechanism of action, in select patients, so multiple options are available. We have streamlined our pre-approval process for the GnRH antagonists to make it fairly easy.

 

It used to be a little bit harder, but now, if a patient has found that other medications did not offer relief for her endometriosis, then GnRH antagonists are much easier to obtain.

 

Q: Is there anything else you’d like to add?

 

Dr. Estes: Patient involvement in clinical trials is just so valuable for endometriosis research. So, anyone out there living with endometriosis who would like to help medical science—wherever you live, wherever you are—get involved, because this can help not only you but also the next person who comes after you. We are currently participating in a study of quinagolide, which is delivered by a vaginal ring and is not hormonal.

 

Again, we want to keep all options open and see what works and what does not. Science is fantastic, and a key summary point is that we always need more information about endometriosis. We aspire to develop and apply treatment options that are effective throughout the lifespan of those affected by endometriosis.

 

 


Enriched infant formula offers no academic benefit later: Study

Article Type
Changed
Wed, 11/10/2021 - 18:45

Infants who are given nutrient- or supplement-enriched formula milk do not later have higher academic scores as adolescents than those fed with standard formula, a study published online in the BMJ suggests.

One goal of modifying infant formula has been to make long-term cognitive outcomes similar to those for breast-fed infants, the authors noted. Rates for breastfeeding beyond 6 weeks are low in many parts of the world and more than 60% of babies worldwide under the age of 6 months are given formula to replace or supplement breast milk, the paper states.

So far, research on benefits has been inconclusive, though enhancements continue to be added and advertising has claimed benefits for cognition. Long-term trials are difficult, as researchers move on and participants are lost to follow-up.

In a new study, however, researchers led by Maximiliane L. Verfürden, MSc, of University College London’s Great Ormond Street Institute of Child Health, linked data from seven dormant randomized controlled infant formula trials to participants’ later performance as adolescents on the United Kingdom’s mandatory national school math and English exams at ages 11 and 16, and found no difference in scores.

They followed 1,763 adolescents who had been participants in the formula trials, which were conducted between 1993 and 2001, and were able to link 91.2% (1,607) to academic records.

They found “no benefit of the infant formula modifications on cognitive outcomes.”
 

Three types of formula studied

In this study, the researchers discuss three widely available types of modified infant formulas that have been promoted as benefiting cognitive development: formula enriched with nutrients; formula supplemented with long-chain polyunsaturated fatty acids (LCPUFAs); and follow-on formula fortified with iron.

In one supplement group the academic results were worse than for those given standard formula. At age 11, children who had been given the LCPUFA-enhanced formula scored lower in both English and math.

“Given the potential associations between the source of LCPUFAs and adverse cognitive outcomes, long-term follow-up of trials testing infant formulas from other sources of LCPUFAs is recommended,” the authors wrote.
 

Nutrients can harm, editorialist says

Charlotte Wright, BM BCH, MSc, a pediatrician and epidemiologist with the Glasgow Royal Hospital for Children in Glasgow, who was not part of the study, coauthored an editorial that accompanied the article in the BMJ.

Dr. Wright and nutritionist Ada L. Garcia, PhD, of the University of Glasgow, wrote that nutrients in some formula enhancements can harm and that infant milk trials often have been poorly conducted.

The editorialists point to a large systematic review of formula milk trials published this year in the BMJ by Helfer et al. that found that most were funded by industry.

“Helfer and colleagues’ review found that 80% of studies were at high risk of bias, mainly because of selective reporting, with 92% of abstracts mentioning positive findings, despite only 42% of trials finding statistically significant differences in a stated primary outcome,” they wrote.

Dr. Wright, who runs a specialist feeding clinic for children, said in an interview that the study is valuable in that it has follow-up “to an age when adult cognition can be robustly assessed.”

She noted that the authors say additives that have been shown to be harmful are still routinely added.

“There is now evidence that adding LCPUFAs results in lower cognition and that giving extra iron to healthy children increases their risk of infection and may even slow their growth,” she said.

But advertisements to the contrary are quickly found in an Internet search, she said, even if no specific claims are made for them.

She gave an example of an advertisement for a commonly used enhanced formula, which reads: “Our formulation contains our highest levels of DHA (Omega 3 LCPs) and is enriched with iron to support normal cognitive development.”

The formula studies were done more than 20 years ago, but Dr. Wright said that does not diminish their relevance.

The basic formulation of the formulas hasn’t changed much, she said, and the additives are still present.

This work was supported by the Economic and Social Research Council UCL, Bloomsbury and East London Doctoral Training Partnership and a Great Ormond Street Hospital Charity Research grant. Full disclosures for all authors are available with the full text of the paper. Dr. Wright and Dr. Garcia declared no relevant financial relationships.

FROM THE BMJ

When a JAK inhibitor fails for a patient with RA, what’s next?

Article Type
Changed
Wed, 11/10/2021 - 16:47

For patients with rheumatoid arthritis (RA) for whom a first Janus kinase inhibitor (JAKi) has failed, there appears to be no difference in treatment effectiveness whether the patient is cycled to a second JAKi or receives a biologic disease-modifying antirheumatic drug (bDMARD), a study of international patient registry data suggests.

However, patients who are prescribed a different JAKi after the first has failed them tend to have conditions that are more difficult to treat than do patients who are switched to a bDMARD after JAKi failure. In addition, adverse events that occur with the first JAKi are likely to occur again if a different agent in the same class is used, reported Manuel Pombo-Suarez, MD, PhD, adjunct professor of medicine at the University Hospital of Santiago de Compostela, Spain.

“When the first JAK inhibitor was stopped due to an adverse event, it was also more likely that the second JAK inhibitor would be stopped for the same reason,” he said in an oral abstract presentation during the American College of Rheumatology (ACR) 2021 Annual Meeting, which was held online.

The 2019 update of the European Alliance of Associations for Rheumatology (EULAR) guidelines for RA recommends that, for patients for whom a first JAKi has failed, clinicians consider either a different JAKi or a switch to a bDMARD. But at the time the guidelines were published, no data were available from studies in which a second JAKi was used after the failure of a first JAKi, Dr. Pombo-Suarez noted.

“We are trying to shed a light on this growing population of patients, as prescription of these drugs is increasing and new JAK inhibitors come into play, meaning that this scenario, we propose, is becoming more and more frequent in real life. We must provide a solution for these patients,” he said.
 

Pooled registry data

The investigators compared the effectiveness of the two approaches with respect to rates of drug retention and Disease Activity Score in 28 joints (DAS28).
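
(For reference, and not something specific to this study: DAS28 is a standard composite index. Its widely used DAS28-ESR form combines the 28-joint tender and swollen joint counts, the erythrocyte sedimentation rate, and the patient’s general-health rating:

\[ \mathrm{DAS28} \;=\; 0.56\sqrt{\mathrm{TJC28}} \;+\; 0.28\sqrt{\mathrm{SJC28}} \;+\; 0.70\,\ln(\mathrm{ESR}) \;+\; 0.014\,\mathrm{GH} \]

where TJC28 and SJC28 are the tender and swollen joint counts out of 28, ESR is in mm/hr, and GH is a 0–100 visual analog scale. By convention, scores above 5.1 indicate high disease activity and scores below 2.6 indicate remission.)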

They conducted a nested cohort study using data from 14 national registries that are part of the JAK-pot collaboration.

They pooled data from each registry on patients with RA for whom a first JAKi had failed and who were then treated with either a second JAKi or a bDMARD.

They identified a total of 708 patients for whom a JAKi had failed initially. Of these patients, 154 were given a different JAKi, and 554 were switched to a bDMARD. In each group, women accounted for a large majority of patients.

The mean age was slightly higher among those who received a second JAKi (58.41 years vs. 54.74 years for patients who were given a bDMARD). The mean disease duration was 13.95 years and 11.37 years, respectively.

In each group, approximately 77% of patients received tofacitinib (Xeljanz).

At baseline, the mean DAS28 scores were similar between the groups: 4.10 in the group that received a second JAKi, and 4.17 in the group given a bDMARD.

Reasons for initially stopping use of a JAKi were as follows: adverse events (27.3% of those who took a second JAKi after they had stopped taking one initially, and 17.9% of patients who received a bDMARD); lack of efficacy (61% and 65%, respectively), and other reasons (11.7% and 17.1%, respectively).



At 2 years’ follow-up, drug survival rates were similar between the two treatment arms, although there was a nonsignificant trend toward a higher rate of discontinuation among patients who were given a second JAKi after they stopped taking the first JAKi because of adverse events. In contrast, there was also a nonsignificant trend toward lower discontinuation rates among patients who were given a second JAKi after they had stopped taking the first JAKi because of lack of efficacy.

As noted before, patients who stopped taking the first JAKi because of an adverse event were more likely to stop taking the second JAKi because they experienced either the same or a different adverse event, whereas patients who started taking a bDMARD were equally likely to stop taking the second therapy because of adverse events or lack of efficacy.

The treatment strategies were virtually identical with respect to improvement of DAS28 at 7 months after the start of therapy.

Dr. Pombo-Suarez acknowledged that the study was limited by the fact that heterogeneity between countries could not be assessed, owing to the small sample sizes in each nation’s registry. Other limitations include short follow-up and the fact that tofacitinib was used as the first JAKi by the large majority of patients.

 

 

What’s your practice?

In a media briefing during which Dr. Pombo-Suarez discussed the study findings, this news organization polled other speakers who were not involved in the study about their go-to strategies when JAKi therapy fails.

Silje Watterdal Syversen, MD, PhD, a consultant rheumatologist and researcher at Diakonhjemmet Hospital, Oslo, said that she would choose to switch to a tumor necrosis factor (TNF) inhibitor.

“I think it would depend on what prior treatment the patient had received,” said April Jorge, MD, a rheumatologist at Massachusetts General Hospital, Boston. “In my practice, patients receiving a JAK inhibitor typically failed on their biologics. I haven’t had many fail a JAK inhibitor – a small sample size.”

“That’s what we see in our study,” Dr. Pombo-Suarez said. “Most of the patients that cycled JAK inhibitors had higher numbers of biologics compared with switchers.”

“I can share my experience, which is a greater comfort level with cycling a TNF antagonist. I agree with Dr. Jorge: I don’t use JAK inhibitors in the first line for rheumatoid arthritis, but based on the work that’s been described here and future data, I might have a greater comfort level cycling JAK inhibitors once the data support such an approach,” commented H. Michael Belmont, MD, professor of medicine at New York University, co-director of the NYU Lupus Center, and medical director of Bellevue Hospital Lupus Center, New York.

The JAK-pot study is supported by unrestricted research grants from AbbVie and Galapagos. Dr. Pombo-Suarez has received adviser and speaker honoraria from several companies other than the funders. Dr. Syversen has received honoraria from Thermo Fisher. Dr. Jorge has disclosed no relevant financial relationships. Dr. Belmont has received honoraria from Alexion.

A version of this article first appeared on Medscape.com.


Infected, vaccinated, or both: How protected am I from COVID-19?

Article Type
Changed
Wed, 11/10/2021 - 16:41

As the United States rounds out its second year of the pandemic, many people are trying to figure out just how vulnerable they may be to COVID-19 infection, and whether it’s finally safe to fully return to all the activities they miss.

On an individual basis, the degree and durability of the immunity a person gets after vaccination versus an infection is not an easy question to answer. But it’s one that science is hotly pursuing.

“This virus is teaching us a lot about immunology,” says Gregory Poland, MD, who studies how the body responds to vaccines at the Mayo Clinic in Rochester, Minn. Dr. Poland says this moment in science reminds him of a quote attributed to Ralph Waldo Emerson: “We learn about geology the morning after the earthquake.”

“And that’s the case here. It is and will continue to teach us a lot of immunology,” he says.

It’s vital to understand how a COVID-19 infection reshapes the body’s immune defenses so that researchers can tailor vaccines and therapies to do the same or better.

“Because, of course, it’s much more risky to get infected with the actual virus, than with the vaccine,” says Daniela Weiskopf, PhD, a researcher at the La Jolla Institute for Immunology in California.

What is known so far is that how much protection you get and how long you may have it depends on several factors. Those include your age, whether you’ve had COVID-19 before and how severe your symptoms were, your vaccination status, and how long it has been since you were infected or inoculated. Your underlying health matters, too. Immune protection also depends on the virus and how much it is changing as it evolves to evade all our hard-won defenses.

In a new scientific brief, the Centers for Disease Control and Prevention digs into the evidence behind the immune protection created by infection compared with immunity after vaccination. Here’s what we know so far:
 

Durability of immunity

The agency’s researchers say if you’ve recovered from a COVID-19 infection or are fully vaccinated, you’re probably in good shape for at least 6 months. That’s why this is the recommended interval for people to consider getting a booster dose.

Even though the protection you get after infection and vaccination is generally strong, it’s not perfect.

Getting COVID-19 after you’ve been vaccinated or recovered is still possible. But having some immunity -- whether from infection or vaccination -- really drops the odds of this happening to you. And if you do happen to catch COVID, if your immune system has already gotten a heads up about the virus, your infection is much less likely to be one that lands you in the hospital or morgue.

According to CDC data, at the height of the Delta surge in August, fully vaccinated people were six times less likely to get a COVID-19 infection compared with unvaccinated people, and 11 times less likely to die if they did get it.
 

How strong is immunity after a COVID-19 Infection?

About 90% of people develop some level of protective antibodies after a COVID-19 infection, according to the CDC. But how high those levels climb appears to be all over the map. Studies show peak antibody concentrations can vary as much as 200-fold.
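
(As a point of arithmetic, and our illustration rather than a figure from the CDC brief: a 200-fold spread means the highest peak concentration is 200 times the lowest, which as a percentage increase is

\[ \frac{c_{\max}}{c_{\min}} = 200 \;\Longrightarrow\; (200 - 1)\times 100\% = 19{,}900\% . \])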

Where you fall within that very large range will depend on your age and how sick you became from your COVID-19 infection. It also depends on whether you have an underlying health condition or take a medication that blunts immune function.

Our immune system slows down with age. Immunosenescence starts to affect a person’s health around the age of 60. But there’s no bright line for failure. People who exercise and are generally healthy will have better immune function than someone who doesn’t, no matter their age. In general, though, the older you are, the less likely you are to get a robust immune response after an infection or a vaccination. That’s why this group has been prioritized both for first vaccine doses and boosters.

Beyond age, your protection from future infection seems to depend on how ill you were with the first. Several studies have shown that blood levels of antibodies rise faster and reach a higher peak in people with more severe infections.

In general, people with cold-like symptoms who tested positive but recovered at home are better protected than people who didn’t get any symptoms. And people who were hospitalized for their infections are better protected over the long term than people with milder infections. They may have paid a steep price for that protection: Many hospitalized patients continue to have debilitating symptoms that last for months after they go home.

On average, though, protection after infection seems to be comparable to vaccination, at least for a while. Six large studies from different countries have looked into this question, and five of them have used the very sensitive real-time polymerase chain reaction test (RT-PCR) to count people as truly being previously infected. These studies found that for 6 to 9 months after recovery, a person was 80% to 93% less likely to get COVID-19 again.

There are some caveats to mention, though. Early in the pandemic when supplies were scarce, it was hard to get tested unless you were so sick you landed in the hospital. Studies have shown that the concentration of antibodies a person makes after an infection seems to depend on how sick they got in the first place.

People who had milder infections, or who didn’t have any symptoms at all, may not develop as much protection as those who have more severe symptoms. So these studies may reflect the immunity developed by people who were pretty ill during their first infections.

One study of 25,000 health care workers, who were all tested every 2 weeks -- whether they had symptoms or not -- may offer a clearer picture. In this study, health care workers who’d previously tested positive for COVID-19 were 84% less likely to test positive for the virus again. They were 93% less likely to get an infection that made them sick, and 52% less likely to get an infection without symptoms, for at least 6 months after they recovered.
 

 

 

How does protection after infection compare to vaccination?

Two weeks after your final vaccine dose, protection against a COVID-19 infection is high -- around 90% for the Pfizer and Moderna mRNA vaccines and 66% for the one-dose Johnson & Johnson shot. Clinical trials conducted by the manufacturer have shown that a second dose of the Johnson & Johnson vaccine given at least 2 months after vaccination boosts protection against illness in the United States to about 94%, which is why another dose has been recommended for all Johnson & Johnson vaccine recipients 2 months after their first shot.

It’s not yet known how long the COVID-19 vaccines remain protective. There’s some evidence that protection against symptomatic infections wanes a bit over time as antibody levels drop. But protection against severe illness, including hospitalization and death, has remained high so far, even without a booster.
 

Are antibodies different after infection compared to vaccination?

Yes. And researchers don’t yet understand what these differences mean.

It seems to come down to a question of quality versus quantity. Vaccines seem to produce higher peak antibody levels than natural infections do. But these antibodies are highly specialized, able to recognize only the parts of the virus they were designed to target.

“The mRNA vaccine directs all the immune responses to the single spike protein,” says Alice Cho, PhD, who is studying the differences in vaccine and infection-created immunity at the Rockefeller University in New York. “There’s a lot more to respond to with a virus than there is in a vaccine.”

During an infection, the immune system learns to recognize and grab onto many parts of the virus, not just its spike.

The job of remembering the various pieces and parts of a foreign invader, so that it can be quickly recognized and disarmed should it ever return, falls to memory B cells.

Memory B cells, in turn, make plasma cells that then crank out antibodies that are custom tailored to attach to their targets.

Antibody levels gradually fall over a few months’ time as the plasma cells that make them die off. But memory B cells live for extended periods. One study that was attempting to measure the lifespan of individual memory B cells in mice found that these cells probably live as long as the mouse itself. Memory B cells induced by smallpox vaccination may live at least 60 years -- virtually an entire lifetime.

Dr. Cho’s research team has found that when memory B cells are trained by the vaccine, they become one-hit wonders, cranking out copious amounts of the same kinds of antibodies over and over again.

Memory B cells trained by viral infection, however, are more versatile. They continue to evolve over several months and produce higher quality antibodies that appear to become more potent over time and can even develop activity against future variants.

Still, the researchers stress that it’s not smart to wait to get a COVID-19 infection in hopes of getting these more versatile antibodies.

“While a natural infection may induce maturation of antibodies with broader activity than a vaccine does -- a natural infection can also kill you,” says Michel Nussenzweig, MD, PhD, head of Rockefeller’s Laboratory of Molecular Immunology.

Sure, memory B cells generated by infections may be immunological Swiss Army Knives, but maybe, argues Donna Farber, PhD, an immunologist at Columbia University in New York, we really only need a single blade.

“The thing with the vaccine is that it’s really focused,” she says. “It’s not giving you all these other viral proteins. It’s only giving you the spike.”

“It may be even better than the level of neutralizing spike antibodies you’re going to get from the infection,” she says. “With a viral infection, the immune response really has a lot to do. It’s really being distracted by all these other proteins.”

“Whereas with the vaccine, it’s just saying to the immune response, ‘This is the immunity we need,’” Dr. Farber says. “‘Just generate this immunity.’ So it’s focusing the immune response in a way that’s going to guarantee that you’re going to get that protective response.”
 

 

 

What if you had COVID and later got vaccinated?

This is called hybrid immunity, and it’s the best of both worlds.

“You have the benefit of very deep, but narrow, immunity produced by vaccine, and very broad, but not very deep, immunity produced by infection,” Dr. Poland says. He says you’ve effectively cross-trained your immune system.

In studies of people who recovered from COVID-19 and then went on to get an mRNA vaccine, after one dose, their antibodies were as high as someone who had been fully vaccinated. After two doses, their antibody levels were about double the average levels seen in someone who’d only been vaccinated.

Studies have shown this kind of immunity has real benefits, too. A recent study by researchers at the University of Kentucky and the CDC found that people who’d gotten COVID-19 in 2020, but had not been vaccinated, were about twice as likely to be reinfected in May and June compared with those who recovered and went on to get their vaccines.
 

What antibody level is protective?

Scientists aren’t exactly sure how high antibody levels need to be for protection, or even which kinds of antibodies or other immune components matter most yet.

But vaccines appear to generate higher antibody levels than infections do. In a recent study published in the journal Science, Dr. Weiskopf and her colleagues at the La Jolla Institute for Immunology detail the findings of a dose de-escalation study, in which they gave people one-quarter of the normal dose of the Moderna mRNA vaccine and then collected blood samples over time to study their immune responses.

Their immune responses were scaled down with the dose.

“We saw that this has the exact same levels as natural infection,” Dr. Weiskopf says. “People who are vaccinated have much higher immune memory than people who are naturally infected,” she says.

Antibody levels are not easy to determine in the real world. Can you take a test to find out how protected you are? The answer is no, because we don’t yet know what antibody level, or even which kind of antibodies, correlate with protection.

Also, there are many different kinds of antibody tests and they all use a slightly different scale, so there’s no broadly agreed upon way to measure them yet. It’s difficult to compare levels test to test.
 

Weeks or months between doses? Which is best?

Both the Pfizer and Moderna vaccines were tested to be given 3 and 4 weeks apart, respectively. But when the vaccines were first rolling out, shortages prompted some countries to stretch the interval between doses to 4 or more months.

Researchers who have studied the immune responses of people who were inoculated on an extended dosing schedule noticed something interesting: When the interval was stretched, people had better antibody responses. In fact, their antibody responses looked like the sky-high levels people got with hybrid immunity.

Susanna Dunachie, PhD, a global research professor at the University of Oxford in the United Kingdom, wondered why. She’s leading a team of researchers who are doing detailed studies of the immune responses of health care workers after their vaccinations.

“We found that B cells, which are the cells that make antibodies to the viral spike protein after vaccination, carry on increasing in number between 4 and 10 weeks after vaccination,” she says.

Waiting 6 to 14 weeks to give the second dose seems to stimulate the immune system when all of its antibody-making factories are finally up and running.

For this reason, giving the second dose at 3 weeks, she says, might be premature.

But there’s a tradeoff involved in waiting. If there are high levels of the virus circulating in a community, you want to get people fully vaccinated as quickly as possible to maximize their protection in the shortest window of time, which is what we decided to do in the United States.

Researchers say it might be a good idea to revisit the dosing interval when it’s less risky to try it.
 

A version of this article first appeared on WebMD.com.


Brief, automated cognitive test may offer key advantages in MS

Article Type
Changed
Wed, 11/10/2021 - 16:31

The National Institutes of Health Toolbox Cognitive Battery (NIHTB-CB), a 30-minute, automated, iPad-based battery of psychological tests, offers some key advantages over gold standard cognitive assessments in patients with relapsing-remitting multiple sclerosis (RRMS), new research shows.

“To our knowledge this is the first psychometric evaluation of the NIH Toolbox Cognition Battery in MS,” said study investigator Heena R. Manglani, MA, a clinical psychology fellow at Massachusetts General Hospital and Harvard Medical School, Boston.

“[The findings] suggest that the NIH Toolbox Cognition Battery may be used as an alternative to other gold-standard measures which may cover limited domains or require manual scoring,” added Ms. Manglani, who is working toward her PhD in clinical psychology.

The study was presented at the 2021 Annual Meeting of the Consortium of Multiple Sclerosis Centers (CMSC).
 

An indicator of disease activity?

Cognitive deficits affecting a range of functions – including memory, attention, and communication – occur in 34% to 65% of patients with MS, so the ability to detect and monitor such deficits has important clinical implications.

Cognitive changes can provide a unique opportunity to identify acute disease activity in patients with MS that may already be under way before physical manifestations become apparent, said Ms. Manglani. “If we can detect subtle changes in cognition that might foreshadow other symptoms of disease worsening, we can then allocate interventions that might stave off cognitive decline,” she explained.

While there is an array of well-established neuropsychological tests for the assessment of cognitive deficits, each has limitations, so a shorter, computerized, convenient, and reliable test could prove beneficial.

The NIHTB-CB has been validated in a large, nationally representative sample of individuals aged 8 to 85 and represents a potentially attractive option, yielding composite measures and scores corrected for age, gender, education, race, and ethnicity.
 

Comparative testing

To compare the test with other leading cognition tools used in MS, the investigators recruited 87 patients with RRMS (79% female; mean age, 47.3 years). Participants completed the full NIHTB-CB (about 30 minutes) and the full Minimal Assessment of Cognitive Function in Multiple Sclerosis (MACFIMS), which takes about 90 minutes, as well as several subtests from the Wechsler Adult Intelligence Scale-IV (WAIS-IV) covering processing speed and working memory. All patients had an Expanded Disability Status Scale (EDSS) score of 5.0 or below and, on average, had been living with MS for about a decade.

The results showed the normative scores for NIHTB-CB had significant concordance with the other measures in terms of processing speed (concordance correlation coefficient [CCC] range = 0.28-0.48), working memory (CCC range = 0.27-0.37), and episodic memory (CCC range = 0.21-0.32). However, agreement was not shown for executive function (CCC range = 0.096-0.11).
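
The agreement statistic here is Lin’s concordance correlation coefficient. The report doesn’t reproduce the formula, but the standard definition, given here for reference, is

\[
\rho_c = \frac{2\rho\,\sigma_x\sigma_y}{\sigma_x^2 + \sigma_y^2 + (\mu_x - \mu_y)^2},
\]

where \(\rho\) is the Pearson correlation between the two measures and \(\mu_x\), \(\mu_y\), \(\sigma_x\), and \(\sigma_y\) are their means and standard deviations. Because the denominator penalizes both scatter and any systematic offset between instruments, CCC values in the 0.2-0.5 range indicate modest rather than strong agreement.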

Ms. Manglani noted that executive function comprises various submeasures, such as planning and inhibitory control. “Perhaps our gold standard measures tapped into a different facet of executive function than measured by the NIHTB,” she said.

The investigators found the proportion of participants classified as cognitively impaired was similar between the MACFIMS and the NIHTB tests.

Further assessment of fluid cognition on the NIHTB-CB – a composite of processing speed, working memory, episodic memory, and executive function that is automatically generated by the toolbox – showed that the measure was negatively associated with disease severity as measured by the EDSS (P = .006). However, fluid cognition was not associated with depression (P = .39) or fatigue (P = .69).

Of note, no similar association with EDSS-measured disease severity was observed for the MACFIMS.

“Interestingly, we found that only the NIHTB-CB fluid cognition was associated with disease severity, such that it was associated with nearly 11% of the variance in EDSS scores, and we were surprised that we didn’t see this with MACFIMS,” Ms. Manglani said.
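
If the 11% figure refers to variance explained (an R² of 0.11 – an assumption, since the model isn’t specified in the report), the corresponding correlation between fluid cognition and EDSS scores would be roughly

\[
|r| = \sqrt{0.11} \approx 0.33,
\]

a modest but clinically detectable association.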
 

Key advantages

The NIHTB-CB was developed as part of the NIH Blueprint for Neuroscience Research initiative and commissioned by 16 NIH Institutes to provide brief, efficient assessment measures of cognitive function.

The battery has been validated in healthy individuals and tested in other populations with neurologic disorders, including patients who have suffered stroke and traumatic brain injury.

Ms. Manglani noted that the NIHTB-CB had key advantages over other tests. “First, it is a 30-minute iPad-based battery, which is shorter than most cognitive batteries available, and one of the few that is completely computerized. In addition, it automatically scores performance and yields a report with both composite scores and scores for each subtest,” she said.

In addition, said Ms. Manglani, “the NIH toolbox has a large validation sample of individuals between 8-85 years of age and provides normative scores that account for age, gender, education, and race/ethnicity, which allows individuals’ performances to be compared with their peers.”

The findings suggest that, with further validation, the battery could have an important role in MS, she added.

“The NIH Toolbox needs to be tested in all subtypes of MS, with a full range of disease severity, and in MS clinics to gauge the clinical feasibility. Larger samples and repeated assessments are also needed to assess the test-retest reliability,” she said.

The study had no specific funding. The authors have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.


Specialty pharmacists may speed time to MS treatment

Article Type
Changed
Wed, 11/10/2021 - 16:15

Specialty pharmacists play a key and growing role in navigating the complexities of initiating disease-modifying therapies (DMTs) for multiple sclerosis (MS), resulting in earlier treatment, new data suggest.

“As DMT management and treatment options for MS symptoms become more complex, clinical pharmacists can be utilized for medication education and management,” Jenelle Hall Montgomery, PharmD, a clinical pharmacist practitioner at the Multiple Sclerosis and Neuroimmunology Division, department of neurology, Duke University Hospital, Durham, N.C., told delegates attending the 2021 Annual Meeting of the Consortium of Multiple Sclerosis Centers (CMSC).

Since 2018, more than half a dozen DMTs have been approved for MS by the U.S. Food and Drug Administration, yet there is still no established algorithm for DMT selection – a gap that creates a need for specialty pharmacists, she added.

“DMT approvals by the FDA have outpaced MS guideline recommendations. This can be overwhelming for patients, especially now that they have so many options to choose from,” she said.

Key services provided by specialty pharmacists include coordinating pretreatment requirements, as well as help with dosing, side effects, safety monitoring, and treatment adherence. In addition, pharmacists help with switching therapies, dispensing, and cost and authorization problems.

In reporting on improvements associated with specialty pharmacists, researchers from prominent MS centers around the country described specific outcomes.
 

Aids early intervention

A report on the Kaiser Permanente Washington (KPWA) MS Pharmacy Program detailed significant reductions in the time to address patients’ needs through the use of specialty pharmacists. In an assessment of 391 referrals to the program from 2019 to 2020, the average total time spent per patient per year dropped from 145 minutes in 2019 to 109 minutes in 2020.

Services included assessment of medication adherence, adverse drug reaction consultation, lab monitoring, patient counseling on initiation of a DMT, shared decision making, and follow-up visits.

“The KPWA MS Pharmacy Program plays an integral role in the care of patients with MS. The MS clinical pharmacists ensure patients are well informed about their DMT options and are fully educated about selected treatment,” the investigators noted.

A report on an outpatient MS clinic at Emory Healthcare, Atlanta, described how the use of specialty pharmacist services resulted in a 49% reduction in the time to treatment initiation with fingolimod, from 83.9 days to 42.9 days.
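
As a quick check on the arithmetic, the 49% figure follows directly from the reported times:

\[
\frac{83.9 - 42.9}{83.9} \approx 0.489 \approx 49\%.
\]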

“Integration of a clinical pharmacy specialist in the therapeutic management of MS patients is crucial to early intervention with disease-modifying therapy,” the investigators noted.

A report on the specialty pharmacy services provided at Johns Hopkins MS Precision Medicine Center of Excellence, Baltimore, described an evaluation of 708 assessments between July 2019 and June 2020. Results showed that the vast majority (98%) of patients reported no missed days from work or school due to MS-related symptoms and that 99.3% reported no hospitalizations due to MS relapses, which are both key measures of MS treatment adherence.
 

High patient satisfaction

Patients reported high satisfaction with the in-house pharmacy on the National Association of Specialty Pharmacy’s patient satisfaction survey, giving it an average score of 82, compared with 79 for external specialty pharmacies.

“Moreover, patients were highly satisfied with the services provided at the pharmacy and were likely to continue receiving their comprehensive pharmacy care at our institution,” the researchers reported.

The study “highlights the value of pharmacists’ involvement in patient care and supports the need for continuation of integrated clinical services in health system specialty pharmacy,” the investigators noted.

CMSC President Scott D. Newsome, DO, director of the Neurosciences Consultation and Infusion Center at Green Spring Station, Lutherville, Md., and associate professor of neurology at Johns Hopkins University School of Medicine, said that as a clinician, he is highly satisfied with the specialty pharmacy services for MS at Johns Hopkins.

“Our pharmacists are fantastic in communicating with the prescriber if something comes up related to medication safety or they are concerned that the patient isn’t adhering to the medication,” Dr. Newsome said.

He noted that in addition to helping alleviate the burden of the myriad tasks associated with prescribing for patients with MS, specialty pharmacists may have an important impact on outcomes, although more data are needed.

“Having a specialty pharmacy involved in the care of our patients can help navigate the challenges associated with the process of obtaining approval for DMTs,” he said. “We know how important it is to expedite and shorten the time frame from writing the prescription to getting the person on their DMT.”
 

Telemedicine, other models

Although integrated specialty pharmacist services may seem out of reach for smaller MS clinics, the use of telemedicine and other models may help achieve similar results.

“A model I have seen is having pharmacists split their time between a specialty pharmacy and the MS clinic,” said Dr. Montgomery.

“A telemedicine model can also be utilized, in which a pharmacist can reach out to patients by telephone or through video visits. This would allow a pharmacist to be utilized for multiple clinics or as an MS specialist within a specialty pharmacy,” she added.

Whether services are provided in house or through telemedicine, a key benefit for clinicians is freeing up valuable time, which has a domino effect that improves quality all around.

“In addition to improving safety outcomes, specialty pharmacists help with the allocation of clinic staff to other clinic responsibilities, and the utilization of services by patients results in more resources allocated for their care,” Dr. Montgomery said.

Dr. Montgomery is a nonpromotional speaker for Novartis and is on its advisory board.

A version of this article first appeared on Medscape.com.

Meeting/Event
Publications
Topics
Sections
Meeting/Event
Meeting/Event

Specialty pharmacists play a key and growing role in navigating the complexities of initiating disease-modifying therapies (DMTs) for multiple sclerosis (MS), resulting in earlier treatment, new data suggest.

Dr. Jenelle Montgomery


“As DMT management and treatment options for MS symptoms become more complex, clinical pharmacists can be utilized for medication education and management,” Jenelle Hall Montgomery, PharmD, a clinical pharmacist practitioner at the Multiple Sclerosis and Neuroimmunology Division, department of neurology, Duke University Hospital, Durham, N.C., told delegates attending the 2021 Annual Meeting of the Consortium of Multiple Sclerosis Centers (CMSC).

Since 2018, more than half a dozen DMTs have been approved for MS by the U.S. Food and Drug Administration. However, there is currently no established DMT selection algorithm, and because of this, there is a need for specialty pharmacists, she added.

“DMT approvals by the FDA have outpaced MS guideline recommendations. This can be overwhelming for patients, especially now that they have so many options to choose from,” she said.

Key services provided by specialty pharmacists include coordinating pretreatment requirements, as well as help with dosing, side effects, safety monitoring, and treatment adherence. In addition, pharmacists help with switching therapies, dispensing, and cost and authorization problems.

In reporting on improvements associated with specialty pharmacists, researchers from prominent MS centers around the country described specific outcomes.
 

Aids early intervention

A report on the Kaiser Permanente Washington (KPWA) MS Pharmacy Program detailed significant reductions in the time to address patients’ needs through the use of specialty pharmacists. In an assessment of 391 referrals to the program from 2019 to 2020, the average total time spent per patient per year dropped from 145 minutes in 2019 to 109 minutes in 2020.

Services included assessment of medication adherence, adverse drug reaction consultation, lab monitoring, patient counseling on initiation of a DMT, shared decision making, and follow-up visits.

“The KPWA MS Pharmacy Program plays an integral role in the care of patients with MS. The MS clinical pharmacists ensure patients are well informed about their DMT options and are fully educated about selected treatment,” the investigators noted.

A report on an outpatient MS clinic at Emory Healthcare, Atlanta, described how use of specialty pharmacist services resulted in a 49% reduction in time to treatment initiation with fingolimod. The time decreased from 83.9 days to 42.9 days following the introduction of specialty pharmacist services.

“Integration of a clinical pharmacy specialist in the therapeutic management of MS patients is crucial to early intervention with disease-modifying therapy,” the investigators noted.

A report on the specialty pharmacy services provided at Johns Hopkins MS Precision Medicine Center of Excellence, Baltimore, described an evaluation of 708 assessments between July 2019 and June 2020. Results showed that the vast majority (98%) of patients reported no missed days from work or school due to MS-related symptoms and that 99.3% reported no hospitalizations due to MS relapses, which are both key measures of MS treatment adherence.
 

 

 

High patient satisfaction

Patients reported high satisfaction with the in-house pharmacy on the National Association of Specialty Pharmacy’s patient satisfaction survey. In the survey, the average score was 82, compared with 79 for external specialty pharmacies.

“Moreover, patients were highly satisfied with the services provided at the pharmacy and were likely to continue receiving their comprehensive pharmacy care at our institution,” the researchers reported.

The study “highlights the value of pharmacists’ involvement in patient care and supports the need for continuation of integrated clinical services in health system specialty pharmacy,” the investigators noted.

CMSC President Scott D. Newsome, DO, director of the Neurosciences Consultation and Infusion Center at Green Spring Station, Lutherville, Md., and associate professor of neurology at Johns Hopkins University School of Medicine, said that as a clinician, he is highly satisfied with the specialty pharmacy services for MS at Johns Hopkins.

“Our pharmacists are fantastic in communicating with the prescriber if something comes up related to medication safety or they are concerned that the patient isn’t adhering to the medication,” Dr. Newsome said.

He noted that in addition to helping to alleviate the burden of the myriad tasks associated with prescribing for patients with MS, specialty pharmacists may have an important impact on outcomes, although more data are needed.

“Having a specialty pharmacy involved in the care of our patients can help navigate the challenges associated with the process of obtaining approval for DMTs,” he said. “We know how important it is to expedite and shorten the time frame from writing the prescription to getting the person on their DMT.”
 

Telemedicine, other models

Although integrated specialty pharmacist services may seem out of reach for smaller MS clinics, the use of telemedicine and other models may help achieve similar results.

“A model I have seen is having pharmacists split their time between a specialty pharmacy and the MS clinic,” said Dr. Montgomery.

“A telemedicine model can also be utilized, in which a pharmacist can reach out to patients by telephone or through video visits. This would allow a pharmacist to be utilized for multiple clinics or as an MS specialist within a specialty pharmacy,” she added.

Whether provided in house or through telemedicine, a key benefit for clinicians is freeing up valuable time, which has a domino effect in improving quality all around.

“In addition to improving safety outcomes, specialty pharmacists help with the allocation of clinic staff to other clinic responsibilities, and the utilization of services by patients results in more resources allocated for their care,” Dr. Montgomery said.

Dr. Montgomery is a nonpromotional speaker for Novartis and is on its advisory board.

A version of this article first appeared on Medscape.com.


Evaluation of the Effectiveness and Safety of Alirocumab Use in Statin-Intolerant Veterans

In 2016, 17.6 million deaths occurred globally due to cardiovascular disease (CVD), with coronary artery disease (CAD) and ischemic stroke as top contributors.1 Elevated low-density lipoprotein cholesterol (LDL-C) has been linked to greater risk of atherosclerotic cardiovascular disease (ASCVD); therefore, LDL-C reduction is imperative to decrease the risk of cardiovascular (CV) morbidity and mortality.2 Since 1987, statin therapy has been the mainstay of treatment for hypercholesterolemia, and current practice guidelines recommend statins as first-line therapy given the reductions in LDL-C and CV mortality demonstrated in robust clinical trials.2-4 Although generally safe and well tolerated, muscle-related adverse events (AEs) limit optimal use of statins in up to 20% of individuals who have an indication for statin therapy.5 As a consequence, these patients receive suboptimal statin doses or no statin therapy and are at a higher risk for ASCVD.5

Proprotein convertase subtilisin/kexin type 9 (PCSK9) inhibitors have been shown to significantly lower LDL-C when used as monotherapy or in combination with statins and/or other lipid-lowering therapies.5 These agents are currently approved by the US Food and Drug Administration as an adjunct to diet with or without other lipid-lowering therapies for the management of primary hypercholesterolemia (including heterozygous familial hypercholesterolemia), homozygous familial hypercholesterolemia (evolocumab only), and for use in patients with established CVD unable to achieve their lipid-lowering goals with maximally tolerated statin doses and ezetimibe.4 With the ability to reduce LDL-C by up to 65%, PCSK9 inhibitors offer an alternative option for LDL-C and potentially CV risk reduction in statin-intolerant patients.5

Alirocumab, the formulary-preferred PCSK9 inhibitor at the Michael E. DeBakey Veterans Affairs Medical Center (MEDVAMC) in Houston, Texas, has been used increasingly in high-risk statin-intolerant veterans. The primary objective of this case series was to assess the LDL-C reduction associated with alirocumab use in statin-intolerant veterans at the MEDVAMC. The secondary objective was to assess the incidence of CV events. This study was approved by the MEDVAMC Quality Assurance and Regulatory Affairs committee.

Methods

In this single-center case series, a retrospective chart review was conducted to identify statin-intolerant veterans who were initiated on treatment with alirocumab for LDL-C and/or CV risk reduction between June 2017 and May 2019. Adult veterans with a diagnosis of primary hypercholesterolemia (including heterozygous familial hypercholesterolemia) and/or CAD with documented statin intolerance were included in the study. Statin intolerance was defined in accordance with the National Lipid Association (NLA) definition as an inability to tolerate ≥ 2 statins, with a trial of at least 1 statin at its lowest daily dose.5 Veterans who previously received treatment with evolocumab, those prescribed concurrent statin therapies, and those missing follow-up lipid panels at 24 weeks were excluded from the study. To assess LDL-C reduction, LDL-C at baseline was compared with LDL-C at 4 and 24 weeks. Incident CV events before and after alirocumab initiation were documented. The US Department of Veterans Affairs (VA) Computerized Patient Record System was used to collect patient data.

Data Collection, Measures, and Analysis

Electronic health records of all eligible patients who received alirocumab were reviewed, and basic demographics (patient age, sex, and race/ethnicity) as well as baseline medical characteristics were collected. To confirm statin intolerance, each veteran’s history of statin use and use of additional lipid-lowering agents was documented. CV history was measured with an index of categorical measures for hypertension, confirmed CAD, hyperlipidemia, heart failure, arrhythmias, peripheral artery disease, stroke, diabetes mellitus, and hypothyroidism. Concomitant medications, such as aspirin, P2Y12 inhibitors, β-blockers, angiotensin-converting enzyme inhibitors, and angiotensin II receptor blockers, also were recorded. Each veteran’s lipid panel at baseline and at 4 and 24 weeks after treatment initiation also was extracted. Continuous variables were summarized with means (SD), and categorical variables were summarized with frequencies and proportions. The paired Wilcoxon signed rank test was used to compare LDL-C at 4 and 24 weeks after alirocumab initiation with patients’ baseline LDL-C.
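
For readers who want to reproduce this style of analysis, a minimal sketch of the paired comparison is shown below. The LDL-C arrays are hypothetical stand-ins (the study data are not public), and SciPy’s wilcoxon function is assumed as the test implementation.

# Minimal sketch of the paired Wilcoxon signed rank comparison described
# above, using made-up LDL-C values (mg/dL) in place of the study data.
import numpy as np
from scipy.stats import wilcoxon

baseline = np.array([190.0, 155.0, 240.0, 130.0, 175.0])  # illustrative only
week4 = np.array([95.0, 120.0, 140.0, 118.0, 101.0])

stat, p = wilcoxon(baseline, week4)  # paired test on within-patient differences
median_reduction = np.median(baseline - week4)
print(f"median LDL-C reduction: {median_reduction:.1f} mg/dL (P = {p:.3f})")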

Results

Between June 2017 and May 2019, 122 veterans were initiated on alirocumab. Of these veterans, 98 were excluded: 35 concurrently received statin therapy, 33 missed follow-up lipid panels, 21 had previously received evolocumab, 6 failed to meet the NLA definition for statin intolerance, 2 did not fill active alirocumab prescriptions, and 1 had an incalculable LDL-C with a baseline triglyceride level of 3079 mg/dL. This resulted in 24 veterans included in the analysis.

Most participants were male (87.5%) and White (79.2%), with a mean (SD) age of 66.0 (8.4) years and a mean (SD) baseline LDL-C of 161.9 (74.3) mg/dL. At baseline, 21 veterans had a history of primary hyperlipidemia, 19 had a history of CAD, and 2 had a history of heterozygous familial hypercholesterolemia. Of the 24 patients included, the most frequently trialed statins before alirocumab initiation were atorvastatin (95.8%), simvastatin (79.2%), rosuvastatin (79.2%), and pravastatin (62.5%) (Table).

Table. Baseline Characteristics (N = 24)

LDL-C Reduction

Veterans were initially treated with alirocumab 75 mg administered subcutaneously every 2 weeks; however, 11 veterans required a dose increase to 150 mg every 2 weeks. At treatment week 4, the median LDL-C reduction was 78.5 mg/dL (IQR, 28.0-107.3; P < .01), and at treatment week 24, the median LDL-C reduction was 55.6 mg/dL (IQR, 18.6-85.3; P < .01). This equated to median LDL-C reductions from baseline of 48.5% at week 4 and 34.3% at week 24. A total of 3 veterans experienced LDL-C increases following initiation of alirocumab. At week 4, 9 veterans had an LDL-C reduction > 50%, 7 had a reduction between 30% and 50%, and 5 had a reduction of < 30%. At week 24, 6 veterans had a reduction > 50%, 9 had a reduction between 30% and 50%, and 6 had a reduction of < 30%.
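
The categorical breakdown above follows from each veteran’s percent change in LDL-C. The sketch below illustrates the binning with hypothetical values; it is not the study’s code.

# Illustrative binning of per-veteran LDL-C percent change into the
# categories reported above; baseline/week-24 values are made up.
import numpy as np

baseline = np.array([190.0, 155.0, 240.0, 130.0, 175.0])
week24 = np.array([80.0, 120.0, 150.0, 140.0, 101.0])

pct_reduction = 100 * (baseline - week24) / baseline  # negative = LDL-C increase
bins = {
    "> 50% reduction": int(np.sum(pct_reduction > 50)),
    "30%-50% reduction": int(np.sum((pct_reduction >= 30) & (pct_reduction <= 50))),
    "< 30% reduction": int(np.sum((pct_reduction > 0) & (pct_reduction < 30))),
    "LDL-C increase": int(np.sum(pct_reduction <= 0)),
}
print(bins)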

Cardiovascular Events

Before alirocumab initiation, 22 CV events and interventions were reported in 16 veterans: 12 percutaneous coronary interventions, 5 coronary artery bypass surgeries (CABG), 4 myocardial infarctions, and 1 transient ischemic attack. One month following alirocumab initiation, 1 veteran underwent a CABG after a non-ST-elevation myocardial infarction (NSTEMI).

Safety and Tolerability

Alirocumab was discontinued in 5 veterans: 4 due to intolerance (reported memory loss, lethargy, myalgias, and body aches with dyspnea) and 1 due to a persistent LDL-C of < 40 mg/dL. Therapy was stopped after 1 year in 2 veterans (persistent LDL-C < 40 mg/dL; reported memory loss), after 6 months in the veteran who reported lethargy, after 4 months in the veteran with myalgias, and within 2 months in the veteran with body aches and dyspnea. No other AEs were reported.

Discussion

The ODYSSEY ALTERNATIVE trial (Efficacy and Safety of Alirocumab vs Ezetimibe in Statin-Intolerant Patients, With a Statin Rechallenge Arm) was the first clinical trial to examine the efficacy and safety of alirocumab use in statin-intolerant patients. In the trial, 314 patients were randomized to receive alirocumab, ezetimibe, or an atorvastatin rechallenge.6 At 24 weeks, alirocumab reduced mean (SE) LDL-C by 45.0% (2.2%) vs 14.6% (2.2%) with ezetimibe (mean difference, 30.4% [3.1%]; P < .01).6 Fewer skeletal muscle-related events also were noted with alirocumab vs atorvastatin (hazard ratio, 0.61; 95% CI, 0.38-0.99; P = .04).6
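
As a plausibility check on the reported between-group estimate, the difference and its standard error can be reconstructed from the arm-level numbers. This is a back-of-the-envelope calculation assuming independent arms, not the trial’s actual analysis.

# Reported arm-level results: mean (SE) percent LDL-C reduction at 24 weeks.
alirocumab_mean, alirocumab_se = 45.0, 2.2
ezetimibe_mean, ezetimibe_se = 14.6, 2.2

mean_diff = alirocumab_mean - ezetimibe_mean  # 30.4, as reported
se_diff = (alirocumab_se**2 + ezetimibe_se**2) ** 0.5  # ~3.1, as reported
print(f"difference: {mean_diff:.1f}% (SE {se_diff:.1f}%)")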

In this case series, an LDL-C reduction of > 50% was observed in 9 veterans (42.9% of the 21 with any reduction) after 4 weeks of treatment; however, an LDL-C reduction of > 50% from baseline was sustained in only 6 veterans (28.6%) at week 24. Additionally, LDL-C increases from baseline were observed in 3 veterans; the reason for these increases was unclear but may include nonadherence and dietary factors.4 Although a majority of patients saw a significant and clinically meaningful reduction in LDL-C, the patients whose LDL-C increased may have benefited from targeted intervention to improve medication and dietary adherence. PCSK9 inhibitor resistance also may have contributed to an increase in LDL-C during treatment.7

Of the 24 patients included, 4 reported AEs that resulted in therapy discontinuation. Memory impairment, a rare AE of alirocumab, was reported 1 year following alirocumab initiation. Additionally, lethargy was reported after 6 months of treatment. Myalgia also was reported in a veteran 4 months following treatment, and 1 veteran experienced body aches and dyspnea < 2 months following treatment. The most common AEs associated with alirocumab, as noted in previous safety and efficacy clinical trials, include nasopharyngitis, injection site reactions, influenza, urinary tract infection, and myalgias.8 Many of these more common AEs may be subclinical and underreported. This small case series, however, detected 4 events severe enough to lead to therapy discontinuation. Although this sample is not representative of all statin-intolerant patients who receive treatment with alirocumab, our findings suggest the need for patient education on potential AEs before therapy initiation and clinician monitoring at follow-up visits.

The ODYSSEY OUTCOMES trial established a CV benefit associated with alirocumab; however, patients included had a recent acute coronary syndrome event and were receiving a high-intensity statin.9 This case series is unique in that before alirocumab initiation, 22 CV events/interventions were reported in the sample of 24 patients. After therapy initiation, 1 patient underwent a CABG after an NSTEMI in the month following initiation. This suggests that cardiac complications are possible after PCSK9 inhibitor initiation; however, little information can be gained from 1 patient. Nevertheless, early therapy failure should be investigated in the context of real-world use in statin-intolerant patients. This is a complex task, however, given the difficulties of achieving a balanced study design. Statin intolerance is a clear source of selection bias into treatment with alirocumab as patients in this population have already initiated and failed statin therapy. The prevalence of prior CV events and the time-dependent association between prior and future CV events stand as another complex confounder. Although there is a clear and pressing need to understand the risks and benefits of PCSK9 therapy in statin-intolerant patients, future research in this area will need to cautiously address these important sources of bias.

Overall, the results of this case series support the LDL-C reduction associated with alirocumab in the absence of statin therapy. Despite favorable results, use of alirocumab may be limited by cost and its subcutaneous route of administration. Bempedoic acid, an oral, once-daily lipid-lowering agent, offers an alternative to PCSK9 inhibitors, but further data regarding CV outcomes with this agent are needed.10,11 Robust randomized controlled trials also are needed to evaluate CV outcomes for alirocumab use in statin-intolerant veterans.

Limitations

Only 24 veterans were included in the study, reflecting 20% of the charts reviewed (80% exclusion rate), and in this small sample, only 1 CV event was observed. Both of these serve as threats to external validity. As the study information was extracted from chart review, the results may be limited by coding or historical bias. Medical information from outside institutions may be missing from medical records. Additionally, results may be skewed by possible documentation errors. Furthermore, the period between previous CV events and alirocumab initiation is unclear as event dates were often not recorded if treatment was received at an outside institution.

Due to the short follow-up period, the case series is limited in its assessment of CV outcomes and safety outcomes. Larger studies over an extended period are needed to assess CV outcomes and safety of alirocumab use in statin-intolerant patients. Also, medication adherence was not assessed. Given the impact of medication adherence on LDL-C reduction, it is unclear what role medication adherence played in the LDL-C reduction observed in this study.4

Conclusions

Alirocumab use in 24 statin-intolerant veterans resulted in a significant reduction in LDL-C at 4 and 24 weeks after initiation. In addition, 1 CV event/intervention was observed following alirocumab initiation, although this should be interpreted with caution due to the retrospective nature of this case series, small sample size, and short follow-up period. Large, long-term studies would better evaluate the CV benefit associated with alirocumab therapy in a veteran population.

References

1. Benjamin EJ, Muntner P, Alonso A, et al; American Heart Association Council on Epidemiology and Prevention Statistics Committee and Stroke Statistics Subcommittee. Heart disease and stroke statistics—2019 update: a report from the American Heart Association. Circulation. 2019;139(10):e56-e528. doi:10.1161/CIR.0000000000000659

2. Stone NJ, Robinson JG, Lichtenstein AH, et al; American College of Cardiology/American Heart Association Task Force on Practice Guidelines. 2013 ACC/AHA guideline on the treatment of blood cholesterol to reduce atherosclerotic cardiovascular risk in adults: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. Circulation. 2014;129(25 suppl 2):S1-S45. doi:10.1016/j.jacc.2013.11.002

3. Hajar R. Statins: past and present. Heart Views. 2011;12(3):121-127. doi:10.4103/1995-705X.95070

4. Grundy SM, Stone NJ, Bailey AL, et al. 2018 AHA/ACC/AACVPR/AAPA/ABC/ACPM/ADA/AGS/APhA/ASPC/NLA/PCNA guideline on the management of blood cholesterol: a report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines. J Am Coll Cardiol. 2019;73(24):3168-3209. doi:10.1016/j.jacc.2018.11.002

5. Toth PP, Patti AM, Giglio RV, et al. Management of statin intolerance in 2018: still more questions than answers. Am J Cardiovasc Drugs. 2018;18(3):157-173. doi:10.1007/s40256-017-0259-7

6. Moriarty PM, Thompson PD, Cannon CP, et al; ODYSSEY ALTERNATIVE Investigators. Efficacy and safety of alirocumab vs ezetimibe in statin-intolerant patients, with a statin rechallenge arm: The ODYSSEY ALTERNATIVE randomized trial. J Clin Lipidol. 2015;9(6):758-769. doi:10.1016/j.jacl.2015.08.006

7. Shapiro MD, Miles J, Tavori H, Fazio S. Diagnosing resistance to a proprotein convertase subtilisin/kexin type 9 inhibitor. Ann Intern Med. 2018;168(5):376-379. doi:10.7326/M17-2485

8. Raedler LA. Praluent (alirocumab): first PCSK9 inhibitor approved by the FDA for hypercholesterolemia. Am Health Drug Benefits. 2016;9:123-126.

9. Schwartz GG, Steg PG, Szarek M, et al; ODYSSEY OUTCOMES Committees and Investigators. Alirocumab and cardiovascular outcomes after acute coronary syndrome. N Engl J Med. 2018;379(22):2097-2107. doi:10.1056/NEJMoa1801174

10. Nexletol. Package insert. Esperion Therapeutics Inc; 2020.

11. Laufs U, Banach M, Mancini GBJ, et al. Efficacy and safety of bempedoic acid in patients with hypercholesterolemia and statin intolerance. J Am Heart Assoc. 2019;8(7):e011662. doi:10.1161/JAHA.118.011662

Author and Disclosure Information

Fiona Imarhia is a Clinical Pharmacy Specialist at Michael E. DeBakey Veterans Affairs Medical Center in Houston, Texas. Elisabeth Sulaica is a Clinical Assistant Professor in the Department of Pharmacy Practice and Translational Research, and Tyler Varisco is a Research Assistant Professor in the Department of Pharmaceutical Health Outcomes and Policy, both at the University of Houston College of Pharmacy. Marcy Pilate is an Inpatient Pharmacy Supervisor at G.V. (Sonny) Montgomery Veterans Affairs Medical Center in Jackson, Mississippi.
Correspondence: Fiona Imarhia ([email protected])

Author disclosures
The authors report no actual or potential conflicts of interest with regard to this article.

Disclaimer
The opinions expressed herein are those of the authors and do not necessarily reflect those of Federal Practitioner, Frontline Medical Communications Inc., the US Government, or any of its agencies. This article may discuss unlabeled or investigational use of certain drugs. Please review the complete prescribing information for specific drugs or drug combinations—including indications, contraindications, warnings, and adverse effects—before administering pharmacologic therapy to patients.

In 2016, 17.6 million deaths occurred globally due to cardiovascular disease (CVD) with coronary artery disease (CAD) and ischemic stroke as top contributors.1 Elevated low-density lipoprotein cholesterol (LDL-C) has been linked to greater risk of atherosclerotic cardiovascular disease (ASCVD); therefore, LDL-C reduction is imperative to decrease risk of cardiovascular (CV) morbidity and mortality.2 Since 1987, statin therapy has been the mainstay of treatment for hypercholesterolemia, and current practice guidelines recommend statins as first-line therapy given demonstrated reductions in LDL-C and CV mortality reduction in robust clinical trials.2-4 Although generally safe and well tolerated, muscle-related adverse events (AEs) limit optimal use of statins in up to 20% of individuals who have an indication for statin therapy.5 As a consequence, these patients receive suboptimal statin doses or no statin therapy and are at a higher risk for ASCVD.5

Proprotein convertase subtilisin/kexin type 9 (PCSK9) inhibitors have been shown to significantly lower LDL-C when used as monotherapy or in combination with statins and/or other lipid-lowering therapies.5 These agents are currently approved by the US Food and Drug Administration as an adjunct to diet with or without other lipid-lowering therapies for the management of primary hypercholesterolemia (including heterozygous familial hypercholesterolemia), homozygous familial hypercholesterolemia (evolocumab only), and for use in patients with established CVD unable to achieve their lipid-lowering goals with maximally tolerated statin doses and ezetimibe.4 With the ability to reduce LDL-C by up to 65%, PCSK9 inhibitors offer an alternative option for LDL-C and potentially CV risk reduction in statin-intolerant patients.5

Alirocumab, the formulary preferred PCSK9 inhibitor at the Michael E. DeBakey Veterans Affairs Medical Center (MEDVAMC) in Houston, Texas, has been increasingly used in high-risk statin-intolerant veterans. The primary objective of this case series was to assess LDL-C reduction associated with alirocumab use in statin-intolerant veterans at the MEDVAMC. The secondary objective was to assess the incidence of CV events. This study was approved by the MEDVAMC Quality Assurance and Regulatory Affairs committee.

Methods

In this single-center case series, a retrospective chart review was conducted to identify statin-intolerant veterans who were initiated on treatment with alirocumab for LDL-C and/or CV risk reduction between June 2017 and May 2019. Adult veterans with a diagnosis of primary hypercholesterolemia (including heterozygous familial hypercholesterolemia) and/or CAD with documented statin intolerance were included in the study. Statin intolerance was defined in accordance with the National Lipid Association (NLA) definition as aninability to tolerate ≥ 2 statins with a trial of at least 1 statin at its lowest daily dose.5 Veterans who previously received treatment with evolocumab, those prescribed concurrent statin therapies, and those missing follow-up lipid panels at 24 weeks were excluded from the study. To assess LDL-C reduction, LDL-C at baseline was compared with LDL-C at 4 and 24 weeks. Incident CV events before and after alirocumab initiation were documented. The US Department of Veteran Affairs (VA) Computerized Patient Record System was used to collect patient data.

Data Collection, Measures, and Analysis

Electronic health records of all eligible patients who received alirocumab were reviewed, and basic demographics (patient age, sex, and race/ethnicity) as well as medical characteristics at baseline were collected. To confirm statin intolerance, each veteran’s history of statin use and use of additional lipid-lowering agents was documented. CV history was measured with an index of categorical measures for hypertension, confirmed CAD, hyperlipidemia, heart failure, arrhythmias, peripheral artery disease, stroke, diabetes mellitus, and hypothyroidism. Additionally, concomitant medications, such as aspirin, P2Y12 inhibitors, β-blockers, angiotensin-converting enzyme inhibitors, and angiotensin II receptor blockers that patients were taking also were collected. Each veteran’s lipid panel at baseline, and at 4 and 24 weeks posttreatment initiation, also was extracted. Continuous variables were summarized with means (SD), and categorical variables were summarized with frequencies and proportions. The paired Wilcoxon signed rank test was used to compare LDL-C at 4 and 24 weeks after alirocumab initiation with patients’ baseline LDL-C.

Results

Between June 2017 and May 2019, 122 veterans were initiated on alirocumab. Of these veterans, 98 were excluded: 35 concurrently received statin therapy, 33 missed follow-up lipid panels, 21 had previously received evolocumab, 6 failed to meet the NLA definition for statin intolerance, 2 did not fill active alirocumab prescriptions, and 1 had an incalculable LDL-C with a baseline triglyceride level of 3079 mg/dL. This resulted in 24 veterans included in the analysis.

Most participants were male (87.5%) and White veterans (79.2%) with a mean (SD) age of 66.0 (8.4) years and mean (SD) baseline LDL-C of 161.9 (74.3) mg/dL. At baseline, 21 veterans had a history of primary hyperlipidemia, 19 had a history of CAD, and 2 had a history of heterozygous familial hypercholesterolemia. Of the 24 patients included, the most trialed statins before alirocumab initiation were atorvastatin (95.8%), simvastatin (79.2%), rosuvastatin (79.2%), and pravastatin (62.5%) (Table).

Baseline Characteristics (N = 24) table

LDL-C Reduction

Veterans were initially treated with alirocumab 75 mg administered subcutaneously every 2 weeks; however, 11 veterans required a dose increase to 150 mg every 2 weeks. At treatment week 4, the median LDL-C reduction was 78.5 mg/dL (IQR, 28.0-107.3; P < .01), and at treatment week 24, the median LDL-C reduction was 55.6 mg/dL (IQR, 18.6-85.3; P < .01). This equated to median LDL-C reductions from baseline of 48.5% at week 4 and 34.3% at week 24. A total of 3 veterans experienced LDL-C increases following initiation of alirocumab. At week 4, 9 veterans were noted to have an LDL-C reduction > 50%, 7 veterans had an LDL-C reduction between 30% and 50%, and 5 veterans had an LDL-C reduction of < 30%. At week 24, 6 had an LDL-C reduction > 50%, 9 veterans had an LDL-C reduction between 30% and 50%, and 6 had a LDL-C reduction < 30%.

 

 

Cardiovascular Events

Before alirocumab initiation, 22 CV events and interventions were reported in 16 veterans: 12 percutaneous coronary interventions, 5 coronary artery bypass surgeries (CABG), 4 myocardial infarctions, and 1 transient ischemic attack. One month following alirocumab initiation, 1 veteran underwent a CABG after a non-ST-elevation myocardial infarction (NSTEMI).

Safety and Tolerability

Alirocumab was discontinued in 5 veterans due to 4 cases of intolerance (reported memory loss, lethargy, myalgias, and body aches with dyspnea) and 1 case of persistent LDL-C of < 40 mg/dL. Alirocumab was discontinued after 1 year in 2 patients (persistent LDL-C < 40 mg/dL and reported memory loss) and after 6 months in the veteran who reported lethargy. Alirocumab was discontinued after 4 months in the veteran with myalgias and within 2 months in the veteran with body aches and dyspnea. No other AEs were reported.

Discussion

The Efficacy and Safety of Alirocumab vs Ezetimibe in Statin-Intolerant Veterans With a Statin Rechallenge Arm trial is the first clinical trial to examine the efficacy and safety of alirocumab use in statin-intolerant patients. In the trial, 314 patients were randomized to receive alirocumab, ezetimibe, or an atorvastatin rechallenge.6 At 24 weeks, alirocumab reduced mean (SE) LDL-C by 45.0% (2.2%) vs 14.6% (2.2%) with ezetimibe (mean difference 30.4% [3.1%], P < .01).6 Fewer skeletal-muscle-related events also were noted with alirocumab vs atorvastatin (hazard ratio, 0.61; 95% CI, 0.38-0.99; P = .04).6

In this case series, an LDL-C reduction of > 50% was observed in 9 veterans (42.9%) following 4 weeks of treatment; however, LDL-C reduction of > 50% compared with baseline was sustained in only 6 veterans (28.6%) at week 24. Additionally, LDL-C increases from baseline were observed in 3 veterans; the reasoning for the observed increase was unclear, but this may have been due to nonadherence and dietary factors.4 Although a majority of patients saw a significant and clinically meaningful reduction in LDL-C, the group of patients with an increase in the same may have benefitted from targeted intervention to improve medication and dietary adherence. PCSK9 inhibitor resistance also may have contributed to an increase in LDL-C during treatment.7

Of the 24 patients included, 4 reported AEs resulted in therapy discontinuation. Memory impairment, a rare AE of alirocumab, was reported 1 year following alirocumab initiation. Additionally, lethargy was reported after 6 months of treatment. Myalgia also was reported in a veteran 4 months following treatment, and 1 veteran experienced body aches and dyspnea < 2 months following treatment. The most common AEs associated with alirocumab, as noted in previous safety and efficacy clinical trials, included: nasopharyngitis, injection site reaction, influenza, urinary tract infection, and myalgias.8 Many of these more common AEs may be subclinical and underreported. This small case series, however, detected 4 events severe enough to lead to therapy discontinuation. Although this sample is not representative of all statin-intolerant patients who receive treatment with alirocumab, our findings suggest the need for patient education on potential AEs before therapy initiation and clinician monitoring at follow-up visits.

The ODYSSEY OUTCOMES trial established a CV benefit associated with alirocumab; however, patients included had a recent acute coronary syndrome event and were receiving a high-intensity statin.9 This case series is unique in that before alirocumab initiation, 22 CV events/interventions were reported in the sample of 24 patients. After therapy initiation, 1 patient underwent a CABG after an NSTEMI in the month following initiation. This suggests that cardiac complications are possible after PCSK-9 initiation; however, little information can be gained from 1 patient. Nevertheless, early therapy failure should be investigated in the context of real-world use in statin-intolerant patients. This is a complex task, however, given the difficulties of achieving a balanced study design. Statin intolerance is a clear source of selection bias into treatment with alirocumab as patients in this population have already initiated and failed statin therapy. The prevalence of prior CV events and the time-dependent association between prior and future CV events stand as another complex confounder. Although there is a clear and pressing need to understand the risks and benefits of PCSK9 therapy in statin-intolerant patients, future research in this area will need to cautiously address these important sources of bias.

Overall, the results of this case series support LDL-C reduction associated with alirocumab in the absence of statin therapy. Despite favorable results, use of alirocumab may be limited by cost and its subcutaneous route of administration. Bempedoic acid, an oral, once-daily lipid-lowering agent poses an alternative to PCSK9 inhibitors, but further data regarding CV outcomes with this agent is needed.10,11 Robust randomized controlled trials also are needed to evaluate CV outcomes for alirocumab use in statin-intolerant veterans.

Limitations

Only 24 veterans were included in the study, reflecting 20% of the charts reviewed (80% exclusion rate), and in this small sample, only 1 CV event was observed. Both of these serve as threats to external validity. As the study information was extracted from chart review, the results may be limited by coding or historical bias. Medical information from outside institutions may be missing from medical records. Additionally, results may be skewed by possible documentation errors. Furthermore, the period between previous CV events and alirocumab initiation is unclear as event dates were often not recorded if treatment was received at an outside institution.

Due to the short follow-up period, the case series is limited in its assessment of CV outcomes and safety outcomes. Larger studies over an extended period are needed to assess CV outcomes and safety of alirocumab use in statin-intolerant patients. Also, medication adherence was not assessed. Given the impact of medication adherence on LDL-C reduction, it is unclear what role medication adherence played in the LDL-C reduction observed in this study.4

Conclusions

Alirocumab use in 24 statin-intolerant veterans resulted in a significant reduction in LDL-C at 4 and 24 weeks after initiation. In addition, 1 CV event/intervention was observed following alirocumab initiation, although this should be interpreted with caution due to the retrospective nature of this case series, small sample size, and short follow-up period. Large, long-term studies would better evaluate the CV benefit associated with alirocumab therapy in a veteran population.

In 2016, 17.6 million deaths occurred globally due to cardiovascular disease (CVD) with coronary artery disease (CAD) and ischemic stroke as top contributors.1 Elevated low-density lipoprotein cholesterol (LDL-C) has been linked to greater risk of atherosclerotic cardiovascular disease (ASCVD); therefore, LDL-C reduction is imperative to decrease risk of cardiovascular (CV) morbidity and mortality.2 Since 1987, statin therapy has been the mainstay of treatment for hypercholesterolemia, and current practice guidelines recommend statins as first-line therapy given demonstrated reductions in LDL-C and CV mortality reduction in robust clinical trials.2-4 Although generally safe and well tolerated, muscle-related adverse events (AEs) limit optimal use of statins in up to 20% of individuals who have an indication for statin therapy.5 As a consequence, these patients receive suboptimal statin doses or no statin therapy and are at a higher risk for ASCVD.5

Proprotein convertase subtilisin/kexin type 9 (PCSK9) inhibitors have been shown to significantly lower LDL-C when used as monotherapy or in combination with statins and/or other lipid-lowering therapies.5 These agents are currently approved by the US Food and Drug Administration as an adjunct to diet with or without other lipid-lowering therapies for the management of primary hypercholesterolemia (including heterozygous familial hypercholesterolemia), homozygous familial hypercholesterolemia (evolocumab only), and for use in patients with established CVD unable to achieve their lipid-lowering goals with maximally tolerated statin doses and ezetimibe.4 With the ability to reduce LDL-C by up to 65%, PCSK9 inhibitors offer an alternative option for LDL-C and potentially CV risk reduction in statin-intolerant patients.5

Alirocumab, the formulary preferred PCSK9 inhibitor at the Michael E. DeBakey Veterans Affairs Medical Center (MEDVAMC) in Houston, Texas, has been increasingly used in high-risk statin-intolerant veterans. The primary objective of this case series was to assess LDL-C reduction associated with alirocumab use in statin-intolerant veterans at the MEDVAMC. The secondary objective was to assess the incidence of CV events. This study was approved by the MEDVAMC Quality Assurance and Regulatory Affairs committee.

Methods

In this single-center case series, a retrospective chart review was conducted to identify statin-intolerant veterans who were initiated on treatment with alirocumab for LDL-C and/or CV risk reduction between June 2017 and May 2019. Adult veterans with a diagnosis of primary hypercholesterolemia (including heterozygous familial hypercholesterolemia) and/or CAD with documented statin intolerance were included in the study. Statin intolerance was defined in accordance with the National Lipid Association (NLA) definition as aninability to tolerate ≥ 2 statins with a trial of at least 1 statin at its lowest daily dose.5 Veterans who previously received treatment with evolocumab, those prescribed concurrent statin therapies, and those missing follow-up lipid panels at 24 weeks were excluded from the study. To assess LDL-C reduction, LDL-C at baseline was compared with LDL-C at 4 and 24 weeks. Incident CV events before and after alirocumab initiation were documented. The US Department of Veteran Affairs (VA) Computerized Patient Record System was used to collect patient data.

Data Collection, Measures, and Analysis

Electronic health records of all eligible patients who received alirocumab were reviewed, and basic demographics (patient age, sex, and race/ethnicity) as well as medical characteristics at baseline were collected. To confirm statin intolerance, each veteran’s history of statin use and use of additional lipid-lowering agents was documented. CV history was measured with an index of categorical measures for hypertension, confirmed CAD, hyperlipidemia, heart failure, arrhythmias, peripheral artery disease, stroke, diabetes mellitus, and hypothyroidism. Additionally, concomitant medications, such as aspirin, P2Y12 inhibitors, β-blockers, angiotensin-converting enzyme inhibitors, and angiotensin II receptor blockers that patients were taking also were collected. Each veteran’s lipid panel at baseline, and at 4 and 24 weeks posttreatment initiation, also was extracted. Continuous variables were summarized with means (SD), and categorical variables were summarized with frequencies and proportions. The paired Wilcoxon signed rank test was used to compare LDL-C at 4 and 24 weeks after alirocumab initiation with patients’ baseline LDL-C.

Results

Between June 2017 and May 2019, 122 veterans were initiated on alirocumab. Of these veterans, 98 were excluded: 35 concurrently received statin therapy, 33 missed follow-up lipid panels, 21 had previously received evolocumab, 6 failed to meet the NLA definition for statin intolerance, 2 did not fill active alirocumab prescriptions, and 1 had an incalculable LDL-C with a baseline triglyceride level of 3079 mg/dL. This resulted in 24 veterans included in the analysis.

Most participants were male (87.5%) and White veterans (79.2%) with a mean (SD) age of 66.0 (8.4) years and mean (SD) baseline LDL-C of 161.9 (74.3) mg/dL. At baseline, 21 veterans had a history of primary hyperlipidemia, 19 had a history of CAD, and 2 had a history of heterozygous familial hypercholesterolemia. Of the 24 patients included, the most trialed statins before alirocumab initiation were atorvastatin (95.8%), simvastatin (79.2%), rosuvastatin (79.2%), and pravastatin (62.5%) (Table).

Baseline Characteristics (N = 24) table

LDL-C Reduction

Veterans were initially treated with alirocumab 75 mg administered subcutaneously every 2 weeks; however, 11 veterans required a dose increase to 150 mg every 2 weeks. At treatment week 4, the median LDL-C reduction was 78.5 mg/dL (IQR, 28.0-107.3; P < .01), and at treatment week 24, the median LDL-C reduction was 55.6 mg/dL (IQR, 18.6-85.3; P < .01). This equated to median LDL-C reductions from baseline of 48.5% at week 4 and 34.3% at week 24. A total of 3 veterans experienced LDL-C increases following initiation of alirocumab. At week 4, 9 veterans were noted to have an LDL-C reduction > 50%, 7 veterans had an LDL-C reduction between 30% and 50%, and 5 veterans had an LDL-C reduction of < 30%. At week 24, 6 had an LDL-C reduction > 50%, 9 veterans had an LDL-C reduction between 30% and 50%, and 6 had a LDL-C reduction < 30%.

 

 

Cardiovascular Events

Before alirocumab initiation, 22 CV events and interventions were reported in 16 veterans: 12 percutaneous coronary interventions, 5 coronary artery bypass surgeries (CABG), 4 myocardial infarctions, and 1 transient ischemic attack. One month following alirocumab initiation, 1 veteran underwent a CABG after a non-ST-elevation myocardial infarction (NSTEMI).

Safety and Tolerability

Alirocumab was discontinued in 5 veterans due to 4 cases of intolerance (reported memory loss, lethargy, myalgias, and body aches with dyspnea) and 1 case of persistent LDL-C of < 40 mg/dL. Alirocumab was discontinued after 1 year in 2 patients (persistent LDL-C < 40 mg/dL and reported memory loss) and after 6 months in the veteran who reported lethargy. Alirocumab was discontinued after 4 months in the veteran with myalgias and within 2 months in the veteran with body aches and dyspnea. No other AEs were reported.

Discussion

The Efficacy and Safety of Alirocumab vs Ezetimibe in Statin-Intolerant Veterans With a Statin Rechallenge Arm trial is the first clinical trial to examine the efficacy and safety of alirocumab use in statin-intolerant patients. In the trial, 314 patients were randomized to receive alirocumab, ezetimibe, or an atorvastatin rechallenge.6 At 24 weeks, alirocumab reduced mean (SE) LDL-C by 45.0% (2.2%) vs 14.6% (2.2%) with ezetimibe (mean difference 30.4% [3.1%], P < .01).6 Fewer skeletal-muscle-related events also were noted with alirocumab vs atorvastatin (hazard ratio, 0.61; 95% CI, 0.38-0.99; P = .04).6

In this case series, an LDL-C reduction of > 50% was observed in 9 veterans (42.9%) following 4 weeks of treatment; however, LDL-C reduction of > 50% compared with baseline was sustained in only 6 veterans (28.6%) at week 24. Additionally, LDL-C increases from baseline were observed in 3 veterans; the reasoning for the observed increase was unclear, but this may have been due to nonadherence and dietary factors.4 Although a majority of patients saw a significant and clinically meaningful reduction in LDL-C, the group of patients with an increase in the same may have benefitted from targeted intervention to improve medication and dietary adherence. PCSK9 inhibitor resistance also may have contributed to an increase in LDL-C during treatment.7

Of the 24 patients included, 4 reported AEs resulted in therapy discontinuation. Memory impairment, a rare AE of alirocumab, was reported 1 year following alirocumab initiation. Additionally, lethargy was reported after 6 months of treatment. Myalgia also was reported in a veteran 4 months following treatment, and 1 veteran experienced body aches and dyspnea < 2 months following treatment. The most common AEs associated with alirocumab, as noted in previous safety and efficacy clinical trials, included: nasopharyngitis, injection site reaction, influenza, urinary tract infection, and myalgias.8 Many of these more common AEs may be subclinical and underreported. This small case series, however, detected 4 events severe enough to lead to therapy discontinuation. Although this sample is not representative of all statin-intolerant patients who receive treatment with alirocumab, our findings suggest the need for patient education on potential AEs before therapy initiation and clinician monitoring at follow-up visits.

The ODYSSEY OUTCOMES trial established a CV benefit associated with alirocumab; however, patients included had a recent acute coronary syndrome event and were receiving a high-intensity statin.9 This case series is unique in that before alirocumab initiation, 22 CV events/interventions were reported in the sample of 24 patients. After therapy initiation, 1 patient underwent a CABG after an NSTEMI in the month following initiation. This suggests that cardiac complications are possible after PCSK-9 initiation; however, little information can be gained from 1 patient. Nevertheless, early therapy failure should be investigated in the context of real-world use in statin-intolerant patients. This is a complex task, however, given the difficulties of achieving a balanced study design. Statin intolerance is a clear source of selection bias into treatment with alirocumab as patients in this population have already initiated and failed statin therapy. The prevalence of prior CV events and the time-dependent association between prior and future CV events stand as another complex confounder. Although there is a clear and pressing need to understand the risks and benefits of PCSK9 therapy in statin-intolerant patients, future research in this area will need to cautiously address these important sources of bias.

Overall, the results of this case series support LDL-C reduction associated with alirocumab in the absence of statin therapy. Despite favorable results, use of alirocumab may be limited by cost and its subcutaneous route of administration. Bempedoic acid, an oral, once-daily lipid-lowering agent poses an alternative to PCSK9 inhibitors, but further data regarding CV outcomes with this agent is needed.10,11 Robust randomized controlled trials also are needed to evaluate CV outcomes for alirocumab use in statin-intolerant veterans.

Limitations

Only 24 veterans were included in the study, reflecting 20% of the charts reviewed (80% exclusion rate), and in this small sample, only 1 CV event was observed. Both of these serve as threats to external validity. As the study information was extracted from chart review, the results may be limited by coding or historical bias. Medical information from outside institutions may be missing from medical records. Additionally, results may be skewed by possible documentation errors. Furthermore, the period between previous CV events and alirocumab initiation is unclear as event dates were often not recorded if treatment was received at an outside institution.

Due to the short follow-up period, the case series is limited in its assessment of CV outcomes and safety outcomes. Larger studies over an extended period are needed to assess CV outcomes and safety of alirocumab use in statin-intolerant patients. Also, medication adherence was not assessed. Given the impact of medication adherence on LDL-C reduction, it is unclear what role medication adherence played in the LDL-C reduction observed in this study.4

Conclusions

Alirocumab use in 24 statin-intolerant veterans resulted in a significant reduction in LDL-C at 4 and 24 weeks after initiation. In addition, 1 CV event/intervention was observed following alirocumab initiation, although this should be interpreted with caution due to the retrospective nature of this case series, small sample size, and short follow-up period. Large, long-term studies would better evaluate the CV benefit associated with alirocumab therapy in a veteran population.

References

1. Benjamin EJ, Munter P, Alonso A, et al; American Heart Association Council on Epidemiology and Prevention Statistics Committee and Stroke Statistics Subcommittee. Heart disease and stroke statistics—2019 update: a report from the American Heart Association. Circulation. 2019;139(10):e56-e528. doi:10.1161/CIR.0000000000000659

2. Stone NJ, Robinson JG, Lichtenstein AH, et al; American College of Cardiology/American Heart Association Task Force on Practice Guidelines. 2013 ACC/AHA guideline on the treatment of blood cholesterol to reduce atherosclerotic cardiovascular risk in adults: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. Circulation. 2014 Jun 24;129(25)(suppl 2):S1-S45. doi:10.1016/j.jacc.2013.11.002

3. Hajar R. Statins: past and present. Heart Views. 2011;12(3): 121-127. doi:10.4103/1995-705X.95070

4. Grundy SM, Stone NJ, Bailey AL, et al. 2018 AHA/ACC/AACVPR/AAPA/ABC/ACPM/ADA/AGS/APhA/ASPC/NLA/PCNA guideline on the management of blood cholesterol: a report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines. J Am Coll Cardiol 2019;73(4):3168-3209. doi:10.1016/j.jacc.2018.11.002

5. Toth PP, Patti AM, Giglio RV, et al. Management of statin intolerance in 2018: still more questions than answers. Am J Cardiovasc Drugs. 2018;18(3):157-173. doi:10.1007/s40256-017-0259-7

6. Moriarty PM, Thompson PD, Cannon CP, et al; ODYSSEY ALTERNATIVE Investigators. Efficacy and safety of alirocumab vs ezetimibe in statin-intolerant patients, with a statin rechallenge arm: The ODYSSEY ALTERNATIVE randomized trial. J Clin Lipidol. 2015;9(6):758-769. doi:10.1016/j.jacl.2015.08.006

7. Shapiro MD, Miles J, Tavori H, Fazio S. Diagnosing resistance to a proprotein convertase subtilisin/kexin type 9 inhibitor. Ann Intern Med. 2018;168(5):376-379. doi:10.7326/M17-2485

8. Raedler LA. Praluent (alirocumab): first PCSK9 inhibitor approved by the FDA for hypercholesterolemia. Am Health Drug Benefits. 2016;9:123-126.

9. Schwartz GG, Steg PG, Szarek M, et al; ODYSSEY OUTCOMES Committees and Investigators. Alirocumab and cardiovascular outcomes after acute coronary syndrome. N Engl J Med. 2018;379(22):2097-2107. doi:10.1056/NEJMoa1801174

10. Nexletol. Package insert. Esperion Therapeutics Inc; 2020.

11. Laufs U, Banach M, Mancini GBJ, et al. Efficacy and safety of bempedoic acid in patients with hypercholesterolemia and statin intolerance. J Am Heart Assoc. 2019;8(7):e011662. doi:10.1161/JAHA.118.011662

References

1. Benjamin EJ, Munter P, Alonso A, et al; American Heart Association Council on Epidemiology and Prevention Statistics Committee and Stroke Statistics Subcommittee. Heart disease and stroke statistics—2019 update: a report from the American Heart Association. Circulation. 2019;139(10):e56-e528. doi:10.1161/CIR.0000000000000659

2. Stone NJ, Robinson JG, Lichtenstein AH, et al; American College of Cardiology/American Heart Association Task Force on Practice Guidelines. 2013 ACC/AHA guideline on the treatment of blood cholesterol to reduce atherosclerotic cardiovascular risk in adults: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. Circulation. 2014 Jun 24;129(25)(suppl 2):S1-S45. doi:10.1016/j.jacc.2013.11.002

3. Hajar R. Statins: past and present. Heart Views. 2011;12(3): 121-127. doi:10.4103/1995-705X.95070

4. Grundy SM, Stone NJ, Bailey AL, et al. 2018 AHA/ACC/AACVPR/AAPA/ABC/ACPM/ADA/AGS/APhA/ASPC/NLA/PCNA guideline on the management of blood cholesterol: a report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines. J Am Coll Cardiol 2019;73(4):3168-3209. doi:10.1016/j.jacc.2018.11.002

5. Toth PH, Patti AM, Giglio RV, et al. Management of statin intolerance in 2018: still more questions than answers. Am J Cardiovasc Drugs. 2018;18(3):157-173. doi:10.1007/s40256-017-0259-7

6. Moriarty PM, Thompson PD, Cannon CP, et al; ODYSSEY ALTERNATIVE Investigators. Efficacy and safety of alirocumab vs ezetimibe in statin-intolerant patients, with a statin rechallenge arm: The ODYSSEY ALTERNATIVE randomized trial. J Clin Lipidol. 2015;9(6):758-769. doi:10.1016/j.jacl.2015.08.006

7. Shapiro MD, Miles J, Tavori H, Fazio S. Diagnosing resistance to a proprotein convertase subtilisin/kexin type 9 inhibitor. Ann Intern Med. 2018;168(5):376-379. doi:10.7326/M17-2485

8. Raedler LA. Praluent (alirocumab): first PCSK9 inhibitor approved by the FDA for hypercholesterolemia. Am Health Drug Benefits. 2016;9:123-126.

9. Schwartz GC, Steg PC, Szarek M, et al; ODYSSEY OUTCOMES Committees and Investigators. Alirocumab and cardiovascular outcomes after acute coronary syndrome. N Engl J Med. 2018;379(22):2097-2107. doi:10.1056/NEJMoa1801174

10. Nexletol. Package insert. Esperion Therapeutics Inc; 2020.

11. Laufs U, Banach M, Mancini GBJ, et al. Efficacy and safety of bempedoic acid in patients with hypercholesterolemia and statin intolerance. J Am Heart Assoc. 2019;8(7):e011662. doi:10.1161/JAHA.118.011662

Issue
Federal Practitioner - 38(4)s
Issue
Federal Practitioner - 38(4)s
Page Number
e67-e71
Page Number
e67-e71
Publications
Publications
Topics
Article Type
Sections
Disallow All Ads
Content Gating
No Gating (article Unlocked/Free)
Alternative CME
Disqus Comments
Default
Use ProPublica
Hide sidebar & use full width
render the right sidebar.
Conference Recap Checkbox
Not Conference Recap
Clinical Edge
Display the Slideshow in this Article
Medscape Article
Display survey writer
Reuters content
Disable Inline Native ads
WebMD Article
Article PDF Media

Continuous Blood Glucose Monitoring Outcomes in Veterans With Type 2 Diabetes

Nearly 25% of patients served by the US Department of Veterans Affairs (VA) have been diagnosed with type 2 diabetes mellitus (T2DM), although the prevalence among adults in the United States is 9%.1 Patients with DM typically monitor their blood glucose using intermittent fingerstick self-testing. Continuous glucose monitoring (CGM) might offer a more comprehensive picture of glucose control to improve disease management. Within the VA, criteria for CGM use vary among facilities, but generally veterans prescribed at least 3 daily insulin injections and 4 daily blood glucose checks qualify.2

CGM therapy has been extensively researched for type 1 DM (T1DM); however, outcomes of CGM use among older adults with T2DM have not been fully evaluated. In a 2018 review of randomized clinical trials evaluating CGM use, 17 trials examined only patients with T1DM (2009 participants), 4 included only patients with T2DM (547 patients), 3 evaluated patients with T1DM or T2DM (655 patients), and 3 included women with gestational diabetes (585 patients).3 Of 27 studies that included change in hemoglobin A1c (HbA1c) as an endpoint, 15 found a statistically significant reduction in HbA1c for the CGM group. Of the 4 trials that evaluated CGM use in adults with T2DM, 3 found no overall difference in HbA1c, although 1 of these found a difference in individuals aged < 65 years; the fourth found a greater improvement in the CGM group (approximately 0.5%).4,5 These mixed results indicate a need for further subgroup analysis in specific populations to determine the optimal use of CGM in adults with T2DM. Although this study was not designed to measure changes in hypoglycemic episodes or the relative efficacy of different CGM products, it establishes a baseline from which to conduct additional research.

Our primary objective was to determine the change in HbA1c for each patient from the year before CGM initiation to the year after. Secondary objectives included changes in blood pressure (BP), weight, and diabetes-related hospital and clinic visits during the same time frame. We also completed a subanalysis comparing primary outcomes in engaged or adherent patients with those of the entire study group. This study was completed as a quality improvement project with approval from the information security office of the Lexington Veterans Affairs Health Care System in Kentucky and was exempted from institutional review board review.

Methods

This project was a retrospective evaluation using the VA database of patient records. Rather than using a control group, our study used a pre–post model to determine the impact of CGM for each patient. For the primary outcome, average HbA1c values were calculated for the year before and the year after CGM initiation. Hemoglobin and hematocrit values were included if reported within 3 months of the HbA1c values to ensure the validity of HbA1c results. Average hemoglobin was 13.37 g/dL (range, 10.5-17.3), and average hematocrit was 43.3% (range, 36-52). The change in average HbA1c was recorded for each patient. Based on research by Taylor and colleagues, a change in HbA1c of 0.8% was considered clinically significant for this project.6
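
A minimal sketch of this pre–post computation follows; the records, field names, and dates are hypothetical and illustrative only, not the study's actual data:

```python
from datetime import date
from statistics import mean

# Hypothetical chart-review extract: (patient_id, draw_date, hba1c_percent)
records = [
    ("A", date(2018, 3, 1), 9.1), ("A", date(2018, 9, 5), 8.8),
    ("A", date(2019, 4, 2), 8.2), ("A", date(2019, 11, 20), 8.0),
]
cgm_start = {"A": date(2019, 1, 15)}  # CGM initiation date per patient

def hba1c_change(patient_id: str) -> float:
    """Mean HbA1c in the year after CGM initiation minus the year before."""
    start = cgm_start[patient_id]
    before = [v for p, d, v in records
              if p == patient_id and 0 < (start - d).days <= 365]
    after = [v for p, d, v in records
             if p == patient_id and 0 < (d - start).days <= 365]
    return mean(after) - mean(before)

delta = hba1c_change("A")
print(f"change: {delta:+.2f}%; clinically significant: {abs(delta) >= 0.8}")
```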

Mean BP and weight were calculated for the years before and after CGM initiation. Only values for routine clinic visits were included; values taken during an acute health incident, inpatient stay, infusion clinic appointments, or home readings were excluded. Changes were recorded for each patient. Patient encounter notes were used to determine the number of DM-related hospital, emergency department, and clinic visits, such as nephrology, podiatry, vascular medicine, or infectious disease clinic or inpatient encounters during the study period. Routine endocrinology or primary care visits were not included, and patient care notes were consulted to ensure that the encounters were related to a DM complication. The change in number of visits was calculated for each patient.

Adherence was defined as patients receiving active medication management, having documented treatment regimen adherence, and attending > 4 annual endocrinology clinic visits. Active medication management was defined as having > 1 dosage or medication change for oral or noninsulin antihyperglycemics, or initiation or adjustment of insulin dosages, according to the patient records. Treatment adherence was determined based on medication reconciliation notes and refill request history. Only endocrinology clinic visits at VA outpatient clinics were included. These criteria amount to a three-part test, as sketched below.
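
The sketch below encodes the adherence definition; all field names are hypothetical abstractions of chart-review findings, not actual VA record fields:

```python
from dataclasses import dataclass

@dataclass
class ChartSummary:
    med_changes: int            # dose/medication changes for oral or noninsulin agents
    insulin_adjusted: bool      # insulin initiated or dose adjusted
    documented_adherence: bool  # per medication reconciliation notes and refill history
    endo_visits_per_year: int   # VA endocrinology clinic visits

def is_adherent(c: ChartSummary) -> bool:
    """Three-part test mirroring the adherence definition above."""
    active_mgmt = c.med_changes > 1 or c.insulin_adjusted
    return active_mgmt and c.documented_adherence and c.endo_visits_per_year > 4

print(is_adherent(ChartSummary(2, False, True, 5)))  # True
```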

Study Population

A sample of 166 patients was needed to detect an HbA1c change of 0.8% per the power analysis. The normal approximation method using the z statistic was used, with 2-tailed α = 0.05, β = 0.05, E = 0.8, and S = 1.2. We randomly selected 175 patients from among all individuals who had an active prescription for CGM in 2018 and 2019, had a diagnosis of T2DM, and were managed by VA endocrinology clinics (including endocrine clinics, diabetes clinics, and patient aligned care team clinics) at 87 VA medical centers. Patients with types of DM other than T2DM were excluded, as were those with a diagnosed hemoglobinopathy or hemoglobin < 10 g/dL. The adherent subgroup included 40 of the 175 patients in the sample (Table 1).

Table 1. Baseline Demographics of Patients Using CGM

Results

Both the total population and the adherent subgroup showed a reduction in HbA1c, the primary endpoint. The complete population showed an HbA1c change of –0.3% (95% CI, –0.4 to –0.2), and the adherent subgroup had a change of –1.3% (95% CI, –1.5 to –1.2). The total survey population had a mean change in weight of –1.9 lb (–0.9 kg) (95% CI, –3.7 to –0.1), and the adherent subgroup had an average change of –8.0 lb (–3.6 kg) (95% CI, –12.3 to –3.8). Average systolic BP changes were –0.1 mm Hg (95% CI, –1.6 to 1.5) in the total population and +3.3 mm Hg (95% CI, –0.01 to 6.22) in the adherent subgroup. A decrease in total encounters for DM complications was observed in the population (–0.3 total encounters per patient; 95% CI, –0.5 to –0.2) and the adherent subgroup (–0.6 total encounters per patient; 95% CI, –1.0 to –0.1) (Table 2).
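
The CIs reported above are consistent with a normal approximation for a mean paired change. A minimal sketch, assuming per-patient change scores (the numbers below are illustrative, not the study data):

```python
import math

def mean_change_ci(changes: list[float], z: float = 1.96):
    """Mean paired change with a normal-approximation 95% CI."""
    n = len(changes)
    m = sum(changes) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in changes) / (n - 1))
    half_width = z * sd / math.sqrt(n)
    return m, (m - half_width, m + half_width)

# Illustrative per-patient HbA1c change scores (%)
m, (lo, hi) = mean_change_ci([-0.4, -0.1, -0.5, 0.2, -0.7])
print(f"mean {m:+.2f}% (95% CI, {lo:+.2f} to {hi:+.2f})")
```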

Before the study, 107 (61.1%) patients were taking oral or noninsulin DM medication only, 4 (2.3%) were on insulin only, and 64 (36.6%) were prescribed both insulin and oral/noninsulin antihyperglycemics. Noninsulin and oral antihyperglycemic regimens included combinations of biguanide, dipeptidyl peptidase-4 inhibitor, sodium-glucose cotransporter-2 inhibitor, sulfonylurea, meglitinide, β-glucosidase inhibitor, glucagon-like peptide-1 (GLP-1) analog, and thiazolidinedione drug classes. Nearly 70% (122) had no reported changes in DM treatment beyond dosage titrations. Among these patients, 18 (10.3%) were on an insulin pump for the duration of the study. Among the 53 (30.3%) patients who had changes in treatment, 31 (17.7%) transitioned from insulin injections to an insulin pump; 13 (7.4%) changed from 1 insulin regimen to another (eg, addition of long-acting insulin, transition to U-500 insulin, or changing from 1 insulin category or brand to another); 8 (4.6%) began an oral/noninsulin antihyperglycemic; 4 (2.3%) began insulin injections; 13 (7.4%) discontinued noninsulin or oral antihyperglycemics; and 2 (1.1%) discontinued insulin during the study period.

Data showed that 113 (64.5%) patients had no changes in antihypertensives. The remaining 62 (35.4%) had the following adjustments: 14 (8%) increased dose of current medication(s), 9 (5.1%) decreased dose of current medication(s), 8 (4.6%) discontinued all antihypertensive medications, 10 (5.7%) switched to a different antihypertensive class, and 16 (9.1%) added additional antihypertensive medication(s) to their existing regimen during the study period.

Patients in the study group used 7 different types of CGM sensors. Chart review revealed that 84 (47.7%) patients used Medtronic devices, with 26 (14.8%) using first-generation Guardian sensors, 50 (28.4%) using Enlite sensors, and 8 (4.5%) using Guardian 3 sensors. We found that 81 (46.0%) veterans were prescribed Dexcom devices, with 5 (2.8%) using SEVEN PLUS sensors, 68 (38.6%) using G4-5 sensors, and 8 (4.5%) using G6 sensors. The remaining 10 (5.7%) patients were using FreeStyle Libre sensors during the study period.

Discussion

CGM did not correspond with clinically significant reductions in HbA1c. However, veterans with increased health care engagement were likely to achieve clinically significant HbA1c improvements. The veterans in the adherent subgroup had a higher baseline HbA1c, which could be due to a variety of factors mentioned in patient care notes, including insulin resistance, poor dietary habits, and exercise regimen nonadherence. These patients might have had more room to improve their glycemic control without concern for hypoglycemia, and their higher baseline HbA1c could have provided increased motivation for improving their health during the study period.

Adherent patients also had a greater reduction in weight and hospital or clinic visits with CGM compared with the total population. These veterans’ increased involvement in their health care might have led to better dietary and exercise adherence, which would have decreased insulin dosing and contributed to weight loss. Only 1 patient in the adherent subgroup initiated a GLP-1 agonist during the study period, making it unlikely that medication changes had a significant impact on weight loss in the subgroup analysis. This improvement in overall health status might have contributed to the reduction in hospital or clinic visits observed in this population.

Average systolic BP decreased minimally in the total survey population and increased in the adherent subgroup over the course of the study; neither change was statistically significant, as both 95% CIs crossed zero. Changes in systolic BP readings were minimal, making it unlikely that they contributed meaningfully to the patients’ overall health status.

Although not related to the study objectives, the adherent population required fewer antihypertensive adjustments with similar BP control. This could be explained by improved overall health or better adherence and engagement in therapy. The results of this project show that despite limited medication changes, T2DM management improved among adherent patients using CGM. The general study population, which was more likely to have documented nonadherence with treatment or clinic appointments, had minimal benefit. CGM technology in the T2DM veteran population is more likely to provide significant clinical benefit in patients who are adherent with their medication regimens and follow-up appointments than in the broader study population.

The results of this study are in line with previous studies of CGM use in the T2DM population. We agree with previously published research that CGM alone does not have a meaningful impact on HbA1c reduction. Our study population also was older than those in previous studies, adding to Haak and colleagues’ conclusion that patients aged < 65 years might have better outcomes with CGM.4

Strengths of this study include its specificity to veterans using VA resources and its inclusion of nondiabetes outcomes, which allows direct application to the veteran population and could provide broader evidence for CGM use. The demonstrated decreases in HbA1c, weight, and clinic visits in the adherent population suggest that providing veterans with CGM therapy alongside frequent endocrinology follow-up improves health outcomes and could decrease overall health spending.

Limitations

Limitations of this study include its retrospective design, small sample size, and sole focus on T2DM. Because this was a retrospective study, we cannot rule out the influence of outside factors, such as participation in a non-VA weight loss program. This study lacked the power to assess the impact of different CGM brands. The study did not include data on severe hypoglycemic or hyperglycemic episodes, as veterans might have needed emergent care at non-VA facilities. Future research should evaluate the impact of CGM on symptomatic and severe hypoglycemic episodes, its use with insulin vs oral or noninsulin antihyperglycemics, and the comparative efficacy of different CGM brands among veterans.

Conclusions

CGM did not correspond with clinically significant reductions in HbA1c in the overall population. However, veterans with increased health care engagement were likely to achieve clinically significant HbA1c improvements. Adherent patients also had a greater reduction in weight and in hospital or clinic visits with CGM compared with the total population. These veterans’ increased involvement in their health care might have led to better dietary and exercise adherence, which would have decreased insulin dosing and contributed to weight loss.

References

1. Liu Y, Sayam S, Shao X, et al. Prevalence of and trends in diabetes among veterans, United States, 2005-2014. Prev Chronic Dis. 2017;14:E135. Published 2017 Dec 14. doi:10.5888/pcd14.170230

2. Hackett M. VA pharmacies now carry the Dexcom G6 CGM at no cost for qualifying patients. September 23, 2020. Accessed September 28, 2021. https://www.mobihealthnews.com/news/va-pharmacies-now-carry-dexcom-g6-cgm-no-cost-qualifying-patients

3. Peters AL. The evidence base for continuous glucose monitoring. In: Role of Continuous Glucose Monitoring in Diabetes Treatment. Arlington, VA: American Diabetes Association; 2018:3-7. doi:10.2337/db20181-3

4. Haak T, Hanaire H, Ajjan R, Hermanns N, Riveline JP, Rayman G. Flash glucose-sensing technology as a replacement for blood glucose monitoring for the management of insulin-treated type 2 diabetes: a multicenter, open-label randomized controlled trial. Diabetes Ther. 2017;8(1):55-73. doi:10.1007/s13300-016-0223-6

5. Yoo HJ, An HG, Park SY, et al. Use of a real time continuous glucose monitoring system as a motivational device for poorly controlled type 2 diabetes. Diabetes Res Clin Pract. 2008;82(1):73-79. doi:10.1016/j.diabres.2008.06.015

6. Taylor PJ, Thompson CH, Brinkworth GD. Effectiveness and acceptability of continuous glucose monitoring for type 2 diabetes management: A narrative review. J Diabetes Investig. 2018;9(4):713-725. doi:10.1111/jdi.12807

Author and Disclosure Information

Sarah Langford is a PGY-1 Pharmacy Resident at St. Joseph Mercy Hospital in Ann Arbor, Michigan. Matthew Lane is Associate Professor and Pharmacy Residency Program Director in the College of Pharmacy, and Dennis Karounos is Associate Professor in the College of Medicine, all at University of Kentucky. Matthew Lane is Associate Chief of Pharmacy and Dennis Karounos is Director of Endocrinology Services, both at Lexington Veterans Affairs Health Care System in Kentucky.
Correspondence: Sarah Langford ([email protected])

Author disclosures
The authors report no actual or potential conflicts of interest with regard to this article.

Disclaimer
The opinions expressed herein are those of the authors and do not necessarily reflect those of Federal Practitioner, Frontline Medical Communications Inc., the US Government, or any of its agencies. This article may discuss unlabeled or investigational use of certain drugs. Please review the complete prescribing information for specific drugs or drug combinations—including indications, contraindications, warnings, and adverse effects—before administering pharmacologic therapy to patients.

Long QT and Cardiac Arrest After Symptomatic Improvement of Pulmonary Edema

Extreme QT prolongation following symptomatic resolution of acute pulmonary edema is a relatively unknown and poorly understood phenomenon.

Abnormalities in the T-wave morphology of an electrocardiogram (ECG) are classically attributed to ischemic cardiac disease. However, these changes can be seen in a variety of other etiologies, including noncardiac pathology, which should be considered whenever reviewing an ECG: central nervous system disease, including stroke and subarachnoid hemorrhage; hypothermia; pulmonary disease, such as pulmonary embolism or chronic obstructive pulmonary disease; myopericarditis; drug effects; and electrolyte abnormalities.

Prolongation of the QT interval, on the other hand, can be precipitated by medications, metabolic derangements, or genetic phenotypes. The QT interval is measured from the beginning of the QRS complex to the termination of the T wave and represents the total time for ventricular depolarization and repolarization. The QT interval must be corrected for the patient’s heart rate, yielding the corrected QT interval (QTc). As the QTc interval lengthens, there is increased risk of the R-on-T phenomenon, which may result in Torsades de Pointes (TdP). Typical features of TdP include an antecedent prolonged QTc, cyclic polymorphic ventricular tachycardia on the surface ECG, and either a short-lived spontaneously terminating course or degeneration into ventricular fibrillation (VF) and sudden cardiac death.1 These dysrhythmias become more likely as the QTc interval exceeds 500 msec.2
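
The case report does not state which rate-correction formula was used; Bazett’s formula is the most widely used in clinical practice and is shown below with a worked example (illustrative numbers):

```latex
\[
\mathrm{QTc} = \frac{\mathrm{QT}}{\sqrt{RR}},
\qquad RR = \frac{60}{\mathrm{HR}}\ \text{seconds}
\]
% Example: QT = 400 ms at HR = 100 beats/min gives RR = 0.6 s
\[
\mathrm{QTc} = \frac{400\ \text{ms}}{\sqrt{0.6}} \approx 516\ \text{ms},
\quad \text{above the 500-msec threshold}
\]
```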

The combination of new-onset global T-wave inversions with prolongation of the QT interval has been reported in only a few conditions. Known causes of these QT-T changes include cardiac ischemia, status epilepticus, pheochromocytoma, and acute cocaine intoxication.3 One uncommon and rarely reported cause of extreme QT prolongation and T-wave inversion is acute pulmonary edema. The ECG findings are not present on initial patient presentation; rather, the dynamic changes occur after resolution of the pulmonary symptoms. Despite significant ECG changes, all prior reported cases describe ECG normalization without significant morbidity.4,5 We report a case of extreme QT prolongation following acute pulmonary edema that resulted in cardiac arrest secondary to VF.

Case Presentation

A 72-year-old male with medical history of combined systolic and diastolic heart failure, ischemic cardiomyopathy, coronary artery disease, cerebral vascular accident, hypertension, hyperlipidemia, type 2 diabetes mellitus, and tobacco dependence presented to the emergency department (ED) by emergency medical services after awaking with acute onset of dyspnea and diaphoresis. On arrival at the ED, the patient was noted to be in respiratory distress (ie, unable to speak single words) and was extremely diaphoretic. His initial vital signs included blood pressure, 186/113 mm Hg, heart rate, 104 beats per minute, respiratory rate, 40 breaths per minute, and temperature, 36.4 °C. The patient was quickly placed on bilevel positive airway pressure and given sublingual nitroglycerin followed by transdermal nitroglycerin with a single dose of 40 mg IV furosemide, which improved his respiratory status. A chest X-ray was consistent with pulmonary edema, and his brain natriuretic peptide was 1654 pg/mL. An ECG demonstrated new T-wave inversions, and his troponin increased from 0.04 to 0.24 ng/mL during his ED stay (Figure 1). He was started on a heparin infusion and admitted to the hospital for hypertensive emergency with presumed acute decompensated heart failure and non-ST-elevated myocardial infarction.

Figures 1 and 2. Electrocardiogram on Presentation and Electrocardiogram 22 Hours After Presentation

Throughout the patient’s first night, the troponin level started to down-trend after peaking at 0.24 ng/mL, and his oxygen requirements decreased, allowing transition to a nasal cannula. However, his repeat ECGs demonstrated significant T-wave abnormalities, new premature ventricular contractions, bradycardia, and a QTc interval that lengthened to 703 msec (Figure 2). At this time, the patient’s electrolytes were normal: potassium 4.4 mEq/L, calcium 8.8 mg/dL, magnesium 2.0 mg/dL, and phosphorus 2.6 mg/dL. Given the worsening ECG changes, a computed tomography scan of his head was ordered to rule out intracranial pathology. While in the scanner, the patient went into pulseless VF, prompting defibrillation with 200 J. In addition, he was given 75 mg IV lidocaine, 2 g IV magnesium, and 1 ampule each of calcium chloride and sodium bicarbonate. With treatment, he had return of spontaneous circulation and was taken promptly to cardiac catheterization. The catheterization showed no significant obstructive coronary artery disease, and no interventions were performed. The patient was transferred to the cardiac intensive care unit for continued care.

During his course in the intensive care unit, the patient’s potassium and magnesium levels were maintained at high-normal levels. The patient was started on a dobutamine infusion to increase his heart rate in an attempt to shorten his QTc. The patient also underwent cardiac magnetic resonance imaging (MRI) to evaluate for possible myocarditis, which showed no evidence of acute inflammation. Echocardiogram demonstrated an ejection fraction of 40% and global hypokinesis but no specific regional abnormalities and no change from a prior echocardiogram performed 1 year earlier. Over the course of 3 days, his ECG normalized and his QTc shortened to 477 msec. Genetic testing was performed and did not reveal any mutations associated with long QT syndrome. Ultimately, an automatic implantable cardioverter-defibrillator (AICD) was placed, and the patient was discharged home.

Over the 2 years since his initial event, the patient has not experienced recurrent VF and his AICD has not fired. He continues to have ED presentations for heart failure symptoms, though he has been stable from an electrophysiologic standpoint and his QTc remains less than 500 msec.

Discussion

Prolongation of the QT interval with deep, global T-wave inversions after resolution of acute pulmonary edema has rarely been reported.4,5 The phenomenon has been described in the cardiology literature but not in the emergency medicine literature and bears consideration in this case.4,5 As noted, an extensive evaluation did not reveal another cause of QTc prolongation: the patient had normal electrolyte levels and temperature, and his neurologic examination and head computed tomography were unremarkable. He had no obstructive coronary artery disease on catheterization, no evidence of acute myocarditis on cardiac MRI, no prescribed medications associated with QT prolongation, and no genetic mutations associated with QT prolongation on testing. The minimal troponin elevation was felt to represent a type 2 myocardial infarction related to supply-demand mismatch rather than acute plaque rupture.

Littmann published a series of 9 cases of delayed-onset T-wave inversion and extreme QTc prolongation occurring 24 to 48 hours after treatment and symptomatic improvement of acute pulmonary edema.4 In each of those patients, an ischemic cardiac insult was ruled out as the etiology of the pulmonary edema by laboratory assessment, echocardiography, and left heart catheterization. All of the patients in that series recovered without incident, with normalization of the QTc interval.4 Similarly, in our patient, significant QT-T changes occurred approximately 22 hours after presentation, coinciding with resolution of the pulmonary edema symptoms. Pascale and colleagues likewise published a series of 3 patients who developed similar ECG patterns after a hypertensive crisis, with resolution of the ECG findings and no morbidity.5 In contrast, our patient experienced significant morbidity secondary to the extreme QTc prolongation.

Conclusions

We believe this is the first reported case of extreme QTc prolongation after resolution of acute pulmonary edema resulting in VF arrest. The pattern observed in our patient follows that outlined in the previous case series: presentation with acute pulmonary edema and hypertensive crisis, followed by significant ECG abnormalities about 24 hours after resolution of the high catecholamine state. Although our patient had a history of prior cardiac insult, the QTc changes developed acutely, frequent premature ventricular contractions appeared, and the cardiac arrest occurred at maximal QTc prolongation after the high catecholamine state had resolved; the treatment team therefore felt there was likely an uncaptured, short-lived episode of TdP that degenerated into VF. This theory is further supported by the absence of recurrent VF episodes, confirmed by AICD interrogation, after normalization of the QTc.

References

1. Passman R, Kadish A. Polymorphic ventricular tachycardia, long Q-T syndrome, and torsades de pointes. Med Clin North Am. 2001;85(2):321-341. doi:10.1016/s0025-7125(05)70318-7

2. Kallergis EM, Goudis CA, Simantirakis EN, Kochiadakis GE, Vardas PE. Mechanisms, risk factors, and management of acquired long QT syndrome: a comprehensive review. ScientificWorldJournal. 2012;2012:212178. doi:10.1100/2012/212178

3. Miller MA, Elmariah S, Fischer A. Giant T-wave inversions and extreme QT prolongation. Circ Arrhythm Electrophysiol. 2009;2(6):e42-e43. doi:10.1161/CIRCEP.108.825729

4. Littmann L. Large T wave inversion and QT prolongation associated with pulmonary edema: a report of nine cases. J Am Coll Cardiol. 1999;34(4):1106-1110. doi:10.1016/s0735-1097(99)00311-3

5. Pascale P, Quartenoud B, Stauffer JC. Isolated large inverted T wave in pulmonary edema due to hypertensive crisis: a novel electrocardiographic phenomenon mimicking ischemia? Clin Res Cardiol. 2007;96(5):288-294. doi:10.1007/s00392-007-0504-1

Author and Disclosure Information

James Gragg is an Active Duty Army Staff Physician, and Joel Miller is a Staff Physician at Carl R. Darnall Army Medical Center in Fort Hood, Texas. James Jones is an Active Duty Army Staff Physician at Martin Army Community Hospital in Fort Benning, Georgia. James Gragg and Joel Miller are Assistant Professors of Military and Emergency Medicine at the Uniformed Services University of the Health Sciences in Bethesda, Maryland. Joel Miller is a Reservist serving as Assistant Deputy Commander for Clinical Services for the 228th Combat Support Hospital at Fort Sam Houston in San Antonio, Texas.
Correspondence: James Gragg ([email protected])

Author disclosures
The authors report no actual or potential conflicts of interest with regard to this article.

Disclaimer
The opinions expressed herein are those of the authors and do not necessarily reflect those of Federal Practitioner, Frontline Medical Communications Inc., the US Government, or any of its agencies. This article may discuss unlabeled or investigational use of certain drugs. Please review the complete prescribing information for specific drugs or drug combinations—including indications, contraindications, warnings, and adverse effects—before administering pharmacologic therapy to patients.
