Sleep duration of Black infants increased by intervention
An intervention tailored for Black first-time mothers helped increase their infants’ sleep time, researchers have found, a notable result as many studies have shown Black infants get less sleep on average than White infants.
Less sleep has historically put Black children at higher risk for negative outcomes, including obesity and poorer social-emotional functioning and cognitive development. These disparities persist into adulthood, the researchers note, as previous studies have shown.
Justin A. Lavner, PhD, with the department of psychology at the University of Georgia in Athens, led this post hoc secondary analysis of the Sleep SAAF (Strong African American Families) study, a randomized clinical trial of 234 participants comparing a responsive parenting (RP) intervention with a safety control group over the first 16 weeks post partum. The original analysis studied the effects of the intervention on rapid weight gain.
In the original analysis, the authors write that “From birth to 2, the prevalence of high weight for length (above the 95th percentile) is 25% higher among African American children compared to White children. From age 2 to 19, the rate of obesity is more than 50% higher among African American children compared to White children. Similar disparities persist into adulthood: rates of obesity are approximately 25% higher among African American adults compared to White adults.”
The differences in early rapid weight gain may be driving the disparities, the authors write.
Elements of the intervention
The intervention in the current analysis included materials delivered at the 3- and 8-week home visits focused on soothing and crying, feeding, and interactive play in the babies’ first months. Families were recruited from Augusta University Medical Center in Augusta, Ga., and had home visits at 1, 3, 8, and 16 weeks post partum.
Mothers got a packet of handouts and facilitators walked through the information with them. The measures involved hands-on activities, discussion, and videos, all tailored for Black families, the authors state.
Mothers were taught about responding appropriately at night when their baby cries, including giving the baby a couple of minutes to fall back to sleep independently and by using calming messages, such as shushing or white noise, before picking the baby up.
Babies learn to fall asleep on their own
They also learned to put infants to bed early (ideally by 8 p.m.) so the babies would be calm but awake and could learn to fall asleep on their own.
The control group’s guidance was matched for intensity and session length but focused on sleep and home safety, such as reducing the risk of sudden infant death syndrome (SIDS), keeping the baby’s sleep area close to, but away from, the mother’s bed, and preventing shaken baby syndrome.
In both groups, the 3-week visit session lasted about 90-120 minutes and the 8-week visit lasted about 45-60 minutes.
Longer sleep with the intervention
A total of 212 Black mothers (average age, 22.7 years) were randomized: 108 to the RP group and 104 to the control group. Questionnaire responses were analyzed, and at 16 weeks post partum, infants in the RP group (relative to controls) had:
- Longer reported nighttime sleep (mean difference, 40 minutes [95% confidence interval, 3-77]).
- Longer total sleep duration (mean difference, 73 minutes [95% CI, 14-131]).
- Fewer nighttime wakings (mean difference, −0.4 wakings [95% CI, −0.6 to −0.1]).
- Greater likelihood of meeting the guideline of at least 12 hours of sleep per day (risk ratio, 1.4 [95% CI, 1.1-1.8]).
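As a quick sanity check on the reporting above, an effect estimate is statistically significant at the 5% level when its 95% confidence interval excludes the null value (0 for a mean difference, 1 for a risk ratio). A minimal sketch, with the values transcribed from the bullets above:

```python
# Effect estimates (RP vs. control at 16 weeks post partum), transcribed
# from the study's reported results. Null value: 0 for mean differences,
# 1 for risk ratios.
effects = [
    ("nighttime sleep, min",       40.0, (3.0, 77.0),    0.0),
    ("total sleep, min",           73.0, (14.0, 131.0),  0.0),
    ("nighttime wakings",          -0.4, (-0.6, -0.1),   0.0),
    ("met >=12 h/day guideline",    1.4, (1.1, 1.8),     1.0),
]

for name, point, (lo, hi), null in effects:
    # The CI excludes the null value exactly when the effect is
    # significant at the 5% level.
    significant = not (lo <= null <= hi)
    print(f"{name}: {point} [{lo}, {hi}] -> significant: {significant}")
```

All four intervals exclude their null value, matching the study's description of these outcomes as significant.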
Findings were published in JAMA Network Open.
Additionally, mothers in the RP group more frequently reported engaging in practices such as giving babies a few minutes to fall back to sleep on their own (RR, 1.6 [95% CI, 1.0-2.6]), and they were less likely to feed their infant just before bedtime (RR, 0.5 [95% CI, 0.3-0.8]).
In an accompanying invited commentary, Sarah M. Honaker, PhD, department of pediatrics, Indiana University, Indianapolis, and Alicia Chung, EdD, Center for Early Childhood Health and Development at New York University, write that though the added average sleep duration is one of the most significant findings, there is a possibility of desirability bias because it was reported by the mothers after specific guidance by the facilitators.
“Nonetheless,” the editorialists write, “even if the true effect were half as small, this additional sleep duration could yield notable benefits in infant development if the effect persisted over time. The difference in night wakings between the intervention and control groups (1.8 vs 1.5 per night) at 16 weeks postpartum was statistically significant, though it is unclear whether this difference is clinically meaningful to families.”
They note that it is unclear from the study how the intervention was culturally adapted and how the adaptation might have affected outcomes.
Sleep intervention trials have focused on White families
The editorialists write that much is known about the benefits of behavioral sleep intervention in controlled trials and general population settings, and no adverse effects on infant attachment or cortisol levels have been linked to the interventions.
However, they add, “Unfortunately, this substantial progress in our understanding of infant BSI [behavioral sleep intervention] comes with a caveat, in that most previous studies have been performed with White families from mid-to-high socioeconomic backgrounds.”
Dr. Honaker and Dr. Chung write, “[I]t is important to note that much work remains to examine the acceptability, feasibility, and efficacy of infant BSI in other groups that have been historically marginalized.”
Dr. Lavner and colleagues point out that before their study, there had been little emphasis on interventions to encourage better sleep in general for Black infants, “as most early sleep interventions for this population have focused on SIDS prevention.”
“To our knowledge, Sleep SAAF is the first study to show any benefits of [an] RP intervention on sleep and sleep practices among Black infants and their families,” they write.
The researchers note a limitation: the study sample comprised only Black first-time mothers recruited from a single medical center in Georgia.
The study by Dr. Lavner et al. was funded by the National Institutes of Health, a Harrington Faculty Fellowship from the University of Texas, and an award from the Penn State Clinical and Translational Sciences Institute supported by the National Center for Advancing Translational Sciences. Editorialist Dr. Honaker reported receiving grants from Nationwide Children’s Hospital (parent grant, Centers for Disease Control and Prevention) to evaluate the acceptability of infant behavioral sleep intervention in Black families.
Progressive Primary Cutaneous Nocardiosis in an Immunocompetent Patient
To the Editor:
The organisms of the genus Nocardia are gram-positive, ubiquitous, aerobic actinomycetes found worldwide in soil, decaying organic material, and water.1 The genus Nocardia includes more than 50 species; some species, such as Nocardia asteroides, Nocardia farcinica, Nocardia nova, and Nocardia brasiliensis, are the cause of nocardiosis in humans and animals.2,3 Nocardiosis is a rare and opportunistic infection that predominantly affects immunocompromised individuals; however, up to 30% of infections can occur in immunocompetent hosts.4 Nocardiosis can manifest in 3 disease forms: cutaneous, pulmonary, or disseminated. Cutaneous nocardiosis commonly develops in immunocompetent individuals who have experienced a predisposing traumatic injury to the skin,5 and it can exhibit a diverse variety of clinical manifestations, making diagnosis difficult. We describe a case of serious progressive primary cutaneous nocardiosis with an unusual presentation in an immunocompetent patient.
A 26-year-old immunocompetent man presented with pain, swelling, nodules, abscesses, ulcers, and sinus drainage of the left arm. The left elbow lesion initially developed 6 years prior at the site of a painless traumatic injury that was contaminated with mossy soil. The condition slowly progressed over the next 2 years, and the patient experienced increased swelling and eventually developed multiple draining sinus tracts. Over the next 4 years, the lesions multiplied, spreading to the forearm and upper arm, with associated severe pain and swelling at the elbow and wrist joints. The patient sought medical care at a local hospital and was diagnosed with suspected cutaneous tuberculosis. He was empirically treated with a 6-month course of isoniazid, rifampicin, pyrazinamide, and ethambutol; however, the lesions continued to progress and worsen. The patient had to stop antibiotic treatment because of substantially elevated alanine aminotransferase and aspartate aminotransferase levels.
He subsequently was evaluated at our hospital. He had no notable medical history and was afebrile. Physical examination revealed multiple erythematous nodules, abscesses, and ulcers on the left arm. There were several nodules with open sinus tracts and seropurulent crusts along with numerous atrophic, ovoid, stellate scars. Other nodules and ulcers with purulent drainage were located along the lymphatic nodes extending up the patient’s left forearm (Figure 1A). The yellowish-white pus discharge from several active sinuses contained no apparent granules. The lesions were densely distributed along the elbow, wrist, and shoulder, which resulted in associated skin swelling and restricted joint movement. The left axillary lymph nodes were enlarged.
Laboratory analyses revealed a hemoglobin level of 9.6 g/dL (reference range, 13–17.5 g/dL), platelet count of 621×10⁹/L (reference range, 125–350×10⁹/L), and leukocyte count of 14.3×10⁹/L (reference range, 3.5–9.5×10⁹/L). C-reactive protein level was 88.4 mg/L (reference range, 0–10 mg/L). Blood, renal, and liver tests, as well as tumor marker, peripheral blood lymphocyte subset, immunoglobulin, and complement results were within reference ranges. Results for Treponema pallidum and HIV antibody tests were negative. Hepatitis B virus markers were positive for hepatitis B surface antigen, hepatitis B e antigen, and hepatitis B core antibody, and the serum concentration of hepatitis B virus DNA was 3.12×10⁷ IU/mL (reference range, <5×10² IU/mL). Computed tomography of the chest and cranium was unremarkable. Ultrasonography of the left arm revealed multiple vertical sinus tracts and several horizontal communicating branches that were accompanied by worm-eaten bone destruction (Figure 2).
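The abnormal values in this workup can be flagged mechanically by comparing each result against its reference range. A minimal sketch, with values and reference ranges transcribed from the text (units kept in the labels for readability):

```python
# Lab results from the case workup: name -> (value, (low, high) reference range).
labs = {
    "hemoglobin, g/dL":       (9.6,   (13.0, 17.5)),
    "platelets, x10^9/L":     (621.0, (125.0, 350.0)),
    "leukocytes, x10^9/L":    (14.3,  (3.5, 9.5)),
    "CRP, mg/L":              (88.4,  (0.0, 10.0)),
}

for name, (value, (lo, hi)) in labs.items():
    # Flag each result relative to its reference range.
    flag = "LOW" if value < lo else "HIGH" if value > hi else "normal"
    print(f"{name}: {value} ({flag})")
```

As in the case: low hemoglobin with elevated platelets, leukocytes, and C-reactive protein, a pattern consistent with chronic infection.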
Histopathologic staining of a skin tissue specimen (hematoxylin and eosin, periodic acid–Schiff, and acid-fast) showed nonspecific, diffuse, inflammatory cell infiltration suggestive of chronic suppurative granuloma (Figure 3) but failed to reveal any special strains or organisms. Gram stain examination of the purulent fluid collected from the subcutaneous tissue showed no apparent positive bacilli or filamentous granules. The specimen was then inoculated on Sabouraud dextrose agar and Löwenstein-Jensen medium for fungal and mycobacterial culture, respectively. After 5 days, chalky, yellow, adherent colonies were observed on the Löwenstein-Jensen medium, and after 26 days, yellow crinkled colonies were observed on Sabouraud dextrose agar. The colonies were then inoculated on Columbia blood agar and incubated for 1 week to aid in identification, yielding yellow colonies that were adherent to the agar, moist, and smooth with a velvety surface and a characteristic moldy odor. Gram staining revealed gram-positive, thin, beaded, branching filaments (Figure 4). Based on colony characteristics, physiological properties, and biochemical tests, the isolate was identified as Nocardia. Polymerase chain reaction analysis of the skin specimen and bacterial colonies, amplifying a Nocardia genus-specific 596-bp fragment of the 16S ribosomal RNA gene (forward primer NG1: 5’-ACCGACCACAAGGGG-3’, reverse primer NG2: 5’-GGTTGTAACCTCTTCGA-3’),6 was completely consistent with the reference for identification of N brasiliensis. Evaluation of these results led to a diagnosis of cutaneous nocardiosis after traumatic inoculation.
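The genus-level PCR step hinges on the NG1/NG2 primer pair quoted above. Basic properties of the primers (length, GC content) can be checked in a few lines; this is a minimal sketch using the sequences as quoted in the text, with no primer-design library assumed:

```python
# 16S rRNA gene primers for genus-level Nocardia PCR (Laurent et al, 1999),
# sequences copied verbatim from the case report.
primers = {
    "NG1 (forward)": "ACCGACCACAAGGGG",
    "NG2 (reverse)": "GGTTGTAACCTCTTCGA",
}

for name, seq in primers.items():
    # GC content as a percentage of primer length.
    gc = sum(seq.count(base) for base in "GC") / len(seq) * 100
    print(f"{name}: {len(seq)} nt, GC {gc:.0f}%")
```

The amplicon these primers bound (596 bp, per the reference cited in the text) is what was matched against the N brasiliensis reference sequence.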
Because there was a high suspicion of actinophytosis or nocardiosis at admission, the patient received a combination antibiotic treatment with intravenous aqueous penicillin (4 million U every 4 hours) and oral trimethoprim-sulfamethoxazole (160/800 mg twice daily). Subsequently, treatment was changed to a combination of oral trimethoprim-sulfamethoxazole (160/800 mg twice daily) and moxifloxacin (400 mg once daily) based on pathogen identification and antibiotic sensitivity testing. After 1 month of treatment, the cutaneous lesions and left limb swelling dramatically improved and purulent drainage ceased, though some scarring occurred during the healing process. In addition, the mobility of the affected shoulder, elbow, and wrist joints slightly improved. Notable improvement in the mobility and swelling of the joints was observed at 6-month follow-up (Figure 1B). The patient continues to be monitored on an outpatient basis.
Cutaneous nocardiosis is a disfiguring granulomatous infection involving cutaneous and subcutaneous tissue that can progress to cause injury to viscera and bone.7 It has been called one of the great imitators because cutaneous nocardiosis can present in multiple forms,8,9 including mycetoma, sporotrichoid infection, superficial skin infection, and disseminated infection with cutaneous involvement. The differential diagnoses of cutaneous nocardiosis are broad and include tuberculosis; actinomycosis; deep fungal infections such as sporotrichosis, blastomycosis, phaeohyphomycosis, histoplasmosis, and coccidioidomycosis; other bacterial causes of cellulitis, abscess, or ecthyma; and malignancies.10 The principal method of diagnosis is the identification of Nocardia from the infection site.
Our patient ultimately was diagnosed with primary cutaneous nocardiosis resulting from a traumatic injury to the skin that was contaminated with soil. The clinical manifestation pattern was a compound type, including both mycetoma and sporotrichoid infections. Initially, Nocardia mycetoma occurred with subcutaneous infection by direct extension10,11 and appeared as dense, predominantly painless, swollen lesions. After 4 years, the skin lesions continued to spread linearly to the patient’s upper arm and forearm and manifested as the sporotrichoid infection type with painful swollen lesions at the site of inoculation and painful enlargement of the ipsilateral axillary lymph node.
Although nocardiosis is found worldwide, it is endemic to tropical and subtropical regions such as India, Africa, Southeast Asia, and Latin America.12 Nocardiosis most often is observed in individuals aged 20 to 40 years. It affects men more than women, and it commonly occurs in field laborers and cultivators whose occupations involve direct contact with the soil.13 Most lesions are found on the lower extremities, though localized nocardiosis infections can occur in other areas such as the neck, breasts, back, buttocks, and elbows.
Our patient initially was misdiagnosed, and treatment was delayed for several reasons. First, nocardiosis is not common in China, and most clinicians are unfamiliar with the disease. Second, the related lesions do not have specific features, and our patient had a complex clinical presentation that included mycetoma and sporotrichoid infection. Third, the characteristic grain of Nocardia species is small but that of N brasiliensis is even smaller (approximately 0.1–0.2 mm in diameter), which makes visualization difficult in both histopathologic and microbiologic examinations.14 The histopathologic examination results of our patient in the local hospital were nonspecific. Fourth, our patient did not initially go to the hospital but instead purchased some over-the-counter antibiotic ointments for external application because the lesions were painless. Moreover, microbiologic smear and culture examinations were not conducted in the local hospital before administering antituberculosis treatment to the patient. Instead, a polymerase chain reaction examination of skin lesion tissue for tubercle bacilli and atypical mycobacteria was negative. These findings imply that the traditional microbial smear and culture evaluations cannot be omitted. Furthermore, culture examinations should be conducted on multiple skin tissue and purulent fluid specimens to increase the likelihood of detection. These cultures should be monitored for at least 2 to 4 weeks because Nocardia is a slow-growing organism.10
The optimal antimicrobial treatment regimens for nocardiosis have not been firmly established.15 Trimethoprim-sulfamethoxazole is regarded as the first-line antimicrobial agent for treatment of nocardial infections. The optimal duration of antimicrobial therapy for nocardiosis also has not been determined, and the treatment regimen depends on the severity and extent of the infection as well as on the presence of infection-related complications. The main complication is bone involvement. Notable bony changes include periosteal thickening, osteoporosis, and osteolysis.
We considered the severity of the skin lesions and bone marrow invasion in our patient and planned to continue treating him with oral trimethoprim-sulfamethoxazole according to the in vitro drug susceptibility results. The patient showed clinical improvement after 1 month of treatment, and he continued to improve after 6 months of treatment. To prevent recurrence, we found it necessary to treat the patient with a long-term antibiotic course over 6 to 12 months.16
Cutaneous nocardiosis remains a diagnostic challenge because of its nonspecific and diverse clinical and histopathological presentations. Diagnosis is further complicated by the inherent difficulty of cultivating and identifying the clinical isolate in the laboratory. A high degree of clinical suspicion followed by successful identification of the organism by a laboratory technologist will aid in the early diagnosis and treatment of the infection, ultimately reducing the risk for complications and morbidity.
- McNeil MM, Brown JM. The medically important aerobic actinomycetes: epidemiology and microbiology. Clin Microbiol Rev. 1994;7:357-417.
- Brown-Elliott BA, Brown JM, Conville PS, et al. Clinical and laboratory features of the Nocardia spp. based on current molecular taxonomy. Clin Microbiol Rev. 2006;19:259-282.
- Fatahi-Bafghi M. Nocardiosis from 1888 to 2017. Microb Pathog. 2018;114:369-384.
- Beaman BL, Burnside J, Edwards B, et al. Nocardial infections in the United States, 1972-1974. J Infect Dis. 1976;134:286-289.
- Lerner PI. Nocardiosis. Clin Infect Dis. 1996;22:891-903.
- Laurent FJ, Provost F, Boiron P. Rapid identification of clinically relevant Nocardia species to genus level by 16S rRNA gene PCR. J Clin Microbiol. 1999;37:99-102.
- Nguyen NM, Sink JR, Carter AJ, et al. Nocardiosis incognito: primary cutaneous nocardiosis with extension to myositis and pleural infection. JAAD Case Rep. 2018;4:33-35.
- Sharna NL, Mahajan VK, Agarwal S, et al. Nocardial mycetoma: diverse clinical presentations. Indian J Dermatol Venereol Leprol. 2008;74:635-640.
- Huang L, Chen X, Xu H, et al. Clinical features, identification, antimicrobial resistance patterns of Nocardia species in China: 2009-2017. Diagn Microbiol Infect Dis. 2019;94:165-172.
- Bonifaz A, Tirado-Sánchez A, Calderón L, et al. Mycetoma: experience of 482 cases in a single center in Mexico. PLoS Negl Trop Dis. 2014;8:E3102.
- Welsh O, Vero-Cabrera L, Salinas-Carmona MC. Mycetoma. Clin Dermatol. 2007;25:195-202.
- Nenoff P, van de Sande WWJ, Fahal AH, et al. Eumycetoma and actinomycetoma—an update on causative agents, epidemiology, pathogenesis, diagnostics and therapy. J Eur Acad Dermatol Venereol. 2015;29:1873-1883.
- Emmanuel P, Dumre SP, John S, et al. Mycetoma: a clinical dilemma in resource limited settings. Ann Clin Microbiol Antimicrob. 2018;17:35.
- Reis CMS, Reis-Filho EGM. Mycetomas: an epidemiological, etiological, clinical, laboratory and therapeutic review. An Bras Dermatol. 2018;93:8-18.
- Wilson JW. Nocardiosis: updates and clinical overview. Mayo Clin Proc. 2012;87:403-407.
- Welsh O, Vera-Cabrera L, Salinas-Carmona MC. Current treatment for Nocardia infections. Expert Opin Pharmacother. 2013;14:2387-2398.
To the Editor:
The organisms of the genus Nocardia are gram-positive, ubiquitous, aerobic actinomycetes found worldwide in soil, decaying organic material, and water.1 The genus includes more than 50 species; some, such as Nocardia asteroides, Nocardia farcinica, Nocardia nova, and Nocardia brasiliensis, cause nocardiosis in humans and animals.2,3 Nocardiosis is a rare, opportunistic infection that predominantly affects immunocompromised individuals; however, up to 30% of infections occur in immunocompetent hosts.4 Nocardiosis can manifest in 3 disease forms: cutaneous, pulmonary, or disseminated. Cutaneous nocardiosis commonly develops in immunocompetent individuals who have experienced a predisposing traumatic injury to the skin,5 and its diverse clinical manifestations make diagnosis difficult. We describe a case of serious progressive primary cutaneous nocardiosis with an unusual presentation in an immunocompetent patient.
A 26-year-old immunocompetent man presented with pain, swelling, nodules, abscesses, ulcers, and sinus drainage of the left arm. The initial lesion had developed 6 years prior at the site of a painless trauma to the left elbow that was contaminated with mossy soil. The condition slowly progressed over the next 2 years, with increasing swelling and eventually multiple draining sinus tracts. Over the following 4 years, the lesions multiplied, spreading to the forearm and upper arm, with severe associated pain and swelling at the elbow and wrist joints. The patient sought medical care at a local hospital and was diagnosed with suspected cutaneous tuberculosis. He was empirically treated with a 6-month course of isoniazid, rifampicin, pyrazinamide, and ethambutol; however, the lesions continued to progress and worsen. He had to stop antibiotic treatment because of substantially elevated alanine aminotransferase and aspartate aminotransferase levels.
He subsequently was evaluated at our hospital. He had no notable medical history and was afebrile. Physical examination revealed multiple erythematous nodules, abscesses, and ulcers on the left arm. There were several nodules with open sinus tracts and seropurulent crusts along with numerous atrophic, ovoid, stellate scars. Other nodules and ulcers with purulent drainage were located along the lymphatic nodes extending up the patient’s left forearm (Figure 1A). The yellowish-white pus discharge from several active sinuses contained no apparent granules. The lesions were densely distributed along the elbow, wrist, and shoulder, which resulted in associated skin swelling and restricted joint movement. The left axillary lymph nodes were enlarged.
Laboratory analyses revealed a hemoglobin level of 9.6 g/dL (reference range, 13–17.5 g/dL), platelet count of 621×10⁹/L (reference range, 125–350×10⁹/L), and leukocyte count of 14.3×10⁹/L (reference range, 3.5–9.5×10⁹/L). C-reactive protein level was 88.4 mg/L (reference range, 0–10 mg/L). Blood, renal, and liver tests, as well as tumor marker, peripheral blood lymphocyte subset, immunoglobulin, and complement results, were within reference ranges. Results for Treponema pallidum and HIV antibody tests were negative. Hepatitis B virus markers were positive for hepatitis B surface antigen, hepatitis B e antigen, and hepatitis B core antibody, and the serum concentration of hepatitis B virus DNA was 3.12×10⁷ IU/mL (reference range, <5×10² IU/mL). Computed tomography of the chest and cranium was unremarkable. Ultrasonography of the left arm revealed multiple vertical sinus tracts and several horizontal communicating branches, accompanied by worm-eaten bone destruction (Figure 2).
Histopathologic staining of a skin tissue specimen (hematoxylin and eosin, periodic acid–Schiff, and acid-fast staining) showed nonspecific, diffuse, inflammatory cell infiltration suggestive of chronic suppurative granuloma (Figure 3) but failed to reveal any causative organisms. Gram stain examination of the purulent fluid collected from the subcutaneous tissue showed no apparent gram-positive bacilli or filamentous granules. The specimen was then inoculated on Sabouraud dextrose agar and Löwenstein-Jensen medium for fungal and mycobacterial culture, respectively. After 5 days, chalky, yellow, adherent colonies were observed on the Löwenstein-Jensen medium, and after 26 days, yellow crinkled colonies were observed on Sabouraud dextrose agar. The colonies were then inoculated on Columbia blood agar and incubated for 1 week to aid in identification, yielding yellow colonies that were adherent to the agar, moist, and smooth with a velvety surface and a characteristic moldy odor. Gram staining revealed gram-positive, thin, beaded, branching filaments (Figure 4). Based on colony characteristics, physiological properties, and biochemical tests, the isolate was identified as Nocardia. Polymerase chain reaction analysis of the skin specimen and bacterial colonies, using primers targeting a Nocardia genus–specific 596-bp fragment of the 16S ribosomal RNA gene (forward primer NG1: 5’-ACCGACCACAAGGGG-3’, reverse primer NG2: 5’-GGTTGTAACCTCTTCGA-3’),6 was completely consistent with the reference sequence for N brasiliensis. These results led to a diagnosis of cutaneous nocardiosis after traumatic inoculation.
Because there was a high suspicion of actinophytosis or nocardiosis at admission, the patient received combination antibiotic treatment with intravenous aqueous penicillin (4 million U every 4 hours) and oral trimethoprim-sulfamethoxazole (160/800 mg twice daily). Treatment subsequently was changed to a combination of oral trimethoprim-sulfamethoxazole (160/800 mg twice daily) and moxifloxacin (400 mg once daily) based on pathogen identification and antibiotic sensitivity testing. After 1 month of treatment, the cutaneous lesions and left limb swelling dramatically improved and purulent drainage ceased, though some scarring occurred during healing. In addition, the mobility of the affected shoulder, elbow, and wrist joints slightly improved. Notable improvement in joint mobility and swelling was observed at 6-month follow-up (Figure 1B). The patient continues to be monitored on an outpatient basis.
Cutaneous nocardiosis is a disfiguring granulomatous infection involving cutaneous and subcutaneous tissue that can progress to cause injury to viscera and bone.7 It has been called one of the great imitators because it can present in multiple forms,8,9 including mycetoma, sporotrichoid infection, superficial skin infection, and disseminated infection with cutaneous involvement. The differential diagnoses of cutaneous nocardiosis are broad and include tuberculosis; actinomycosis; deep fungal infections such as sporotrichosis, blastomycosis, phaeohyphomycosis, histoplasmosis, and coccidioidomycosis; other bacterial causes of cellulitis, abscess, or ecthyma; and malignancies.10 The principal method of diagnosis is identification of Nocardia from the infection site.
Our patient ultimately was diagnosed with primary cutaneous nocardiosis resulting from a traumatic injury to the skin that was contaminated with soil. The clinical manifestation pattern was a compound type, including both mycetoma and sporotrichoid infections. Initially, Nocardia mycetoma occurred with subcutaneous infection by direct extension10,11 and appeared as dense, predominantly painless, swollen lesions. After 4 years, the skin lesions continued to spread linearly to the patient’s upper arm and forearm and manifested as the sporotrichoid infection type with painful swollen lesions at the site of inoculation and painful enlargement of the ipsilateral axillary lymph node.
Although nocardiosis is found worldwide, it is endemic to tropical and subtropical regions such as India, Africa, Southeast Asia, and Latin America.12 Nocardiosis most often is observed in individuals aged 20 to 40 years. It affects men more than women, and it commonly occurs in field laborers and cultivators whose occupations involve direct contact with the soil.13 Most lesions are found on the lower extremities, though localized nocardiosis infections can occur in other areas such as the neck, breasts, back, buttocks, and elbows.
Our patient initially was misdiagnosed, and treatment was delayed for several reasons. First, nocardiosis is not common in China, and most clinicians are unfamiliar with the disease. Second, the related lesions do not have specific features, and our patient had a complex clinical presentation that included mycetoma and sporotrichoid infection. Third, the characteristic grain of Nocardia species is small but that of N brasiliensis is even smaller (approximately 0.1–0.2 mm in diameter), which makes visualization difficult in both histopathologic and microbiologic examinations.14 The histopathologic examination results of our patient in the local hospital were nonspecific. Fourth, our patient did not initially go to the hospital but instead purchased some over-the-counter antibiotic ointments for external application because the lesions were painless. Moreover, microbiologic smear and culture examinations were not conducted in the local hospital before administering antituberculosis treatment to the patient. Instead, a polymerase chain reaction examination of skin lesion tissue for tubercle bacilli and atypical mycobacteria was negative. These findings imply that the traditional microbial smear and culture evaluations cannot be omitted. Furthermore, culture examinations should be conducted on multiple skin tissue and purulent fluid specimens to increase the likelihood of detection. These cultures should be monitored for at least 2 to 4 weeks because Nocardia is a slow-growing organism.10
The optimal antimicrobial treatment regimens for nocardiosis have not been firmly established.15 Trimethoprim-sulfamethoxazole is regarded as the first-line antimicrobial agent for treatment of nocardial infections. The optimal duration of antimicrobial therapy for nocardiosis also has not been determined, and the treatment regimen depends on the severity and extent of the infection as well as on the presence of infection-related complications. The main complication is bone involvement. Notable bony changes include periosteal thickening, osteoporosis, and osteolysis.
Given the severity of the skin lesions and the bone marrow invasion in our patient, we planned continued treatment with oral trimethoprim-sulfamethoxazole according to the in vitro drug susceptibility test. The patient showed clinical improvement after 1 month of treatment, and he continued to improve after 6 months. To prevent recurrence, we planned a long-term antibiotic course of 6 to 12 months.16
Cutaneous nocardiosis remains a diagnostic challenge because of its nonspecific and diverse clinical and histopathological presentations. Diagnosis is further complicated by the inherent difficulty of cultivating and identifying the clinical isolate in the laboratory. A high degree of clinical suspicion followed by successful identification of the organism by a laboratory technologist will aid in the early diagnosis and treatment of the infection, ultimately reducing the risk for complications and morbidity.
- McNeil MM, Brown JM. The medically important aerobic actinomycetes: epidemiology and microbiology. Clin Microbiol Rev. 1994;7:357-417.
- Brown-Elliott BA, Brown JM, Conville PS, et al. Clinical and laboratory features of the Nocardia spp. based on current molecular taxonomy. Clin Microbiol Rev. 2006;19:259-282.
- Fatahi-Bafghi M. Nocardiosis from 1888 to 2017. Microb Pathog. 2018;114:369-384.
- Beaman BL, Burnside J, Edwards B, et al. Nocardial infections in the United States, 1972-1974. J Infect Dis. 1976;134:286-289.
- Lerner PI. Nocardiosis. Clin Infect Dis. 1996;22:891-903.
- Laurent FJ, Provost F, Boiron P. Rapid identification of clinically relevant Nocardia species to genus level by 16S rRNA gene PCR. J Clin Microbiol. 1999;37:99-102.
- Nguyen NM, Sink JR, Carter AJ, et al. Nocardiosis incognito: primary cutaneous nocardiosis with extension to myositis and pleural infection. JAAD Case Rep. 2018;4:33-35.
- Sharma NL, Mahajan VK, Agarwal S, et al. Nocardial mycetoma: diverse clinical presentations. Indian J Dermatol Venereol Leprol. 2008;74:635-640.
- Huang L, Chen X, Xu H, et al. Clinical features, identification, antimicrobial resistance patterns of Nocardia species in China: 2009-2017. Diagn Microbiol Infect Dis. 2019;94:165-172.
- Bonifaz A, Tirado-Sánchez A, Calderón L, et al. Mycetoma: experience of 482 cases in a single center in Mexico. PLoS Negl Trop Dis. 2014;8:E3102.
- Welsh O, Vera-Cabrera L, Salinas-Carmona MC. Mycetoma. Clin Dermatol. 2007;25:195-202.
- Nenoff P, van de Sande WWJ, Fahal AH, et al. Eumycetoma and actinomycetoma—an update on causative agents, epidemiology, pathogenesis, diagnostics and therapy. J Eur Acad Dermatol Venereol. 2015;29:1873-1883.
- Emmanuel P, Dumre SP, John S, et al. Mycetoma: a clinical dilemma in resource limited settings. Ann Clin Microbiol Antimicrob. 2018;17:35.
- Reis CMS, Reis-Filho EGM. Mycetomas: an epidemiological, etiological, clinical, laboratory and therapeutic review. An Bras Dermatol. 2018;93:8-18.
- Wilson JW. Nocardiosis: updates and clinical overview. Mayo Clin Proc. 2012;87:403-407.
- Welsh O, Vera-Cabrera L, Salinas-Carmona MC. Current treatment for Nocardia infections. Expert Opin Pharmacother. 2013;14:2387-2398.
Practice Points
- Although unusual, cutaneous nocardiosis can present with both mycetoma and sporotrichoid infection, which should be treated based on pathogen identification and antibiotic sensitivity testing.
- A high degree of clinical suspicion by clinicians followed by successful identification of the organism by a laboratory technologist will aid in the early diagnosis and treatment of the infection, ultimately reducing the risk for complications and morbidity.
Commentary: Disease activity, JAK inhibitors, and pregnancy risks in PsA, April 2023
Patients with active PsA require early effective therapy to improve long-term outcomes. The choice of therapy should balance effectiveness and potential toxicity. Janus kinase (JAK) inhibitors are a relatively new class of drugs that have been shown to be efficacious in treating PsA, but there are concerns about safety. To evaluate the efficacy and safety of JAK inhibitors in patients with psoriatic disease, Yang and colleagues conducted a systematic review and meta-analysis of 17 phase 2/3 randomized controlled trials including 6802 patients with PsA or moderate to severe plaque psoriasis who received at least one JAK inhibitor. They demonstrated that, compared with placebo, JAK inhibitors were associated with a significantly higher American College of Rheumatology 20 response rate (relative risk [RR] 2.09; P < .00001), with the response being the highest for filgotinib (RR 2.40; P < .00001), followed by upadacitinib, tofacitinib, and deucravacitinib. However, the overall incidence of adverse events was higher with JAK inhibitors vs placebo (RR 1.17; P < .00001) and significantly higher with 10-mg vs 5-mg tofacitinib (P = .03). Thus, JAK inhibitors are efficacious in the treatment of PsA but are associated with adverse effects, particularly at higher doses.
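For context, the relative risk reported in such trials is simply the event rate in the treated arm divided by the event rate in the placebo arm. A minimal sketch with invented counts (not data from the Yang and colleagues meta-analysis):

```python
# Illustrative relative-risk (RR) calculation. Counts are hypothetical,
# chosen only to show the arithmetic behind an RR near 2.1.

def relative_risk(events_trt: int, n_trt: int, events_ctl: int, n_ctl: int) -> float:
    """Risk (event rate) in the treated arm divided by risk in the control arm."""
    return (events_trt / n_trt) / (events_ctl / n_ctl)

# Hypothetical ACR20 responders: 120/200 on a JAK inhibitor vs 60/210 on placebo
rr = relative_risk(120, 200, 60, 210)
print(round(rr, 2))  # 2.1
```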
Safety is best assessed in real-world observational studies. Clinical trials have raised concerns about a higher cancer risk in patients with rheumatoid arthritis (RA) treated with JAK inhibitors compared with patients treated with tumor necrosis factor (TNF) inhibitors. To evaluate this further, Huss and colleagues conducted an observational cohort study that evaluated prospectively collected data from national Swedish data sources on 4443 patients with PsA and 10,447 patients with RA, all without previous cancer, who received JAK inhibitors, TNF inhibitors, or other non–TNF inhibitor biologic disease-modifying antirheumatic drugs. Overall, use of JAK inhibitors vs TNF inhibitors was not significantly associated with a higher risk for cancer other than nonmelanoma skin cancer, especially in RA. In patients with PsA, there was a trend toward higher risk for nonmelanoma skin cancer, but it was not statistically significant. The study provides reassurance that JAK inhibitors are generally as safe as TNF inhibitors in PsA, but continued vigilance is required.
There are limited data on the effect of disease activity on pregnancy outcomes in individuals with PsA. Using data from the Medical Birth Registry of Norway linked to data from a Norwegian nationwide observational register recruiting women with inflammatory rheumatic diseases, Skorpen and colleagues evaluated the association of active disease and cesarean section (CS) rates in singleton births in women with PsA (n = 121), axial spondyloarthritis (n = 312), and controls (n = 575,798). Compared with control individuals, women with PsA had a higher risk for CS (risk difference [RD] 15.0%; P < .001) and for emergency CS (RD 10.6%; P < .001), with active disease in the third trimester further amplifying both risks (CS: RD 17.7%; P = .028; emergency CS: RD 15.9%; P = .015). Thus, although in many patients disease activity decreases during pregnancy, this study highlights the importance of pregestational counseling and disease control along with regular monitoring of PsA during pregnancy such that disease activity remains well controlled.
Patients with active PsA require early effective therapy to improve long-term outcomes. The choice of therapy should balance effectiveness and potential toxicity. Janus kinase (JAK) inhibitors are a relatively new class of drugs that have been shown to be efficacious in treating PsA, but there are concerns about safety. To evaluate the efficacy and safety of JAK inhibitors in patients with psoriatic disease, Yang and colleagues conducted a systematic review and meta-analysis of 17 phase 2/3 randomized controlled trials including 6802 patients with PsA or moderate to severe plaque psoriasis who received at least one JAK inhibitor. They demonstrated that, compared with placebo, JAK inhibitors were associated with a significantly higher American College of Rheumatology 20 response rate (relative risk [RR] 2.09; P < .00001), with the response being the highest for filgotinib (RR 2.40; P < .00001), followed by upadacitinib, tofacitinib, and deucravacitinib. However, the overall incidence of adverse events was higher with JAK inhibitors vs placebo (RR 1.17; P < .00001) and significantly higher with 10-mg vs 5-mg tofacitinib (P = .03). Thus, JAK inhibitors are efficacious in the treatment of PsA but are associated with adverse effects, particularly at higher doses.
Safety is best assessed in real-world observational studies. Clinical trials have raised concerns about a higher cancer risk in rheumatoid arthritis (RA) patients treated with JAK inhibitors compared with patients treated with tumor necrosis factor (TNF) inhibitors. To evaluate this further, Huss and colleagues conducted an observational cohort study that evaluated prospectively collected data from national Swedish data sources on 4443 patients with PsA and 10,447 patients with RA, all without previous cancer, who received JAK inhibitors, TNF inhibitors, or other non–TNF inhibitor biologic disease-modifying antirheumatic drugs. Overall, use of JAK inhibitors vs TNF inhibitors was not significantly associated with a higher risk for cancer other than nonmelanoma skin cancer, especially in RA. In patients with PsA, there was a trend toward higher risk for nonmelanoma skin cancer, but it was not statistically significant. The study provides reassurance that JAK inhibitors are generally as safe, as are TNF inhibitors in PsA, but continued vigilance is required.
There are limited data on the effect of disease activity on pregnancy outcomes in individuals with PsA. Using data from the Medical Birth Registry of Norway linked to data from a Norwegian nationwide observational register recruiting women with inflammatory rheumatic diseases, Skorpen and colleagues evaluated the association of active disease and cesarean section (CS) rates in singleton births in women with PsA (n = 121), axial spondyloarthritis (n = 312), and controls (n = 575,798). Compared with control individuals, women with PsA had a higher risk for CS (risk difference [RD] 15.0%; P < .001) and for emergency CS (RD 10.6%; P < .001), with active disease in the third trimester further amplifying both risks (CS: RD 17.7%; P = .028; emergency CS: RD 15.9%; P = .015). Thus, although in many patients disease activity decreases during pregnancy, this study highlights the importance of pregestational counseling and disease control along with regular monitoring of PsA during pregnancy such that disease activity remains well controlled.
Patients with active PsA require early effective therapy to improve long-term outcomes. The choice of therapy should balance effectiveness and potential toxicity. Janus kinase (JAK) inhibitors are a relatively new class of drugs that have been shown to be efficacious in treating PsA, but there are concerns about safety. To evaluate the efficacy and safety of JAK inhibitors in patients with psoriatic disease, Yang and colleagues conducted a systematic review and meta-analysis of 17 phase 2/3 randomized controlled trials including 6802 patients with PsA or moderate to severe plaque psoriasis who received at least one JAK inhibitor. They demonstrated that, compared with placebo, JAK inhibitors were associated with a significantly higher American College of Rheumatology 20 response rate (relative risk [RR] 2.09; P < .00001), with the response being the highest for filgotinib (RR 2.40; P < .00001), followed by upadacitinib, tofacitinib, and deucravacitinib. However, the overall incidence of adverse events was higher with JAK inhibitors vs placebo (RR 1.17; P < .00001) and significantly higher with 10-mg vs 5-mg tofacitinib (P = .03). Thus, JAK inhibitors are efficacious in the treatment of PsA but are associated with adverse effects, particularly at higher doses.
Safety is best assessed in real-world observational studies. Clinical trials have raised concerns about a higher cancer risk in rheumatoid arthritis (RA) patients treated with JAK inhibitors compared with patients treated with tumor necrosis factor (TNF) inhibitors. To evaluate this further, Huss and colleagues conducted an observational cohort study that evaluated prospectively collected data from national Swedish data sources on 4443 patients with PsA and 10,447 patients with RA, all without previous cancer, who received JAK inhibitors, TNF inhibitors, or other non–TNF inhibitor biologic disease-modifying antirheumatic drugs. Overall, use of JAK inhibitors vs TNF inhibitors was not significantly associated with a higher risk for cancer other than nonmelanoma skin cancer, especially in RA. In patients with PsA, there was a trend toward higher risk for nonmelanoma skin cancer, but it was not statistically significant. The study provides reassurance that JAK inhibitors are generally as safe, as are TNF inhibitors in PsA, but continued vigilance is required.
There are limited data on the effect of disease activity on pregnancy outcomes in individuals with PsA. Using data from the Medical Birth Registry of Norway linked to data from a Norwegian nationwide observational register recruiting women with inflammatory rheumatic diseases, Skorpen and colleagues evaluated the association of active disease and cesarean section (CS) rates in singleton births in women with PsA (n = 121), axial spondyloarthritis (n = 312), and controls (n = 575,798). Compared with control individuals, women with PsA had a higher risk for CS (risk difference [RD] 15.0%; P < .001) and for emergency CS (RD 10.6%; P < .001), with active disease in the third trimester further amplifying both risks (CS: RD 17.7%; P = .028; emergency CS: RD 15.9%; P = .015). Thus, although in many patients disease activity decreases during pregnancy, this study highlights the importance of pregestational counseling and disease control along with regular monitoring of PsA during pregnancy such that disease activity remains well controlled.
Commentary: Disease activity, JAK inhibitors, and pregnancy risks in PsA, April 2023
Patients with active PsA require early effective therapy to improve long-term outcomes. The choice of therapy should balance effectiveness and potential toxicity. Janus kinase (JAK) inhibitors are a relatively new class of drugs that have been shown to be efficacious in treating PsA, but there are concerns about safety. To evaluate the efficacy and safety of JAK inhibitors in patients with psoriatic disease, Yang and colleagues conducted a systematic review and meta-analysis of 17 phase 2/3 randomized controlled trials including 6802 patients with PsA or moderate to severe plaque psoriasis who received at least one JAK inhibitor. They demonstrated that, compared with placebo, JAK inhibitors were associated with a significantly higher American College of Rheumatology 20 response rate (relative risk [RR] 2.09; P < .00001), with the response being the highest for filgotinib (RR 2.40; P < .00001), followed by upadacitinib, tofacitinib, and deucravacitinib. However, the overall incidence of adverse events was higher with JAK inhibitors vs placebo (RR 1.17; P < .00001) and significantly higher with 10-mg vs 5-mg tofacitinib (P = .03). Thus, JAK inhibitors are efficacious in the treatment of PsA but are associated with adverse effects, particularly at higher doses.
Safety is best assessed in real-world observational studies. Clinical trials have raised concerns about a higher cancer risk in patients with rheumatoid arthritis (RA) treated with JAK inhibitors compared with patients treated with tumor necrosis factor (TNF) inhibitors. To evaluate this further, Huss and colleagues conducted an observational cohort study that evaluated prospectively collected data from national Swedish data sources on 4443 patients with PsA and 10,447 patients with RA, all without previous cancer, who received JAK inhibitors, TNF inhibitors, or other non–TNF inhibitor biologic disease-modifying antirheumatic drugs. Overall, use of JAK inhibitors vs TNF inhibitors was not significantly associated with a higher risk for cancer other than nonmelanoma skin cancer, especially in RA. In patients with PsA, there was a trend toward higher risk for nonmelanoma skin cancer, but it was not statistically significant. The study provides reassurance that JAK inhibitors are generally as safe as TNF inhibitors in PsA, but continued vigilance is required.
There are limited data on the effect of disease activity on pregnancy outcomes in individuals with PsA. Using data from the Medical Birth Registry of Norway linked to data from a Norwegian nationwide observational register recruiting women with inflammatory rheumatic diseases, Skorpen and colleagues evaluated the association between active disease and cesarean section (CS) rates in singleton births in women with PsA (n = 121), axial spondyloarthritis (n = 312), and controls (n = 575,798). Compared with control individuals, women with PsA had a higher risk for CS (risk difference [RD] 15.0%; P < .001) and for emergency CS (RD 10.6%; P < .001), with active disease in the third trimester further amplifying both risks (CS: RD 17.7%; P = .028; emergency CS: RD 15.9%; P = .015). Thus, although disease activity decreases during pregnancy in many patients, this study highlights the importance of pregestational counseling, disease control, and regular monitoring of PsA during pregnancy so that disease activity remains well controlled.
‘Excess’ deaths surging, but why?
This transcript has been edited for clarity.
“Excess deaths.” You’ve heard the phrase countless times by now. It is one of the many previously esoteric epidemiology terms that the pandemic brought squarely into the zeitgeist.
As a sort of standard candle of the performance of a state or a region or a country in terms of health care, it has a lot of utility – if for nothing more than Monday-morning quarterbacking. But this week, I want to dig in on the concept a bit because, according to a new study, the excess death gap between the United States and Western Europe has never been higher.
You might imagine that the best way to figure this out is for some group of intelligent people to review each death and decide, somehow, whether it was expected or not. But aside from being impractical, this would end up being somewhat subjective. That older person who died from pneumonia – was that an expected death? Could it have been avoided?
Rather, the calculation of excess mortality relies on large numbers and statistical inference to compare an expected number of deaths with those that are observed.
The difference is excess mortality, even if you can never be sure whether any particular death was expected or not.
As always, however, the devil is in the details. What data do you use to define the expected number of deaths?
There are options here. Probably the most straightforward analysis uses past data from the country of interest. You look at annual deaths over some historical period and compare those numbers with the rates today. Two issues need to be accounted for here: population growth (a larger population will have more deaths, so you need to scale the historical figures to current population levels) and demographic shifts (an older or more male population will have more deaths, so you need to adjust for that as well).
But provided you take care of those factors, you can estimate fairly well how many deaths you can expect to see in any given period of time.
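To make the arithmetic concrete, here is a minimal sketch of an expected-versus-observed calculation. All rates and population counts below are invented for illustration; they are not figures from the study.

```python
# Sketch of an excess-mortality calculation: apply baseline age-specific
# death rates to the current age structure to get expected deaths, then
# subtract from observed deaths. All numbers are hypothetical.

# Baseline death rates (deaths per person-year) by age group.
baseline_rates = {"0-14": 0.0005, "15-64": 0.003, "65+": 0.04}

# Current population counts in the same age groups.
population = {"0-14": 60_000_000, "15-64": 210_000_000, "65+": 56_000_000}

# Expected deaths: baseline rates applied to today's (larger, older)
# population, which is what the growth and demographic adjustments amount to.
expected = sum(baseline_rates[g] * population[g] for g in population)

observed = 3_460_000  # hypothetical observed deaths in the same period

excess = observed - expected      # excess mortality
excess_share = excess / observed  # fraction of all deaths that are "excess"
```

Swapping in another country's baseline rates, standardized to the demographics of the population being studied, gives the cross-country version of the same calculation.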
Still, you should see right away that excess mortality is a relative concept. If you think that, just perhaps, the United States has some systematic failure to deliver care that has been stable and persistent over time, you wouldn’t capture that failing in an excess mortality calculation that uses U.S. historical data as the baseline.
The best way to get around that is to use data from other countries, and that’s just what this article – a rare single-author piece by Patrick Heuveline – does, calculating excess deaths in the United States by standardizing our mortality rates to the five largest Western European countries: the United Kingdom, France, Germany, Italy, and Spain.
Controlling for the differences in the demographics of that European population, here is the expected number of deaths in the United States over the past 5 years.
Note that there is a small uptick in expected deaths in 2020, reflecting the pandemic, which returns to baseline levels by 2021. This is because that’s what happened in Europe; by 2021, the excess mortality due to COVID-19 was quite low.
Here are the actual deaths in the United States during that time.
Highlighted here in green, then, is the excess mortality over time in the United States.
There are some fascinating and concerning findings here.
First of all, you can see that even before the pandemic, the United States has an excess mortality problem. This is not entirely a surprise; we’ve known that so-called “deaths of despair,” those due to alcohol abuse, drug overdoses, and suicide, are at an all-time high and tend to affect a “prime of life” population that would not otherwise be expected to die. In fact, fully 50% of the excess deaths in the United States occur in those between ages 15 and 64.
Excess deaths are also a concerning percentage of total deaths. In 2017, 17% of total deaths in the United States could be considered “excess.” By 2021, that number had more than doubled to 35%. Nearly 900,000 individuals in the United States died in 2021 who perhaps didn’t need to.
The obvious culprit to blame here is COVID, but COVID-associated excess deaths only explain about 50% of the excess we see in 2021. The rest reflect something even more concerning: a worsening of the failures of the past, perhaps exacerbated by the pandemic but not due to the virus itself.
Of course, we started this discussion acknowledging that the calculation of excess mortality is exquisitely dependent on how you model the expected number of deaths, and I’m sure some will take issue with the use of European numbers when applied to Americans. After all, Europe has, by and large, a robust public health service, socialized medicine, and healthcare that does not run the risk of bankrupting its citizens. How can we compare our outcomes to a place like that?
How indeed.
F. Perry Wilson, MD, MSCE, is an associate professor of medicine and director of Yale University’s Clinical and Translational Research Accelerator in New Haven, Conn. He reported no relevant conflicts of interest.
A version of this article originally appeared on Medscape.com.
This transcript has been edited for clarity.
“Excess deaths.” You’ve heard the phrase countless times by now. It is one of the myriad of previously esoteric epidemiology terms that the pandemic brought squarely into the zeitgeist.
As a sort of standard candle of the performance of a state or a region or a country in terms of health care, it has a lot of utility – if for nothing more than Monday-morning quarterbacking. But this week, I want to dig in on the concept a bit because, according to a new study, the excess death gap between the United States and Western Europe has never been higher.
You might imagine that the best way to figure this out is for some group of intelligent people to review each death and decide, somehow, whether it was expected or not. But aside from being impractical, this would end up being somewhat subjective. That older person who died from pneumonia – was that an expected death? Could it have been avoided?
Rather, the calculation of excess mortality relies on large numbers and statistical inference to compare an expected number of deaths with those that are observed.
The difference is excess mortality, even if you can never be sure whether any particular death was expected or not.
As always, however, the devil is in the details. What data do you use to define the expected number of deaths?
There are options here. Probably the most straightforward analysis uses past data from the country of interest. You look at annual deaths over some historical period of time and compare those numbers with the rates today. Two issues need to be accounted for here: population growth – a larger population will have more deaths, so you need to adjust the historical population with current levels, and demographic shifts – an older or more male population will have more deaths, so you need to adjust for that as well.
But provided you take care of those factors, you can estimate fairly well how many deaths you can expect to see in any given period of time.
Still, you should see right away that excess mortality is a relative concept. If you think that, just perhaps, the United States has some systematic failure to deliver care that has been stable and persistent over time, you wouldn’t capture that failing in an excess mortality calculation that uses U.S. historical data as the baseline.
The best way to get around that is to use data from other countries, and that’s just what this article – a rare single-author piece by Patrick Heuveline – does, calculating excess deaths in the United States by standardizing our mortality rates to the five largest Western European countries: the United Kingdom, France, Germany, Italy, and Spain.
Controlling for the differences in the demographics of that European population, here is the expected number of deaths in the United States over the past 5 years.
Note that there is a small uptick in expected deaths in 2020, reflecting the pandemic, which returns to baseline levels by 2021. This is because that’s what happened in Europe; by 2021, the excess mortality due to COVID-19 was quite low.
Here are the actual deaths in the US during that time.
Highlighted here in green, then, is the excess mortality over time in the United States.
There are some fascinating and concerning findings here.
First of all, you can see that even before the pandemic, the United States has an excess mortality problem. This is not entirely a surprise; we’ve known that so-called “deaths of despair,” those due to alcohol abuse, drug overdoses, and suicide, are at an all-time high and tend to affect a “prime of life” population that would not otherwise be expected to die. In fact, fully 50% of the excess deaths in the United States occur in those between ages 15 and 64.
Excess deaths are also a concerning percentage of total deaths. In 2017, 17% of total deaths in the United States could be considered “excess.” In 2021, that number had doubled to 35%. Nearly 900,000 individuals in the United States died in 2021 who perhaps didn’t need to.
The obvious culprit to blame here is COVID, but COVID-associated excess deaths only explain about 50% of the excess we see in 2021. The rest reflect something even more concerning: a worsening of the failures of the past, perhaps exacerbated by the pandemic but not due to the virus itself.
Of course, we started this discussion acknowledging that the calculation of excess mortality is exquisitely dependent on how you model the expected number of deaths, and I’m sure some will take issue with the use of European numbers when applied to Americans. After all, Europe has, by and large, a robust public health service, socialized medicine, and healthcare that does not run the risk of bankrupting its citizens. How can we compare our outcomes to a place like that?
How indeed.
F. Perry Wilson, MD, MSCE, is an associate professor of medicine and director of Yale University’s Clinical and Translational Research Accelerator in New Haven, Conn. He reported no relevant conflicts of interest.
A version of this article originally appeared on Medscape.com.
Is it time to stop treating high triglycerides?
The publication of the PROMINENT trial, where pemafibrate successfully lowered high levels but was not associated with a lower risk for cardiovascular events, reinforced the point. Is it time to stop measuring and treating high triglycerides?
There may be noncardiovascular reasons to treat hypertriglyceridemia. Pancreatitis is the most cited one, given that the risk for pancreatitis increases with increasing triglyceride levels, especially in patients with a prior episode.
There may also be practical reasons to lower trigs. Because most cholesterol panels use the Friedewald equation to calculate low-density lipoprotein cholesterol (LDL-C) rather than measuring it directly, very high triglyceride levels can invalidate the calculation and return error messages on lab reports.
But we now have alternatives to measuring LDL-C, including non–high-density lipoprotein cholesterol (HDL-C) and apolipoprotein B (apoB), that better predict risk and are usable even in the setting of nonfasting samples when triglycerides are elevated.
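The practical issue is easy to see from the formulas themselves. Here is a small sketch using the standard Friedewald relation for mg/dL units, with the commonly cited ~400 mg/dL validity cutoff, alongside non-HDL-C, which stays computable at any triglyceride level:

```python
# Friedewald estimate of LDL-C (mg/dL): LDL = TC - HDL - TG/5.
# The formula is considered unreliable once triglycerides exceed
# ~400 mg/dL, which is why labs return an error instead of a value.

def friedewald_ldl(total_chol: float, hdl: float, triglycerides: float):
    """Return estimated LDL-C in mg/dL, or None when TG invalidate it."""
    if triglycerides > 400:
        return None  # calculation not valid; lab reports an error
    return total_chol - hdl - triglycerides / 5

def non_hdl(total_chol: float, hdl: float) -> float:
    """Non-HDL-C needs no fasting sample and is valid at any TG level."""
    return total_chol - hdl

print(friedewald_ldl(200, 50, 150))  # 200 - 50 - 30 = 120.0
print(friedewald_ldl(200, 50, 600))  # None: TG too high for the formula
print(non_hdl(200, 50))              # 150
```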
Independent cardiovascular risk factor?
If we are going to measure and treat high triglycerides for cardiovascular reasons, the relevant question is, are high triglycerides an independent risk factor for cardiovascular disease?
Proponents have a broad swath of supportive literature to point at. Multiple studies have shown an association between triglyceride levels and cardiovascular risk. The evidence even extends beyond traditional epidemiologic analyses, to genetic studies that should be free from some of the problems seen in observational cohorts.
But it is difficult to be certain whether these associations are causal or merely confounding. An unhealthy diet will increase triglycerides, as will alcohol. Patients with diabetes or metabolic syndrome have high triglycerides. So do patients with nephrotic syndrome or hypothyroidism, or hypertensive patients taking thiazide diuretics. Adjusting for these baseline factors is possible but imperfect, and residual confounding is always an issue. An analysis of the Reykjavik and the EPIC-Norfolk studies found an association between triglyceride levels and cardiovascular risk. That risk was attenuated, but not eliminated, when adjusted for traditional risk factors such as age, smoking, blood pressure, diabetes, and cholesterol.
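To make the attenuation-by-adjustment idea concrete, here is a toy example with invented counts showing a crude odds ratio shrinking, but not vanishing, once a single confounder is stratified out (a Mantel-Haenszel adjustment; real analyses adjust for many factors at once):

```python
# Toy illustration (made-up counts) of how adjusting for a confounder
# such as diabetes attenuates, but does not eliminate, a crude
# triglyceride-outcome association.  Each stratum is a 2x2 table:
# (events_highTG, noevents_highTG, events_normalTG, noevents_normalTG).
strata = {
    "no diabetes": (10, 90, 20, 380),
    "diabetes":    (60, 240, 10, 90),
}

# Crude odds ratio: collapse the strata into one pooled table.
a = sum(s[0] for s in strata.values())
b = sum(s[1] for s in strata.values())
c = sum(s[2] for s in strata.values())
d = sum(s[3] for s in strata.values())
crude_or = (a * d) / (b * c)

# Mantel-Haenszel odds ratio: stratum-weighted adjustment.
num = sum(s[0] * s[3] / sum(s) for s in strata.values())
den = sum(s[1] * s[2] / sum(s) for s in strata.values())
adjusted_or = num / den

print(f"crude OR = {crude_or:.2f}, adjusted OR = {adjusted_or:.2f}")
# crude ~3.32 vs. adjusted ~2.20: attenuated, but not eliminated
```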
Randomized trials of triglyceride-lowering therapies would help resolve the question of whether hypertriglyceridemia contributes to coronary disease or simply identifies high-risk patients. Early trials seemed to support the idea of a causal link. The Helsinki Heart Study randomized patients to gemfibrozil or placebo and found a 34% relative risk reduction in coronary artery disease with the fibrate. But gemfibrozil didn’t only reduce triglycerides. It also increased HDL-C and lowered LDL-C relative to placebo, which may explain the observed benefit.
Gemfibrozil is rarely used today because we can achieve much greater LDL-C reductions with statins, as well as ezetimibe and PCSK9 inhibitors. The success of these drugs may not leave any room for triglyceride-lowering medications.
The pre- vs. post-statin era
In the 2005 FIELD study, participants were randomized to receive fenofibrate or placebo. Although patients weren’t taking a statin at study entry, 17% of the placebo group started taking one during the trial. Fenofibrate wasn’t associated with a reduction in the primary endpoint, a combination of coronary heart disease death or nonfatal myocardial infarction (MI). Among the many secondary endpoints, nonfatal MI was lower in the fibrate-treated patients, but cardiovascular mortality was not. In the same vein, the 2010 ACCORD study randomized patients to receive simvastatin plus fenofibrate or simvastatin alone. The composite primary outcome of MI, stroke, and cardiovascular mortality was not lowered with the combination therapy, nor were any secondary outcomes. In the statin era, triglyceride-lowering therapies have not shown much benefit.
The final nail in the coffin may very well be the aforementioned PROMINENT trial. The new agent, pemafibrate, fared no better than its predecessor fenofibrate. Pemafibrate had no impact on the study’s primary composite outcome of nonfatal MI, stroke, coronary revascularization, or cardiovascular death despite being very effective at lowering triglycerides (by more than 25%). Patients treated with pemafibrate had increased LDL-C and apoB compared with the placebo group. When you realize that, the results of the study are not very surprising.
Some point to the results of REDUCE-IT as proof that triglycerides are still a valid target for pharmacotherapy. The debate on whether REDUCE-IT tested a good drug or a bad placebo is one for another day. The salient point for today is that the benefits of eicosapentaenoic acid (EPA) were seen regardless of either baseline or final triglyceride level. EPA may lower cardiac risk, but there is no widespread consensus that it does so by lowering triglycerides. There may be other mechanisms at work.
You could still argue that high triglycerides have value as a risk prediction tool even if their role as a target for drug therapy is questionable. There was a time when medications to lower triglycerides had a benefit. But this is the post-statin era, and that time has passed.
If you see patients with high triglycerides, treating them with triglyceride-lowering medication probably isn’t going to reduce their cardiovascular risk. Dietary interventions, encouraging exercise, and reducing alcohol consumption are better options. Not only will they lead to lower cholesterol levels, but they’ll lower cardiovascular risk, too.
Dr. Labos is a cardiologist at Hôpital Notre-Dame, Montreal, with a degree in epidemiology. He has disclosed no relevant financial relationships. He spends most of his time doing things that he doesn’t get paid for, like research, teaching, and podcasting. Occasionally he finds time to practice cardiology to pay the rent. He realizes that half of his research findings will be disproved in 5 years; he just doesn’t know which half. He is a regular contributor to the Montreal Gazette, CJAD radio, and CTV television in Montreal and is host of the award-winning podcast The Body of Evidence.
A version of this article originally appeared on Medscape.com.
A new way to gauge suicide risk?
Researchers found that social determinants of health (SDOH) are risk factors for suicide among U.S. veterans and that natural language processing (NLP) can be leveraged to extract SDOH information from unstructured data in the electronic health record (EHR).
“Since SDOH is overwhelmingly described in EHR notes, the importance of NLP-extracted SDOH can be very significant, meaning that NLP can be used as an effective method for epidemiological and public health study,” senior investigator Hong Yu, PhD, from Miner School of Information and Computer Sciences, University of Massachusetts Lowell, told this news organization.
Although the study was conducted among U.S. veterans, the results likely hold for the general population as well.
“The NLP methods are generalizable. The SDOH categories are generalizable. There may be some variations in terms of the strength of associations in NLP-extracted SDOH and suicide death, but the overall findings are generalizable,” Dr. Yu said.
The study was published online in JAMA Network Open.
Improved risk assessment
SDOH, which include factors such as socioeconomic status, access to healthy food, education, housing, and physical environment, are strong predictors of suicidal behaviors.
Several studies have identified a range of common risk factors for suicide using International Classification of Diseases (ICD) codes and other “structured” data from the EHR. However, the use of unstructured EHR data from clinician notes has received little attention in investigating potential associations between suicide and SDOH.
Using the large Veterans Health Administration EHR system, the researchers determined associations between veterans’ death by suicide and recent SDOH, identified using both structured data (ICD-10 codes and Veterans Health Administration stop codes) and unstructured data (NLP-processed clinical notes).
Participants included 8,821 veterans who died by suicide and 35,284 matched controls. The cohort was mostly male (96%) and White (79%). The mean age was 58 years.
The NLP-extracted SDOH were social isolation, job or financial insecurity, housing instability, legal problems, violence, barriers to care, transition of care, and food insecurity.
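For readers unfamiliar with the approach, a deliberately crude keyword-matching sketch conveys what "extracting SDOH from notes" means in practice. The study itself used a deep-learning NLP system, not rules; the categories and phrases below are illustrative only:

```python
# A deliberately simple, rule-based sketch of flagging SDOH mentions
# in free-text clinical notes.  The study used a deep-learning NLP
# system; keyword matching is shown only to make the idea concrete,
# and these phrase lists are invented for illustration.
import re

SDOH_KEYWORDS = {
    "social isolation":    ["lives alone", "no social support", "isolated"],
    "housing instability": ["homeless", "eviction", "unstable housing"],
    "legal problems":      ["incarcerated", "pending charges", "probation"],
}

def extract_sdoh(note: str) -> set:
    """Return the SDOH categories whose keywords appear in the note."""
    text = note.lower()
    return {
        category
        for category, phrases in SDOH_KEYWORDS.items()
        if any(re.search(re.escape(p), text) for p in phrases)
    }

note = "Pt lives alone, recently homeless after eviction; denies SI."
print(extract_sdoh(note))  # flags social isolation and housing instability
```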
All of these SDOH extracted from unstructured clinical notes were significantly associated with increased risk for death by suicide.
Legal problems had the largest estimated effect size, more than twice the risk of those with no exposure (adjusted odds ratio [aOR], 2.62; 95% confidence interval [CI], 2.38-2.89), followed by violence (aOR, 2.34; 95% CI, 2.17-2.52) and social isolation (aOR, 1.94; 95% CI, 1.83-2.06).
Similarly, all of the structured SDOH – social or family problems, employment or financial problems, housing instability, legal problems, violence, and nonspecific psychosocial needs – also showed significant associations with increased risk for suicide death, once again, with legal problems linked to the highest risk (aOR, 2.63; 95% CI, 2.37-2.91).
When combining the structured and NLP-extracted unstructured data, the top three risk factors for death by suicide were legal problems (aOR, 2.66; 95% CI 2.46-2.89), violence (aOR, 2.12; 95% CI, 1.98-2.27), and nonspecific psychosocial needs (aOR, 2.07; 95% CI, 1.92-2.23).
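For reference, this is how an odds ratio and its Wald 95% CI fall out of a simple 2x2 exposure-outcome table. The counts here are hypothetical; the study's aORs came from adjusted models, not raw tables:

```python
# Odds ratio and Wald 95% confidence interval from a 2x2 table.
# Counts are hypothetical, chosen only to show the mechanics; the
# study's aORs came from adjusted regression models.
import math

def odds_ratio_ci(a, b, c, d):
    """OR and 95% CI for a table [[exposed: a events, b none],
    [unexposed: c events, d none]] via the log-OR standard error."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(120, 880, 50, 950)
print(f"OR {or_:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
```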
“To our knowledge, this [is] the first large-scale study to implement and use an NLP system to extract SDOH information from unstructured EHR data,” the researchers write.
“We strongly believe that analyzing all available SDOH information, including those contained in clinical notes, can help develop a better system for risk assessment and suicide prevention. However, more studies are required to investigate ways of seamlessly incorporating SDOHs into existing health care systems,” they conclude.
Dr. Yu said it’s also important to note that their NLP system is built upon “the most advanced deep-learning technologies and therefore is more generalizable than most existing work that mainly used rule-based approaches or traditional machine learning for identifying social determinants of health.”
In an accompanying editorial, Ishanu Chattopadhyay, PhD, of the University of Chicago, said this suggests that unstructured clinical notes “may efficiently identify at-risk individuals even when structured data on the relevant variables are missing or incomplete.”
This work may provide “the foundation for addressing the key hurdles in enacting efficient universal assessment for suicide risk among the veterans and perhaps in the general population,” Dr. Chattopadhyay added.
This research was funded by a grant from the National Institute of Mental Health. The study authors and editorialist report no relevant financial relationships.
A version of this article originally appeared on Medscape.com.
Researchers found SDOH are risk factors for suicide among U.S. veterans and NLP can be leveraged to extract SDOH information from unstructured data in the EHR.
“Since SDOH is overwhelmingly described in EHR notes, the importance of NLP-extracted SDOH can be very significant, meaning that NLP can be used as an effective method for epidemiological and public health study,” senior investigator Hong Yu, PhD, from Miner School of Information and Computer Sciences, University of Massachusetts Lowell, told this news organization.
Although the study was conducted among U.S. veterans, the results likely hold for the general population as well.
“The NLP methods are generalizable. The SDOH categories are generalizable. There may be some variations in terms of the strength of associations in NLP-extracted SDOH and suicide death, but the overall findings are generalizable,” Dr. Yu said.
The study was published online JAMA Network Open.
Improved risk assessment
SDOH, which include factors such as socioeconomic status, access to healthy food, education, housing, and physical environment, are strong predictors of suicidal behaviors.
Several studies have identified a range of common risk factors for suicide using International Classification of Diseases (ICD) codes and other “structured” data from the EHR. However, the use of unstructured EHR data from clinician notes has received little attention in investigating potential associations between suicide and SDOH.
Using the large Veterans Health Administration EHR system, the researchers determined associations between veterans’ death by suicide and recent SDOH, identified using both structured data (ICD-10 codes and Veterans Health Administration stop codes) and unstructured data (NLP-processed clinical notes).
Participants included 8,821 veterans who committed suicide and 35,284 matched controls. The cohort was mostly male (96%) and White (79%). The mean age was 58 years.
The NLP-extracted SDOH were social isolation, job or financial insecurity, housing instability, legal problems, violence, barriers to care, transition of care, and food insecurity.
All of these unstructured clinical notes on SDOH were significantly associated with increased risk for death by suicide.
Legal problems had the largest estimated effect size, more than twice the risk of those with no exposure (adjusted odds ratio 2.62; 95% confidence interval, 2.38-2.89), followed by violence (aOR, 2.34; 95% CI, 2.17-2.52) and social isolation (aOR, 1.94; 95% CI, 1.83-2.06).
Similarly, all of the structured SDOH – social or family problems, employment or financial problems, housing instability, legal problems, violence, and nonspecific psychosocial needs – also showed significant associations with increased risk for suicide death, once again, with legal problems linked to the highest risk (aOR, 2.63; 95% CI, 2.37-2.91).
When combining the structured and NLP-extracted unstructured data, the top three risk factors for death by suicide were legal problems (aOR, 2.66; 95% CI 2.46-2.89), violence (aOR, 2.12; 95% CI, 1.98-2.27), and nonspecific psychosocial needs (aOR, 2.07; 95% CI, 1.92-2.23).
“To our knowledge, this the first large-scale study to implement and use an NLP system to extract SDOH information from unstructured EHR data,” the researchers write.
“We strongly believe that analyzing all available SDOH information, including those contained in clinical notes, can help develop a better system for risk assessment and suicide prevention. However, more studies are required to investigate ways of seamlessly incorporating SDOHs into existing health care systems,” they conclude.
Dr. Yu said it’s also important to note that their NLP system is built upon “the most advanced deep-learning technologies and therefore is more generalizable than most existing work that mainly used rule-based approaches or traditional machine learning for identifying social determinants of health.”
In an accompanying editorial, Ishanu Chattopadhyay, PhD, of the University of Chicago, said this suggests that unstructured clinical notes “may efficiently identify at-risk individuals even when structured data on the relevant variables are missing or incomplete.”
This work may provide “the foundation for addressing the key hurdles in enacting efficient universal assessment for suicide risk among the veterans and perhaps in the general population,” Dr. Chattopadhyay added.
This research was funded by a grant from the National Institute of Mental Health. The study authors and editorialist report no relevant financial relationships.
A version of this article originally appeared on Medscape.com.
Researchers found SDOH are risk factors for suicide among U.S. veterans and NLP can be leveraged to extract SDOH information from unstructured data in the EHR.
“Since SDOH is overwhelmingly described in EHR notes, the importance of NLP-extracted SDOH can be very significant, meaning that NLP can be used as an effective method for epidemiological and public health study,” senior investigator Hong Yu, PhD, from Miner School of Information and Computer Sciences, University of Massachusetts Lowell, told this news organization.
Although the study was conducted among U.S. veterans, the results likely hold for the general population as well.
“The NLP methods are generalizable. The SDOH categories are generalizable. There may be some variations in terms of the strength of associations in NLP-extracted SDOH and suicide death, but the overall findings are generalizable,” Dr. Yu said.
The study was published online JAMA Network Open.
Improved risk assessment
SDOH, which include factors such as socioeconomic status, access to healthy food, education, housing, and physical environment, are strong predictors of suicidal behaviors.
Several studies have identified a range of common risk factors for suicide using International Classification of Diseases (ICD) codes and other “structured” data from the EHR. However, the use of unstructured EHR data from clinician notes has received little attention in investigating potential associations between suicide and SDOH.
Using the large Veterans Health Administration EHR system, the researchers determined associations between veterans’ death by suicide and recent SDOH, identified using both structured data (ICD-10 codes and Veterans Health Administration stop codes) and unstructured data (NLP-processed clinical notes).
Participants included 8,821 veterans who committed suicide and 35,284 matched controls. The cohort was mostly male (96%) and White (79%). The mean age was 58 years.
The NLP-extracted SDOH were social isolation, job or financial insecurity, housing instability, legal problems, violence, barriers to care, transition of care, and food insecurity.
All of these unstructured clinical notes on SDOH were significantly associated with increased risk for death by suicide.
Legal problems had the largest estimated effect size, more than twice the risk of those with no exposure (adjusted odds ratio 2.62; 95% confidence interval, 2.38-2.89), followed by violence (aOR, 2.34; 95% CI, 2.17-2.52) and social isolation (aOR, 1.94; 95% CI, 1.83-2.06).
Similarly, all of the structured SDOH – social or family problems, employment or financial problems, housing instability, legal problems, violence, and nonspecific psychosocial needs – also showed significant associations with increased risk for suicide death, once again, with legal problems linked to the highest risk (aOR, 2.63; 95% CI, 2.37-2.91).
When combining the structured and NLP-extracted unstructured data, the top three risk factors for death by suicide were legal problems (aOR, 2.66; 95% CI, 2.46-2.89), violence (aOR, 2.12; 95% CI, 1.98-2.27), and nonspecific psychosocial needs (aOR, 2.07; 95% CI, 1.92-2.23).
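The aORs above come from the study's adjusted models. As a rough illustration of where an odds ratio and its 95% CI come from, a minimal unadjusted sketch using a Wald interval and hypothetical 2×2 counts (not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a 95% Wald confidence interval
    from a 2x2 table: a/b = exposed cases/controls,
    c/d = unexposed cases/controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts for illustration only -- not the study's data
or_, lo, hi = odds_ratio_ci(300, 900, 150, 1200)
print(f"OR {or_:.2f} (95% CI, {lo:.2f}-{hi:.2f})")  # OR 2.67 (95% CI, 2.15-3.30)
```

An adjusted OR, as reported in the study, additionally controls for covariates in a regression model rather than using raw 2×2 counts.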
“To our knowledge, this is the first large-scale study to implement and use an NLP system to extract SDOH information from unstructured EHR data,” the researchers write.
“We strongly believe that analyzing all available SDOH information, including those contained in clinical notes, can help develop a better system for risk assessment and suicide prevention. However, more studies are required to investigate ways of seamlessly incorporating SDOHs into existing health care systems,” they conclude.
Dr. Yu said it’s also important to note that their NLP system is built upon “the most advanced deep-learning technologies and therefore is more generalizable than most existing work that mainly used rule-based approaches or traditional machine learning for identifying social determinants of health.”
In an accompanying editorial, Ishanu Chattopadhyay, PhD, of the University of Chicago, said this suggests that unstructured clinical notes “may efficiently identify at-risk individuals even when structured data on the relevant variables are missing or incomplete.”
This work may provide “the foundation for addressing the key hurdles in enacting efficient universal assessment for suicide risk among the veterans and perhaps in the general population,” Dr. Chattopadhyay added.
This research was funded by a grant from the National Institute of Mental Health. The study authors and editorialist report no relevant financial relationships.
A version of this article originally appeared on Medscape.com.
FROM JAMA NETWORK OPEN
Risk for MS in children often missed
Imaging tests may miss early signs of multiple sclerosis (MS) in children who have no symptoms of the disease, according to a recent study that points to the need for a change in diagnostic criteria for the neuromuscular condition.
The findings suggest that children, unlike adults, may not need to meet the current clinical standard criteria to be considered at risk for MS.
“This is an important study confirming that some children who have no symptoms of demyelinating disease may nonetheless have MRI findings suggestive of demyelination detected on brain imaging,” said Naila Makhani, MD, associate professor of pediatrics and of neurology at Yale University and director of the Yale Pediatric Neuroimmunology Program, New Haven, Conn. Dr. Makhani was not affiliated with the study.
Researchers reviewed the MRI scans of 38 children aged 7-17 years who had radiologically isolated syndrome (RIS), a possible precursor to MS.
Like MS, RIS is characterized by destruction of the myelin. However, RIS is generally asymptomatic.
While RIS has been linked to MS, a diagnosis of RIS does not mean someone will be diagnosed with MS. Previous studies have shown that at least 3% of MS cases begin before age 16.
The children in the study likely received an MRI because of complaints of headaches or after having been diagnosed with a concussion, according to the researchers. The participants also did not show physical symptoms of MS, nor did they meet the McDonald or Barkhof criteria, which are clinical standards used to diagnose the condition in adults and children.
Within an average of 3 years following the imaging and RIS diagnosis, almost 36% of the children experienced a clinical attack, which led to an MS diagnosis. Almost three-fourths of the children developed additional brain and spinal cord lesions in the myelin that were evident on MRI.
MS often is diagnosed after a patient has had a clinical attack, such as vision impairment, loss of balance, inflammation, or severe fatigue. Identifying the potential for the disease earlier may allow clinicians to treat sooner, according to Leslie Benson, MD, assistant director of pediatric neuroimmunology at Massachusetts General Hospital, Boston, and one of the study authors.
“The field is leaning toward [the question of], ‘Should we treat presymptomatic MS?’ ” said Dr. Benson. “If we have the opportunity to prevent disability and improve long-term outcomes with safe medications, then we would like to do so.”
The findings were published in the journal Multiple Sclerosis and Related Disorders.
According to Dr. Benson and her colleagues, adjustments to the McDonald or Barkhof criteria for children may help in the detection of RIS and may allow earlier identification of MS.
“We don’t really know when MS first starts,” Dr. Benson said. “Unless you happen to have an MRI or symptoms, there’s no way to know how long the lesions have been evolving and how long the disease progression that led to those lesions has been there.”
MRI images showing lesions in the brain stem and spinal cord of children appeared to be different from those typically seen in adults, according to Tanuja Chitnis, MD, director of the Mass General Brigham Pediatric MS Center in Boston, who is one of the study’s coauthors.
“The concern of many practitioners is whether we should be treating at the first sign of MS,” Dr. Chitnis said. “We need to understand it better in children, and in teenagers especially, when these probably start biologically.”
Dr. Benson said current criteria for diagnosing MS in children require meeting a high threshold, which may limit diagnoses to those whose condition has progressed.
“This may miss patients at risk for MS,” Dr. Benson said. “That idea of who do you diagnose RIS and what criteria work to accurately diagnose RIS is really important.”
For now, the challenge remains of investigating characteristics of patients with RIS who will later have a clinical attack.
“We need a better understanding of what criteria do need to be met and how we can best risk-stratify our patients,” Dr. Benson said. “If it is recommended to treat presymptomatic cases, that we can best stratify those at risk and not overtreat those not at risk.”
Dr. Makhani receives funding from the National Institutes of Health, the Charles H. Hood Foundation, and the Multiple Sclerosis Society.
A version of this article originally appeared on Medscape.com.
New coalition aims to revolutionize stalled lupus research
Clinical research into lupus has long been hampered by failures of medications that initially seemed promising. Now, a coalition of drugmakers, federal regulators, and activists has come together to forge a path toward better-designed studies and – potentially – groundbreaking new drugs.
“We have an opportunity to work collaboratively in lupus to address the challenges in drug development,” Teodora Staeva, PhD, vice president and chief scientific officer of the Lupus Research Alliance, said in an interview.
The alliance held a press conference on March 29 to announce the formation of the public-private Lupus Accelerating Breakthroughs Consortium. Coalition members include several major drugmakers, lupus organizations such as the LRA, the American College of Rheumatology, the Food and Drug Administration, and other federal agencies. Academic researchers, people living with lupus, caregivers and family members, and other members of the lupus community are also on board.
As Dr. Staeva explained, research into lupus has been marked by a high rate of failure. “Often, phase 2 trial successes have not translated into phase 3 successes,” she said.
But researchers, she said, don’t tend to think this is because the drugs themselves are useless.
Instead, it appears that “trial designs are not adequate to capture meaningful readouts of the drug effects, and that may have contributed to the multiple failures,” she said.
According to her, this may be because the trials aren’t yet designed to fully detect whether drugs are useful. This is difficult to accomplish since patients have so many manifestations of the disease and trial participants already take a variety of existing drugs.
“Another major limitation has been the lack of integration of the patient’s voice and needs in the drug development process,” she said. It’s also challenging to recruit patients with the most severe lupus to participate in studies, especially since the trials often last 52 weeks.
The new coalition will not directly develop or favor specific drugs. Instead, it will focus on clinical research priorities. “It’s all open and collaborative,” Dr. Staeva explained, and a patient council will provide input. “We have a unique opportunity to bring the voice of people [living with lupus] to the table for the first time and be able to integrate their needs and priorities into the infrastructure.”
The new coalition was inspired by existing public-private partnerships such as the Kidney Health Initiative, she said. That initiative was founded in 2012 by the FDA and the American Society of Nephrology and has dozens of members, including multiple drugmakers and medical societies.
The leadership of the Lupus ABC coalition will include three nonvoting members from the FDA. They’ll offer guidance, Dr. Staeva said. At the press conference, Albert T. Roy, president and CEO of the LRA, said drug companies will appreciate the opportunity to speak with FDA representatives “in a space that is not competitive with respect to intellectual property or anything like that.”
The coalition will meet later in spring 2023, Dr. Staeva said. She hopes it will launch a couple of projects by the end of 2023 and be able to release preliminary results by the end of 2024.
One challenge will be figuring out how to stratify trial subjects so drug studies will more easily detect medications that may work in smaller populations of patients, Hoang Nguyen, PhD, director of scientific partnerships at the LRA, said in an interview. “Now we lump [patients] all together, and that’s not the optimal way to test drugs on patients who have a lot of differences.”
According to Dr. Staeva, the LRA funded the development of the coalition, and drugmakers will primarily provide financial support going forward. The pharmaceutical company members of the coalition are Biogen, Bristol-Myers Squibb, Eli Lilly, EMD Serono, Genentech, Gilead, GlaxoSmithKline, Merck, and Takeda.
Dr. Staeva, Dr. Nguyen, and Mr. Roy have no disclosures.
COAPT 5-year results ‘remarkable,’ but patient selection issues remain
When the COAPT trial’s 2-year primary results were unveiled in 2018, it remained an open question whether the striking reductions in mortality and heart failure (HF) hospitalization observed for transcatheter edge-to-edge repair (TEER) with the MitraClip (Abbott) would be durable with longer follow-up.
The trial had enrolled an especially sick population of symptomatic patients with mitral regurgitation (MR) secondary to HF.
As it turns out, the therapy’s benefits at 2 years were indeed durable, at least out to 5 years, investigators reported March 5 at the joint scientific sessions of the American College of Cardiology and the World Heart Federation. The results were simultaneously published in the New England Journal of Medicine.
Patients who received the MitraClip on top of intensive medical therapy, compared with a group assigned to medical management alone, benefited significantly at 5 years with risk reductions of 51% for HF hospitalization, 28% for death from any cause, and 47% for the composite of the two events.
Still, mortality at 5 years among the 614 randomized patients was steep at 57.3% in the MitraClip group and 67.2% for those assigned to meds only, underscoring the need for early identification of patients appropriate for the device therapy, Gregg W. Stone, MD, said during his presentation.
Dr. Stone, of the Icahn School of Medicine at Mount Sinai, New York, is a COAPT co-principal investigator and lead author of the 5-year outcomes publication.
Outcomes were consistent across all prespecified patient subgroups, including by age, sex, MR, left ventricular (LV) function and volume, cardiomyopathy etiology, and degree of surgical risk, the researchers reported.
Symptom status, as measured by New York Heart Association (NYHA) functional class, improved throughout the 5-year follow-up for patients assigned to the MitraClip group, compared with the control group, and the intervention group was significantly more likely to be in NYHA class I or II, the authors noted.
The relative benefits in terms of clinical outcomes of MitraClip therapy narrowed after 2-3 years, Dr. Stone said, primarily because at 2 years, patients who had been assigned to meds only were eligible to undergo TEER. Indeed, he noted, 45% of the 138 patients in the control group who were eligible for TEER at 2 years “crossed over” to receive a MitraClip. Those patients benefited despite their delay in undergoing the procedure, he observed.
However, nearly half of the control patients died before becoming eligible for crossover at 2 years. “We have to identify the appropriate patients for treatment and treat them early because the mortality is very high in this population,” Dr. Stone said.
“We need to do more because the MitraClip doesn’t do anything directly to the underlying left ventricular dysfunction, which is the cause of the patient’s disease,” he said. “We need advanced therapies to address the underlying left ventricular dysfunction” in this high-risk population.
Exclusions based on LV dimension
The COAPT trial included 614 patients with HF and symptomatic MR despite guideline-directed medical therapy. They were required to have moderate to severe (3+) or severe (4+) MR confirmed by an echocardiographic core laboratory and a left ventricular ejection fraction (LVEF) of 20%-50%.
Among the exclusion criteria were an LV end-systolic diameter greater than 70 mm, severe pulmonary hypertension, and moderate to severe symptomatic right ventricular failure.
The systolic LV dimension exclusion helped address the persistent question of whether “severe mitral regurgitation is a marker of a bad left ventricle or ... contributes to the pathophysiology” of MR and its poor outcomes, Dr. Stone said.
The 51% reduction in risk for time-to-first HF hospitalization among patients assigned to TEER “accrued very early,” Dr. Stone pointed out. “You can see the curves start to separate almost immediately after you reduce left atrial pressure and volume overload with the MitraClip.”
The curves stopped diverging after about 3 years because of crossover from the control group, he said. Still, “we had shown a substantial absolute 17% reduction in mortality at 2 years” with the MitraClip. “That has continued out to 5 years, with a statistically significant 28% relative reduction,” he continued, with the absolute risk reduction reaching 10%.
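As an arithmetic sanity check, the roughly 10% absolute figure falls directly out of the crude 5-year mortality rates quoted earlier (57.3% vs. 67.2%); note that the 28% relative reduction comes from the trial's time-to-event modeling, so a naive ratio of these two rates will not reproduce it.

```python
# Back-of-the-envelope check using the 5-year mortality rates in the article.
# The trial's 28% relative reduction comes from time-to-event modeling and
# will not fall out of these crude rates; only the absolute figures do.
control_mortality = 0.672  # medical therapy alone
clip_mortality = 0.573     # MitraClip plus medical therapy

arr = control_mortality - clip_mortality  # absolute risk reduction
nnt = 1 / arr                             # number needed to treat

print(f"ARR: {arr:.1%}")  # ~9.9 percentage points, the ~10% Dr. Stone cites
print(f"NNT: {nnt:.1f}")  # roughly 10 patients treated to avert one death
```

The low number needed to treat implied by these rates is the figure Dr. Batchelor later calls "really remarkable."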
Patients in the control group who crossed over “basically assumed the death and heart failure hospitalization rate of the MitraClip group,” Dr. Stone said. That wasn’t surprising “because most of the patients enrolled in the trial originally had chronic heart failure.” It’s “confirmation of the principal results of the trial.”
Comparison with MITRA-FR
“We know that MITRA-FR was a negative trial,” observed Wayne B. Batchelor, MD, an invited discussant following Dr. Stone’s presentation, referring to an earlier similar trial that showed no advantage for MitraClip. Compared with MITRA-FR, COAPT “has created an entirely different story.”
The marked reductions in mortality and risk for adverse events and low number-needed-to-treat with MitraClip are “really remarkable,” said Dr. Batchelor, who is with the Inova Heart and Vascular Institute, Falls Church, Va.
But the high absolute mortality for patients in the COAPT control group “speaks volumes to me and tells us that we’ve got to identify our patients well early,” he agreed, and to “implement transcatheter edge-to-edge therapy in properly selected patients on guideline-directed medical therapy in order to avoid that.”
The trial findings “suggest that we’re reducing HF hospitalization,” he said, “so this is an extremely potent therapy, potentially.
“The dramatic difference between the treated arm and the medical therapy arm in this trial makes me feel that this therapy is here to stay,” Dr. Batchelor concluded. “We just have to figure out how to deploy it properly in the right patients.”
The COAPT trial presents “a practice-changing paradigm,” said Suzanne J. Baron, MD, of Lahey Hospital & Medical Center, Burlington, Mass., another invited discussant.
The crossover data “really jumped out,” she added. “Waiting to treat patients with TEER may be harmful, so if we’re going to consider treating earlier, how do we identify the right patient?” Dr. Baron asked, especially given the negative MITRA-FR results.
MITRA-FR didn’t follow patients beyond 2 years, Dr. Stone noted. Still, “we do think that the main difference was that COAPT enrolled a patient population with more severe MR and slightly less LV dysfunction, at least in terms of the LV not being as dilated, so they didn’t have end-stage LV disease. Whereas in MITRA-FR, more of the patients had only moderate mitral regurgitation.” And big dilated left ventricles “are less likely to benefit.”
There were also differences between the studies in technique and background medical therapies, he added.
The Food and Drug Administration has approved – and payers are paying for – the treatment of patients who meet the COAPT criteria, “in whom we can be very confident they have a benefit,” Dr. Stone said.
“The real question is: Where are the edges where we should consider this? LVEF slightly less than 20% or slightly greater than 50%? Or primary atrial functional mitral regurgitation? There are registry data to suggest that they would benefit,” he said, but “we need more data.”
COAPT was supported by Abbott. Dr. Stone disclosed receiving speaker honoraria from Abbott and consulting fees or equity from Neovasc, Ancora, Valfix, and Cardiac Success; and that Mount Sinai receives research funding from Abbott. Disclosures for the other authors are available at nejm.org. Dr. Batchelor has disclosed receiving consultant fees or honoraria from Abbott, Boston Scientific, Idorsia, and V-Wave Medical, and having other ties with Medtronic. Dr. Baron has disclosed receiving consultant fees or honoraria from Abiomed, Biotronik, Boston Scientific, Edwards Lifesciences, Medtronic, Shockwave, and Zoll Medical, and conducting research or receiving research grants from Abiomed and Boston Scientific.
A version of this article originally appeared on Medscape.com.
FROM ACC 2023